07-28-2023

Impactful Visualizations Through Data Integration


Much of our work within the Collection and Analytics Portfolio of National Security Solutions involves helping our customers get the most out of data. Sometimes this means helping architect solutions to acquire the specific data they want to work with out of a sea of information, and sometimes it means writing novel and highly specialized analytics to answer complex questions.

But all too often, it means figuring out how to ask straightforward questions of the data the customer already has. At face value, this focus area seems rather simple. After all, if you already have the data, how hard can it be to look at it and find the answer you need? However, the statement “you have the data” is more deceptive than it seems.

Getting The Data Needed

Our customers tend to use multiple storage locations and storage types. We find that data may sit on a shared network drive, in a cloud-based data lake, behind a web service, within a database on their (or someone else’s) server, or on collaboration tools like SharePoint or Confluence. Even more challenging, the data is rarely found in a single place, and combining data from different sources is usually necessary to properly answer the customer’s questions. To make matters more complex, the customer may know exactly where the data is, or may only know that it exists somewhere within their enterprise. These are classic knowledge and data management challenges that teams within Collection & Analytics deal with and excel at every day, resulting in new and returning customers asking for our help and driving our organic growth.

Within these knowledge and data management challenges, our teams avoid proposing solutions that replace our customers’ existing tools and capabilities; instead, we find ways to better leverage what they already have at their disposal. Customers have already been required to use, or have invested in, technology solutions, and adding another product to their tool suite is often out of the question. After assessing the data types and their associated locations, our teams write minimal amounts of code to aggregate and correlate the valuable pieces of data. We’ve accomplished this with technologies such as Apache NiFi and Dagster for data pipelines, paired with small bits of custom code written in languages such as Python, Java, and JavaScript. Data is moved or duplicated from its existing storage location only when significant transformation is required, the data is not reliably accessible where it currently resides, or performance requirements dictate that it must be.
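To make that concrete, below is a minimal sketch (not actual customer code) of the kind of lightweight correlation pipeline described above, written as Dagster assets in Python. The web service endpoint, shared-drive path, and join key are hypothetical placeholders standing in for whatever sources a customer already operates.

import pandas as pd
import requests
from dagster import Definitions, asset


@asset
def service_records() -> pd.DataFrame:
    # Pull records from an existing web service (hypothetical endpoint).
    response = requests.get("https://internal.example/api/records", timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


@asset
def reference_data() -> pd.DataFrame:
    # Read reference data already sitting on a shared drive (hypothetical path).
    return pd.read_csv("//shared-drive/reference/assets.csv")


@asset
def correlated_records(
    service_records: pd.DataFrame, reference_data: pd.DataFrame
) -> pd.DataFrame:
    # Join the two sources on a common key so they can be analyzed together.
    return service_records.merge(reference_data, on="asset_id", how="left")


# Register the assets so they can be discovered and materialized on a schedule.
defs = Definitions(assets=[service_records, reference_data, correlated_records])

The point of a sketch like this is that the data stays where it already lives; the pipeline only reads, correlates, and hands off the result.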

To summarize, our teams have become experts in creating the glue necessary to bring data together. We’ve been recognized as leaders in integrating multiple capabilities through rapid, lightweight development and engineering activities.

Presenting The Data

Once the data has been correlated, the next step is making real sense of it to answer the customer’s questions.

To do this, our team has found frequent success with Microsoft Power BI. Originally built as a data visualization tool for business intelligence, Power BI is a flexible low- to no-code solution for rapidly creating meaningful representations of data. While we’ve found traditional business intelligence use cases for Power BI, we’ve also used it for everything from analyzing network traffic in support of defensive cyber operations to tracking multi-year project milestones.

By combining our lightweight data pipelines with an off-the-shelf visualization framework, we’ve created a repeatable process for producing impactful visualizations at mission speed, with multiple proven examples of dashboards built in hours but used for weeks, months, and even years.
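As one illustration of the hand-off between pipeline and dashboard, the sketch below (again with a hypothetical path and function name) simply writes the correlated output to a location an existing Power BI report can refresh from; a database table or data lake folder serves the same role.

from pathlib import Path

import pandas as pd


def publish_for_power_bi(
    correlated: pd.DataFrame, out_dir: str = "//shared-drive/dashboards"
) -> Path:
    # Write the correlated dataset to the location the Power BI report's
    # data source points at, keeping the schema stable between refreshes.
    out_path = Path(out_dir) / "correlated_records.csv"
    correlated.to_csv(out_path, index=False)
    return out_path

Keeping the output location and schema stable is what lets a dashboard built in hours keep refreshing for months without rework.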

Getting The Data Help You Need From Parsons

If data challenges are impeding progress for your organization, Parsons may be an ideal solution for your team and mission. We invite you to reach out to learn more about our capabilities and expertise. 

About The Author

Mr. Silvia’s 12-year professional career has spanned multiple software and system engineering disciplines, including software architecture, full-stack applications, big data, streaming and graph analytics, cloud computing, network forensics, defensive cyber operations, multi-domain data processing, and ML/AI. He is a recognized subject matter expert in analytic capability development and participates in industry working groups focused on innovation through the automation of data processing. He delivers systems that emphasize successful user interaction. With an MBA from the University of Maryland Global Campus, he works with both engineering staff and management to optimize the delivery of complex systems.
