04-13-2020

5 Ways Data Trends Will Change The Face Of The 2030s


An Article From The Future

As the “Roaring Twenties (Part Deux)” come to a close, this is the time of year when we guess at the future tech mega-trends based on the trajectory of the recent past. My goal in compiling the following list is to provide a mostly optimistic look at the anticipated innovative breakthroughs of the next decade.

The AI “Warm Winter” Ends

So, it’s official: by the mid-2020s we were “done” with the buzzwords of AI and Deep Learning. (“Real AI” usually referred to CNN (Convolutional Neural Network) based tech that had its breakout in the 2010s.) The limits of CNN-based tech were found by the mid-2020s as we saw more and more systems that could break images, text, and even situations down into their component parts, giving machines the ability to perceive the world more effectively and efficiently than ever- but that’s where the “magic” stopped.

The weakness was the low sophistication of how that information was used for decision-making. The bottleneck was the nature of GPU hardware: while it could perform the tensor math needed for CNN-style perception tasks, it lacked the capacity to efficiently model neuromorphic networks (neural networks that seek to more accurately model the behavior of biological neurons) and fully connected neural networks. With this bottleneck firmly in place, most breakthroughs in “Deep Learning” stemmed from adding layers and new structural features to refine the automated feature extraction of CNNs, or from ways to accelerate the less tidy fully connected layers. This eventually led to the segmentation of Deep Learning practice into Deep Perception and the emerging field of Deep Cognition.

In 2030, we expect to see the field of Deep Cognition continue to hit its stride in a major way as AI practitioners find new ways to exploit hardware breakthroughs in massively parallel MIMD architectures (Multiple Instruction, Multiple Data, i.e., what “normal CPUs” have been for years). In contrast to the GPUs of yesteryear (which plowed through well-structured rows of data like a million-ox team, all yoked together but covering an impressive amount of ground), these recent advances in CPU and memory technology bring thousands of cores, each capable of independent operation- allowing for seemingly limitless and unrestricted capability.

As the potential of these platforms is realized, expect the use of multi-agent systems at scale, a shift toward emergent behavior, and new applications of neuromorphic/spiking neural network implementations that are not bound to the same timing and synchronization principles as GPU-based solutions. Expect more claims of “human-like intelligence” and machine “critical thinking” as the hype train picks up speed (and more machines pick up jobs with more sophisticated skillsets).
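
To make the contrast with lockstep GPU batch math a little more concrete, here is a minimal, purely illustrative Python sketch of an event-driven leaky integrate-and-fire neuron update- the kind of asynchronous, per-event computation that independent MIMD cores are well suited to. The neuron model, parameters, and toy network are invented for illustration and not drawn from any particular framework.

```python
import heapq

# A minimal, event-driven leaky integrate-and-fire (LIF) sketch.
# Each neuron updates only when a spike event arrives, rather than in the
# lockstep, batched fashion a GPU pipeline would impose.

class LIFNeuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.decay = decay
        self.last_time = 0.0

    def receive(self, time, weight):
        # Decay the membrane potential over the elapsed (asynchronous) interval.
        self.potential *= self.decay ** (time - self.last_time)
        self.last_time = time
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True  # neuron fires
        return False

# Two-neuron toy network: spikes on neuron 0 may trigger neuron 1.
neurons = [LIFNeuron(), LIFNeuron()]
synapse_weight = 0.6
events = [(0.0, 0, 0.7), (1.0, 0, 0.7), (1.5, 0, 0.7)]  # (time, neuron, weight)
heapq.heapify(events)

while events:
    t, idx, w = heapq.heappop(events)
    if neurons[idx].receive(t, w) and idx == 0:
        # Propagate the spike to neuron 1 with a small transmission delay.
        heapq.heappush(events, (t + 0.1, 1, synapse_weight))
        print(f"neuron 0 fired at t={t:.1f}")
```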

Rethinking The Network “Edge”

Since the advent of networked computing technology, deciding where computational power lives has largely been an exercise in data availability and overcoming bottlenecks in the system. We saw this in the 2020s as the 5G transition allowed mobile edge devices to enjoy unprecedented access to the network. However, the “network” in question formed a sort of “edge cloud,” where networking services and hardware were deployed close to the telecom infrastructure where they could take advantage of the increased speed of the mobile layer- leaving the rest of the internet, well, slow.

This adoption of an edge cloud did wonders to fuel the low-latency services we have grown to love (such as AR, VR and online gaming)- when they were available. The “edge cloud” rollout, much like the 5G rollout, was a spotty one, leaving higher population centers with better services and better speed. Because of this level of localization, edge cloud deployments had all the challenges of managing the consumer device side, with very few of the advantages of traditional cloud-based high availability deployments. Although the more sophisticated applications offered a sort of low-latency fallback, we ended up with a system where “that app just doesn’t work here.”

In 2030 we should expect some of the next-generation fiber and satellite networks to bridge the throughput/data availability gap, while software advances and more mature multi-tiered cloud deployments maintain the barely perceptible latency we have grown to enjoy. This will form a more “dynamic cloud,” where services and heterogeneous network links (5G, satellite, WiFi, LiFi, and fiber) are continually optimized using automated analytics to anticipate customer demands, promoting higher levels of application availability and usability.
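
To give a flavor of what that continual optimization might look like under the hood, here is a toy Python sketch of a client choosing among heterogeneous links; the link measurements, weights, and scoring formula are entirely hypothetical.

```python
# Hypothetical link scorer for a "dynamic cloud" client: pick whichever
# available link best satisfies the application's latency/throughput needs.

links = {
    # name: (measured_latency_ms, measured_throughput_mbps, availability 0-1)
    "5G":        (12,  900, 0.97),
    "satellite": (35,  350, 0.99),
    "wifi":      (8,   600, 0.90),
    "fiber":     (3,  2000, 0.85),
}

def score(latency_ms, throughput_mbps, availability,
          latency_budget_ms=20, demand_mbps=100):
    # Penalize links that blow the latency budget, reward headroom over the
    # demand the analytics layer anticipates for this user/application.
    latency_term = max(0.0, 1.0 - latency_ms / latency_budget_ms)
    headroom_term = min(1.0, throughput_mbps / (4 * demand_mbps))
    return availability * (0.6 * latency_term + 0.4 * headroom_term)

best = max(links, key=lambda name: score(*links[name]))
print("selected link:", best)
```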

Trustworthiness And Trust

Authentication is one of the oldest concepts in IT: when a message is received, mechanisms have long been in place to determine the sender of that message with a great deal of confidence. We also clamped down on “AI” and automated decision systems- we imposed codes of ethics that required the pedigree of information to be well documented and inspectable, a requirement for systems providing decision support in the government and safety sectors. Ironically, one element that has been overlooked through the greatest part of the Information Age is the information itself.

Through the 2020s, the common meme of “a post-truth era” morphed into the concepts of “post-trust” and finally “post-trustworthiness.” It took the events of the 2024 election cycle for public concern to counterbalance the advantages that politicians reaped from the two-decade-old practice of aggressive (and highly automated) “public thought engineering.” It was time for a new system to ensure that the “I” that the “IT” had long protected was worthy of trust.

Based on current trends and advancements in Natural Language Processing (NLP), and the dire need for trustworthiness in communication, I would expect 2030 to bring us more follow-through in the following three areas:

Source Authentication

This means that sources of information would have authentication IDs and key-pairs associated with them, much like personal/server identity information today. Any source that is to be included or referenced by a greater work- whether imagery, video, audio or text- would need to be marked properly regarding its origin. Authors would make use of tools to ensure that all content is marked properly and will be received well by various “Fidelity Engines,” in much the same way that web pages are tested in multiple browsers to ensure proper presentation.
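
As a rough illustration of what such source-level signing could look like, here is a minimal Python sketch using Ed25519 key-pairs from the cryptography package; the manifest fields and the “Fidelity Engine” verification step are invented for the example.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A source (e.g., a news desk or a camera) holds a long-lived key-pair.
source_key = Ed25519PrivateKey.generate()
source_public = source_key.public_key()

# Hash the raw content (image, video, audio, or text) and sign the digest
# together with a few provenance fields, forming a hypothetical manifest.
content = b"raw bytes of the photo, clip, or article"
digest = hashlib.sha256(content).hexdigest()
manifest = {"source_id": "example-news-desk-001", "sha256": digest}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = source_key.sign(payload)

# A downstream "Fidelity Engine" re-hashes the content and verifies the
# signature before treating the material as properly attributed.
def verify(public_key, manifest, signature, content):
    if hashlib.sha256(content).hexdigest() != manifest["sha256"]:
        return False
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

print(verify(source_public, manifest, signature, content))  # True
```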

Decentralized Fidelity

In a reader-side evaluation process, automated tools would ensure that any derivative or editorialized information that is not a direct quote or reproduction of the source material is marked as an interpretation, and therefore as material originating with the work’s author. Automated tools will also be able to assess whether citations and their surrounding context appropriately align with the intent of the referenced work and author.
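
One plausible building block for such a check is simple text-similarity scoring between a cited passage and the context that cites it. The Python sketch below uses plain TF-IDF cosine similarity from scikit-learn as a stand-in for whatever richer NLP a real fidelity engine would employ; the example texts and threshold are arbitrary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def citation_alignment(cited_passage: str, citing_context: str) -> float:
    # Very rough proxy for "does the citing text reflect the cited source?"
    # A production fidelity engine would use far richer NLP, but the
    # interface (two texts in, an alignment score out) would be similar.
    vectors = TfidfVectorizer().fit_transform([cited_passage, citing_context])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

source = "The study found a modest 3% improvement under limited conditions."
claim = "The study proves the method always delivers massive improvements."

score = citation_alignment(source, claim)
print(f"alignment score: {score:.2f}")
if score < 0.5:  # arbitrary threshold for the sketch
    print("flag: citing text may misrepresent the referenced work")
```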

Metanarrative Analysis

More sophisticated tools will be used to mine video, audio, and textual works for “truth claims,” evaluating them against multiple sources for claim authentication as well as analyzing the usual sources and agendas at play. This will eventually lead to systems able to determine, and inform the reader, when a news story is intentionally playing into known biases. We also expect to see improvements in automated logical fallacy detection, allowing for an automated assist to the critical thinking process.

With these three pieces in place, we will begin to see a decentralized framework of trust in which works are scored by how well they are supported by their sources, by the pedigree of those sources, and by evaluated facts. Informative works would also carry agenda markers or clear designations describing the nature of the sources used to construct them.
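
A back-of-the-envelope version of that scoring might simply blend the three signals into one trust score, as in the sketch below; the weights and the 0-1 scales are invented purely to show the shape of the idea.

```python
# Hypothetical aggregate "trust score" for a published work, combining the
# three signals described above; weights and scales are illustrative only.

def trust_score(source_support, source_pedigree, fact_check,
                weights=(0.4, 0.3, 0.3)):
    """Each input is a 0-1 score from the corresponding automated check."""
    w_support, w_pedigree, w_facts = weights
    return (w_support * source_support
            + w_pedigree * source_pedigree
            + w_facts * fact_check)

# Example: well-cited piece from reputable sources, but several shaky claims.
print(round(trust_score(source_support=0.9, source_pedigree=0.8, fact_check=0.4), 2))
```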

Virtual And Augmented Reality Continue To Shake Off Toy Status

In the early 2020s, we really saw Virtual Reality (VR) take off with the advent of new self-contained, battery-powered headsets that made good use of the high-bandwidth, low-latency wireless afforded by the 5G transition. Sure, the use case was mostly games as the tech suffered from the “Atari Problem,” but as industries such as large-scale commercial and residential construction started to apply the tech for pre-construction walk-throughs, VR proved valuable in managing customer expectations in a way never thought possible. Expect VR to play a greater role in the 2030s as a risk reducer for large investments in large-scale manufacturing and infrastructure projects. Additionally, expect the increased platform maturity to lead to more immersive training simulations for first responders, manned/unmanned flight, and work tasks involving complex machinery.

Augmented Reality (AR) solutions, which overlay the real world with elements from a virtual one, are also getting less ugly and cumbersome with continued computer miniaturization. Lightweight eyeglass- and (experimental) contact lens-based “always-on” AR devices are on the rise- changing the face of consumer applications such as entertainment, social, shopping, and personal navigation while minimally “changing the appearance” of the user. AR’s industrial applications are sure to include systems that provide on-demand training, work instruction and situational awareness to construction and manufacturing personnel, while intelligent agents assist and assess the quality of work product earlier than previously thought possible. For example, when a worker pours a foundation, secures a beam, or solders a circuit board, results that violate quality tolerances are automatically flagged and reported to the worker, alongside automated root-cause analysis and a suggested course of corrective action.

In the realm of established VR and AR usage, expect continued advancement in research and data science applications with the development of new and better ways to visualize data, with a focus on meaningful presentation of near-real-time streaming data. In fact, I’ll make the bold prediction that it will become an odd sight for a corporate executive to lack the constant adornment of some sort of AR device- whether to keep themselves plugged into the second-to-second heartbeat of an ever-changing market, PR, and internally gathered metrics, or simply to send the signal that they are in complete control of their enterprise (think the BlackBerry craze of the 2000s- but on steroids).

We Get Serious About Managing Our Personal Data Footprint

If there is an internet-era adage that the 2010s and 2020s taught us, it’s that when the user is given the choice between convenience and security, convenience always wins. As social networking giants face steeper fines from government watchdog agencies around the globe, major players like Facebook, Twitter and BaiduGlobal craft new policies that achieve technical compliance with the letter of the law- all while brazenly abusing the concept of opt-in/opt-out systems to deny convenience features to users who choose to withhold data.

Another issue that finally entered the public consciousness is behavioral tracking data. Although advertisers have long been using various behavioral signals (such as mouse movements, time spent on websites, how often the user scrolls, and preference for certain types of content) for advertising purposes, we have started to see advertisers, governments, and other interested parties apply more refined machine learning to these kinds of problems, oftentimes marrying their approaches with natural language processing methods. This rendered traditional privacy solutions, such as VPNs, ineffective. Because of the sophistication of behavioral tracking methods, relying on VPNs and proxy services for anonymity can be likened to a person who changes their route to work and expects not to be recognized when they enter the building.
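
To see why a fresh IP address does so little, consider the toy Python sketch below: an ordinary classifier trained on made-up behavioral features (dwell time, scroll cadence, typing rhythm, click rate) will still recognize a returning user no matter which network path their traffic takes. The features, data, and model choice are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Made-up behavioral features per session:
# [mean dwell time (s), scroll events/min, typing interval variance, clicks/min]
rng = np.random.default_rng(0)

def sessions_for(profile, n=50):
    # Each "user" has a characteristic behavioral center; sessions scatter around it.
    return rng.normal(loc=profile, scale=0.1 * np.abs(profile), size=(n, len(profile)))

user_a = sessions_for(np.array([45.0, 12.0, 0.03, 6.0]))
user_b = sessions_for(np.array([20.0, 30.0, 0.08, 14.0]))

X = np.vstack([user_a, user_b])
y = np.array([0] * len(user_a) + [1] * len(user_b))

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A fresh session from user A, arriving over a VPN with a brand-new IP
# address, still looks like user A to the behavioral model.
new_session = sessions_for(np.array([45.0, 12.0, 0.03, 6.0]), n=1)
print("predicted user:", model.predict(new_session)[0])  # likely 0 (user A)
```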

By the mid-2020s we started to see a new form of identity theft: Deep Mimicry. This method usually takes the form of buying and selling behavioral models for the purpose of using neural networks (not unlike StyleGANs) to emulate the user in video, voice, text, and usage/behavioral patterns.

In order to deal with issues of this nature, we will likely see the emergence of personal data management services. Resembling the credit protection agencies that emerged in the 2000s, these services will scan the internet, looking for abnormal data usage patterns, evidence of personal data trafficking, and the use of Deep Mimicry. Interestingly enough, Deep Mimicry will also find a more legitimate use in “anonymity as a service” offerings, where an artificial “identity avatar” serves as an adaptive “digital mask” for one’s apparent behavior and language patterns (we should expect people to have a lot of fun with this part).
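
One imaginable building block for such a service is plain anomaly detection over how (and how often) your data and likeness show up online. The Python sketch below uses scikit-learn's IsolationForest on invented features purely to show the shape of the idea.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-week features describing how "your" data shows up online:
# [logins/day, new data-broker listings, lookalike media detections, geo spread]
rng = np.random.default_rng(1)
baseline = rng.normal(loc=[3.0, 0.2, 0.0, 1.0],
                      scale=[1.0, 0.2, 0.1, 0.5],
                      size=(200, 4))

detector = IsolationForest(contamination=0.05, random_state=1).fit(baseline)

# A week where your likeness suddenly appears in synthetic videos and your
# profile data is listed by several brokers should stand out from baseline.
suspicious_week = np.array([[3.0, 4.0, 6.0, 9.0]])
label = detector.predict(suspicious_week)[0]  # -1 means anomalous
print("alert!" if label == -1 else "looks normal")
```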

Well, there you have it. If you thought the last decade was a wild ride, you can surely expect the next to bring a great deal of growth and growing pains as the use and abuse of data impact our lives in more ways than thought possible. Now, if only we could imagine what the world will look like when we enter the 2050s…

About the author

Joey P. has 16 years of experience providing software and systems engineering solutions for mission-critical Department of Defense, intelligence community, and commercial efforts. Joey currently serves as a Director of Technology where he guides the growth of technical capabilities, conducts forward-leaning product development, and ensures rapid and successful transition of new capabilities into RF-focused product lines.

