Artificial intelligence is sweeping across the aerospace industry, but knowing where it is truly making a difference, rather than merely generating hype, can be difficult. To better understand the state of the art in artificial intelligence, where it is headed and which pieces of the software matter most, Space and Emerging Technologies Editor Garrett Reim spoke with technologists at leading aerospace organizations. Here are some of the questions we asked and summaries of the answers provided.
What forms of artificial intelligence are being tested or used in operational systems today?
Despite the nonstop talk about artificial intelligence (AI) these days, testing and operational adoption within the aerospace industry are nascent, mostly focused on non-safety-critical applications, such as data analysis, or computer vision tasks, such as object detection.
For example, the National Oceanic and Atmospheric Administration is using an AI system to classify sea ice from synthetic aperture radar imagery, and others are using deep-learning techniques to detect satellite maneuvers for space situational awareness, says Elizabeth Davison, associate principal director of integrated data and applications at The Aerospace Corp.
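In broad strokes, such systems pair a convolutional network with labeled imagery. The sketch below, in PyTorch, shows the shape of the approach; the architecture, patch size and ice classes are invented for illustration and are not NOAA's or The Aerospace Corp.'s actual models.

```python
# Illustrative sketch only: a small convolutional network for classifying
# SAR image patches. The architecture, classes and shapes are assumptions
# for demonstration, not NOAA's actual model.
import torch
import torch.nn as nn

class SeaIceClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # e.g., open water, young ice, multiyear ice
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel radar backscatter
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 input patches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One 64x64 SAR patch, batch of 1; output is a score per ice class.
model = SeaIceClassifier()
scores = model(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 3])
```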
Elsewhere, autonomous aircraft startup Merlin Labs is testing computer vision to help pilots by recognizing runway markings or obstacles, Chief Technology Officer Tim Burns says. Merlin Labs is also using natural language processing to let its autonomous flight system communicate with human air traffic controllers.
NASA is using AI to detect and filter “the signal from the noise in large data sets,” says David Salvagnini, chief data officer and chief artificial intelligence officer. For example, the agency is using AI to “explore the cosmos through applications like ExoMiner, a neural network leveraging supercomputers to identify exoplanets from data gathered by NASA’s Kepler spacecraft and its K2 extended mission,” he adds.
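The underlying signal-from-noise problem can be shown in a few lines. The sketch below uses a classical box matched filter to locate a transit-like dip in a synthetic light curve; ExoMiner itself is a deep neural network, so this is a simplified stand-in for the idea, not the agency's method.

```python
# Illustrative sketch: pulling a transit-like "signal from the noise" in a
# light curve with a simple box matched filter. ExoMiner itself is a deep
# neural network; this classical stand-in just shows the underlying idea
# of scoring dips in stellar brightness against the noise floor.
import numpy as np

rng = np.random.default_rng(0)
n, transit_len, depth = 2000, 40, 0.01
flux = 1.0 + rng.normal(0, 0.002, n)      # noisy, normalized stellar brightness
flux[1200:1200 + transit_len] -= depth    # inject a shallow transit dip

# Correlate with a box-shaped dip template and take the strongest response.
template = -np.ones(transit_len) / transit_len
response = np.convolve(flux - flux.mean(), template, mode="valid")
start = int(np.argmax(response))
print(f"Strongest dip starts near sample {start}")  # ~1200
```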
Of course, the maintenance, repair and overhaul industry has used machine learning for predictive maintenance for years. But adoption is also growing within manufacturing as aerospace companies use computer vision and machine-learning programs to detect component flaws and production problems.
“One current process under investigation involves collecting and interpreting real-time images and video to enable teams to provide insight into potential plan optimizations and sources of production slowdowns,” says Trevor Johnson, project executive for Acubed, Airbus’ Silicon Valley innovation center.
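Predictive maintenance, in its simplest form, reduces to flagging anomalous sensor readings early. A minimal sketch using scikit-learn's IsolationForest appears below; the sensor features and numbers are invented for illustration, not any company's pipeline.

```python
# Illustrative sketch: flagging anomalous engine-sensor readings as
# maintenance candidates with an isolation forest. Feature names and
# data are invented for demonstration; real pipelines are far richer.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: vibration amplitude, oil temperature (C), fan speed (scaled)
healthy = rng.normal([1.0, 80.0, 0.9], [0.1, 3.0, 0.05], size=(500, 3))
degraded = rng.normal([1.8, 95.0, 0.8], [0.2, 4.0, 0.05], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
flags = model.predict(degraded)           # -1 marks an outlier reading
print(f"{(flags == -1).sum()} of {len(degraded)} readings flagged for inspection")
```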
Companies including Reliable Robotics, a startup developing an autonomous Cessna Caravan, note that AI systems, despite their seemingly superhuman abilities, cannot demonstrate compliance with existing FAA regulations and technical standards because they are prone to error.
“We believe that deterministic software and systems are crucial,” Reliable Robotics CEO Robert Rose says, adding that the National Airspace System is not equipped to handle nondeterministic systems.
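The distinction Rose draws is easy to state in code. In the toy contrast below, both functions are invented for illustration; certification processes are built around behavior of the first kind, where identical inputs always produce identical, testable outputs.

```python
# Illustrative contrast between deterministic and nondeterministic behavior.
# Both functions are toy examples, not avionics code.
import random

def deterministic_flap_setting(airspeed_kt: float) -> int:
    """Same input always yields the same output: testable and certifiable."""
    return 30 if airspeed_kt < 120 else 10

def nondeterministic_flap_setting(airspeed_kt: float) -> int:
    """A stochastic policy may answer identical inputs differently, which is
    what makes compliance hard to demonstrate. In real AI systems the
    nondeterminism comes from sampling, retraining or hardware effects."""
    return random.choice([10, 20, 30])

assert deterministic_flap_setting(100) == deterministic_flap_setting(100)
# No such guarantee holds for the nondeterministic version.
```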
What forms of AI will be used in operational systems in 3-5 years?
In 3-5 years, humans will still make decisions, but AI tools will speed the process, Lockheed Martin Chief AI and Digital Officer Mike Baylor says. In other instances, operators may debut agentic AI systems—programs that can make decisions, plan and execute toward a specific goal with less human supervision, he adds.
AI also could help manage and fuse intelligence, surveillance and reconnaissance data from swarms of uncrewed air vehicles, says Christian Gutierrez, Shield AI’s vice president of engineering for Hivemind, the company’s autonomy software. “With human oversight, these systems will enable autonomous platforms to share information and coordinate actions as a unit, even in communications- and GPS-denied environments,” he says.
“Prototypes of advanced AI-driven tools are anticipated to emerge, enabling commanders to shape electromagnetic spectrum operations,” L3Harris Technologies Chief Technology Officer Andrew Puryear says, pointing to optimizing communications, jamming or data collection in contested radio frequency areas.
What forms show great promise but are still at least 10 years away?
A decade from now, AI could take on an even greater role, including decision-making responsibilities previously reserved for humans. “Strategic autonomy, where systems make high-level decisions and adapt objectives in real time, holds great promise,” Gutierrez says. “This shift will significantly reduce the need for human intervention, allowing operators to manage larger fleets of systems while minimizing risk to personnel.”
AI systems that combine general intelligence with adaptive decision-making have big potential, too, Burns says. “These could enable fully autonomous decision-making in novel or unpredictable situations, such as coordinating multiple unmanned systems in complex, dynamic environments,” he adds.
“The AI required for advanced weapon-target pairing and optimization already exists, enabling systems to optimize execution based on adjustable parameters like probability of hit or kill, fratricide risk, cost, time to replenish and supply chain impacts,” Puryear says. “The real challenge is linking the kill chain to the logistical chain.”
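Abstractly, the pairing problem Puryear describes is a weighted assignment optimization. The sketch below solves a toy version with SciPy's Hungarian-algorithm solver; all scores, weights and parameter values are invented placeholders, not any fielded system's logic.

```python
# Illustrative sketch: weapon-target pairing as an assignment problem.
# Each score blends adjustable parameters; all numbers are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

p_kill = np.array([[0.9, 0.4, 0.2],   # rows: weapons, cols: targets
                   [0.5, 0.8, 0.3],
                   [0.3, 0.5, 0.7]])
cost = np.array([[5.0, 5.0, 5.0],     # munition cost per engagement
                 [2.0, 2.0, 2.0],
                 [1.0, 1.0, 1.0]])
fratricide = np.array([[0.0, 0.1, 0.0],
                       [0.0, 0.0, 0.2],
                       [0.1, 0.0, 0.0]])

# Tunable weights let planners trade effectiveness against cost and risk.
score = 10.0 * p_kill - 0.5 * cost - 20.0 * fratricide
weapons, targets = linear_sum_assignment(score, maximize=True)
for w, t in zip(weapons, targets):
    print(f"weapon {w} -> target {t} (score {score[w, t]:.2f})")
```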
The future aerospace factory likely will deploy advanced robotics that are set up and optimized using AI, Johnson says. “Given the current pace of development, the limiting factor for operational fielding will likely be the ability to synthesize new technology into highly rigorous production and design environments.”
Giving AI greater autonomy also will require infusing it with morals, a task that may be difficult and “will require significant breakthroughs to address trust and accountability issues,” Burns notes.
NASA says the state of AI in a decade is anyone’s guess.
“AI research and deployment is moving so quickly that NASA isn’t going to venture predictions about AI use in 10 years,” Salvagnini says. “That’s too far out to make reliable well-informed predictions in a rapidly changing field.”
What is the biggest misconception about AI?
AI is not likely to become “sentient,” Lockheed’s Baylor says. “AI algorithms allow for faster, more resilient decisions. These can be layered to imitate sentience, but not create it.”
Another misconception is that AI will take all human jobs, multiple aerospace technologists say. “There may be some direct displacement in places like customer call centers, but the real risk is being replaced by someone who uses AI, not AI itself,” Baylor says.
AI is not a single general-purpose tool, and its uses vary greatly, Northrop Grumman Chief AI Architect Ebenezer Dadson notes. “AI cannot simply be sprinkled on top as a garnish,” he says. “AI, when deployed responsibly, presents as a system, subsystem or component. The operationalization of AI is both art and science, and requires significant expertise to be done effectively and responsibly.”
DARPA considers it a misconception that AI would improve every system. “In reality, AI should only be used when it’s the only suitable solution for a specific problem,” says Lt. Col. Ryan Hefron, DARPA program manager for Air Combat Evolution and Artificial Intelligence Reinforcements. “If a simpler, more efficient approach works, use it.”
It is important to establish appropriate levels of trust with AI systems, adds Davison of The Aerospace Corp. “AI systems are inherently imperfect and often nondeterministic, so they will make errors,” she says.
Indeed, there are other areas where AI is lacking. “While AI excels in predictable tasks and structured environments, it requires robust algorithms, extensive training data and adaptive methodologies to perform effectively in uncertain or rapidly changing conditions,” says Gutierrez of Shield AI.
The U.S. Air Force does not see AI replacing human decision-making in high-stakes situations. “The primary goal of AI is to augment human decision-making, not automate it,” the service says.
It is also problematic that AI has become a catch-all buzzword, says Rose of Reliable Robotics. “To advance the discussion on AI, industry should work through standards-developing organizations and trade associations to develop precise definitions for the types of AI being considered for aviation.”
Finally, the idea that AI will achieve superintelligence and threaten an apocalypse is too speculative, says Puryear of L3Harris.
“While large language models (LLMs) have demonstrated emerging capabilities, they are not steps toward [artificial general intelligence],” he says. “LLMs excel at tasks like predicting the next word but lack critical features necessary for [artificial general intelligence] such as reasoning, hierarchical planning and a deep understanding of the physical world.”
What areas will be difficult for AI?
AI systems are only as strong as the data on which they are trained, and high-quality data can be hard to come by. For example, absent war, there is a lack of data on real-life military situations to train AI, aerospace technologists say.
“Another challenge is understanding the nuances of human-to-human communication, like sarcasm, silence and words like ‘fine,’ where tone is as important as context,” Baylor notes.
AI also lacks deeply human skills such as creative problem-solving in novel situations, Davison says. “Tasks requiring deep subject matter expertise, intuition or emotionally intelligent decision-making will be particularly difficult for AI,” she explains.
As with technologies before it, adversaries will develop techniques to neutralize or impede AI-powered equipment. “As we increase utilization of AI technology in the battlefield, we will see advancements in evading those technologies,” Gutierrez says. “Our challenge will be to overcome the current pace of fielding upgrades to the warfighter.”
With all the ambiguity and complexity that come with operational use of AI, ethical boundaries in particular will be hard to establish and then integrate within the technology. “Frameworks like the [Defense Department’s] Principles for Ethical Use of AI are helpful,” Baylor says.
The U.S. Air Force notes that all those challenges come together on the battlefield. “Areas that may prove difficult for AI include human battlefield decisions, ethical decision-making and novel situations, among others,” the service says.
Humans are likely to keep an edge on the battlefield, as they “excel at reasoning and planning because of their deep understanding of the physical world and its constraints,” Puryear says. “While some AI systems have incorporated reasoning capabilities, they struggle with contextual reasoning, particularly in novel or dynamic situations.”
Lastly, NASA points out that AI lacks the capacity for self-reflection. “Current AI models can have trouble determining their own biases or producing consistent answers and explaining how they sourced and derived their answers,” Salvagnini says. “That’s where it’s critical to have humans serving as adjudicators and fact-checkers for the work these systems produce.”