A version of this article appears in the June 16 edition of Aviation Week & Space Technology.

As unmanned systems evolve rapidly from remote piloting through automated flight to autonomous decision-making, civil aviation will not remain untouched. Beyond unmanned aircraft, the technology is expected to find its way into aircraft cockpits and air traffic control centers to increase efficiency and safety.

Ensuring safe and reliable behavior by systems that can adapt to their environment is the central barrier to increasing autonomy in civil aviation, says a report by the U.S. National Research Council (NRC), commissioned by NASA’s Aeronautics Research Mission Directorate. The report identifies key barriers and provides a national research agenda for introducing autonomy into civil aviation.

“[Increasingly autonomous] systems have the potential to improve safety and reliability, reduce costs and enable new missions,” states the report. But deploying such systems “is not without risk,” and failure to implement them “in a careful and deliberate manner” could have the opposite effect, it warns.

The critical challenge is assuring that adaptive and nondeterministic systems, which can modify their behavior in response to the external environment, are safe and reliable. Key barriers to meeting that challenge, the report says, include the lack of accepted design and test practices for decision-making by advanced autonomous systems, and existing validation and verification (V&V) and certification processes that are insufficient to engender trust in those systems.

Nondeterministic systems can make decisions in real time to improve performance, but do not always respond the same way to identical inputs, the report notes. Current certification criteria, processes and V&V methods can cope only with deterministic systems.
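A minimal sketch makes the point, assuming a toy controller in Python (the class names and the gain-update rule are invented for illustration, not drawn from the report): a fixed-gain controller repeats itself on identical inputs, while an adaptive one carries state forward and does not.

```python
# Hypothetical sketch: why identical inputs need not yield identical
# outputs once a system adapts. Names and the update rule are invented
# for illustration; they do not come from the NRC report.

class DeterministicController:
    """Fixed-gain controller: the same input always yields the same output."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain

    def command(self, error: float) -> float:
        return self.gain * error


class AdaptiveController:
    """Controller that retunes its gain from each input it sees."""
    def __init__(self, gain: float = 0.5, learning_rate: float = 0.1):
        self.gain = gain
        self.learning_rate = learning_rate

    def command(self, error: float) -> float:
        output = self.gain * error
        self.gain += self.learning_rate * error  # adapt after acting
        return output


fixed = DeterministicController()
adaptive = AdaptiveController()

# Same input twice: the fixed controller repeats itself...
print(fixed.command(1.0), fixed.command(1.0))        # 0.5 0.5
# ...while the adaptive controller's internal state has moved on.
print(adaptive.command(1.0), adaptive.command(1.0))  # 0.5 0.6
```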

“Autonomy is growing with computing power, and bringing with it a host of new issues,” says Mike Francis, chief of advanced programs at United Technologies Research Center. One is the impact on air traffic management, today a physics-based system, when machines can make their own decisions, he says.

Today certifying software is based on “testing inputs and outputs with a pass/fail mentality,” he says. “We drill through every line of code to ensure it performs as promised. A deterministic outcome is assumed. But a different kind of software is coming that learns from the past and changes its behavior, . . . for physical functions [and] mission-level decisions. Today’s certification approach will not work at all. Certification needs to be more like that for pilots and crew, which is less purely objective.”
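Francis’s contrast can be made concrete with a conventional pass/fail check. The sketch below is hypothetical Python, not anything from the report or the FAA: an assertion on expected outputs is a sound oracle for fixed software, but it expires the moment the software learns from the past.

```python
# Hypothetical sketch of input-output certification testing: assert a
# known output for a known input. Sound for fixed software; invalid
# once the software's behavior drifts with experience.

def fixed_limiter(speed: float) -> float:
    """Deterministic function: clamps speed to a hard limit."""
    return min(speed, 250.0)

def test_fixed_limiter():
    # Pass/fail check: every run of this test gives the same verdict.
    assert fixed_limiter(300.0) == 250.0
    assert fixed_limiter(200.0) == 200.0

class LearnedLimiter:
    """Stand-in for software whose limit shifts as it learns."""
    def __init__(self):
        self.limit = 250.0

    def __call__(self, speed: float) -> float:
        result = min(speed, self.limit)
        self.limit *= 0.99  # behavior changes after every call
        return result

def test_learned_limiter():
    limiter = LearnedLimiter()
    assert limiter(300.0) == 250.0  # passes on the first call...
    assert limiter(300.0) == 250.0  # ...fails here: the limit is now 247.5

test_fixed_limiter()
test_learned_limiter()  # raises AssertionError: the test oracle has expired
```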

The report identifies eight high-level research projects, involving NASA and other government agencies, to address the barriers to increased autonomy. The top four focus on the behavior of adaptive/nondeterministic systems; operation without continuous human oversight; modeling and simulation; and validation, verification and certification.

Research into characterizing and limiting the behavior of advanced systems would include developing performance criteria (such as stability, robustness and resilience) and methods beyond input-output testing; and determining the roles humans play in limiting that behavior.
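One technique often discussed for bounding such behavior, though not prescribed by the report, is a run-time assurance wrapper: a simple, certifiable monitor checks each command from the adaptive system against a fixed safety envelope and substitutes a trusted fallback when the command strays outside it. The sketch below is a hypothetical Python illustration; the pitch limit, names and fallback policy are assumptions.

```python
# Minimal run-time assurance sketch (hypothetical): a certifiable
# monitor bounds what an adaptive system may command, regardless of
# how the system arrived at the command. Limits and names are
# illustrative assumptions, not from the NRC report.

PITCH_LIMIT_DEG = 15.0  # assumed safety envelope for pitch commands

def fallback_command(state: dict) -> float:
    """Trusted deterministic fallback: gentle return toward level flight."""
    return -0.5 * state["pitch_deg"]

def assured_command(adaptive_cmd: float, state: dict) -> float:
    """Pass the adaptive command through only if it stays in the envelope."""
    if abs(adaptive_cmd) <= PITCH_LIMIT_DEG:
        return adaptive_cmd
    # Out of bounds: ignore the adaptive output, use the simple backup.
    return fallback_command(state)

state = {"pitch_deg": 4.0}
print(assured_command(8.0, state))   # in envelope -> 8.0 passes through
print(assured_command(40.0, state))  # out of envelope -> -2.0 fallback
```

The appeal of such an architecture is that only the monitor and fallback, both deterministic, need conventional certification; the adaptive component is contained rather than exhaustively verified.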

Research into human oversight would include looking at requirements for supervision as a function of the missions, capabilities and limitations of autonomous aircraft; and developing systems that respond safely to failures, mitigate high-risk situations, and detect and avoid threats without oversight.

Modeling and simulation research should encompass its use within advanced autonomous systems; for coaching adaptive systems and human operators during training; for assessing safety and cybersecurity; for increasing trust and confidence in systems; and for assisting in accident investigations.

Recognizing that the level of aviation safety within national airspace is built on FAA requirements for formal verification, validation and certification, the NRC recommends research to define requirements for intelligent software and systems; improve test fidelity; define new design requirements; and propose new certification standards.