Aviation has been built around humans since before the origins of powered flight, but unmanned technology is opening new design spaces in unexpected ways. How aircraft are flown and air traffic is managed, now shaped by the strengths and weaknesses of pilots and controllers, could change dramatically in coming decades as autonomy becomes understood, accepted and, eventually, trusted.

“Aviation has been very successful with a human-centric paradigm, the idea that it is humans that save the day,” says Danette Allen, chief technologist for autonomy at NASA Langley Research Center. Even with the Northrop Grumman RQ-4 Global Hawk—arguably the most automated of today’s unmanned aircraft—“the human is still on or in the loop for situational awareness, just in case they have to jump in and solve problems,” she says.

But autonomy means machines making decisions, not humans, and behaving in ways that are not painstakingly pre-planned and pre-programmed. It requires safe and trusted systems that can perceive their environment for situational awareness and assessment, make decisions on uncertain and inaccurate information, act appropriately, learn from experience and adapt their behavior. “In Washington, autonomy has become the ‘A’ word. It has become a negative,” says Rose Mooney, executive director of the Mid-Atlantic Aviation Partnership, one of six civil-UAS test sites established by the FAA.

“There is a paradigm shift from automated to autonomous: automation is relegation; autonomy is delegation,” says Allen. Where automation is machine-based execution that involves deterministic, or pre-determined, behavior, autonomy is machine-based decision-making and involves non-deterministic, stochastic and emergent behavior. “Autonomicity, or self-awareness, is a step beyond. The system can monitor its own state and self-configure, self-optimize, self-protect and self-heal,” she says.

Among programs pushing the frontiers of autonomy, Allen identifies NASA’s Robonaut, the Naval Research Laboratory’s Shipboard Autonomous Firefighting Robot and the Defense Advanced Research Projects Agency’s (Darpa) disaster-response Robotics Challenge, which are developing limbed robots that could become functional crewmembers. The Office of Naval Research’s Autonomous Aerial Cargo/Utility System will turn any helicopter into an unmanned resupply vehicle, and Darpa’s Aircrew Labor In-Cockpit Automation System is a drop-in kit that will learn to fly an aircraft, then take that experience to another platform, she says.

There are many technical and social barriers, objective and subjective, to autonomous systems, but “certifiable trust” has been identified as the biggest challenge. “When we certify avionics, we test every input, every path. When we certify pilots, we decide if they will probably do the right thing, but we do not test every response. It is more about behavior and probability.” For humans, interpersonal trust is based on “information, integrity, intelligence, interaction, intent and intuition,” says Allen, arguing this will be difficult to establish with a machine. “We will need new methods of verification and validation.”

A recent NASA-commissioned National Research Council report on autonomy research for civil aviation highlighted a cross-cutting challenge to increasing autonomy in aircraft: how to ensure that adaptive systems enhance safety and efficiency. “How do we achieve trust in non-deterministic systems?” asks Yuri Gawdiak, of NASA’s aeronautics strategy, architecture and analysis office. To do so, he says, “Humans are tested every step of the way.”

“Autonomy is growing with computing power and bringing a whole host of new issues,” says Mike Francis, chief of advanced programs at United Technologies Research Center. As machines begin to make decisions, it exposes the inadequacy of the current regulatory approach. “Certification has its roots about 110 years ago. It is based in physics and derives trust from science. It involves the testing of inputs and outputs and is a pass/fail mentality,” he says.

“Certification of pilots and crews is less purely objective. It involves situational assessment, looking for the rationale for decisions and deriving a set of acceptable outcomes,” Francis continues. “[With software] we drill through every string of code. Every path is tested to ensure it performs as promised. A deterministic outcome is assumed. But a different kind of software is coming, with emergent behavior, that will be able to learn from the past and change,” he says. “Our certification approach will not work at all. We will need to define ‘intelligent software’ and certify it more like pilots and crews are tested.”

Autonomy is most often talked about in the context of unmanned aircraft, but it is likely to find much wider use, from enabling a safe landing after the incapacitation of a single pilot, through reduced-crew operation of commercial and military transports, to on-demand air transport and even deep-space close-proximity operations at distances where communications delays prevent teleoperation.

Perhaps the most controversial and challenging use of autonomy will be in the cockpit, to allow single-pilot operations (SPO) in commercial air transport. There are several reasons the idea is being looked at; crew cost is a key factor. “Fuel is 25.4% of U.S. airline costs, labor is 27%, we need to tackle both,” says Parimal Kopardekar, manager of NASA’s NextGen concepts and technology development project.

Crew costs are also relatively higher for smaller aircraft, and are a limiting factor for regional jets and on-demand air taxis, says Ken Goodrich, a NASA Langley research engineer. Enabling single-pilot operations could return or expand commercial service to small communities and thin markets, he says.

The airline industry last went through a reduction in flight-deck manning in the 1980s, when the two-crew cockpit was introduced with the Boeing 757 and 767. “But two to one is not the same as three to two,” says Goodrich. “The incapacitation of one pilot will result in an unmanned aircraft, which the public is not ready to accept.”

Medical reliability is set by the 1% rule, which establishes the risk threshold for pilot incapacitation at 1% per year. In reliability terms this is a failure rate of about one per million flight hours, or 10⁻⁶, and not adequate for a function critical to aircraft safety, which regulations say must meet a 10⁻⁹ threshold. U.S. airlines experience around 10 incapacitations a year over the course of 50 million flight hours, a rate of 2 × 10⁻⁷ per flight hour, but that is still more than two orders of magnitude short of the threshold, says Goodrich.
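As a rough check on those figures, assuming the 1% annual risk is spread over the 8,760 hr in a calendar year (an assumption not stated above), the arithmetic can be sketched in a few lines:

```python
# Rough arithmetic behind the incapacitation figures (illustrative only).
HOURS_PER_YEAR = 365 * 24  # 8,760 calendar hours, the assumed basis of the 1% rule

one_percent_rule = 0.01 / HOURS_PER_YEAR  # ~1.1e-6 incapacitations per hour
observed = 10 / 50_000_000                # ~2e-7 per flight hour (10 events / 50M hr)
required = 1e-9                           # threshold for safety-critical functions

print(f"1% rule:   {one_percent_rule:.1e} per hour")
print(f"observed:  {observed:.1e} per hour")
print(f"shortfall: {observed / required:.0f}x above the 10^-9 threshold")
```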

Another challenge is that regulations require two pilots. “A single pilot would require a change of regulation in Part 121, and that is no small task,” says Goodrich. “Two was demonstrated statistically to be as safe as three. Two to one is not an apples-to-apples comparison.” Accident rates with Cessna Citation business jets flown single-pilot, for example, are 3.4 times higher than with two crew, he says. “We need a 70% reduction in accident rate to achieve equivalent safety.”
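The 70% figure follows directly from the 3.4-times ratio; a one-line sketch makes the connection explicit:

```python
# A 3.4x single-pilot accident rate must fall by 1 - 1/3.4 to match two-crew safety.
ratio = 3.4
required_reduction = 1 - 1 / ratio
print(f"{required_reduction:.0%}")  # ~71%, the roughly 70% Goodrich cites
```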

Research into SPO for Part 121 commercial operations at NASA Ames and Langley is focused on retaining the pilot in command as the final authority, supported by human-centered automation. The SPO concept combines onboard automation with off-board collaboration: a single pilot flying the aircraft with the help of automation while having access to a ground pilot in emergencies.

NASA is developing an integrated single-crew flight-deck design that includes crew-state management, to monitor physiology and behavior and provide feedback to the pilot, the automation and the ground station; crew tasking, alerting and planning, to provide a challenge-and-response interaction between human and automation to keep the pilot involved; and a simplified flight control system.

One of the key technologies being pursued is the haptic flight control system (HFCS), in which the pilot commands the aircraft solely through the stick and throttle. This is a “point-and-shoot” interface that provides simplified trajectory management while enabling automated response to hazards. The system can act autonomously if required, and perform an emergency landing overseen by a ground pilot.

The analogy is a horse and rider. The aircraft (the horse) has a specific type of intelligence and can do certain core tasks well on its own. “It can autonomously maintain safety, but not necessarily efficiently perform a mission,” says Goodrich. “Its behavior is biased toward safety, and it needs human input to operate efficiently.” Like a rider on a horse, the pilot uses force and touch to communicate with the aircraft.

The HFCS integrates autopilot, autothrottle and flight management systems (FMS) into the primary controls and displays. Stick and throttle are the pilot’s single points of contact with the automation. The default autonomy emphasizes safety and provides integrated envelope and hazard protection. 

All FMS route information is presented on the display. The pilot points the aircraft at a waypoint, selects it, pulls a trigger on the stick and the automation flies the aircraft according to the procedure in the database. To arrive at the next waypoint at a certain time, the pilot points the aircraft and moves the throttle while watching the predicted arrival time, adjusts the speed for the desired time and pulls the trigger on the throttle.
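In code terms, that interaction might look something like the sketch below. This is a hypothetical illustration of the point-and-shoot idea; the names, tolerance and structure are assumptions, not NASA’s implementation:

```python
# Hypothetical sketch of the "point-and-shoot" waypoint interaction (not NASA code).
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    bearing_deg: float  # bearing to the waypoint from the aircraft's position
    procedure: str      # FMS procedure to fly once the waypoint is selected

def pointed_at(heading_deg: float, wp: Waypoint, tolerance_deg: float = 5.0) -> bool:
    """True when the nose is close enough to the waypoint bearing to select it."""
    return abs((heading_deg - wp.bearing_deg + 180) % 360 - 180) < tolerance_deg

def on_stick_trigger(heading_deg: float, route: list[Waypoint]) -> str:
    """Pilot points the nose and pulls the stick trigger; automation flies the leg."""
    for wp in route:
        if pointed_at(heading_deg, wp):
            return f"engaging {wp.procedure} to {wp.name}"  # automation takes over
    return "no waypoint selected"  # nose not aligned with any route fix
```

A throttle-trigger handler for time-of-arrival control would follow the same pattern, confirming the speed the pilot has set against the predicted arrival time.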

Automation will not fly the entire route: The pilot must be in the loop whenever major changes are made to the trajectory. Humans become complacent with normally reliable pre-programmed automation, Goodrich says, so to monitor the aircraft over long periods they need to be engaged at regular intervals, and the simple task of pointing the aircraft at the next goal provides that engagement.

The HFCS has its origins in work by NASA to develop a control system that would make it easier and safer to fly small personal aircraft, and its developers acknowledge the system looks like a technological step back from today’s automated airliner cockpits. But in a simulator study in which 24 pilots compared it with fully manual and fully automated flight, the pilots strongly preferred the HFCS.

Force-based interaction can be tailored to reflect strength of will, allowing the pilot to overpower the automation if necessary. The self-preservation functionality built into the autonomy in part replaces the cross-checking and error detection provided by a second crewmember. The pilot stays in the loop, exercising manual control skills to manage the automation and so prevent complacency. And the core algorithms are relatively deterministic, making them easier to certificate, Goodrich says.
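A minimal sketch of how such force-based authority blending is often handled in shared-control research follows; the breakout force and names here are assumptions for illustration, not the HFCS design itself:

```python
# Hypothetical force blending: the harder the pilot pushes, the more authority shifts.
BREAKOUT_FORCE_N = 20.0  # assumed force at which the pilot fully overpowers automation

def blended_command(pilot_cmd: float, auto_cmd: float, pilot_force_n: float) -> float:
    """Blend pilot and automation commands in proportion to applied stick force."""
    authority = min(abs(pilot_force_n) / BREAKOUT_FORCE_N, 1.0)  # 0 = automation, 1 = pilot
    return authority * pilot_cmd + (1.0 - authority) * auto_cmd
```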

NASA is looking at a technique called run-time assurance as a way of certifying adaptive control software. The concept is used in the research flight control system in NASA’s Boeing F/A-18 test aircraft. “In that case we restrict the flight envelope, so the monitoring involves ensuring that we stay in the approved envelope and reverting to the production [flight control] system in the event of a violation,” says Curtis Hanson, a principal investigator.

“RVAC [reliable, verifiable adaptive control] is an attempt at defining run-time assurance for inner-loop flight controls. The idea is it is easier to monitor control-system gains than control-surface commands, because gains can adapt more slowly and are easier to bound in terms of traditionally defined gain and phase margins. We can monitor the adaptive gains and make sure they behave reasonably,” he says. Another program is looking at run-time assurance for outer-loop (autopilot) flight control.

“By separating the software, the monitoring software and inner-loop controller are safety critical, but can be tested using traditional methods. The adaptive controller is not safety critical, because if it does something stupid it can be overridden—which is good, as we don’t know how to certify it,” Hanson says. 
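The separation Hanson describes follows the classic run-time assurance (or simplex) pattern: a small, certifiable monitor wraps the unverified controller. A rough sketch, with illustrative class names, bounds and latching behavior that are assumptions rather than NASA’s RVAC code:

```python
# Illustrative run-time assurance wrapper: trusted monitoring code checks the
# adaptive controller's gains and reverts to the certified baseline on violation.
class RunTimeAssurance:
    def __init__(self, adaptive, baseline, gain_bounds):
        self.adaptive = adaptive        # experimental controller, not safety-critical
        self.baseline = baseline        # certified production controller
        self.gain_bounds = gain_bounds  # {gain_name: (lo, hi)} from margin analysis
        self.reverted = False

    def gains_ok(self) -> bool:
        """Check each adaptive gain against its pre-approved bounds."""
        return all(lo <= self.adaptive.gains[name] <= hi
                   for name, (lo, hi) in self.gain_bounds.items())

    def command(self, state):
        """Use the adaptive controller only while its gains stay in bounds."""
        if self.reverted or not self.gains_ok():
            self.reverted = True  # latch: stay on the trusted controller
            return self.baseline.command(state)
        return self.adaptive.command(state)
```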

NASA sees single-pilot operation as part of an air traffic management system beyond the NextGen airspace modernization program now under way, one that could be deployed after 2030 to enable “system-wide autonomous optimized airspace.” In addition to SPO, research programs aimed at this timeframe are AutoMax for air traffic management and UAS Traffic Management for low-altitude airspace.

“SASO [safe, autonomous systems operations] describes a future beyond NextGen that is more highly autonomous, to provide greater affordability, flexibility, scalability and resilience,” says John Cavolowsky, NASA Airspace Operations and Safety Program manager. “It is an architecture that can take advantage of beneficial autonomous behavior to enable new aircraft configurations and business models,” he says.

Future airspace operations are expected to be much more diverse, with increased use of unmanned aircraft, launch vehicles, personal aircraft, high-altitude wind turbines and other new types of operation. This increase in density and mixed equipage cannot be accommodated simply by adding more humans because of cost, says Kopardekar. “We need autonomous characteristics for safety, efficiency and scalability, as high complexity leads to less than optimal decision-making, while high workloads limit system capacity and throughput.” But AutoMax is about seeking the highest “justifiable” levels of autonomy and autonomicity, not enabling “automation for automation’s sake,” he emphasizes.

Meanwhile, some experts warn that the government and industry collectively will be unable to develop, test and evaluate autonomous and robotic systems to ensure they operate safely and effectively because they will not be able to attract the right engineers. “There is a big storm rising, and it is being ignored by the Defense Department and FAA. Who is going to do this work?” asks Missy Cummings, a professor of engineering at Duke University and expert on autonomy. “The qualified people in government that can design an effective test-and-evaluation program do not exist. There are not enough people to staff the FAA’s six UAS test sites; not in the military, not in the government,” she says.

“These people are not inside the government or industry. They are out there, but working for Google, Oracle and others—companies with a 40% R&D spend,” Cummings notes, comparing their rate of research and development investment with the aerospace and defense industry’s average of around 5% of revenues. “The government does not understand the difference between autonomous systems and unmanned. They know nothing about test and evaluation for autonomous systems. A deterministic approach will not work,” she says. “Meanwhile China is pouring billions into the development of probabilistic and stochastic software systems.”

“Achieving high levels of trusted autonomy is a multibillion-dollar challenge that will take more than just aviation to achieve,” says Mark Moore, an aerospace engineer at NASA Langley working on enabling on-demand air transport through autonomy and other technologies. “Aerospace will not lead this. It will take 20-30 years and by then there will be millions of driverless cars operating, collecting data the FAA will never get [from aviation].

“Other industries are evolving at an incredible pace and we are not,” Moore says. “We need to find a way to embrace outside industries. If there is ever going to be any chance of certifying trusted autonomy, we are going to have to embrace Google, Facebook, the automotive industry and others.”