A version of this article appears in the August 25 issue of Aviation Week & Space Technology.

Billion-dollar, decade-long initiatives in the U.S. and Europe to map and simulate the entire human brain will change information technology fundamentally, and aerospace is unlikely to remain untouched. Advances in neurotechnology are already having an impact, as methods of monitoring the brain are applied to improving the performance of pilots, air traffic controllers and system operators.

“We are already seeing promising results from initial studies,” says Santosh Mathan, principal scientist at Honeywell Labs in Seattle. “A lot of our work focuses on neural sensing—sensing brain activity—with the aim of improving human performance.

“We are in this line of research because our technology is used in challenging task contexts—systems that support soldiers, or pilots in advanced flight decks,” he says. “Computers are being adopted in unconventional settings, but humans always remain a crucial component, and there are many vulnerabilities of humans that can cause the whole system to fail.”

Areas of concern include information overload. “You can overwhelm a person with processing so much information that they are unable to perform the task,” Mathan says. Another is attention. “Are we creating systems that allow our users to stay engaged and remain a critical part of the system, or are they outside the loop and contributing to the system failing?”

Designing systems without considering human limitations can have several consequences, he says. For operators these include higher training costs and loss of efficiency and safety. For manufacturers they include higher certification and support costs.

Tools now used to make sure system designs have a low impact on users tend to involve behavioral observation, Mathan says—putting people in a realistic task context, observing their performance and making an inference about how effective the system is. This is time-consuming, requires domain experts and can be costly.

“We use subjective ratings a lot. Pilots use the system and provide a questionnaire response, but there are all kinds of biases related to retrospection, sensitivities about what you disclose, and these subjective issues get in the way,” he says. “So we are interested in tools that are objective, automated, fine-grained and can give us insight into the cognitive state of the user as they interact with the systems we design.”

Research shows brain activity can be a source of this information, Mathan says. Examples include functional magnetic resonance imaging of the brain of an individual performing low- and high-difficulty tasks. Many more regions of the brain are active during a difficult task. “When performing a task that is familiar and well-practiced, the regions active are just those necessary to perform the motor aspects of the job. It’s all automated. But the moment it is unfamiliar or more difficult, there is a lot more reasoning happening,” he says.

But the clinical imaging equipment used for this research is impractical for system development, so work has centered on obtaining brain-activity information from sensors better suited to everyday settings. “Our efforts have focused on using EEG [electroencephalography] technology as the basis for making inferences about cognitive state,” Mathan says.

As currents flow through the billions of neurons in the brain, they set up electrical fields, and the associated voltages can be detected at the surface of the scalp. “You can sense those minor voltage fluctuations and make some inferences about what’s going on inside the brain,” he says.

Ten years ago, a lab system resembled a swim cap with many electrodes and wires, making it difficult for the test subject to move. “We are beginning to see and use systems that are much more practical,” Mathan says. A wireless EEG system from Advanced Brain Monitoring (ABM), for example, has the circuitry integrated into thin plastic strips and fits under a helmet. 

As the hardware evolves, Honeywell is focusing on developing algorithms and software that can take data from these systems and make inferences about the subject’s cognitive state. Raw EEG data is contaminated with artifacts such as eye blinks and must be cleaned up, reduced to the subset relevant to workload, cognitive effort or attention, and classified, for example, as high or low alertness.
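
As a rough illustration of that kind of pipeline (not Honeywell’s actual algorithms), the Python sketch below cleans a raw EEG epoch, reduces it to a theta/alpha band-power ratio that tends to track mental effort, and labels the result as high or low workload. The sampling rate, blink threshold and decision cutoff are assumed values, and a fielded system would use a trained classifier rather than a fixed threshold.

```python
# Illustrative sketch, not Honeywell's algorithm: clean a raw EEG epoch,
# reduce it to a band-power feature and classify workload as high or low.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate, Hz

def clean_epoch(epoch, blink_threshold_uv=100.0):
    """Crude artifact handling: drop channels whose peak amplitude
    suggests eye-blink or motion contamination (epoch: channels x samples)."""
    keep = np.abs(epoch).max(axis=1) < blink_threshold_uv
    return epoch[keep]

def band_power(epoch, band):
    """Mean power in a frequency band (Hz), averaged over channels."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    lo, hi = band
    return psd[:, (freqs >= lo) & (freqs < hi)].mean()

def classify_workload(epoch, threshold=1.5):
    """Theta (4-8 Hz) tends to rise and alpha (8-12 Hz) to fall with effort,
    so use their ratio as a single feature; the cutoff here is arbitrary."""
    epoch = clean_epoch(epoch)
    ratio = band_power(epoch, (4, 8)) / band_power(epoch, (8, 12))
    return "high" if ratio > threshold else "low"
```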

The goal is to build a model of brain activity that outputs a single-value estimate for workload, attention or other state of interest. “Then we can use that—either in design time, to figure out what kind of impact our system is having on the user, or in an operational setting in real time, to detect states in which a person is vulnerable, and put in automated assistance,” he says.

Honeywell’s research began under Darpa’s Augmented Cognition program, which focused on developing sensing technologies to determine when soldiers or pilots were so occupied that adding tasks would overwhelm them. The goal was to develop a way to determine when people are interruptible, Mathan says.

Honeywell demonstrated its system could classify 8 out of 10 EEG samples accurately. “If we can collect data over 5-10 seconds, we can get some very reliable estimates about a person’s cognitive state,” he says. But the states examined were extremes: soldiers either overwhelmed or waiting for action. “Those two states are very distinct from each other. Our interest since then has been to see if we can make more subtle distinctions, the kind of variation in workload or attention we might see in a cockpit.”

Lab work has included tests involving corporate pilots flying an aircraft in either low-workload cruise or high-workload approach conditions. Their subjective ratings of difficulty were gathered from questionnaires and, typical of such human-factors evaluations, “the data were quite noisy; there was a lot of variability among the pilots,” he says. “[But with] EEG we get a measure that is both specific as well as sensitive to the differences in cognitive effort . . . and is easy and cheap to collect.”

Another example was a simulator study to evaluate the effectiveness of synthetic vision in helping helicopter pilots land in brown-out conditions. This involved three scenarios with different levels of difficulty in terms of obstacles to avoid. There was “huge variability” in pilots’ ratings after flying the three scenarios, while with EEG the overall pattern was similar but the variability was substantially less, and the cognitive state could be measured in real time, Mathan says.

Research into attention included a study with Oxford University involving users detecting targets in a burst of imagery. “It’s a task where targets are rare, and people have to maintain their vigilance,” he says. This study showed an inverse relationship between the subject’s “alpha” EEG signature—the brain’s idle signal—and the subjective rating of how attentive they were. “This could be useful to detect states when people are vulnerable to lapses of attention, and potentially avoiding situations where operators fall asleep or pilots lose attention,” he says.
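
A minimal sketch of how that relationship might be used in practice, assuming a fixed sampling rate and a simple statistical threshold: compute alpha-band power over successive windows and flag those where it runs unusually high, which would mark moments when attention may be drifting.

```python
# Illustrative sketch: track alpha-band (8-12 Hz) power, the brain's "idle"
# signal, over successive windows and flag windows where it is unusually high.
import numpy as np
from scipy.signal import welch

FS = 256          # assumed sampling rate, Hz
WINDOW_SEC = 2    # assumed window length, seconds

def alpha_power(window):
    """Alpha-band power for one EEG window of shape (channels, samples)."""
    freqs, psd = welch(window, fs=FS, nperseg=min(FS, window.shape[-1]))
    return psd[:, (freqs >= 8) & (freqs < 12)].mean()

def flag_possible_lapses(eeg, z_threshold=1.5):
    """Return indices of windows whose alpha power is well above the
    session average, a stand-in indicator for lapses of attention."""
    step = FS * WINDOW_SEC
    powers = np.array([alpha_power(eeg[:, i:i + step])
                       for i in range(0, eeg.shape[1] - step + 1, step)])
    z = (powers - powers.mean()) / powers.std()
    return np.flatnonzero(z > z_threshold)
```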

Another focus of neurotechnology research is new modes of interaction with computers. One example is the Honeywell Image Triage System (HITS), which aims to address the flood of imagery coming from many different sources in the military and medical sectors. “The challenge across all these image-analysis domains is there is a lot of data, but it is difficult to extract information from it because analysts have to be highly trained and are rare,” he says.

Part of the problem is the tools. An analyst goes through imagery methodically and systematically to detect targets. “We use this kind of deliberate reasoning for a lot of things, but we are also able to make some very rapid judgments reliably, as when we are driving and see an obstacle. We are able to detect something and respond to it before our mind has deliberately thought about it,” Mathan says. HITS taps into this ability to make quick perceptual judgments. The system takes a large satellite image, breaks it into chips a few hundred pixels across and presents them to the user in bursts, say 10 chips a second for 5 sec., followed by a break. At the same time, the system is measuring brain activity to detect the event-related potential signal. “This is a signal that tells us [subjects] have seen something of interest in the high-speed presentation,” he says.
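
The chipping and burst presentation can be pictured with a short sketch like the one below. The chip size, presentation rate and burst length follow the figures quoted above, but the non-overlapping tiling and data structures are assumptions for illustration only.

```python
# Illustrative sketch of the triage presentation: tile a large image into chips
# and group them into bursts of 10 chips per second for 5 seconds.
import numpy as np

CHIP = 256        # assumed chip edge in pixels ("a few hundred pixels across")
RATE = 10         # chips per second
BURST_SEC = 5     # seconds per burst before a break

def make_chips(image):
    """Split a 2-D image array into non-overlapping CHIP x CHIP tiles,
    keeping each tile's grid position for later mapping back to the image."""
    rows, cols = image.shape[0] // CHIP, image.shape[1] // CHIP
    return [((r, c), image[r * CHIP:(r + 1) * CHIP, c * CHIP:(c + 1) * CHIP])
            for r in range(rows) for c in range(cols)]

def schedule_bursts(chips):
    """Group chips into bursts of RATE * BURST_SEC for rapid presentation."""
    per_burst = RATE * BURST_SEC
    return [chips[i:i + per_burst] for i in range(0, len(chips), per_burst)]
```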

At the end of the scan, brain activity combined with overt physical responses is used to generate probability maps indicating where likely targets are. “It is like a triage process. You do this high-speed scan, find regions of interest, then the analyst can spend most of their time doing the deliberate reasoning on these hot spots where targets are most likely,” he says.
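
A toy version of that fusion step is sketched below, with made-up weights and a per-chip score standing in for the real event-related-potential classifier output.

```python
# Illustrative sketch, not Honeywell's method: fuse an ERP-derived score and
# any overt button press for each chip into a coarse probability map.
import numpy as np

def probability_map(grid_shape, chip_results, w_erp=0.7, w_button=0.3):
    """chip_results: iterable of ((row, col), erp_score in [0, 1], pressed)."""
    prob = np.zeros(grid_shape)
    for (r, c), erp_score, pressed in chip_results:
        prob[r, c] = w_erp * erp_score + w_button * (1.0 if pressed else 0.0)
    return prob  # high values mark the hot spots worth deliberate review
```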

Compared with computer-vision techniques, Mathan says, humans can bring in prior knowledge to interpret whether something is a target. Computer vision is used to deal with the fact that, when presenting images at high rates, “you have to stare at a spot, and if the target is there you see it, if it’s not you don’t,” he says. So Honeywell created a hybrid system. Computer vision does a quick sweep of the image to identify what region of a chip most resembles a target and makes small adjustments to center each chip on the most likely object. “When you use the super-sensitive computer-vision algorithm to make slightly more informed decisions about how to break the image up, it works really well,” he says.
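
The re-centering step might look something like the sketch below, where a simple local-contrast score stands in for the real detector and the search radius and step size are assumed values.

```python
# Illustrative sketch of the hybrid step: before presentation, slide each chip
# window slightly and re-center it where a target-likeness score peaks.
import numpy as np

CHIP = 256       # chip edge in pixels
MAX_SHIFT = 32   # assumed search radius in pixels
STEP = 16        # assumed search step

def target_likeness(patch):
    """Placeholder score (local contrast) standing in for a trained detector."""
    return patch.std()

def recenter_chip(image, top, left):
    """Return the (top, left) of the best-scoring shifted window near (top, left)."""
    best, best_score = (top, left), -np.inf
    for dr in range(-MAX_SHIFT, MAX_SHIFT + 1, STEP):
        for dc in range(-MAX_SHIFT, MAX_SHIFT + 1, STEP):
            r = min(max(top + dr, 0), image.shape[0] - CHIP)
            c = min(max(left + dc, 0), image.shape[1] - CHIP)
            score = target_likeness(image[r:r + CHIP, c:c + CHIP])
            if score > best_score:
                best, best_score = (r, c), score
    return best
```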

Honeywell has also applied neurotechnology to hands-free control. Speech recognition does not work as well in environments with high ambient noise, like cockpits, or in jobs that involve a lot of talking, like air traffic control, Mathan says. Eye trackers are challenged in cockpits, where lighting conditions change. So researchers have looked for ways to exploit brain signals as a means to control devices.

One is to use imagined motor movements. Thinking carefully about moving an arm, without moving it, causes the parts of the brain controlling that limb to become active, and these signals can be used to execute a left/right or up/down command. But it takes a lot of training to perform correctly, he says. Honeywell has been looking at another way, using a signal called the “steady-state visual evoked potential” that is induced by an external stimulus.

This involves displaying visual commands in the form of patterns that flash at slightly different frequencies. When a person looks at one of the commands, it evokes brain activity. “If it is flashing at 10 Hz, you will see a 10 Hz response in the visual areas of their brain. By looking at the response, you can figure out which of the many commands [subjects] are looking at, and you can take this and perform an action. All the pilot has to do is glance at a command and it gets actuated,” he says. “We see this as being relevant in cockpit interaction tasks that are not safety-critical, where we can tolerate some noise and it does not have to be very fast—flipping a page, switching a frequency or panning a map.”
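
A simplified sketch of how the attended command could be identified: compare EEG power at each icon’s flicker frequency over the visual areas and pick the strongest, with a margin check so that weak or ambiguous responses are ignored. The command set, frequencies and margin are assumptions for illustration.

```python
# Illustrative sketch: pick the command whose flicker frequency dominates the
# EEG recorded over the visual cortex, or return None if there is no clear winner.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate, Hz
COMMANDS = {"left_turn": 10.0, "right_turn": 12.0, "wings_level": 15.0}  # Hz

def power_at(eeg, freq, half_width=0.5):
    """Mean power in a narrow band around freq (eeg: channels x samples)."""
    freqs, psd = welch(eeg, fs=FS, nperseg=min(4 * FS, eeg.shape[-1]))
    band = (freqs >= freq - half_width) & (freqs <= freq + half_width)
    return psd[:, band].mean()

def detect_command(eeg, margin=1.2):
    """Return the dominant command, or None if the response is too ambiguous."""
    scores = {cmd: power_at(eeg, f) for cmd, f in COMMANDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    return best if scores[best] > margin * scores[runner_up] else None
```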

Honeywell has tested the technique in its Boeing 737 simulator. Commands such as “left turn” are displayed as icons flickering at different frequencies. When the pilot looks at the icon, a green outline appears to show it is active, a command goes to the autopilot and the aircraft begins to turn left. To end the turn, the pilot looks at the wings-level command. In the simulator, this allowed the pilot to fly left/right and up/down. “In single-pilot situations when your hands are busy, and you don’t want to have to reach out or look up something, a simple command panel with flashing icons is an alternate means,” he says.

Under the European Union-funded Brainflight program, researchers led by the Technical University of Munich (TUM) are studying the use of different brain-computer interfaces (BCI) to control aircraft. In flight-simulator tests the team has shown brain-controlled flight is feasible, with seven subjects having varying levels of cockpit experience—down to none whatsoever—navigating and landing with surprising accuracy.

The goal is to develop a system enabling pilots to learn to fly an aircraft using brain activity, then automate it in a way that releases the pilot’s higher cognitive functions for other tasks. “This would reduce workload and increase safety. In addition, pilots would have more freedom to manage other manual tasks in the cockpit,” says Tim Fricke, who heads the project at TUM.

In the Brainflight approach, pilots learn to fly via operant conditioning, receiving visual and tactile feedback on aircraft behavior as they work to control its flight via brain activity. The project is evaluating both active and reactive brain-computer interfaces using an EEG sensor. An active BCI uses consciously controlled brain activity, which allows the pilot to choose when to send commands, but is less reliable than a reactive BCI, which uses EEG signals evoked by external stimulation.

The EEG signals are triggered by tactile stimulation. In one approach, tactors are placed on specific parts of the body and vibrated simultaneously at different frequencies, which can be observed in the areas of the sensorimotor cortex associated with each body part. Signal intensity varies with the attention focused on each tactor. In another approach, multiple tactors are activated one after the other in random order, with the tactor the pilot attends to most evoking the largest signal.
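
The second, sequential approach can be sketched as a simple epoch-averaging routine: cut the EEG into short windows time-locked to each tactor pulse, average the windows per tactor and pick the tactor whose averaged response is largest. The window length and peak-amplitude measure are assumptions, not the Brainflight team’s actual processing.

```python
# Illustrative sketch: average EEG epochs time-locked to each tactor's pulses
# and pick the tactor with the largest evoked response (the attended one).
import numpy as np

FS = 256          # assumed sampling rate, Hz
EPOCH_SEC = 0.8   # assumed window after each tactor pulse, seconds

def attended_tactor(eeg, events):
    """eeg: (channels, samples); events: list of (sample_index, tactor_id)."""
    length = int(FS * EPOCH_SEC)
    epochs = {}
    for start, tactor in events:
        if start + length <= eeg.shape[1]:
            epochs.setdefault(tactor, []).append(eeg[:, start:start + length])
    # Average the epochs per tactor, then score each by peak absolute amplitude.
    scores = {t: np.abs(np.mean(eps, axis=0)).max() for t, eps in epochs.items()}
    return max(scores, key=scores.get)
```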

Examples of tactile feedback include displaying an artificial horizon on the wearer’s body, providing a warning and direction signal when the aircraft deviates from its planned flight path, indicating the direction of a waypoint in three dimensions, and vibrating all tactors to warn of a critical situation.

Researchers at Tufts University, meanwhile, are measuring mental workload with a headband that senses blood flow and oxygenation in the brain by shining light through the scalp and measuring what is reflected back. This can tell whether a computer user is overstressed or relaxed and ready to take on more tasks, and has been used in air-traffic simulations to detect whether a controller is overworked and scale back the workload. Managing the number of unmanned aircraft one operator can control is another application.

The U.S. Air Force Research Laboratory has completed proof-of-concept testing of non-invasive brain stimulation to help imagery analysts, cybersecurity specialists and UAV operators fight fatigue. The research shows that mild transcranial direct-current stimulation can extend alertness and accelerate learning.

ABM says its wireless EEG sensor has been used in studies to compare the effectiveness of live flight and simulator training. “Pilots say they need live flight. The Air Force says why, where is the return on investment?” the company says. EEG and cardiac sensors are used to develop a workload metric that allows researchers to look at what happens in flight and whether it can be replicated in the simulator. “If the pilot is not engaged, the workload is low. No new neural pathways are being built and there is no new learning. If the workload metric is high, you have a good idea they are learning.”
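
One way such a combined metric could be built, purely as an illustration rather than ABM’s actual formula, is to mix a standard EEG engagement ratio with heart-rate elevation over a resting baseline; the band definitions, weights and normalization here are all assumptions.

```python
# Illustrative sketch: blend an EEG engagement ratio with heart-rate elevation
# into a single workload score for comparing flight and simulator sessions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate, Hz

def engagement_ratio(eeg):
    """Beta / (alpha + theta) band-power ratio (eeg: channels x samples)."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS)
    def band(lo, hi):
        return psd[:, (freqs >= lo) & (freqs < hi)].mean()
    return band(13, 30) / (band(8, 12) + band(4, 8))

def workload_score(eeg, heart_rate_bpm, resting_hr_bpm, w_eeg=0.6, w_hr=0.4):
    """Weighted mix of EEG engagement and fractional heart-rate elevation."""
    hr_elevation = max(heart_rate_bpm - resting_hr_bpm, 0.0) / resting_hr_bpm
    return w_eeg * engagement_ratio(eeg) + w_hr * hr_elevation
```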

“Ten years ago a lot of people would have laughed us out of the room,” says Mathan. “You could not conceive of EEG being taken outside a clinical setting. Now you can use it in practically any setting. Ten years from now, for safety-critical applications where we are trying to figure out if a pilot is falling asleep, we will be able to use these kinds of sensors embedded in much more practical form factors. A pilot’s headset could be adapted so that you could unobtrusively figure out whether their attention level is adequate for the phase of flight they are in,” he adds.

—With John Croft in Seattle.