NASA researchers, together with the U.S. Air Force Research Laboratory (AFRL), are planning demonstrations of an autonomous unmanned aircraft system (UAS) capable of planning, launching, navigating and refueling itself.

Called Traveler, the project aims to demonstrate trustworthy autonomy in an initial flight outside restricted airspace later this year. If successful, an even more ambitious test in 2017 would fly a portion of an autonomous mission without a safety pilot. The FAA supports the plan and intends to use data collected during the program to help formulate future standards for UAS operations.

The Traveler vision is a vehicle that would launch independently in response to a medical emergency call, going, for example, to the aid of a victim trapped in an inaccessible wilderness location. On receiving a 911 call, the vehicle itself would plan the route, file a flight plan, self-launch once medical supplies were loaded, safely navigate to the victim, land and deliver the supplies. En route, the vehicle would also arrange a refueling stop if necessary. On landing, it would set up communications between the victim and medical personnel.
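The sequence described above can be thought of as a strict progression of mission phases. The following is a minimal, purely illustrative sketch of that progression as a state machine; none of these names come from NASA's software.

```python
# Hypothetical sketch of the Traveler mission sequence: plan route,
# file flight plan, load supplies and self-launch, navigate, land and
# deliver, then establish communications. Names are illustrative only.
from enum import Enum, auto

class MissionState(Enum):
    IDLE = auto()
    ROUTE_PLANNED = auto()
    FLIGHT_PLAN_FILED = auto()
    SUPPLIES_LOADED = auto()
    EN_ROUTE = auto()
    LANDED = auto()
    COMMS_ESTABLISHED = auto()

# Each phase may only advance to the next; there are no shortcuts,
# mirroring the article's strictly ordered mission description.
TRANSITIONS = {
    MissionState.IDLE: MissionState.ROUTE_PLANNED,
    MissionState.ROUTE_PLANNED: MissionState.FLIGHT_PLAN_FILED,
    MissionState.FLIGHT_PLAN_FILED: MissionState.SUPPLIES_LOADED,
    MissionState.SUPPLIES_LOADED: MissionState.EN_ROUTE,
    MissionState.EN_ROUTE: MissionState.LANDED,
    MissionState.LANDED: MissionState.COMMS_ESTABLISHED,
}

def advance(state: MissionState) -> MissionState:
    """Move to the next mission phase."""
    return TRANSITIONS[state]
```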

The demonstrations will be conducted using a modified commercial BirdsEyeView Aerobotics FireFLY6 vertical-takeoff-and-landing (VTOL) UAV. Dubbed “Elissa,” the flying wing aircraft has a wingspan of 60 in., weighs up to 9 lb. and is configured with three sets of pivoting rotors. The building blocks of the vehicle’s autonomous capability are based on features developed for the automatic ground collision avoidance system (Auto-GCAS) and the later automatic air collision avoidance system (Auto-ACAS) created by NASA, AFRL and Lockheed Martin. It also builds on an improved collision avoidance system tested on small UAS and a Cirrus SR22 general aviation aircraft.

Initial development, testing and evaluation of the software and processing system are being undertaken using a smaller quadcopter modified with an expandable variable-autonomy architecture (EVAA) processor. The heart of the system is an Odroid-XU3 processor, like those used in Samsung smartphones, coupled to the flight controls. “This is our piece of hardware which runs all of the logic; it runs everything,” says Mark Skoog, principal investigator of automatic systems at NASA Armstrong Flight Research Center. “Auto-GCAS is in there along with a dynamic inflight route planner and a flight executive that manages the system. Auto-GCAS is also now running an obstacle database as well. We just got to flight test this to understand how it was working. Now this is where our real effort is going to be focused,” he adds.

This story is a selection from the March 14, 2016 issue of Aviation Week & Space Technology.

EVAA’s software is modular, with functionally partitioned modules each limited to a single safety function. The system also provides a rapid assessment of vehicle situational hazards such as weather, other aircraft, geofences, terrain and obstacles. “EVAA is about all the safety elements, and the ‘moral compass’ elements, Auto-GCAS, air collision avoidance, a forced landing system and geofences. It also has health monitoring and protection from loss-of-control, all managed by a flight executive,” says Skoog.
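The partitioned structure Skoog describes can be sketched as independent monitors, each watching for one hazard, with a flight executive arbitrating among them. This is a hedged illustration of that pattern, not NASA's code; the module names, priorities and thresholds are invented.

```python
# Illustrative sketch of EVAA's described structure: functionally
# partitioned modules, each limited to one safety function, managed by
# a flight executive that acts on the highest-priority hazard.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class SafetyVerdict:
    module: str
    hazard: bool
    action: Optional[str] = None  # e.g. "pull up", "turn back"
    priority: int = 0             # higher priority wins arbitration

def geofence_monitor(state: dict) -> SafetyVerdict:
    outside = state["distance_to_fence_m"] <= 0
    return SafetyVerdict("geofence", outside, "turn back", priority=2)

def terrain_monitor(state: dict) -> SafetyVerdict:
    too_low = state["agl_m"] < state["min_agl_m"]
    return SafetyVerdict("auto_gcas", too_low, "pull up", priority=3)

def health_monitor(state: dict) -> SafetyVerdict:
    failing = state["battery_pct"] < 15
    return SafetyVerdict("health", failing, "forced landing", priority=1)

MODULES: List[Callable[[dict], SafetyVerdict]] = [
    geofence_monitor, terrain_monitor, health_monitor,
]

def flight_executive(state: dict) -> Optional[SafetyVerdict]:
    """Poll every partitioned module; act on the worst active hazard."""
    active = [v for v in (m(state) for m in MODULES) if v.hazard]
    return max(active, key=lambda v: v.priority) if active else None
```

Keeping each module to a single safety function, as the article notes, simplifies certifying each one in isolation; only the executive needs to reason about conflicts.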

Following receipt of a mission request (or distress call in the example of a medical emergency), the UAV will access Google Maps via the Internet. “It will get routing options and put them through EVAA to evaluate if any are appropriate. It will then go through safety systems and assess the risks, the level of those risks and whether they are within acceptable tolerances. If they are, then it will build a flight plan and fill out an electronic flight request,” says Skoog.
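Skoog's description amounts to a filter-then-select loop: score each candidate route against the safety systems, discard any whose risk exceeds tolerance, and only then build a flight plan. The sketch below assumes a simple additive risk model and an invented tolerance value purely for illustration.

```python
# Hedged sketch of the planning loop described: evaluate each routing
# option through the safety systems, keep only routes within risk
# tolerance, and file an electronic flight request for the best one.
# The risk models and threshold here are placeholders, not NASA's.
from typing import Callable, List, Optional

def assess_route(route, risk_models: List[Callable], tolerance: float) -> Optional[float]:
    """Return the route's total risk if acceptable, else None."""
    total = sum(model(route) for model in risk_models)
    return total if total <= tolerance else None

def plan_mission(candidate_routes, risk_models, tolerance=0.05):
    scored = []
    for route in candidate_routes:
        risk = assess_route(route, risk_models, tolerance)
        if risk is not None:
            scored.append((risk, route))
    if not scored:
        return None  # no route is safe enough; do not file a flight plan
    _, best = min(scored, key=lambda pair: pair[0])
    return {"route": best, "flight_request": f"e-request for {best}"}
```

Because the same risk models run in flight, a route that passes preflight can be continuously re-checked and replanned as conditions such as weather change, exactly the point Skoog makes below.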

For the demonstration the request will go to NASA flight operations. “They will respond back to Elissa with an approved takeoff time. The vehicle will then text us to ask us to set it outside at an appropriate time,” he adds. “It will take all the safety systems that we have been thinking about running in real time and use that for mission planning as well. So we can evaluate the safety of the mission planning in preflight as well as real time in flight. That way as we get into the real challenges of dynamic flight environments such as weather, etc., we can replan; we know if it passes the appropriate risk elements we plan just as a pilot would. It stays within constraints.”

The vehicle will have a forced landing system that activates in the event of an inflight failure. “It will have a full risk map for where the safest place to land is. It will also say, ‘If I don’t have that option I’m going to crash. In that case where’s the safest place to crash?’” adds Skoog.
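The two-tier logic Skoog describes, land at the safest reachable site or, failing that, crash where harm is minimized, can be sketched as a selection over a precomputed risk map. The site names and risk numbers below are invented for illustration.

```python
# Sketch of the described forced-landing decision: prefer the
# lowest-risk reachable landing site from the risk map; if nothing is
# reachable, fall back to the least-harmful crash location.
from typing import List, Set, Tuple

def select_forced_landing(
    sites: List[Tuple[str, float]],  # (site name, ground risk score)
    reachable: Set[str],             # sites reachable given remaining energy
) -> Tuple[str, str]:
    """Return ("land", site) or, with no reachable option, ("crash", site)."""
    options = [(risk, name) for name, risk in sites if name in reachable]
    if options:
        _, name = min(options)
        return ("land", name)
    # No reachable landing option: minimize harm at the impact point.
    _, name = min((risk, name) for name, risk in sites)
    return ("crash", name)
```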

The FAA is supportive of the pioneering program because it recognizes that NASA’s rigorous experimental approach will generate much-needed hard empirical data. “Some folks argued adamantly that the FAA would never let them do this,” Skoog recalls. “But I was at an FAA headquarters briefing about this and they said, ‘This is what we need. What do you need to do this?’” The FAA is standing up an ASTM (standards) committee “to capture all the lessons learned and best practices to then publish it to the world,” he adds.

NASA and AFRL are building a low-altitude UAS test range at Armstrong “to be able to go to the FAA and make a safety case for this with validated data,” says Skoog. “It is basically an obstacle course and includes the old space shuttle hangar. Nobody cares if we run into it, and it’s got a 140 ft. tower [similar to] a cellphone tower.” The site, which will be “a fake little town,” will also include telephone poles and other obstacles. The area will also be mapped in detail to provide true source data before testing begins.