From locating balloons to re-assembling documents, from crowd-designed vehicles to disaster-response robots, researchers are using challenges to draw ideas from those who would never normally do business with the Pentagon.

For the Defense Advanced Research Projects Agency (Darpa), it is as much about the how as the what and the why. The agency has used challenges to research topics ranging from social networking for intelligence-gathering to crowd-sourcing for design collaboration. And even when a challenge fails, there can be value.

The 2009 Network Challenge looked at how social media could help solve broad problems—in this case, finding 10 balloons at undisclosed locations across the U.S. A Massachusetts Institute of Technology (MIT) team found them all in 9 hr. thanks to a network of nearly 4,400 volunteers, each promised a share of the $40,000 prize if they helped to locate a balloon.

A similar challenge in 2012, the $40,000 CLIQRQuest, looked at how well social media would work with no advance publicity to help teams form. The contest closed after two weeks with no team having found all seven codes, displayed on posters around the U.S. The best team found three.

Darpa has tried various crowd-sourcing challenges to tap into a pool of ideas wider than any single contractor can muster. These range from successful challenges to reconstruct shredded documents and design a military vehicle, to an unsuccessful fly-off for a small unmanned aircraft.

The agency has used competitions to enlist public ingenuity in support of its programs. These include Dangerous Waters, an anti-submarine warfare (ASW) game in which players developed tactics for shadowing an evasive submarine using a robotic ship, with the best to be used in its ASW Continuous Trail Unmanned Vessel program.

But Darpa still stages “old-fashioned” technology-demonstration challenges. These range from a series of contests, with prizes totaling $5.5 million, for autonomous ground vehicles to its latest $2 million challenge to build robots that can work alongside humans in disaster zones.

Darpa's first Grand Challenge for autonomous vehicles, over 150 mi. of road in the Mojave Desert in 2004, failed to produce a winner. But a year later, five vehicles completed the 131-mi. course. And in 2007's Urban Challenge, six unmanned vehicles successfully navigated city streets, negotiating intersections and avoiding other vehicles. These led directly to Google's driverless car program, headed by the leader of the Stanford University team that built Stanley, winner of the 2005 Grand Challenge.

Darpa's Robotics Challenge (DRC) is more ambitious. “What we've seen in disaster after disaster is there are often clear limitations to what humans can accomplish in the early stages,” says program manager Gill Pratt. “Darpa believes robots can work where and when humans cannot.”

That no two disasters are exactly alike underpins the DRC. “It is adaptability and compatibility with humans we are after, in three ways,” he says: compatibility with environments engineered for humans, even when degraded; ability to use tools designed for humans, from screwdrivers to vehicles; and ability to be supervised by people with little or no robotics training.

“Success in the DRC would mark a leap forward for the field of robotics,” says Pratt. While the challenge is “raising the bar very high,” he believes the foundations for success are in place.

Industry can produce task-specific robots, but what the DRC is trying to create, says Pratt, is the ability to take that technology out of the laboratory into the real world. “[Secondly] we want the capabilities to do individual tasks to be joined together . . . so a robot can deal with a situation that does not unfold exactly as planned.”

Third is to develop control interfaces that allow a robot to become an extension of what the human is trying to do, through supervised autonomy. “What we are trying to do is move the field of supervising robots from tele-operation, where we give step-by-step commands, to task-level autonomy, where we give a command like 'Open the door' or 'Climb the stairs,' and have the robot complete those tasks by itself.”
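To make the distinction concrete, the sketch below is purely illustrative: the class and method names are hypothetical, not drawn from Darpa's or any team's software. It contrasts tele-operation, where a supervisor commands every motion, with task-level autonomy, where a single “open the door” command is decomposed into those motions by the robot itself.

# Illustrative sketch only: hypothetical names, not Darpa software.
class TeleoperatedRobot:
    """Tele-operation: the operator issues every low-level motion."""
    def move_arm(self, x, y, z):
        print(f"arm -> ({x}, {y}, {z})")

    def close_gripper(self):
        print("gripper closed")

    def rotate_wrist(self, degrees):
        print(f"wrist rotated {degrees} deg")

class TaskLevelRobot(TeleoperatedRobot):
    """Task-level autonomy: the operator issues one task; the robot
    plans and executes the individual motions itself."""
    def open_door(self, handle_position):
        x, y, z = handle_position
        self.move_arm(x, y, z)        # reach the handle
        self.close_gripper()          # grasp it
        self.rotate_wrist(30)         # turn the handle
        self.move_arm(x - 0.5, y, z)  # pull the door open

# Tele-operation: a human sends each step, one at a time.
teleop = TeleoperatedRobot()
teleop.move_arm(1.0, 0.2, 1.1)
teleop.close_gripper()
teleop.rotate_wrist(30)

# Supervised, task-level autonomy: one command covers the whole task.
robot = TaskLevelRobot()
robot.open_door(handle_position=(1.0, 0.2, 1.1))

The point of such an interface is that the supervisor needs no robotics training to issue the second kind of command, which is exactly the third requirement Pratt lists.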

With live competitions planned for December this year and again in 2014, the Robotics Challenge allows teams to compete with their own robots, or with software only, running on Darpa-supplied robots. “Building robots that can actually physically complete all of the DRC tasks is very difficult,” says Pratt. “We need expertise from both hardware and software domains, and didn't want to preclude participation by any team based on limited resources or expertise in robotic hardware.”

Why a challenge? “We approached the DRC with the understanding that current robots are limited in their application to defense missions and civilian tasks in similar ways,” he says. Existing robots tend to have specialized and limited functionality, and they are expensive, complicated to operate, and limited in autonomy, mobility, dexterity, strength and endurance.

Overcoming those limitations requires substantial investment. “The DRC provides funding, incentives and a rallying point for the robotics community,” while also looking outside traditional research communities, Pratt says. “If we're successful with the DRC in developing these foundational properties, we will expand the horizon for what is possible with robots in the defense, commercial and civilian sectors.”

Pittsburgh-based Carnegie Mellon University (CMU) has teams on both the hardware and software tracks of the DRC. “We are not in the business of taking on every challenge. They have to match with our expertise,” says Tony Stentz, director of the National Robotics Engineering Center at CMU.

The university competed in both Grand Challenges and won the Urban Challenge, but it had been working on vehicle autonomy since the early 1980s. “We're no stranger to mobile machines able to manipulate things, so the Robotics Challenge is a match, but more of a stretch because we don't have direct experience in humanoid robots,” he says.

For CMU, the Grand and Urban challenges were valuable exercises, both for published papers and intellectual property (IP). “The two major teams, Carnegie Mellon and Stanford, have seen members hired by Google, which has continued the work,” says Stentz.

Darpa's Robotics Challenge “is achievable, but very challenging,” he says. “It's hard enough to build a robot and a car that can drive itself, but really hard to develop a robot that can drive a car.” But the challenge is on the right technology path, he thinks.

“When robotics got started, there was an opinion that in unstructured environments the machines would be tele-operated. Then the pendulum swung the other way, to fully autonomous systems with no human intervention,” Stentz says. “That's a tough problem, when the robot can't turn to a human for help. There is a more recent trend towards mixed systems, with some level of human control and some level of autonomy, and all the research is into what is the right mix.”

Not all Darpa challenges have succeeded. UAVForge, a contest to crowd-design a small perch-and-stare unmanned aircraft, did not produce a winner, but generated useful lessons, the agency says. The challenge involved submitting designs for online voting, with the highest-ranked UAVs going forward to a fly-off. Northwest UAV (NWUAV) was to build a batch of the winning design for use in a military exercise.

“UAVForge demonstrated the willingness of a global community of non-traditional developers to participate in a compelling challenge,” says program manager Jim McCormick. “The ability to build a community, foster collaboration and overcome inhibitions such as IP concerns proved valuable. The ability to recognize and filter sources of bias in crowd voting was surprisingly important and well-received.”

Of the finalists in the May 2012 fly-off, Team Halo from the U.K.'s Middlesex University scored highest, but none could complete the mission. “Elements of UAVForge were always going to be very difficult, if not impossible: for example, a 2-mi. ingress, observation for up to 3 hr., then a 2-mi. egress, all non-line-of-sight,” says Stephen Prior, who led the team.

“One rule was you could not score any points for advanced behaviors if the baseline objectives weren't met. However, these baseline objectives were already difficult to complete,” says Ruud Knoops, with another finalist, TeamAtmos from Delft University of Technology in the Netherlands. “The fact that eight out of the 12 teams weren't even able to reach the observation site says a lot about the overall difficulty.”

“These systems were valued at under $10,000, which is an incredibly small price for a system that could perform this challenge. We found out Darpa had tested a commercial state-of-the-art UAS over the same course and it failed to complete the task,” says Prior.

“The main lesson learned is that even though small UAVs and cameras are readily available off the shelf, it takes more than that to deliver something practical for use in the field,” says McCormick.

Team Halo, from the Autonomous Systems Laboratory that later moved to Southampton University, entered UAVForge after failing to compete in the U.K. Defense Ministry's 2008 Grand Challenge because of control problems. “The motivation was to prove our UAV was world class,” says Prior.

TeamAtmos got started as an aerospace-engineering graduation project to design a system capable of competing in UAVForge. There was no intent to build a UAV, but “when the assignment was finished, we decided that we would take this project to the next level and actually build the UAV we had designed theoretically,” says Knoops.

Some useful information was gleaned from debate on the online portal, and “there was a great deal of camaraderie during the fly-off, with an 'us against them' feel,” Prior says. “We all wanted someone to win the prize and, more importantly, the follow-on manufacturing deal with NWUAV. I think they should have awarded the manufacturing contract and follow-on military exercise even if they chose not to award the prize money.”

Competing was valuable, says Prior. “If UAVForge hadn't existed, we might not have developed the system as far as we have,” he says. “It's a shame we couldn't get to work with NWUAV and Darpa; however, we are U.K.-based and it's maybe more of a shame that we had to go to the U.S. to compete in such an event.”
