The US Army is creating robots that can follow orders
Military robots have always been pretty dumb. The PackBot the US Army uses for inspections and bomb disposal, for example, has practically no onboard intelligence and is piloted by remote control. What the Army has long wanted instead are intelligent robot teammates that can follow orders without constant supervision.
That is now a step closer. The Army’s research lab has developed software that lets robots understand verbal instructions, carry out a task, and report back. The potential rewards are tremendous. A robot that can understand commands and has a degree of machine intelligence would one day be able to go ahead of troops and check for IEDs or ambushes. It could also reduce the number of human soldiers needed on the ground.
“Even self-driving cars don’t have a high enough level of understanding to be able to follow instructions from another person and carry out a complex mission,” says Nicholas Roy of MIT, who was part of the team behind the project. “But our robot can do exactly that.”
Roy has been working on the problem as part of the Robotics Collaborative Technology Alliance, a 10-year project led by the Army Research Laboratory (ARL). The project team included researchers from MIT and Carnegie Mellon working alongside government institutions like NASA’s Jet Propulsion Laboratory and robotics firms such as Boston Dynamics. The program finished last month with a series of events to show off what it had achieved. A number of robots were put through their paces, demonstrating their manipulation skills, mobility over obstacles, and ability to follow verbal instructions.
The idea is that they are able to work with people more effectively—not unlike a military dog. “The dog is a perfect example of what we’re aiming for in terms of teaming with humans,” says project leader Stuart Young. Like a dog, the robot can take verbal instructions and interpret gestures. But it can also be controlled via a tablet and return data in the form of maps and images, so the operator can see, for example, exactly what is behind a building.
The team used a hybrid approach to help robots make sense of the world around them. Deep learning is particularly good at image recognition, so algorithms similar to those Google uses to recognize objects in photos let the robots identify buildings, vegetation, vehicles, and people. Senior ARL roboticist Ethan Stump says that as well as identifying whole objects, a robot running the software can recognize key points like the headlights and wheels of a car, helping it work out the car’s exact position and orientation.
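The article doesn’t detail ARL’s actual pipeline, but going from detected keypoints to a pose is a standard computer-vision step known as the perspective-n-point problem. Here is a minimal sketch using OpenCV’s solvePnP, with hypothetical landmark coordinates standing in for a real car model:

```python
import numpy as np
import cv2  # OpenCV: solvePnP recovers pose from 2D-3D point matches

# Hypothetical 3D landmark positions in the car's own frame (meters):
# x = left, y = forward, z = up. Illustrative values, not ARL's model.
CAR_LANDMARKS = np.array([
    [ 0.70,  2.00, 0.65],   # left headlight
    [-0.70,  2.00, 0.65],   # right headlight
    [ 0.80,  1.30, 0.35],   # front-left wheel
    [-0.80,  1.30, 0.35],   # front-right wheel
    [ 0.80, -1.30, 0.35],   # rear-left wheel
    [-0.80, -1.30, 0.35],   # rear-right wheel
], dtype=np.float64)

def estimate_car_pose(pixel_points, camera_matrix):
    """Given the detected pixel coordinates of the six landmarks
    (same order as CAR_LANDMARKS) and the camera intrinsics, solve
    for the car's orientation and position relative to the camera."""
    ok, rvec, tvec = cv2.solvePnP(
        CAR_LANDMARKS,
        np.asarray(pixel_points, dtype=np.float64),
        camera_matrix,
        None,  # assume lens distortion is already corrected
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return rotation, tvec
```

Given the pixel locations of known landmarks and the camera’s intrinsics, the solver recovers where the car sits and which way it faces, which is exactly the position-and-orientation information Stump describes.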
Once it has used deep learning to identify an object, the robot uses a knowledge base to pull out more detailed information that helps it carry out its orders. For example, when it identifies an object like a car, it consults a list of facts relating to cars: a car is a vehicle, it has wheels and an engine, and so on. These facts need to be hand-coded and are time-consuming to compile, however, and Stump says the team is looking into ways to streamline this. (Others are looking at similar challenges: DARPA’s “Machine Common Sense” (MCS) program is combining deep learning with a knowledge-base-centered approach so a robot can learn and show something like human judgment.)
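To make that concrete, here is a toy sketch of what such a hand-coded knowledge base might look like. The structure and facts are illustrative assumptions, not ARL’s actual representation:

```python
# A toy hand-coded knowledge base in the spirit the article describes:
# each concept names a parent class and lists a few facts.
KNOWLEDGE_BASE = {
    "vehicle": {"is_a": "object", "facts": {"can_move": True}},
    "car":     {"is_a": "vehicle",
                "facts": {"has_wheels": True, "has_engine": True}},
    "truck":   {"is_a": "vehicle",
                "facts": {"has_wheels": True, "has_cargo_bed": True}},
}

def lookup(concept):
    """Collect facts for a concept, inheriting from parent classes,
    so identifying a 'car' also tells the robot it is a vehicle."""
    facts = {}
    while concept in KNOWLEDGE_BASE:
        entry = KNOWLEDGE_BASE[concept]
        facts = {**entry["facts"], **facts}  # child facts win ties
        concept = entry["is_a"]              # walk up the hierarchy
    return facts

print(lookup("car"))
# {'can_move': True, 'has_wheels': True, 'has_engine': True}
```

The inheritance walk is what lets a single detection (“car”) unlock more general knowledge (“a vehicle, so it can move”), and it is also why compiling such facts by hand scales so poorly.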
Young gives the example of the command “Go behind the farthest truck on the left.” As well as recognizing objects and their locations, the robot has to decipher “behind” and “left,” which depend on where the speaker is standing, facing, and pointing. Its hard-coded knowledge of the environment gives it further conceptual clues as to how to carry out its task.
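The article doesn’t give ARL’s grounding algorithm, but the geometry involved can be sketched in a few lines: project each candidate into the speaker’s frame of reference, filter by “left,” rank by distance, and place the goal on the far side of the winner. The function below is a hypothetical illustration of that idea:

```python
import numpy as np

def resolve_goal(speaker_pos, speaker_heading, trucks):
    """Ground 'go behind the farthest truck on the left' in the
    speaker's reference frame (heading in radians, positions in 2D)."""
    forward = np.array([np.cos(speaker_heading), np.sin(speaker_heading)])
    left = np.array([-forward[1], forward[0]])  # 90 degrees counter-clockwise

    # Keep trucks on the speaker's left, then take the farthest one.
    on_left = [t for t in trucks if np.dot(t - speaker_pos, left) > 0]
    target = max(on_left, key=lambda t: np.linalg.norm(t - speaker_pos))

    # 'Behind' = the far side of the truck as seen from the speaker.
    away = (target - speaker_pos) / np.linalg.norm(target - speaker_pos)
    return target + 2.0 * away  # a point roughly 2 m beyond the truck

speaker = np.array([0.0, 0.0])
trucks = [np.array([5.0, 3.0]), np.array([12.0, 6.0]), np.array([8.0, -4.0])]
goal = resolve_goal(speaker, 0.0, trucks)  # heading along +x
```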
The robot can also ask questions to deal with ambiguity. If it is told to “go behind the building,” it might come back with: “You mean the building on the right?”
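A minimal version of that behavior is just reference resolution with a fallback question when more than one object matches. The dialogue policy below is an assumed sketch, not the system’s actual logic:

```python
def resolve_referent(noun, detections):
    """Pick the object a command refers to, or produce a clarifying
    question when the reference is ambiguous."""
    matches = [d for d in detections if d["label"] == noun]
    if len(matches) == 1:
        return "act", matches[0]
    if not matches:
        return "ask", f"I don't see a {noun}."
    # Several candidates: ask which one, describing each by position.
    choices = " or ".join(m["where"] for m in matches)
    return "ask", f"You mean the {noun} on the {choices}?"

detections = [{"label": "building", "where": "left"},
              {"label": "building", "where": "right"}]
action, payload = resolve_referent("building", detections)
# -> ("ask", "You mean the building on the left or right?")
```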
“We have integrated basic forms of all of the pieces needed to enable acting as a teammate,” says Stump. “The robot can make maps, label objects in those maps, interpret and execute simple commands with respect to those objects, and ask for clarification when there is ambiguity in the command.”
At the final event, a four-wheeled Husky robot demonstrated how well the software let robots understand instructions. Two of the three demonstrations went off perfectly; the robot had to be rebooted during the third when its navigation system locked up.
“We did overhear the comment that if the robot hadn’t failed, it would have seemed like the demo was canned, so I think there was an appreciation that we were showing a system actually doing something,” says Stump.
As with military dogs, Young says, trust is the key to getting robots and humans to work together. Soldiers will need to learn the robot’s capabilities and limitations, and at the same time, the machine will learn the unit’s language and procedures.
But two other big challenges remain. First, the robot is currently too slow for practical use. Second, it needs to be far more resilient. All AI systems can go wrong, but military robots have to be reliable in life-and-death situations. These challenges will be tackled in a follow-on ARL program.
The Army’s work could have an impact in the wider world, the team believes. If autonomous robots can cope with complex real-world environments, work alongside humans, and take spoken instruction, they will have myriad uses, from industry and agriculture to the domestic front. However, military involvement in the project raises concerns for roboticists such as Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.
“Current AI and robotics systems are brittle and prone to misunderstanding—think Alexa or Siri,” says Etzioni. “So if we put them in the battlefield, I sure hope we don’t give them any destructive capabilities.”
Etzioni cites a number of issues associated with autonomous military robots, such as what happens when a robot makes a mistake or is hacked. He also wonders whether robots intended to save lives might make conflict more likely. “I’m opposed to autonomous robo-soldiers until we have a strong understanding of these issues,” he says.
Source: MIT Technology Review