By Edwin L. Aguirre
Robots can do a lot of things – assemble cars, search for bombs, cook a meal or assist in surgery. But one thing they can’t do is tell you how well they are doing.
Researchers from UMass Lowell and several other universities are aiming to change that. With funding from the U.S. Department of Defense’s Multidisciplinary University Research Initiative (MURI), robotics experts from Carnegie Mellon University, UML, Brigham Young University and Tufts University are working together to give humanoid robots and other autonomous systems the ability to assess themselves in terms of how well they can perform a given task or why they cannot complete the job.
This real-time feedback is vital as robots become increasingly autonomous and are tasked with jobs in remote, hostile or dynamic environments with minimal human supervision or intervention.
The project – called SUCCESS, which stands for Self-assessment and Understanding of Competence and Conditions to Ensure System Success – is one of 24 grants awarded nationwide this year through the highly competitive MURI program. The grant is worth a total of $7.5 million over a period of five years. UMass Lowell’s share of the funding is $1.2 million.
“Our goal is to develop methods and metrics that would enable autonomous systems to assess their own performance,” says computer science Prof. Holly Yanco, who is the principal investigator for UML and director of the university’s New England Robotics Validation and Experimentation (NERVE) Center at 110 Canal St. in Lowell.
“The project will greatly improve human-robot interaction overall,” notes Yanco.
Unlike people, who use their senses, logic and experience to judge whether they can do something, or to predict how well they can do it – whether it is lifting a heavy object, climbing a staircase, finding a missing screw or fixing a leaky faucet – robots currently lack that capability.
“Robots can’t gauge how well they are able to perform a task, how the job is progressing or tell you what their limitations or capabilities are,” says Yanco. “They can’t tell you, ‘I don’t think I can do that part, but I can do this instead.’”
She adds that robots are also bad at asking for help or explaining their predicament. “Right now, we can’t get answers to questions like ‘Why did you do that?’ ‘Why can’t you do it?’ and ‘How did you get to this point?’”
Yanco and her co-investigators will test self-assessment approaches using search tasks that rely on the robot’s knowledge and dexterity, such as maneuvering around obstacles to investigate hidden items or manipulating objects to reveal their contents.
Integration work and testing will be performed at Carnegie Mellon’s Robotics Institute in Pittsburgh and UMass Lowell’s NERVE Center. A pair of Baxter robots – two-armed industrial machines with animated faces – will be used to perform assembly tasks, problem-solving scenarios and games.
By looking at a robot’s past performance, researchers will be able to predict how well it will perform in the future.
“We plan to build up a database of robot self-assessments and proficiency,” says Yanco. “Hopefully, the study will lead to better human-robot teamwork and increase the level of trust, expectation and efficiency between the two.”
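To give a flavor of the idea, a database of logged task outcomes could, in the simplest case, drive a success estimate for future attempts. The sketch below is purely illustrative – it is not the SUCCESS project’s actual method, and the task names and `predict_success` helper are hypothetical – showing a smoothed success-rate estimate in Python:

```python
# Hypothetical sketch: estimating a robot's chance of completing a task
# from a log of past successes and failures. This is an illustration,
# not the SUCCESS project's actual self-assessment method.

from collections import defaultdict

class ProficiencyLog:
    def __init__(self):
        # task name -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, task, succeeded):
        s, n = self.history[task]
        self.history[task] = [s + int(succeeded), n + 1]

    def predict_success(self, task):
        # Laplace-smoothed estimate: (successes + 1) / (attempts + 2),
        # so a task with no history defaults to 0.5 rather than 0 or 1.
        s, n = self.history[task]
        return (s + 1) / (n + 2)

log = ProficiencyLog()
for outcome in [True, True, False, True]:
    log.record("open_drawer", outcome)

print(round(log.predict_success("open_drawer"), 2))  # 0.67
print(log.predict_success("climb_stairs"))           # 0.5 (no history yet)
```

A real system would need far richer context – sensor state, environment, task difficulty – but even this toy version lets a robot answer, in a limited way, “how likely am I to succeed at this?”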
The robot’s search tasks can be readily scaled up for real-world applications, such as deploying swarms of micro-drones to map buildings or disaster areas and conducting urban search and rescue.