Interdisciplinary Research Funded by $3M DARPA Grant
By Brooke Coupal
Imagine that you are a doctor managing the emergency room of a large hospital. You suddenly get a call that there has been a mass shooting at a concert a few miles away. In 20 minutes, you will be responsible for triaging over 200 patients with a range of injuries. You barely have enough staff or resources, and the hospital policies are not designed for a situation this dire.
“When people respond to emergencies, many decisions they face are quite predictable. They’re trained on them, and there’s policy,” says Neil Shortland, associate professor in the School of Criminology and Justice Studies. “But every now and then, they get stuck with a really tough decision that they’ve never trained for and never experienced, and they don’t have any guidance as to what the right thing to do is. Although these decisions are rare, they occur in the most extreme situations with the highest stakes.”
Shortland and an interdisciplinary team of UMass Lowell researchers are looking into using artificial intelligence (AI) to make those difficult decisions. The team consists of Computer Science Asst. Prof. Ruizhe Ma, Electrical and Computer Engineering Asst. Prof. Paul Robinette, Philosophy Chair and Assoc. Prof. Nicholas Evans and Holly Yanco, professor and chair of the Miner School of Computer & Information Sciences.
The researchers are working in partnership with Soar Technology, a Michigan-based business that builds intelligent systems for defense, government and commercial applications. The Defense Advanced Research Projects Agency is funding the project with a $3 million grant through its Small Business Innovation Research Program, with $1.2 million going to UML and $1.8 million to Soar Technology.
Modeling Human Behavior
The goal of the research is to find the best human attributes that AI can mirror when making difficult decisions in extreme environments, like a battlefield.
“We’re harnessing the essence of a person by modeling them as their best self,” says Shortland, the project’s principal investigator.
Human judgment is fallible. Even if someone is highly qualified to make a decision, their judgment can be skewed by biases, hunger, tiredness, stress and other factors, Shortland says.
“AI eliminates those issues,” he says. “It can be the best version of a person each time.”
AI could also multiply the number of decision-makers in situations like mass shootings: instead of a single doctor assessing victims, dozens of robots programmed with AI that models the doctor’s decision-making processes could be deployed to evaluate them.
To study the best human attributes for different decision-making scenarios, the researchers will expose people to emergency situations using a computer research tool developed by Shortland called the Least-worst Uncertain Choice Inventory For Emergency Responses (LUCIFER). They will then measure how a person’s psychological traits and values impact their decisions.
“When we identify the key decision-maker attributes, we will be able to, to some extent, quantify a decision process and develop AI decision systems tailored to specific needs and environments,” Ma says.
One scenario the research team is focusing on is triaging patients. Using LUCIFER, test subjects will be shown visuals of patients with various injuries and pulse readings, then decide whether each patient is OK, will eventually need medical assistance, needs help right away or is deceased.
“We will examine how different traits impact people’s willingness to give certain tags,” Shortland says.
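The four outcomes described above resemble the tag categories used in standard mass-casualty triage protocols. As a purely illustrative sketch of how such a decision process might be quantified, here is a toy rule-based tagger; the field names and rules are hypothetical assumptions for illustration, not details of the team's actual system.

```python
# Illustrative sketch only: a toy rule-based triage tagger inspired by the
# four outcomes described in the article (OK, delayed, immediate, deceased).
# All fields and rules here are hypothetical, not taken from the study.

from dataclasses import dataclass

@dataclass
class Patient:
    has_pulse: bool        # whether a pulse is detected
    can_walk: bool         # whether the patient is ambulatory
    severe_bleeding: bool  # whether a life-threatening injury is visible

def triage_tag(p: Patient) -> str:
    """Map a patient's observed state to one of the four tags."""
    if not p.has_pulse:
        return "deceased"
    if p.severe_bleeding:
        return "immediate"   # needs help right away
    if not p.can_walk:
        return "delayed"     # will eventually need medical assistance
    return "ok"

# Example: a patient with a pulse and severe bleeding is tagged "immediate".
print(triage_tag(Patient(has_pulse=True, can_walk=False, severe_bleeding=True)))
```

In the study itself, the interesting question is not the rules but how a person's traits shift the boundaries between these tags; a model like this would be fitted per decision-maker rather than fixed.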
The researchers are also developing a 3D simulation that immerses test subjects in triage scenarios.
“The triage micro-world will allow us to evaluate the progress of the overall project,” says Robinette, who is designing the 3D simulation with his students.
“It will help us see if what we’re finding in our LUCIFER studies transitions into a more real-world environment,” Shortland adds.
For the research project, the team will be utilizing on-campus resources, like the Misinformation Influence Neuroscience and Decision-making (MIND) Lab and the New England Robotics Validation and Experimentation (NERVE) Center, while tapping the researchers’ range of skills and expertise.
“Interdisciplinary teams are required to push research out of the lab and toward the real world, where it can save lives,” Robinette says. “I’m looking forward to the great things we can all do together.”