By David Perry
One grant will fund research on right-wing domestic terrorism and the other will support a study on the ethics of using artificial intelligence (AI) to bolster an increasingly stressed and aging American fighting force.
Arie Perliger, a professor in the School of Criminology and Justice Studies and director of the graduate program in security studies, will explore right-wing terrorism through a three-year, $998,599 grant from the National Institute of Justice for his project, “Characterizing and Preventing Domestic Terrorism: A Mixed-Methods Study of Five Far-Right Terrorist Groups.”
Nicholas Evans, assistant professor of philosophy, is teaming up with Neil Shortland, assistant professor in the School of Criminology and Justice Studies, to examine “The Ethics of Warfighter Participation in the Development and Testing of Artificial Intelligence-driven Performance Enhancements.” The grant, from the U.S. Department of Defense, also spans three years and totals $1,112,804.
“We’re trying to figure out how, when and if the U.S. Armed Forces can test performance enhancements on soldiers, with a focus on enhancements that use AI, like ‘brain-computer interfaces,’” Evans says.
There will be opportunities for UML students to work on both projects, the professors say.
“With this grant, we all win,” says Perliger. “Law enforcement gets the knowledge it needs, we get to improve our research and database, and the students get paid and have a chance to do research at a very high level.”
Perliger, who oversees the largest database of right-wing extremist violent incidents in the country, says law enforcement “doesn’t have a very complete understanding” of many far-right terrorist groups. Agencies understand elements of several groups, he says, but lack deeper knowledge of how those groups plot attacks.
The grant is especially timely in light of the surge of activity and visibility of right-wing extremist groups in recent years.
The first two years will be devoted to collecting and analyzing research on the groups. In the third year, Perliger will lead a team into police departments to deliver new training models and programs for dealing with right-wing extremism.
The Ethics of Artificial Intelligence in War
Evans, who specializes in bioethics and the ethics of technology, says he and Shortland will explore the ethics and decision-making processes surrounding performance-enhancement in soldiers.
“We’re looking at how people in the military decide when to test and use performance enhancements in soldiers,” says Evans, who notes that performance enhancements can include special diets or training regimens used by Olympic athletes. “But it could also include special drugs to keep soldiers awake, or even chips in their brains to allow them to communicate with computers.
“What we want to know is, how do different members of the military – not just the U.S., but the militaries in England and Israel, too – how do they weigh the risks and benefits when using performance enhancement with soldiers?” says Evans.
Evans says the study will assess a wide range of situations in which soldiers’ performance could be enhanced through artificial intelligence, “everything from improving nutrition to putting chips in the head of soldiers so they can fly drones with their mind. There’s a very big ethical question there: When is such a thing permissible?”
When Evans visited a special operations headquarters in Florida, he learned that aging special forces fighters were becoming harder to replace. To fill the ranks, Evans says, not only were some of the notoriously rigorous qualifying standards loosened, but incoming candidates were also not as fit as previous classes.
“Special forces is a pretty extreme job, even by military standards,” says Evans.
Evans and Shortland plan on hiring doctoral students to assist with the project.