Should your self-driving car protect you, the “driver” or owner, at all costs? Or should it steer you into a ditch – potentially causing serious injury – to avoid hitting a school bus full of children?
Those are the kinds of questions that preoccupy Asst. Prof. of Philosophy Nicholas Evans, who teaches engineering ethics and studies the ethical dilemmas posed by emerging technologies, including drones and self-driving vehicles.
“You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counterintuitive happens: When there’s a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road,” Evans says. “People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose, and be programmed to do that.”
Now Evans has won a three-year, $556,650 National Science Foundation grant to construct ethical answers to questions about autonomous vehicles (AVs), translate them into decision-making algorithms for AVs and then test the public health effects of those algorithms under different risk scenarios using computer modeling.
He will be working with two fellow UML faculty members: Heidi Furey, a lecturer in the Philosophy Department, and Asst. Prof. of Civil Engineering Yuanchang Xie, who specializes in transportation engineering. The research team also includes Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University, and experts in public health modeling at Gryphon Scientific.
Although the technology of AVs is new, the ethical dilemmas they pose are age-old, such as how to strike the balance between the rights of the individual and the welfare of society as a whole. That’s where the philosophers come in.
“The first question is, ‘How do we value, and how should we value, lives?’ This is a really old problem in engineering ethics,” Evans says.
He cites the cost-benefit analysis that Ford performed back in the 1970s, after engineers designing the new Pinto realized that its rear-mounted gas tank increased the risk of fires in rear-end crashes. Ford executives concluded that redesigning or shielding the gas tanks would cost more than payouts in lawsuits, so the company did not change the design.
Most people place a much higher value on their own lives and those of their loved ones than car manufacturers or juries do, Evans says. So at least one economist has proposed a “pay-to-play” model for decision-making by AVs, with people who buy more expensive cars getting more self-protection than those who buy bare-bones self-driving cars.
While that offends basic principles of fairness because most people won’t be able to afford the better cars, “it speaks to some basic belief we have that people in their own cars have a right to be saved, and maybe even saved first,” Evans says.
Understanding how computers “think” – by sorting through thousands of possible scenarios according to programmed rules and rapidly discarding 99.99 percent of them to arrive at a solution – can help create better algorithms that maintain fairness while also providing a high degree of self-protection, he says. For example, the self-driving car approaching the school bus could be programmed to first discard all options that would harm its own passenger, then sort through the remaining options to find the one that causes the least harm to the school bus and its occupants.
Although it’s not quite that simple – most people would agree that a minor injury to the AV’s occupant is worth it to prevent serious injuries to 20 or 30 schoolchildren – it’s a good starting point for looking at how much risk is acceptable and under what circumstances, he says.
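The decision procedure described above – discard options that harm the vehicle’s own occupant, then choose the least harmful option among what remains – amounts to a simple lexicographic filter. A minimal sketch in Python, where all maneuver names and harm scores are hypothetical illustrations rather than anything from the project itself:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A hypothetical maneuver the AV could take (illustrative only)."""
    name: str
    occupant_harm: float   # 0.0 = no harm to the AV's occupant, 1.0 = fatal
    others_harm: float     # aggregate harm to people outside the AV

def choose_maneuver(options, occupant_threshold=0.2):
    """Lexicographic filter: first discard options whose harm to the
    AV's own occupant exceeds a threshold, then pick the option that
    minimizes harm to everyone else. If no option protects the
    occupant, fall back to considering all options."""
    safe = [o for o in options if o.occupant_harm <= occupant_threshold]
    pool = safe if safe else options
    return min(pool, key=lambda o: (o.others_harm, o.occupant_harm))

# The school-bus scenario from the article, with made-up harm scores:
options = [
    Option("brake hard, hit bus", 0.10, 0.90),
    Option("swerve into ditch", 0.30, 0.00),
    Option("swerve onto shoulder", 0.15, 0.05),
]
print(choose_maneuver(options).name)  # "swerve onto shoulder"
```

The nonzero threshold reflects the article’s caveat: a strict rule of never harming the occupant would forbid even the minor-injury trade-off most people would accept to protect a bus full of children.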
Evans and his team will also look at other issues, including the role of insurance companies in designing algorithms and the question of how many AVs have to be on the road before they reduce the overall number of accidents and improve safety.
The NSF also asked Evans and his team to look at cybersecurity concerns with AVs. Today’s cars are vulnerable to hacking through unsecured Bluetooth and Wi-Fi ports installed for diagnostic purposes, but large-scale hacking of self-driving cars is potentially much more dangerous.
There are also important privacy questions involving the data that an AV’s computer collects and stores, including GPS data and visual images from the car’s cameras, Evans says.