Asst. Prof. Nicholas Evans to Lead $556,000 Study
08/08/2017
By Katharine Webster
Should your self-driving car protect you, the "driver" or owner, at all costs? Or should it steer you into a ditch, potentially causing serious injury, to avoid hitting a school bus full of children?
Those are the kinds of questions that preoccupy Asst. Prof. of Philosophy Nicholas Evans, who teaches engineering ethics and studies the ethical dilemmas posed by emerging technologies, including drones and self-driving vehicles.
"You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counterintuitive happens: When there's a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road," Evans says. "People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose, and be programmed to do that."
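To see why a strictly harm-minimizing rule has that effect, consider a minimal sketch in Python. The option names and casualty estimates are invented purely for illustration and are not part of Evans's project.

```python
# Illustrative sketch (not the project's actual algorithm): a rule that
# minimizes total expected deaths, applied to the two-car example above.
# Option names and casualty estimates are hypothetical.

options = {
    "stay_in_lane": {"own_deaths": 0, "other_deaths": 2},     # collide with the two-person car
    "swerve_off_road": {"own_deaths": 1, "other_deaths": 0},  # sacrifice the solo occupant
}

def minimize_total_deaths(options):
    """Pick the option with the fewest expected deaths overall."""
    return min(options, key=lambda name: options[name]["own_deaths"] + options[name]["other_deaths"])

print(minimize_total_deaths(options))  # prints "swerve_off_road": the car runs you off the road
```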
Now Evans has won a $556,000 National Science Foundation grant to construct ethical answers to questions about autonomous vehicles (AVs), translate them into decision-making algorithms for AVs and then test the public health effects of those algorithms under different risk scenarios using computer modeling.
He will be working with two fellow UML faculty members: Heidi Furey, a lecturer in the Philosophy Department, and Asst. Prof. of Civil Engineering Yuanchang Xie, who specializes in transportation engineering. The research team also includes Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University, and experts in public health modeling at Gryphon Scientific.
Although the technology of AVs is new, the ethical dilemmas they pose are age-old, such as how to strike the balance between the rights of the individual and the welfare of society as a whole. That's where the philosophers come in.
"The first question is, 'How do we value, and how should we value, lives?' This is a really old problem in engineering ethics," Evans says.
He cites the cost-benefit analysis that Ford performed back in the 1970s, after engineers designing the new Pinto realized that its rear-mounted gas tank increased the risk of fires in rear-end crashes. Ford executives concluded that redesigning or shielding the gas tanks would cost more than payouts in lawsuits, so the company did not change the gas tank design.
Most people place a much higher value on their own lives and those of their loved ones than car manufacturers or juries do, Evans says. So at least one economist has proposed a "pay-to-play" model for decision-making by AVs, with people who buy more expensive cars getting more self-protection than those who buy bare-bones self-driving cars.
While that offends basic principles of fairness because most people won't be able to afford the better cars, "it speaks to some basic belief we have that people in their own cars have a right to be saved, and maybe even saved first," Evans says.
Understanding how computers "think" (by sorting through thousands of possible scenarios according to programmed rules and then rapidly discarding 99.99 percent of them to arrive at a solution) can help create better algorithms that maintain fairness while also providing a high degree of self-protection, he says. For example, the self-driving car approaching the school bus could be programmed to first discard all options that would harm its own passenger, then sort through the remaining options to find the one that causes the least harm to the school bus and its occupants, he says.
Although it's not quite that simple (most people would agree that a minor injury to the AV's occupant is worth it to prevent serious injuries to 20 or 30 schoolchildren), it's a good starting point for looking at how much risk is acceptable and under what circumstances, he says.
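A rough Python sketch of that two-step rule follows, using a hypothetical harm scale and threshold; where exactly to set the cutoff for "acceptable" harm to the occupant is precisely the kind of question the project will examine.

```python
# Hypothetical scenario: each option carries estimated harm on a 0-10 scale
# (0 = none, 10 = fatal) for the AV's occupant and for others on the road.
options = [
    {"name": "brake_hard",      "occupant_harm": 1, "others_harm": 6},
    {"name": "swerve_to_ditch", "occupant_harm": 3, "others_harm": 0},
    {"name": "stay_course",     "occupant_harm": 0, "others_harm": 9},  # hits the school bus
]

def choose_action(options, occupant_harm_limit):
    """Step 1: discard options whose harm to the occupant exceeds the limit.
    Step 2: among the remaining options, pick the one that harms others least."""
    acceptable = [o for o in options if o["occupant_harm"] <= occupant_harm_limit]
    if not acceptable:        # if every option seriously harms the occupant,
        acceptable = options  # fall back to considering all of them
    return min(acceptable, key=lambda o: o["others_harm"])

# A strict "never harm the occupant" rule steers into the bus ...
print(choose_action(options, occupant_harm_limit=0)["name"])  # "stay_course"
# ... while tolerating a minor injury to the occupant spares the bus.
print(choose_action(options, occupant_harm_limit=2)["name"])  # "brake_hard"
```

The contrast between the two calls illustrates Evans's caveat: the hard ethical work lies in choosing the threshold, not in the sorting itself.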
Evans and his team also will look at other issues, including the role of insurance companies in designing algorithms and the question of how many AVs have to be on the road before they reduce the overall number of accidents and improve safety.
The NSF also asked Evans and his team to look at cybersecurity concerns with AVs. Today's cars are vulnerable to hacking through unsecured Bluetooth and Wi-Fi ports installed for diagnostic purposes, but large-scale hacking of self-driving cars is potentially much more dangerous.
There are also important privacy questions involving the data that an AV's computer collects and stores, including GPS data and visual images from the car's cameras, Evans says.