The Moral Dilemmas of Self-Driving Cars

Results from a new survey reveal cultural differences in whom participants prefer to spare in fatal accidents.
Image: a crosswalk. Credit: Air Force photo by Margo Wright

Charles Q. Choi, Contributor

(Inside Science) -- If your car was in a lethal accident, would you prefer for it to kill one innocent bystander or five? By posing variations of this problem online to volunteers nearly 40 million times, scientists now have insights on how moral preferences regarding such dilemmas vary across the globe, new findings that may help guide how driverless cars act in the future.

Google, Tesla and other major companies aim to make driverless cars a reality, which they suggest could reduce accidents caused by human error. However, the fatal accidents that autonomous vehicles have already been involved in -- such as the deadly collision in March of a self-driving Uber car with a pedestrian -- suggest they will need to navigate not only roads, but potentially also the dilemmas posed by accidents with unavoidable deaths. For example, should a driverless car hit a pregnant woman or swerve into a wall and kill its four passengers?

One famous thought experiment that seems perfectly suited to help address this challenge is the so-called trolley problem, introduced by the British philosopher Philippa Foot. The original scenario asked you to imagine driving a trolley whose brakes had failed, leaving you the choice of diverting the runaway vehicle onto a track where one person would die or onto another where five would. The many variations of this problem can help people explore whom they would choose to spare and whom to sacrifice.

In 2016, scientists launched the Moral Machine, an online survey that asks people variants of the trolley problem to explore moral decision-making regarding autonomous vehicles. The experiment presented volunteers with scenarios involving driverless cars and unavoidable accidents that imperiled various combinations of pedestrians and passengers. Participants had to decide which lives the vehicle would spare or take based on factors such as the gender, age, fitness and even species of the potential victims.

Over 18 months, the researchers gathered nearly 40 million such decisions from people in 233 countries and territories worldwide. They found a number of moral preferences shared across the globe, including saving the largest number of lives, prioritizing the young, and valuing humans over animals. Those spared most often were babies in strollers, children and pregnant women.

The results also revealed that ethics varied between different cultures. For instance, volunteers from Latin America as well as France and its former and current overseas territories strongly preferred sparing women, the young and the athletically fit. Moreover, in developed countries with strong laws and institutions, jaywalkers were saved less often than people who obeyed traffic laws.

"When we checked to see if results varied by country, we were struck by the variation," said study co-author Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge.

The scientists also found controversial moral preferences. For example, overweight people were about 20 percent more likely to die than fit people, and homeless people had a roughly 40 percent greater chance of dying than executives, as did jaywalkers compared to people who obeyed traffic signals.

The scientists cautioned their results might not represent the moral preferences of people in general. For instance, the study's participants were volunteers able to afford Internet access, and were also not selected using careful sampling techniques. "It's an interesting question as to who are the right people to make these kinds of decisions," said computer scientist Vincent Conitzer at Duke University in Durham, North Carolina, who did not participate in this study.

The public can freely explore the Moral Machine results online country by country. The scientists hope their findings will prove useful to governments contemplating regulations on autonomous vehicles and companies programming such machines. "Having said that, the experts don't have to cater to the public's interest, especially when they find these preferences problematic," said study lead author Edmond Awad, a computer scientist at MIT.

Neuroscientist Jana Schaich Borg at Duke University, who did not take part in this research, noted that self-driving car engineers may feel frustrated at having to design autonomous vehicles around the kinds of rare cases the Moral Machine focused on. Still, "I think it's important to work on these edge cases at the same time as the more mainstream engineering problems," she said. "Society has to trust self-driving cars in order for them to ultimately realize their potential for saving lives, and they won't do that if the cars behave in ways that conflict with their moral values."

The researchers agreed that driverless cars will not often face such life-or-death cases. Still, "they will be constantly making decisions that redistribute risk away from some people and towards others," said study co-author Azim Shariff, a psychologist at the University of British Columbia in Vancouver. "Consider an autonomous car that is deciding where to position itself in a lane -- closer to a truck to its right, or a bicycle lane on its left. If cars were always programmed to be slightly closer to the bicycle lane, they may slightly reduce the likelihood of hitting other cars, while slightly increasing the likelihood of hitting cyclists."

"The precise positioning of the car will appear to cause no ethical dilemmas whatsoever, most of the time," Shariff continued. "But over millions or billions of these situations, either more cyclists will die, or more passengers will die. So similar trade-offs as participants make in the black-and-white scenarios of the Moral Machine experiment emerge at the statistical level -- something we call the statistical trolley problem."

This research may also shed light on moral trade-offs that autonomous machines will have to make in other areas. "One example is robot caregivers, who may have to resolve some different moral trade-offs between safety, privacy and autonomy," Awad said.

The scientists detailed their findings online Oct. 24 in the journal Nature.

Author Bio

Charles Q. Choi is a science reporter who has written for Scientific American, The New York Times, Wired, Science, Nature, and National Geographic News, among others.