Caution: Self-Driving Cars Ahead

Researchers examining the future of self-driving cars believe we can't expect to take our eyes off the road any time soon.
Image: a driver-free car. Credit: BMW AG (CC BY-SA 3.0)

Katharine Gammon, Contributor

(Inside Science) -- The potential of driverless cars seems extraordinary: no more awkwardly waiting for others to proceed at four-way intersections, no more struggling with parallel parking and no more dealing with drunk drivers. But according to many researchers, for the near future, people will still need to actively monitor the automated systems and the road ahead, even if for most of the ride, the cars "drive themselves."

Not Yet Fully Automatic

When systems become automated, the common expectation is that humans won’t have to do anything, said Michael Nees, a psychologist at Lafayette College in Easton, Pennsylvania. "But that’s actually not the case. Usually what happens is that people go from manual control to monitoring automation."

Automated driving technology is no exception. In a fatal accident this March, in which a self-driving Uber car hit and killed a pedestrian in Tempe, Arizona, video from the car’s dashboard camera showed the driver glancing down at her lap as the crash happened.

“The idea that automation is so reliable that you can take a nap or watch TV -- that’s far off in the future,” said Nees.

For the time being, he said, it’s better to think of the car’s automation as driver assistance.

“You can imagine a scenario where the vehicle assumes lateral and longitudinal control, and the car monitors the people inside to make sure they are still paying attention,” said Nees.

For example, the car could include sensors to monitor an operator’s hands, eyes and voice. It could also turn over manual control every so often to make sure a driver is still engaged and able to respond if needed. The role of a human driver would still be essential but slightly redefined. Drivers wouldn't use their bodies as much, but they would have to keep watch vigilantly. “Instead of using your arms and hands, your workload is just different: overseeing automated systems,” said Nees.
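As a rough illustration of that oversight role, here is a minimal Python sketch of an engagement-monitoring loop. Everything in it -- the sensor fields, the check-in prompt, the action names -- is hypothetical, invented for this example rather than drawn from any real vehicle system.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """One snapshot from hypothetical in-cabin sensors (hands, eyes, voice)."""
    hands_on_wheel: bool
    gaze_on_road: bool
    answered_checkin: bool  # did the driver respond to a periodic prompt?

def next_action(state: DriverState) -> str:
    """Decide whether the human is still monitoring the automation.

    Returns the assistance system's next step: keep automating,
    warn the driver, or hand control back to the human.
    """
    if state.hands_on_wheel and state.gaze_on_road:
        return "continue_automation"
    if state.answered_checkin:
        # The driver reacted to a check-in prompt, so they count as engaged
        # even if the hands/eyes sensors momentarily disagree.
        return "warn_driver"
    # No evidence of engagement: require a manual takeover.
    return "request_manual_takeover"

# A distracted driver who also ignored the check-in prompt:
print(next_action(DriverState(hands_on_wheel=False,
                              gaze_on_road=False,
                              answered_checkin=False)))
# -> request_manual_takeover
```

The point of the sketch is the division of labor Nees describes: the car handles steering and speed, while the software’s remaining job is to verify that a human is still supervising it.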

Sending Out The Wrong Signals

Some of the words used to discuss automated driving systems may give people the wrong impression, said Nees. “If you look at the branding, the word autopilot incurs criticism,” he said. While carmaker Tesla insists that the word autopilot was not intended to market its cars as completely self-driving, Nees’s research shows that people tend to associate it with a higher level of automation. A better term? Nees suggests “semi-autonomous,” which more accurately describes the cars’ abilities.

But even when we fully understand the limitations of these technologies, we can still be lulled into becoming too reliant on them. In a recent study published in the journal Human Factors, test subjects were tasked with “driving” a simulated automated vehicle and monitoring the roadway. The longer the people drove, the fewer cars they were able to avoid and the slower their reaction times became.

According to Eric Greenlee, a psychologist at Texas Tech University and an author of the paper, the participants went from detecting about 80 percent of the hazards at the beginning of the drive to only 50 percent by the end. “The current technology -- which expects drivers to monitor the road for long periods of time -- isn’t safe,” he said.

There could be some technological solutions for this as well. For example, a car could track a driver’s eye gaze and pupil size -- both markers of mental load or stress -- and dial up its safety settings so the vehicle makes more conservative decisions. Those decisions would make the cars safer, but could also produce a slower, jerkier ride, since the car might brake more frequently.
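A toy Python sketch can make that feedback loop concrete. The inputs, weights, and thresholds below are all invented for illustration; real driver-state estimation is far more involved.

```python
def conservatism(gaze_on_road_ratio: float, pupil_load: float) -> float:
    """Map driver-state readings to a 0-to-1 caution setting.

    gaze_on_road_ratio: fraction of recent time spent looking at the road.
    pupil_load: normalized pupil dilation, a rough proxy for mental load.
    The 60/40 weighting is arbitrary, chosen only for this example.
    """
    inattention = 1.0 - min(max(gaze_on_road_ratio, 0.0), 1.0)
    load = min(max(pupil_load, 0.0), 1.0)
    return min(1.0, 0.6 * inattention + 0.4 * load)

def following_distance_m(base_m: float, caution: float) -> float:
    """Lengthen the following distance as caution rises: safer, but the
    car brakes more often, producing the jerkier ride described above."""
    return base_m * (1.0 + caution)

c = conservatism(gaze_on_road_ratio=0.5, pupil_load=0.8)
print(round(c, 2), round(following_distance_m(30.0, c), 1))  # 0.62 48.6
```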

Challenges Beyond Technology

According to a report on the Arizona Uber crash by the U.S. National Transportation Safety Board, the vehicle’s computer system had determined that it needed to brake to avoid a collision, but Uber had disabled the car’s built-in emergency braking system while it was in autonomous mode, in order to provide a smoother ride.

So who gets to decide an autonomous or semi-autonomous car’s settings: the owner? A regulating body? The carmaker? These open questions create new legal and ethical challenges for our society. “It would be imaginable that you could purchase a safety setting that prioritizes people inside the car instead of people outside the car for an upgrade of $10,000,” said Leon Sutfeld, a researcher at the University of Osnabruck in Germany who studies ethical decisions in road traffic scenarios.

Consider the trolley problem, a thought experiment in ethics where a person is asked to decide whether to change the path of a speeding trolley, thereby possibly sacrificing one person in order to save two lives. The way individual owners, lawmakers and companies approach these ethical dilemmas could affect how engineers design the decision-making algorithms in these self-driving cars. There are even questions concerning nonhuman lives. “Different parts of the world have different valuations of animals, and cars would have to adjust their moral behavior to the place where they are,” said Sutfeld.
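To see how such region-dependent valuations could enter an algorithm at all, consider a deliberately abstract Python sketch: a region-keyed weight table feeding a cost comparison between candidate maneuvers. The regions, categories, and every number below are invented; the sketch illustrates the design question Sutfeld raises, not any real system or any recommended values.

```python
# Hypothetical region-specific weights in a planner's cost function.
# All values are made up for illustration.
REGION_WEIGHTS = {
    "region_a": {"human": 1.0, "large_animal": 0.3, "property": 0.05},
    "region_b": {"human": 1.0, "large_animal": 0.6, "property": 0.05},
}

def maneuver_cost(expected_harm: dict, region: str) -> float:
    """Weighted sum of expected harms for one candidate maneuver."""
    weights = REGION_WEIGHTS[region]
    return sum(weights[kind] * amount for kind, amount in expected_harm.items())

# The same swerve is scored differently under different local valuations,
# so the planner's preferred maneuver can change with location.
swerve = {"human": 0.0, "large_animal": 1.0, "property": 2.0}
print(round(maneuver_cost(swerve, "region_a"), 2))  # 0.4
print(round(maneuver_cost(swerve, "region_b"), 2))  # 0.7
```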

True self-driving technology will eventually come -- in a matter of decades -- and it will be game-changing, said Sutfeld. “Once the technology is proven to be safe, and safer than human drivers, there is a moral imperative to use that technology to save lives.”

Author Bio & Story Archive

Katharine Gammon is a freelance science writer based in Santa Monica, California, and writes for a wide range of magazines covering technology, society, and animal science.