The Murky Ethics of Driverless Cars

A new study explores a moral dilemma facing the creators of self-driving vehicles: In an accident, whose lives should they prioritize?

By Tom Jacobs

Trolley problem of a self-driving car. (Illustration: Iyad Rahwan)

So you’re driving down a dark road late at night when suddenly a child comes darting out onto the pavement. Instinctively, you swerve, putting your own safety in jeopardy to spare her life.

Very noble of you. But would you want your driverless vehicle to do the same?

That question, which can be found idling at the intersection of technology and ethics, is posed in the latest issue of Science. A variation on the famous trolley dilemma, it won't be theoretical for long: Self-driving vehicles are coming soon, and they will need to be programmed to respond to emergencies.

A research team led by Iyad Rahwan of the Massachusetts Institute of Technology argues that this dilemma poses a huge challenge to the vehicles' creators. In a series of studies, it finds that people generally agree with the "utilitarian" argument: the notion that cars should be programmed to spare as many lives as possible.

However, when asked what they would personally purchase, participants tended to prefer a vehicle that prioritized the safety of its riders. And a hypothetical government regulation mandating the spare-the-greatest-number approach significantly dampened their enthusiasm for buying a driverless car.
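To make that contrast concrete, here is a minimal, purely hypothetical sketch of how the two policies might be expressed in code. It is not drawn from the study or from any carmaker's software; the function names, the Outcome fields, and the casualty numbers are invented for illustration only.

```python
# Toy illustration (hypothetical): two decision policies an autonomous
# vehicle's software could encode when a crash is unavoidable.

from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible maneuver and the casualties it would cause."""
    label: str
    passenger_deaths: int
    pedestrian_deaths: int


def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    # Spare as many lives as possible, no matter who is inside the car.
    return min(outcomes, key=lambda o: o.passenger_deaths + o.pedestrian_deaths)


def passenger_protective_choice(outcomes: list[Outcome]) -> Outcome:
    # Protect the car's own riders first; only then minimize other harm.
    return min(outcomes, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))


# A swerve-or-stay scenario in toy form (numbers are made up):
options = [
    Outcome("stay the course", passenger_deaths=0, pedestrian_deaths=3),
    Outcome("swerve into the barrier", passenger_deaths=1, pedestrian_deaths=0),
]

print(utilitarian_choice(options).label)           # "swerve into the barrier"
print(passenger_protective_choice(options).label)  # "stay the course"
```

The two functions differ only in what they minimize, which is exactly the gap the study documents between what people endorse in the abstract and what they say they would buy.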

“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” the researchers write. “For the time being, there appears to be no way to design algorithms that would reconcile moral values and personal self-interest.”

Rahwan and colleagues Jean-Francois Bonnefon and Azim Shariff describe six studies, all conducted online via Amazon’s Mechanical Turk. In the first, the 182 participants “strongly agreed that it would be more moral for autonomous vehicles to sacrifice their own passengers when this sacrifice would save a greater number of lives overall.”

Another study found this still held true even when the passengers were described as “you and a family member,” as long as it meant saving the lives of multiple pedestrians. The 451 participants, however, “indicated a significantly lower likelihood of buying the autonomous vehicle when they imagined the situation in which they and their family member would be sacrificed for the greater good.”

In still another study, the 393 participants “were reluctant to accept government regulation” that would mandate programming the cars to ensure the fewest lives were lost. “Participants were much less likely to consider purchasing an autonomous vehicle with such regulation than without.”

That suggests such regulations “could substantially delay the adoption” of driverless cars, the researchers write. That would be unfortunate, they note, since these cars promise to be much safer than those driven by humans, and more lives will be saved as more of them are on the road.

Altogether, the results suggest people approve of self-driving cars “that sacrifice their passengers for the greater good, and would like others to buy them — but they would themselves prefer to ride in autonomous vehicles that protect their passengers at all costs.”

A dilemma indeed. If you’d like to explore the specific ethical questions in more detail (which may or may not clarify your thinking), you can do so at http://moralmachine.mit.edu.

Or you can just give it some serious thought while you sit in traffic.
