How Self-Driving Cars Navigate Moral Decisions


Driving is pretty easy 80 percent of the time. Things get complicated in the other 20 percent, when drivers sometimes have to make split-second decisions with life-or-death consequences. Consider these crash scenarios:

  • An insured swerves to avoid a jaywalker pushing a stroller and hits a concrete barrier.
  • An insured swerves to avoid a concrete barrier and hits a jaywalker.
  • An insured swerves to avoid a jaywalker and hits a car.

In all of these situations, figuring out comparative negligence could take some work. But what if, instead of a human driver, these crashes involved a self-driving car? If the car had been programmed to make these decisions, would the programmer be liable? And should the cars be programmed to behave differently?

The Moral Machine

Questions like these are presented in the Moral Machine, an online study conducted by MIT. Participants are given a crash scenario in which they can swerve or stay in their lane. Either way, the imminent crash will claim at least one life.

Participants can choose to spare passengers over pedestrians, humans over pets, the young over the old, and law-abiding individuals over jaywalkers and criminals. They can also try to save the greatest number of lives in each scenario.

After participants work through a few scenarios, their results are summarized and compared to the choices other people made.
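
To make that concrete, here is a minimal sketch, in Python, of how one of these dilemmas and a participant's choice might be represented and tallied. It is purely illustrative; the scenario layout and the `record_choice` helper are assumptions for this example, not the Moral Machine's actual code.

```python
from collections import Counter

# Hypothetical sketch of a Moral Machine-style dilemma: the car can
# "stay" in its lane or "swerve", and each choice costs different lives.
scenario = {
    "stay":   ["jaywalker_adult", "dog"],               # lives lost if the car stays
    "swerve": ["passenger_adult", "passenger_child"],   # lives lost if the car swerves
}

def record_choice(tally, scenario, choice):
    """Tally the kinds of characters a participant's choice spares."""
    spared_option = "swerve" if choice == "stay" else "stay"
    for character in scenario[spared_option]:
        tally[character] += 1
    return tally

# One participant decides the car should stay in its lane, which spares
# the passengers who would have been lost in a swerve.
tally = record_choice(Counter(), scenario, choice="stay")
print(tally)  # Counter({'passenger_adult': 1, 'passenger_child': 1})
```

Aggregated over millions of such answers, tallies like this are what reveal which characters people tend to spare.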

A Peek into Human Values

An article published in Nature analyzed 40 million decisions made by millions of people in 233 countries and territories. The analysis shows that people tend to value dogs over cats, most humans over pets, and contributing members of society over criminals.

The analysis also reveals cultural differences. In many countries, the young were spared more than the old, but this was not the case everywhere. How jaywalkers were perceived also varied.

The Implications for Autonomous Vehicles

Participating in the Moral Machine feels like playing a game, but the real-world implications are serious. Driving can quickly turn into a dangerous situation in which people have to make life-or-death decisions. When self-driving cars are involved, programmers have to decide ahead of time how the autonomous vehicle will handle those decisions.
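
In other words, a choice a human driver makes on instinct has to be written down in advance for an autonomous vehicle. Here is a minimal, hypothetical sketch of what a pre-programmed priority rule could look like; the ranking, the `choose_action` function, and the scoring are invented for illustration, not any manufacturer's actual policy.

```python
# Illustrative only: a fixed, pre-programmed priority order that a
# developer would have to choose long before any crash occurs.
# The ranking here is invented for the example, not a real policy.
PRIORITY = ["child", "adult", "pet"]  # lower index = spared first

def choose_action(outcomes):
    """Pick the action whose casualties rank lowest on the fixed priority list.

    `outcomes` maps each possible action (e.g. "stay", "swerve") to the
    list of character types harmed if that action is taken.
    """
    def cost(casualties):
        # Simple penalty: harming a higher-priority character costs more.
        return sum(len(PRIORITY) - PRIORITY.index(c) for c in casualties)

    return min(outcomes, key=lambda action: cost(outcomes[action]))

# A scenario like the ones at the top of this article: swerving into a
# concrete barrier harms the adult passenger; staying harms a pet.
print(choose_action({"stay": ["pet"], "swerve": ["adult"]}))  # -> "stay"
```

However the rule is expressed, someone has to choose it in advance, and that choice is exactly where the moral and liability questions land.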

Liability becomes another issue. If a self-driving car hits someone, is the car maker liable?

And this isn’t some hypothetical question for a distant future. Self-driving cars are already here. Reuters reports that Waymo’s self-driving taxis will start collecting fares in Arizona. For now, there will be a backup driver in case of emergencies. Meanwhile, California has approved testing for self-driving cars without a backup driver.

There has already been a fatality. In March, a self-driving Uber struck and killed a pedestrian in Arizona. According to the New York Times, there was a backup driver in the vehicle, but the driver was not focused on the road at the time of the crash.

As self-driving cars become more common, these sticky issues of liability, morality and safety will all need to be addressed. So will your underwriting rules. Does your policy administration system empower you to adjust your rates and rules easily? If not, take a look at Silvervine. We’ll equip you to tackle the future of insurance.