
Ethical issues with AI



NICK EICHER, HOST: Today is Tuesday, November 13th. Thank you for turning to WORLD Radio to help start your day. Good morning. I’m Nick Eicher.

MARY REICHARD, HOST: And I’m Mary Reichard.

Think about all the little decisions you make when driving.

And as you think about it, I think you’ll pretty quickly realize you don’t really think.

You react.

In an emergency, those reactions often reflect moral choices: Do you endanger your passengers by running off the road to avoid hitting a pedestrian?

These real-time decisions are difficult enough for human drivers.

But imagine you’re an engineer designing artificial intelligence to pilot a self-driving car.

How do you go about creating a machine to make split-second, life-or-death decisions?

EICHER: That’s the question a team of researchers from MIT posed in an online experiment they started back in 2014.

They called it the “Moral Machine.” It’s an internet-based game that let millions of people around the world make decisions about how self-driving cars ought to prioritize human life.

This went on over a four-year period. The researchers collected more than 40 million decisions from players in more than 200 countries.

REICHARD: The journal Nature published the results of the experiment. WORLD Radio technology reporter Michael Cochrane is here now to talk about the key findings.

Michael, tell us more about the scenarios this experiment posed.

MICHAEL COCHRANE, REPORTER: Sure. The researchers created a number of variations of a classic moral dilemma called the “trolley problem.”

REICHARD: The “trolley problem”? What’s that?

COCHRANE: Yeah, here’s the scenario: You are an observer, and you see an out-of-control trolley speeding down the tracks, about to hit and kill a group of five people. But you, as the outside observer, have access to a lever that could shunt the trolley onto another set of tracks, where there is one person who would be struck and killed. So, what do you do? Pull the lever and end one life while sparing five? That’s the dilemma.

REICHARD: So, what kind of variations did the researchers come up with for the Moral Machine experiment?

COCHRANE: They posed a number of scenarios based on nine basic comparisons: whether the self-driving car should prioritize humans over pets, passengers over pedestrians, more lives over fewer, women over men, young over old, fit over sick, higher social status over lower, and law-abiders over law-breakers. The ninth comparison was whether the car should swerve or stay on course.
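To make those comparisons concrete, here is a rough sketch in Python of how one such forced-choice scenario might be represented and tallied. The class names, fields, and scoring are hypothetical illustrations, not the MIT researchers’ actual code or data format.

# Hypothetical sketch of a Moral Machine-style forced choice.
# The attributes and tallying below are illustrative only; they are
# not the MIT researchers' actual schema or analysis code.

from dataclasses import dataclass
from collections import Counter

@dataclass
class Character:
    species: str  # "human" or "pet"
    age: str      # "child", "adult", or "elderly"
    role: str     # "pedestrian" or "passenger"

@dataclass
class Scenario:
    stay_course: list[Character]  # struck if the car stays on course
    swerve: list[Character]       # struck if the car swerves

def record_choice(tallies: Counter, scenario: Scenario, chose_swerve: bool) -> None:
    """Tally the characters a respondent chose to spare."""
    spared = scenario.stay_course if chose_swerve else scenario.swerve
    for person in spared:
        tallies[(person.species, person.age, person.role)] += 1

# Example: stay on course and hit three elderly pedestrians,
# or swerve into a barricade and kill three kids in the backseat?
tallies = Counter()
scenario = Scenario(
    stay_course=[Character("human", "elderly", "pedestrian")] * 3,
    swerve=[Character("human", "child", "passenger")] * 3,
)
record_choice(tallies, scenario, chose_swerve=False)  # respondent keeps the car on course
print(tallies)  # the spared children are tallied as the respondent's preference

Aggregating millions of choices like this, broken down by country, is the kind of tallying that would let researchers compare preferences across cultural clusters.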

REICHARD: Can you give an example of a typical comparison?

COCHRANE: There are so many possibilities, but there were things like: Should the car save a homeless person with his dog instead of a businessman? Or should the car continue straight and run into three elderly pedestrians, or swerve into a barricade and kill three kids in the backseat? I’ll admit the scenarios seem somewhat contrived and exaggerated, but there is an element of truth in them. Dilemmas like these do occur. And this experiment generated fascinating results, in many cases showing wide differences in responses based on culture and economics.

REICHARD: Right. Let’s talk about those findings. What do they say?

COCHRANE: There were two major results according to the paper’s authors. The first concerned the general categories of preferences across all respondents and the second had to do with clusters of countries. Here’s co-author Jean-Francois Bonnefon of the Toulouse School of Economics from a video produced by the journal Nature:

BONNEFON: The main results of the paper for me are first, the big three in people’s preferences, which is: save the human, save the greater number, save the kids. And the second most interesting finding was the clusters. The clusters of countries with different moral profiles.

REICHARD: OK, so he’s saying humans have priority over animals, saving more people has priority over saving fewer, and saving kids has priority over saving adults?

COCHRANE: Generally, yes. But the authors noted significant differences that were culturally based around three clusters of countries: The first cluster had many western countries. The second included many eastern nations such as China and Japan, and the third cluster had Latin American countries and also some from former French colonies. Again, here’s Jean-Francois Bonnefon:

BONNEFON: The cultural differences we find are sometimes hard to describe because they’re multidimensional. But some of them are very striking, like the fact that eastern countries do not have a strong preference for saving young lives. Eastern countries seem to be more respectful of older people, which I thought was a very interesting finding. I was also struck by the fact that French and the French sub-cluster was interested in saving women. That was, yeah, I’m still not quite sure what’s going on here!

REICHARD: You know, the older I get, the more I see just how the west reveres youth over the elderly. That’s an obvious east/west divide. Michael, you mentioned that there were some questions related to socio-economic status. How did the responses to those questions differ?

COCHRANE: Responses to those types of questions actually surprised the researchers. On one side they put male and female business executives, and on the other side they put a homeless person. They found that the higher the economic inequality in a country, the more willing people were to prioritize the executives’ lives over the homeless person’s. So, this work provides some significant insight into how morals change across cultures.

REICHARD: What will be the implications of this experiment for self-driving cars, do you think?

COCHRANE: The authors hope the results of the Moral Machine experiment will help the developers of AI technology think more deeply about the notion of an ethical machine. They believe the discussion should move from simple life or death dilemmas to discussions of risk and liability. For example, who’s legally liable if a self-driving car causes death or injury? The vehicle owner? The vehicle manufacturer? The software developer? It could also help inform future laws and regulations related to artificial intelligence.

REICHARD: So many questions, and the answers might be more frightening than leaving it to chance. Michael Cochrane is our science and technology correspondent. As always, Michael, thanks for your reporting.

COCHRANE: You’re very welcome, Mary.


(Associated Press/Photo by Eric Risberg, FILE) An Uber driverless car heads out for a test drive in San Francisco in 2016. 

WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.
