
Pulling the trigger


Autonomous attack drones are no longer science fiction, and the conversation about the ethics involved shouldn’t be either


South Korea's military drones fly in formation during a South Korea-U.S. joint military drill at Seungjin Fire Training Field in Pocheon on May 25. Getty Images/Photo by Yelim Lee/AFP

MARY REICHARD, HOST: Today is Thursday, September 28th. Thank you for turning to WORLD Radio to help start your day.

Good morning. I’m Mary Reichard.

MYRNA BROWN, HOST: And I’m Myrna Brown. Coming next on The World and Everything in It: killer robots.

This month, a series of hearings on Capitol Hill brought artificial intelligence fears back into the spotlight. That, after a year of intense debate over the implications of AI products like ChatGPT.

REICHARD: But, that’s not the only arena of concern. Recent innovations in battlefield drones are heating up the debate over the ethics of autonomous weapons. WORLD Features Reporter Grace Snell has the story.

SOUND: [Slaughterbots powering up, whirring sound]

GRACE SNELL, REPORTER: Imagine this scene: A swarm of attack drones whines overhead, darkening the sky. They swoop over the countryside scanning the ground for targets.

NEWS ANCHOR: The nation is still recovering from yesterday’s incident, which officials are describing as some kind of automated attack…

It’s a scene from a sci-fi film. The robots hunt their victims using facial recognition technology. When they find a profile match, they fire.

NEWS ANCHOR: Authorities are still struggling to make sense of an attack on university students worldwide which targeted some students and not others. [Static] Stay away from crowds when indoors, keep windows covered with shutters. [Static] Protect your family, stay inside…

This is a hypothetical scenario—the plot of a dystopian short film called “Slaughterbots.” But, the film’s creators argue it’s not that far-fetched. They aired the film at the United Nations in 2017. It expresses their real-world fears about weapons able to target and kill people on their own—using things like sensors, radar, or AI algorithms.

Experts disagree about whether governments or defense contractors are already using these kinds of systems. But, it is clear that the necessary tech does already exist.

AUDIO: ["Slaughterbots" clip]

RUSSELL: This short film is more than just speculation. It shows the results of integrating and miniaturizing technologies that we already have…

Stuart Russell is a computer scientist and professor at UC Berkeley.

RUSSELL: I’ve worked in AI for more than 35 years. Its potential to benefit humanity is enormous, even in defense, but allowing machines to choose to kill humans will be devastating to our security and freedom.

Since the debut of “Slaughterbots,” fears about AI-driven weapons have continued to grow. At the heart of the debate is something called a “lethal autonomous weapons system.”

Exactly what that is, though, is murky. It doesn’t have an internationally agreed-upon definition. And the tech is always evolving. But here’s one working definition:

PHILPOT-NISSEN: Any kind of weapon that would itself make the decision whether or not to attack, without ultimately having an operator pulling the trigger.

Jennifer Philpot-Nissen works on disarmament issues for the World Council of Churches. It’s a member of a coalition called the Campaign to Stop Killer Robots. They’re fighting for a global ban on these kinds of fully autonomous weapons.

PHILPOT-NISSEN: What we’re hearing in the UN meetings is that most states now agree that these weapons should be regulated. But it’s the methods by which they are regulated that they don’t agree on. Most of them are saying, “Well, they need regulating, but we will self-regulate,” and we just don’t believe that that’s a sufficient response.

Countries like the United States, China, and Russia view these systems as key to future national security. They say autonomy is inevitable and necessary to keep pace with adversaries.

Clint Hinote spent 35 years in the U.S. Air Force. He flew fighters in the Middle East and worked as a military planner and strategist. For the last five years of his career, Hinote headed up efforts at the Pentagon to develop future Air Force strategy.

Hinote argues these weapons aren’t all that different from systems the military has used for decades. Things like the AMRAAM air-to-air missile—a radar-guided system that zeroes in on targets after launch.

HINOTE: What we’re seeing is more of an evolution in using machines for warfare. And not necessarily a revolution.

Hinote says these machines can’t actually make decisions—at least not in a moral sense.

HINOTE: In the Christian tradition, you know, we think that God gives us the ability to have free will. And humans can’t give machines that free will.

Autonomous weapons act according to programmed rules. Humans are the ones who set up those rules, so humans are still ultimately responsible.

HINOTE: So if we put a machine in a place where it has a sensor, and it’s looking, and if it engages with lethal force, that is a human decision. And it is carried out by a machine…

But battlefield scenarios are complex and volatile, and the outcomes of AI algorithms can be difficult to predict. What if these machines don’t react like they’re supposed to?

Hinote says it’s not a question of whether machines will make mistakes, but rather…

HINOTE: Will the level of mistakes that machines make be less than or equal to what we could expect from 19-year-olds who are away from home for the first time, they have a gun, and they’re scared? They make a lot of mistakes too. And so it’s important for us to at least have a conversation about the comparison.

But Philpot-Nissen says there’s no practical way to hold people accountable for the mistakes of machines. For her, the moral issue at stake is much more cut and dried.

PHILPOT-NISSEN: God has given us the gifts of empathy and decision-making and love, which we can never replicate in a machine. So to delegate our decisions about taking a life to a machine, yeah, it’s hard to imagine that’s something that God would ever countenance.

Clint Hinote doesn’t think an international ban is realistic. But he does agree the laws of war could use an update “in light of the vast use of autonomy.” Hinote isn’t in uniform anymore. But he said he’d jump at the chance to work on this issue.

HINOTE: There’s a large middle ground there that I think mature countries could come to an agreement upon, but it’s gonna take some willingness to sit down and actually talk it out. I’m hoping that happens soon.

Reporting for WORLD, I’m Grace Snell.


WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.
