
Killer robots

New battlefield weapons challenge traditional wartime ethics



In 2017, the dystopian short film Slaughterbots premiered at the United Nations’ weapons convention. Delegates watched as palm-sized drones swarmed the screen—breaking into college classrooms and using facial recognition scans to identify targets. The robots dispatched their targets with a single shot to the forehead. Onscreen, actors playing TV news anchors decried the fictional death toll: 8,300.

The production ended with a message from University of California, Berkeley, professor Stuart Russell, who acknowledged artificial intelligence technology has great potential benefits. But he added a stark warning: “Allowing machines to choose to kill humans will be devastating to our security and freedom.”

The film about “killer robots” was science fiction. But the potential real-world applications of AI are no longer future fantasy. In March 2021, a UN report on Libya contained what some experts argue is the first documented evidence of a lethal autonomous weapons system attack. The report details how Turkish-backed government forces hunted retreating opposition fighters using weapons “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget, and find’ capability.”

Fire-and-forget refers to a weapon’s ability to guide itself to a target after launch. Essentially, it conveys the idea that a commander can fire a weapon and then “forget” about it. The weapon finds the target on its own.

Some critical facts about the Libya attack remain unclear. The report does not specify whether the weapons system actually killed anyone or whether it was operating in its autonomous mode during the attack. But it is clear that the weapons system had that capability.

Since that UN report, concerns over the lethal applications of AI have only grown. Debates on the ethics of lethal autonomous weapons systems—popularly called “killer robots”—still rage in the UN’s Convention on Certain Conventional Weapons. But militaries around the world aren’t waiting for a conclusion before pressing forward. The Pentagon requested $1.8 billion to invest in AI development in fiscal year 2024.

The fault line of the debate separates those who see these weapons as both inevitable and potentially beneficial from those who find them morally unacceptable. People on both sides of the issue agree human oversight is key, but they disagree on what that means in practice.

Clint Hinote served 35 years in the U.S. Air Force before retiring in June as a three-star general. Hinote spent the last five years of his career at the Pentagon as the senior leader at Air Force Futures—developing strategies “for tomorrow’s Airmen.”

Hinote said buzzwords like “artificial intelligence” and “autonomy” generate a lot of fear and misinformation. Military strategists using these terms aren’t referring to machines with free will or moral agency in a human sense. Instead, those terms refer to algorithms trained on large sets of data to accomplish specific tasks.

Complicating the debate is the fact that no internationally agreed-upon definition of lethal autonomous weapons systems actually exists yet. Autonomy in weapons has been advancing for decades and experts disagree about when to formally categorize a system as autonomous. The Pentagon definition includes any systems “that, once activated, can select and engage targets without further intervention by an operator.”

Hinote’s definition of autonomy includes weapons the military has used for decades, like the AMRAAM missile, a medium-range, air-to-air “fire-and-forget” weapon in use since the 1990s. He also groups some warship defense systems into this autonomous category. “What we’re seeing is more of an evolution in using machines for warfare and not necessarily a revolution,” Hinote said.

And that means autonomy will be integrated “almost ubiquitously across the battlespace” of tomorrow. Hinote envisions a future of autonomous drones, unmanned tanks, and autonomously sailing submarines.

THAT’S WHERE KILLER ROBOTS COME IN. Hinote said the future military will likely field systems with “some freedom to be able to make what we might call decisions.” They will identify targets, such as enemy warships, according to a programmed rule set. “And if certain criteria meet those rules, then they’ll engage”—with lethal force.

That’s exactly what groups like Human Rights Watch are concerned about. Mary Wareham is the advocacy director of the nonprofit’s Arms Division. She’s spent her career working to protect civilians from “indiscriminate and inhumane weapons” like land mines, cluster munitions, and incendiaries. In the past decade, she added lethal autonomous weapons systems to the list.

Wareham has a host of concerns about the technology, but they all boil down to the ethics of “outsourcing killing to machines.” She said that’s the “red line” that concerns lots of people—from “peace laureates to faith leaders.”

But not all people of faith agree these systems are inherently immoral. Hinote is a Christian who sees these weapons as inevitable but thinks a Biblical worldview can help inform their ethical use. In that sense, he thinks they’re not any different from regular weapons wielded by people who must decide when and why to pull the trigger.

“In the Christian tradition, we think that God gives us the ability to have free will,” he said. “And humans can’t give machines that free will.”

HUMAN RIGHTS WATCH issued its first report on “killer robots” in 2012. A year later, the group launched the Campaign to Stop Killer Robots—a global coalition of NGOs calling for a preemptive ban on fully autonomous weapons systems. Wareham served as the campaign’s founding coordinator.

Diplomatic talks on the weapons started at the Convention on Certain Conventional Weapons within six months. Wareham estimates between 15 and 20 discussion rounds have followed, with more than 120 countries participating. Delegates met again in March and May and drafted a series of nonbinding prohibitions.

Wareham said the campaign’s primary goal is to forge a treaty outlining new international law for lethal autonomous weapons systems. But she said, “The only outcome that these talks can agree to is to keep talking.”

Still, that hasn’t stopped proliferation. The United States, China, and Russia are leading the charge among countries developing autonomous weapons, but nations like Iran and Turkey have recently joined the list.

Wareham said countries like the United States and Russia don’t want restrictions on their arms development. Opposition from these world leaders is one of the biggest barriers to creating an international treaty.

The Pentagon rolled out its policy on autonomous weapons—a document called DoD Directive 3000.09—a few weeks after the Campaign to Stop Killer Robots launched. The policy’s stated top priority is for autonomous weapons to “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Military leaders say that’s enough to keep weapons in line with existing laws of war. The Department of Defense most recently updated the policy in January 2023.

Wareham said that document is “an important pledge” from the Pentagon but is not an adequate response to the growing threat. She pointed out it doesn’t apply to other government departments, like the CIA, that also have access to autonomous weapons.

Hinote agrees with Wareham that the Geneva Conventions could use an update “in light of the vast use of autonomy.” But he said it’s unlikely Washington would sign a treaty to that effect. The U.S. did not sign either the 1997 Mine Ban Treaty or the 2008 Convention on Cluster Munitions.

Hinote said U.S. officials support these treaties in theory, “but in practice, we would say because the adversary has these capabilities, we believe we have to preserve the right to use autonomy in weapons, for example.”


HINOTE BELIEVES IT IS POSSIBLE to operate lethal autonomous weapons systems in line with existing international law and the principles of just war theory. He said the U.S. military will not allow machines to make big-picture decisions like whether an “action is proportional to the desired good.” And he said while it may look like machines are making decisions, they aren’t really exercising true autonomy in a moral sense because they can only act within parameters created by humans.

“If we give machines the ability to exercise lethality in some sort of autonomous sense, it is because we have set the rule base up,” Hinote said. “And we are still responsible for what those machines do.”

Computers make mistakes, he noted, but people do too. He said even trained fighters can react out of fear and anxiety in combat situations. Strategists should consider how machine error rates compare with those of service members.

These are the factors delegates will debate at the UN Convention on Certain Conventional Weapons conference in November. But Wareham doesn’t expect to see much change.

In 2018, UN Secretary-General António Guterres expressed his support for a ban on lethal autonomous weapons systems, calling them “politically unacceptable and morally repugnant.” Wareham expects Guterres to raise the issue at the General Assembly meeting this year, but she said a treaty isn’t likely in that forum, either.

Wareham said a third option to create international law is to “leave UN auspices completely and just do it in the capitals of the countries who really want this to happen.” That’s the route diplomats pursued with both the land mine and cluster munitions treaties. But Wareham said no country has yet stepped forward to take the lead on hosting such a summit.

Meanwhile, the Campaign to Stop Killer Robots is gaining momentum among small countries afraid of landing in the crosshairs of global superpowers. Wareham estimates about 90 countries are now at a point where they want to move from talking to negotiating international law.


Grace Snell

Grace is a staff writer at WORLD and a graduate of the World Journalism Institute.
