
Teaching U.S. military robots right from wrong


It’s been more than 70 years since legendary science fiction author Isaac Asimov, in his anthology I, Robot, introduced the world to the idea that robots might be programmed to behave ethically. Now, it seems reality has caught up with Asimov’s fictional future. With more and more unmanned systems performing a variety of functions, from surveillance to logistics to weapons delivery, the U.S. military is beginning to take formal steps to explore how to build a sense of right and wrong and moral consequence into its autonomous robotic systems.

Paul Bello is the director of the Cognitive Science Program at the Office of Naval Research—an office that has just awarded a $7.5 million grant to a consortium of U.S. universities to study this problem.

“There are rules of engagement and there are international humanitarian laws and those are very explicit sorts of things,” Bello said. But one of the main concerns behind the grant is that not every situation a soldier, or a robot, will find himself, or itself, in is bounded by those rules. “There’s always gray areas where people have to rely on sort of moral common sense.”

Asimov’s three laws of robotics were: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence as long as such protection does not conflict with the first or second law.

“But if you read Asimov’s stories, in nearly every story something goes wrong with these robots that are programmed just with these three rather straightforward, hierarchically organized principles,” said Wendell Wallach, chair of the Yale Technology and Ethics Study Group and author of the book, Moral Machines: Teaching Robots Right from Wrong. He spoke about robot ethics during an interview in 2012 with George Mason University’s Mercatus Center. “So if anything, [Asimov] showed us the limitation of any rule-based morality.”

The focus of Bello’s research will be on moral cognition in robotic systems used in non-combat tasks such as search and rescue, firefighting, and first response. But other roboticists have, for many years, been interested in the problem of ethical behavior in robots with weapons. Ronald Arkin is director of the Mobile Robot Laboratory at Georgia Tech and author of the book, Governing Lethal Behavior in Autonomous Robots. In a 2013 lecture, he suggested that robots, because of their ability to adhere to their programming, might actually act more humanely on the battlefield than humans.

“The ways in which these are being explored, at least by our group, have involved something called action-based machine ethics,” he said. “It deals with deontic logic, which is a form of logic of prohibitions, permissions, and obligations and the ways in which you can encode these things in terms of translating the rules of the Geneva Conventions into appropriate constraints.”
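To give a rough sense of what “encoding constraints” can mean in practice, here is a minimal Python sketch of a constraint-based check on a proposed action. The action names and the predicates (target_is_combatant, target_near_protected_site) are illustrative assumptions for this article, not a description of Arkin’s actual system or of any fielded software:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A proposed action plus the system's perceived facts about its target."""
    kind: str                        # e.g. "engage", "hold_fire", "mark_and_report"
    target_is_combatant: bool
    target_near_protected_site: bool

# Deontic-style prohibitions: each rule returns True if it FORBIDS the action.
def prohibits_noncombatant_harm(a: Action) -> bool:
    return a.kind == "engage" and not a.target_is_combatant

def prohibits_protected_sites(a: Action) -> bool:
    return a.kind == "engage" and a.target_near_protected_site

PROHIBITIONS: List[Callable[[Action], bool]] = [
    prohibits_noncombatant_harm,
    prohibits_protected_sites,
]

def permitted(a: Action) -> bool:
    """An action is permitted only if no prohibition applies."""
    return not any(rule(a) for rule in PROHIBITIONS)
```

Under these assumed rules, permitted(Action("engage", target_is_combatant=False, target_near_protected_site=False)) returns False: the first prohibition applies, so the action is blocked before it can be carried out.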

Some detractors have argued against such a constraint-based approach. They believe there are too many possible scenarios in which a lethal robot might make a fatal mistake. Arkin believes that, while we can’t explicitly account for every possible situation, we can give artificial intelligence, or AI, the ultimate instruction—when in doubt, don’t.

“You don’t have to encode everything in the laws of war,” he said. “You just have to encode the situations that this particular robot is likely to encounter in this set of circumstances. And you can use something referred to in AI as a closed world assumption. That, if you don’t know what something is, you don’t shoot, basically. You just don’t shoot.”
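The “closed world assumption” Arkin describes amounts to a default-deny rule: anything the system cannot positively identify is treated as off-limits. The short sketch below illustrates that idea, again only as an assumption-laden illustration; the classification labels, confidence score, and threshold value are hypothetical, not drawn from any real targeting system:

```python
from enum import Enum

class Classification(Enum):
    HOSTILE_COMBATANT = "hostile_combatant"
    NONCOMBATANT = "noncombatant"
    UNKNOWN = "unknown"

def may_engage(label: Classification, confidence: float,
               threshold: float = 0.99) -> bool:
    """Closed-world-style default: if the target is not positively identified
    as a hostile combatant with high confidence, the answer is 'don't shoot.'"""
    if label is not Classification.HOSTILE_COMBATANT:
        return False                    # unknown or noncombatant: never engage
    return confidence >= threshold      # even a positive label needs high confidence
```

Anything outside the known categories, or below the assumed confidence threshold, falls through to False, which is the “you just don’t shoot” default Arkin describes.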


Michael Cochrane is a World Journalism Institute graduate and a former WORLD correspondent.

