Lethal Battlefield Robots: Sci-Fi or the Future of War?

Warbots don’t exist yet, and the Campaign to Stop Killer Robots hopes to keep it that way.

“We are not talking about things that will look like an army of Terminators,” Steve Goose, a spokesman for the Campaign to Stop Killer Robots, tells me. “Stealth bombers and armored vehicles—not Terminators.” Goose, the director of Human Rights Watch’s arms division, has been working with activists and other experts to demand an international ban on robotic military weapons capable of eliminating targets without the aid of human interaction or intervention, i.e., killer robots.

The bluntly titled campaign, which sounds like something out of a Michael Bay flick or Austin Powers, involves nine organizations, including the International Committee for Robot Arms Control. The campaign is spearheading a preemptive push against efforts to develop and potentially deploy fully autonomous killer robots—a form of high-tech weaponry that doesn't actually exist yet.

“I’m not against autonomous robots—my vacuum is an autonomous robot,” says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield and chair of the International Committee for Robot Arms Control (and a fixture on British television). “We are simply calling for a prohibition on the kill function on such robots. A robot doesn’t have moral agency, and can’t be held accountable for crimes. There’s no way to punish a robot.”

The real-life equivalent of Isaac Asimov’s Three Laws of Robotics (which hold that robots may not harm humans, even when ordered to do so) is, like killer-robot technology itself, a ways off. In April, the United Nations released a report (PDF) that recommended suspending the development of autonomous weapons until their function and application are discussed more thoroughly. Last December, the Department of Defense issued a directive on weapon systems autonomy, calling for the establishment of “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”

Though the Pentagon document stresses the need for human supervision of military robots, critics claim it leaves the door open for the development of autonomous lethal robots that aren’t accountable to meaningful human oversight. “We already don’t understand Microsoft Windows; we’re certainly not going to understand something as complex as a humanlike intelligence,” says Mark Gubrud, a research associate working on robotic and space weapons arms control at Princeton. “Why would we create something like that and then arm it?” Killer robot foes also note that, according to the Pentagon directive, it only takes signatures from two department undersecretaries and the chair of the Joint Chiefs of Staff to green-light the development and use of lethal autonomous technology that targets humans.

“I’m not against autonomous robots—my vacuum is an autonomous robot. We are simply calling for a prohibition on the kill function on such robots.”

Militaries and contractors are already working on combat systems that surpass our current fleet of killer drones by requiring less human control. The US Navy commissioned Northrop Grumman’s X-47B (as yet unarmed) to demonstrate the takeoff and landing capabilities of autonomous unmanned aircraft. Researchers at Carnegie Mellon University have developed a trucklike combat vehicle called the “Crusher,” designed for fire support and medevac, for the Defense Advanced Research Projects Agency. (“This vehicle can go into places where, if you were following in a Humvee, you’d come out with spinal injuries,” said the director of DARPA’s Tactical Technology Office.) The $220 million Taranis warplane, developed by BAE Systems for the United Kingdom, could one day conduct fully autonomous intercontinental missions. And China has been developing its Invisible Sword unmanned stealth aircraft for years.

Yet the technology required to make an advanced fighting robot is still far from complete. “Our vision and sensing systems on robots are not that good,” Sharkey says. “They might be able to tell the difference between a human and a car, but they can be fooled by a statue or a dog dancing on its hind legs, even.” Experts also say that the technology is nowhere near being able to make crucial distinctions between combatants and noncombatants—in other words, whom it’s okay to kill.

This technological uncertainty has caused some experts to think a preemptive injunction on warbot development is misguided. “We are making legal arguments based entirely on speculation,” says Michael Schmitt, chairman of the international law department at the US Naval War College. (Schmitt recently planned a workshop on the legal issues surrounding killer robots, but sequestration has delayed it.) “Do I have my concerns? Of course. But these systems have not been fielded on the battlefield, nor are they in active development in the US.”

Schmitt argues that existing international law would keep the use of robots from spiraling into a sci-fi nightmare. “If such a system cannot discriminate between civilians and enemy combatants in an environment, then it is therefore unlawful,” he explains. “No one is talking about a George Jetson-type scenario. What we are talking about is going to a field commander and saying, ‘Here’s another system, like a drone, or a frigate, or an F-17.’ If I were a commander, I would know what laws there are, and in what situation I can use it.”

Another side of the debate is over whether killer robots would reduce or increase civilian casualties. The Department of Defense has been funding the research of Georgia Tech roboticist Ronald Arkin, who seeks to design a software system, or “ethical governor,” that will ensure robots adhere to international rules of war. He’s argued that machines will be more effective fighters than humans. “My friends who served in Vietnam told me that they fired—when they were in a free-fire zone—at anything that moved,” Arkin recently told the New York Times. “I think we can design intelligent, lethal, autonomous systems that can potentially do better than that.”

“If a robot commits a war crime, who’s responsible for it?”

Creating an artificial intelligence that could act upon just-war principles or the idea that civilian casualties should be minimized would involve elaborate programming. “That’s kind of what we’re worried about,” says George Lucas, Jr., a professor of ethics and public policy at the Naval Postgraduate School who has worked with Arkin. “Those extraordinarily complex algorithmical systems, they may operate fine 99 percent of the time, but every once and a while they go nuts.” If armed robots are eventually deployed, Lucas says they should be limited to simple and very tightly scripted scenarios, like protecting a no-go zone around a vessel at sea. In a counterinsurgency setting, the sheer number of complicated variables—determining who’s an enemy, ally, or noncombatant—might overwhelm a robot’s capabilities.

The Campaign to Stop Killer Robots suspects that any benefits of battlefield robots might come at the expense of civilians. “Reducing military casualties is a desirable goal, but you shouldn’t do that by putting civilians at risk,” says Goose of Human Rights Watch. “Most roboticists we’ve talked to say we’ll never get to a point that machines will adequately make distinctions between targets, or meet requirements of humanitarian law. Sometimes these decisions require emotions and compassion, and having a machine with attributes necessary for this kind of legal reasoning is not at all likely.”

So far, these questions remain largely hypothetical. But the Campaign to Stop Killer Robots wants to answer them before we find ourselves debating the ethics of a lethal technology that can’t be put back in the box. Should warbots become a reality, who will take the fall for an atrocity committed by an autonomous machine during the course of an operation? “If a robot commits a war crime, who’s responsible for it?” Goose asks. “The commander? The manufacturer? If you can’t hold someone responsible for a war crime, then there’s nothing to deter these war crimes.”
