Smart Drones

by Bill Keller, New York Times

If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.

Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum — scientific, military and commercial — is propelling us toward the day when we cede the same lethal authority to software.

Next month, several human rights and arms control organizations are meeting in London to introduce a campaign to ban killer robots before they leap from the drawing boards. Proponents of a ban include many of the same people who succeeded in building a civilized-world consensus against the use of crippling and indiscriminate land mines. This time they are taking on what may be the trickiest problem arms control has ever faced.

The arguments against developing fully autonomous weapons, as they are called, range from moral (“they are evil”) to technical (“they will never be that smart”) to visceral (“they are creepy”).

“This is something people seem to feel at a very gut level is wrong,” says Stephen Goose, director of the arms division of Human Rights Watch, which has assumed a leading role in challenging the dehumanizing of warfare. “The ugh factor comes through really strong.”

Some robotics experts doubt that a computer will ever be able to reliably distinguish between an enemy and an innocent, let alone judge whether a load of explosives is the right, or proportional, response. What if the potential target is already wounded, or trying to surrender? And even if artificial intelligence achieves or surpasses a human level of competence, the critics point out, it will never be able to summon compassion.

Noel Sharkey, a computer scientist at the University of Sheffield and chairman of the International Committee for Robot Arms Control, tells the story of an American patrol in Iraq that came upon a group of insurgents, leveled their rifles, then realized the men were carrying a coffin off to a funeral. Killing mourners could turn a whole village against the United States. The Americans lowered their weapons. Could a robot ever make that kind of situational judgment?

Then there is the matter of accountability. If a robot bombs a school, who gets the blame: the soldier who sent the machine into the field? His commander? The manufacturer? The inventor?

At senior levels of the military there are misgivings about weapons with minds of their own. Last November the Defense Department issued what amounts to a 10-year moratorium on developing them while it discusses the ethical implications and possible safeguards. It’s a squishy directive, likely to be cast aside in a minute if we learn that China has sold autonomous weapons to Iran, but it is reassuring that the military is not roaring down this road without giving it some serious thought.

Compared with earlier heroic efforts to outlaw land mines and curb nuclear proliferation, the campaign against licensed-to-kill robots faces some altogether new obstacles.

For one thing, it’s not at all clear where to draw the line. While the Terminator scenario of cyborg soldiers is decades in the future, if not a complete fantasy, the militaries of the world are already moving along a spectrum of autonomy, increasing, bit by bit, the authority of machines in combat.

The military already lets machines make critical decisions when things are moving too fast for deliberate human intervention. The United States has long deployed warships equipped with the Aegis combat system, whose automated antimissile defenses can identify, track and shoot down incoming threats in seconds. And the role of machinery is expanding toward the point where that final human decision to kill will be largely predetermined by machine-generated intelligence.

“Is it the finger on the trigger that’s the problem?” asks Peter W. Singer, a specialist in the future of war at the Brookings Institution. “Or is it the part that tells me ‘that’s a bad guy’?”

Israel is the first country to make and deploy (and sell, to China, India, South Korea and others) a weapon that can attack pre-emptively without a human in charge. The loitering drone called the Harpy is programmed to recognize and automatically dive-bomb any radar signal that is not in its database of “friendlies.” No reported misfires so far, but suppose an adversary installs its antiaircraft radar on the roof of a hospital?

Professor Sharkey points to the Harpy as a weapon that has already crossed a worrisome threshold and probably can’t be called back. Other systems are close, like the Navy’s X-47B, a pilotless, semi-autonomous, carrier-based combat plane that is in the testing stage. For now it is unarmed, but it is built with two weapons bays. We are already ankle-deep in the future.

For military commanders the appeal of autonomous weapons is almost irresistible and not quite like any previous technological advance. Robots are cheaper than piloted systems, or even drones, which require scores of technicians backing up the remote pilot. These systems do not put troops at risk of death, injury or mental trauma. They don’t get tired or frightened. A weapon that is not tethered to commands from home base can continue to fight after an enemy jams your communications, which is increasingly likely in the age of electromagnetic pulse and cyberattacks.

And no military strategist wants to cede an advantage to a potential adversary. More than 70 countries currently have drones, and some of them are hard at work on the technology to let those drones off their virtual leashes.

“Even if you had a ban, how would you enforce it?” asks Ronald Arkin, a computer scientist and director of the Mobile Robot Laboratory at Georgia Tech. “It’s just software.”

The military — and the merchants of war — are not the only ones invested in this technology. Robotics is a hyperactive scientific frontier that runs from the most sophisticated artificial intelligence labs down to middle-school computer science programs. Worldwide, organized robotics competitions engage a quarter of a million school kids. (My 10-year-old daughter is one of them.) And the science of building killer robots is not so easily separated from the science of making self-driving cars or computers that excel at “Jeopardy.”

Professor Arkin argues that automation can also make war more humane. Robots may lack compassion, but they also lack the emotions that lead to calamitous mistakes, atrocities and genocides: vengefulness, panic, tribal animosity.

“My friends who served in Vietnam told me that they fired — when they were in a free-fire zone — at anything that moved,” he said. “I think we can design intelligent, lethal, autonomous systems that can potentially do better than that.”

Arkin argues that autonomous weapons need to be constrained, but not by abruptly curtailing research. He advocates a moratorium on deployment and a full-blown discussion of ways to keep humans in charge.

Peter Singer of Brookings is also wary of a weapons ban: “I’m supportive of the intent, to draw attention to the slippery slope we’re going down. But we have a history that doesn’t make me all that optimistic.”

Like Singer, I don’t hold out a lot of hope for an enforceable ban on death-dealing robots, but I’d love to be proved wrong. I worry that if war is made to seem impersonal and safe, about as morally consequential as a video game, autonomous weapons will deplete our humanity. As unsettling as the idea of robots’ becoming more like humans is the prospect that, in the process, we become more like robots.
