Don't Kill The Killer-Robots--Just Yet



Aug 6, 2015 3 min read
COMMENTARY BY
James Jay Carafano

Senior Counselor to the President and E.W. Richardson Fellow

James Jay Carafano is a leading expert in national security and foreign policy challenges.

Stephen Hawking, Elon Musk and Steve Wozniak headline a list of science and technology luminaries who have signed a letter demanding a universal ban on autonomous weapons. This high-profile offensive against nations fielding robotic weapons that allegedly decide for themselves whom to attack is the latest in a preemptive effort to block the development of so-called “killer robots.” In May, a high-level confab at the UN assessed “technological developments, the ethical and sociological questions that arise from the development and deployment of autonomous weapons, as well as the adequacy and legal challenges to international law and the possible impact on military operations.” Advocacy groups are mobilizing as well. All this agitation, however, could well be premature, stifling legitimate research and development.

What most of us know about killer robots we learned from Hollywood. Reality is far different. Robots are far from ubiquitous on the battlefield, though they are more common today than they were just a decade ago. And the military is constantly testing the boundaries of what military robotics can do. The US Navy, for example, recently conducted its first test of an undersea drone launched from a submarine. In 2013, the Navy successfully launched a pilotless combat aircraft from a carrier at sea. Yet no military has developed a weapons platform that can operate independently in a combat zone, making its own decisions on when to engage a target without a human “in the loop.”

There are several reasons not to panic over the best policy for governing autonomous weapons. Not least is that autonomous technologies will probably be widely developed and employed for civilian uses, such as driverless cars, long before they are battle-ready. By the time they are ready for use as instruments of war, there will likely be ample precedent and experience to draw on in managing and deploying them appropriately.

Further, trying to ban weapons that don’t even exist is questionable as a practical matter. Steven Groves, a security and legal analyst who studied the proposed ban on autonomous weapons, called it simply “unworkable” because of the difficulty of accurately describing what should be prohibited. In addition, Groves argued, “advanced weapons are fully capable of complying with the law of armed conflict…particularly its core principles of distinction and proportionality.” There is no evidence to suggest that autonomous weapons would be any different.

Additionally, the autonomous-technology debate gets the future of robotics, particularly in the military environment, exactly wrong. The original uses for drones and similar systems were to perform “dull, dirty, and dangerous” tasks, operating in environments far removed from humans. But the most prevalent future uses of robotic technologies will be in environments where humans and robots operate in proximity and in cooperation. In fact, “assistive robotics” is likely to be the most influential development in helping humans think through the role we want robots to play in all kinds of activities, from keeping house to seizing a hilltop.

The problem with the Hawking, Musk and Wozniak approach is that it gets the cost-risk calculation wrong. By cutting off research and development now, we risk losing a great deal of knowledge and understanding of what human-robot teams can achieve. By not constraining the technologies, on the other hand, we incur little risk at present, since Terminators are still just movie characters.

Now is not the time to let our fear of the future get the better of us.

- James Jay Carafano, Ph.D., is vice president of the Kathryn and Shelby Cullom Davis Institute for National Security and Foreign Policy and the E. W. Richardson Fellow at The Heritage Foundation.

This article originally appeared in Forbes. The original piece can be found at http://www.forbes.com/sites/jamescarafano/2015/08/05/dont-kill-the-killer-robots-just-yet/.