Why the Effort to Ban "Killer Robots" in Warfare Is Misguided



Nov 27, 2017
COMMENTARY BY

Thomas Callender
Former Senior Research Fellow for Defense Programs

Thomas Callender focused on defense programs pertaining to naval warfare and other advanced technologies.

Key Takeaways

Activist groups are waging an escalating crusade to build global support for a pre-emptive ban on fully autonomous weapons.

Numerous experts in artificial intelligence agree that in many cases, autonomous systems would be better than humans in defensive roles.

Efforts should instead focus on ensuring that semi-autonomous and autonomous weapons systems are developed and fielded in accordance with the law.

Swarms of palm-sized quadcopters carry out kamikaze attacks, using tiny explosives to kill people selected through facial-recognition software and big-data analysis of social media.

News footage shows attacks on U.S. senators, student protesters, and hundreds of other civilians worldwide.

Is this the trailer for a new science fiction blockbuster movie?

No, this graphic, fictional scenario of a dystopian near-future is the video “Slaughterbots,” produced by the Future of Life Institute, a nonprofit organization fixated on the dangers of artificial intelligence.

This sensationalist short film and accompanying multimedia shock campaign—which even includes a website for the fake defense contractor depicted in the film—are the latest efforts in an escalating crusade to build global support for a pre-emptive ban on fully autonomous weapons.

Leading the charge is a melodramatically named global coalition, the Campaign to Stop Killer Robots, which timed the film’s release to coincide with the first meeting of the United Nations (U.N.) Convention on Certain Conventional Weapons’ Group of Governmental Experts on Lethal Autonomous Weapons Systems. Its meeting concluded earlier this month in Geneva.

What the organizations seeking a ban on lethal autonomous weapons systems are missing is that any such international ban would be symbolic at best. It would only prevent law-abiding nations from developing autonomous technology to defend their citizens, while rogue states and non-state actors would develop and employ “killer robots” regardless.

The main argument presented by these groups is that fully autonomous weapons should never be allowed to select and attack targets without human interaction or intervention.

Additionally, they state—incorrectly—that autonomous weapons will never be able to comply with the law of armed conflict’s principles of “distinction” and “proportionality.”

Distinction is the ability of combatants (human troops or autonomous weapon systems) to distinguish military targets from civilians, as well as from wounded or surrendered combatants. The principle of proportionality prohibits an attack on a military target if the likely “collateral” damage (incidental civilian injuries, loss of life, or damage to civilian objects) would be excessive relative to the military advantage gained.

As it applies to the campaign’s own short film, the “slaughterbots” appeared to display both distinction and proportionality. They attacked only those specific individuals identified as targets by their human controllers and did not kill or injure anyone who did not meet those criteria.

In this case, the real villain of the movie was not the “killer robots,” but the humans who employed them in unlawful and immoral acts of terrorism.

No matter how advanced artificial intelligence or lethal autonomous weapons systems become, at some point in their design or employment, humans will have an impact on their lawful and ethical use.

The international community can work together to develop best practices for the responsible development and use of these systems in accordance with the law of armed conflict. Or it can stand by, fixated on an unachievable ban of lethal autonomous weapons systems, while unethical state and non-state actors repurpose civilian autonomous systems for violent and unlawful use.

Ultimately, the issue comes down to the lawful use of a weapon system, no matter how sophisticated the autonomy.

Autonomy in itself is not bad. Just as with any other technology or tool, it can be used for either lawful or unlawful purposes. One only needs to scan the daily news to see an ever-growing list of examples of people using peaceful technology to kill innocent civilians: jihadists driving trucks and cars into crowds of people or Islamic State militants dropping munitions from commercial quadcopters.

No one would argue that trucks or quadcopters should be banned, because it is readily apparent that a human directed the actions.

At the end of the “Slaughterbots” video, Stuart Russell, an artificial-intelligence researcher at the University of California, Berkeley, concedes that the potential for artificial intelligence “to benefit humanity is enormous, even in defense.”

Numerous experts in artificial intelligence, even those who are pushing for regulation of lethal autonomous weapons systems, agree that in many cases autonomous systems would be better than humans both in defensive roles and in reducing innocent civilian casualties.

For example, these systems can already rapidly analyze vast amounts of information and react to threats faster than humans, and advanced recognition algorithms can identify people even in disguise.

While there are currently no fielded fully autonomous weapons systems, experts agree that the technology to build such a system exists today and is readily available to state and non-state actors.

Even the technologies featured in “Slaughterbots” are available today. Micro-quadcopters that can fly preprogrammed routes are available on Amazon. The iPhone X has facial-recognition software. Algorithms that analyze our social media posts have become ubiquitous.
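To make the point concrete, here is a minimal sketch of commodity face matching, assuming the open-source Python `face_recognition` library; the image file names are hypothetical placeholders, and nothing here is drawn from the film itself.

```python
# Minimal face-matching sketch using the open-source `face_recognition`
# library (https://github.com/ageitgey/face_recognition).
# The image file names below are hypothetical placeholders.
import face_recognition

# Load a reference photo of a known person and a new scene to screen.
known_image = face_recognition.load_image_file("known_person.jpg")
scene_image = face_recognition.load_image_file("crowd_scene.jpg")

# Encode each detected face as a 128-dimensional feature vector.
known_encoding = face_recognition.face_encodings(known_image)[0]
scene_encodings = face_recognition.face_encodings(scene_image)

# Compare every face found in the scene against the reference.
for encoding in scene_encodings:
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Match: the known person appears in the scene image.")
```

A dozen lines of freely available software, running on consumer hardware, is all a basic identification pipeline requires, which is precisely why the capability cannot be un-invented.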

The genie is already out of the bottle and cannot be put back in.

Rather than push for a ban, the U.N. Convention on Certain Conventional Weapons and the global community should instead focus their efforts on ensuring the development and fielding of semi-autonomous and autonomous weapons systems in accordance with the law of armed conflict.

The U.S. is leading the world in this respect. Current Department of Defense policy requires that autonomous and semi-autonomous weapon systems:

  • “Shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
  • Will undergo rigorous and realistic testing and verification that can ensure the systems will operate as intended in different operational environments.
  • Will be employed by commanders in accordance with the law of armed conflict, applicable treaties, and rules of engagement.

This U.S. policy and the existing laws of armed conflict provide the framework for the international development and fielding of lawful and ethical systems, even as autonomous technology rapidly develops.

This piece originally appeared in The Daily Signal.