Autonomous Military Technology: Opportunities and Challenges for Policy and Law

August 6, 2014

By James Jay Carafano

Welcome to the Future

The debate over the use of armed drones continues to dominate discussions about the future of war. Yet a more serious controversy is building behind the current one, like the dark clouds of the next storm gathering on the horizon. Drones conduct operations without the controlling human in proximity to the battlefield. Future autonomous weapons might find and attack targets not only without a human present, but with no human in the decision-making loop at all. At present, cognitive computing is not sufficiently robust to field truly autonomous weapons. In the future, however, militaries will be able to field weapons that can function with less supervision and guidance in an armed conflict, raising a new round of concerns over the legal and ethical implications of “remote” combat.

In national security and defense, autonomous technology has the potential to increase U.S. effectiveness on the battlefield while alleviating the cognitive burdens on leaders and decreasing damage and loss of life. Understanding the current legal framework will help address concerns over potential problems of employing autonomous technologies in combat environments, and it is unlikely that militarized autonomous technology will create insurmountable challenges.

Why Worry?

While attacks by U.S. armed and remotely piloted aircraft in Afghanistan, Pakistan, and Yemen have engendered great international controversy, most concerns are not about drones, per se. In many cases, detractors are challenging the legitimacy of the U.S. and its decision to wage war with drones.[1] These detractors use drones as a scapegoat. In reality, drone warfare does not raise many interesting or novel issues—it is highly doubtful that their employment will create significant problems regarding ethics or the laws that govern armed conflict.[2]

The idea of increasingly autonomous drones being used in military operations is extremely divisive. Many doubt that an autonomous system could be programmed to respond adequately to every scenario that can occur on the battlefield. While it is impossible to know whether a human could program a perfectly ethical autonomous system, researchers are focused on creating autonomous systems that perform more reliably than human beings in combat situations:

We are not concerned with the question of whether it is even technically possible to make a perfectly-ethical robot, i.e., one that makes the “right” decision in every case or even most cases. Following [roboticist Ronald] Arkin, we agree that an ethically-infallible machine ought not to be the goal now (if it is even possible); rather, our goal should be more practical and immediate: to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes.[3]

The principles of just war theory (or the just war tradition) are the basis of ethics and laws that govern armed conflict, and they accommodate autonomous technologies used in drone warfare. Under the laws of war, appropriate use of force is judged not only by assessing the results of force, but also by the proportionality of employing that force.[4] For example, a drone might kill a non-combatant on the battlefield, which would be tragic. However, if such force had been deemed proportional according to the situation and threat, the tragic death would be both legally and morally acceptable. Considering that it is already possible to program the laws of war into autonomous systems, they will pose no new ethical dilemma if sent to the battlefield; they are extensions of human operators and are subject to the same ethical standards.

Understanding the capabilities and limits of autonomous technology dispels fears of Terminator-like robots taking over the world and refocuses the discussion on the opportunities that these systems present, and the responsibility of lawmakers to set clear guidelines for their development.

Framing Operational Activities. Autonomous weapons do not create an immediate crisis for the laws of armed conflict.

First, truly autonomous systems, which operate using their own experience, assessments, and judgments, will not appear on the battlefield suddenly and without warning. Cognitive computing promises a new generation of machines that mimic the functions of human brains; unlike today’s computers, cognitive computers will operate autonomously, using learning and reasoning to derive new knowledge. The Department of Defense has already explored the use of cognitive computing for autopilots and has tested self-piloting aircraft that adapt to changing conditions.[5] The United States Air Force is exploring opportunities for autonomous unmanned combat aerial vehicles (UCAVs) with machines such as the FQ-X, which is designed to seek and destroy enemy aircraft with extreme stealth. In the May/June 2014 issue of Air & Space Power Journal, U.S. Air Force Captain Michael Byrnes addressed the benefits of self-piloting vehicles.[6] He argues that a tactically autonomous UCAV, built around John Boyd’s “observe, orient, decide, act” (OODA) loop, brings new efficiency and lethality to air combat:

A tactically autonomous aircraft … need not seek science-fiction-like self-awareness; within the scope of air-to-air combat, it is an airborne computer that executes the underlying mathematical truths of what human combat pilots do in the cockpit, doing so more quickly and with more precision.[7]

In air-to-air combat, an unmanned aircraft enjoys greater tactical leverage, since it can fly maneuvers that a human pilot could not safely endure. An autonomous aircraft can also instantly compute angles and distances, decreasing the likelihood of damage to the aircraft. Autonomous systems have the advantage in situations like this, where human life is valuable and human computing abilities are limited. At present, however, these technologies are far from mature enough to deal with the complex, chaotic, and demanding conditions of a real-world battle space.
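To make Boyd’s OODA cycle concrete, the sketch below runs one pass of an observe, orient, decide, act loop for a notional air-to-air engagement. It is a minimal illustration only; the data structures, thresholds, and target-selection rule are assumptions invented for this report, not drawn from Byrnes’s article or any fielded system.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """A sensed aircraft track (illustrative fields only)."""
    x: float        # east position, meters
    y: float        # north position, meters
    heading: float  # radians

def observe(sensor_feed):
    """Observe: collect the latest sensor tracks."""
    return list(sensor_feed)

def orient(ownship, tracks):
    """Orient: compute range and bearing from ownship to each track."""
    oriented = []
    for t in tracks:
        dx, dy = t.x - ownship.x, t.y - ownship.y
        oriented.append((t, math.hypot(dx, dy), math.atan2(dy, dx)))
    return oriented

def decide(oriented, max_range):
    """Decide: choose the nearest track inside engagement range, if any."""
    candidates = [o for o in oriented if o[1] <= max_range]
    return min(candidates, key=lambda o: o[1]) if candidates else None

def act(decision):
    """Act: report the chosen maneuver; a real system would fly it."""
    if decision is None:
        print("No valid target; continue patrol.")
    else:
        _, rng, bearing = decision
        print(f"Turn to bearing {math.degrees(bearing):.0f} deg, "
              f"target range {rng:.0f} m.")

# One pass through the loop; an autopilot would repeat it many times per second.
ownship = Track(0.0, 0.0, 0.0)
feed = [Track(8000.0, 3000.0, 3.1), Track(20000.0, -5000.0, 0.5)]
act(decide(orient(ownship, observe(feed)), max_range=15000.0))
```

Even this toy version shows where the speed advantage comes from: each step is arithmetic a computer completes in microseconds, while a human pilot works through the same cycle in seconds.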

Current autonomous systems cannot be considered truly independent. Daniel Howlader addresses this in “Moral and Ethical Questions for Robotics Public Policy”:

A robot’s software (no matter how advanced or developed) must start as a code, and that code will invariably be programmed by a human agent.… A fully moral agent with responsibilities for its own actions would, of course, also have the ability to act immorally—at least in order to be considered fully autonomous.[8]

Autonomous systems are only as independent as their programming permits, and it is still the responsibility of human operators to determine their capabilities in a given situation. Therefore, until organic DNA-based computers leap off the pages of comic books and into battle spaces, human operators will remain in control of autonomous applications.

Right now, autonomous technology is essentially an extension of human power in warfare. Like any other tool of war, it is subject to the laws of war and must adhere to them. Ronald Arkin, professor at the Georgia Institute of Technology, argues that there is no intrinsic reason to bar autonomous systems from the battlefield:

We need to put technology to use to address the issues of reducing non-combatant casualties in the battle-space. The judicious application of ethical robotic systems can indeed accomplish that.… [I]t is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of.[9]

Professor Arkin asserts that the laws of war can be programmed into an autonomous system and observed more effectively than by a human operator.[10] For example, an autonomous system could be programmed to minimize non-combatant casualties in an engagement and could execute this order more effectively, thanks to faster data analysis and maneuvering capabilities. If this technology were to mature, it would be unethical not to use it in engagements, since it reduces unnecessary loss of life.
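A minimal sketch of what “programming the laws of war” might look like in practice appears below: two hard pre-fire checks, one for distinction (engage only verified combatants) and one for proportionality (refuse engagements whose expected civilian harm is excessive relative to military value). The field names and thresholds are hypothetical placeholders, not Arkin’s actual architecture or any military standard.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool           # distinction: verified hostile status
    confidence: float            # classifier confidence, 0.0 to 1.0
    expected_civilian_harm: int  # estimated non-combatants at risk
    military_value: int          # relative value of the objective, 1 to 10

def engagement_permitted(t: Target,
                         min_confidence: float = 0.95,
                         harm_per_value: float = 0.5) -> bool:
    """Hard pre-fire gate encoding two law-of-war constraints.

    Distinction: refuse unless the target is classified as a combatant
    with high confidence. Proportionality: refuse if expected civilian
    harm is excessive relative to the military value of the objective.
    The thresholds are illustrative placeholders, not doctrine.
    """
    if not t.is_combatant or t.confidence < min_confidence:
        return False  # fails distinction
    if t.expected_civilian_harm > harm_per_value * t.military_value:
        return False  # fails proportionality
    return True

# An uncertain classification or a high collateral estimate blocks engagement.
print(engagement_permitted(Target(True, 0.99, 0, 7)))  # True
print(engagement_permitted(Target(True, 0.80, 0, 7)))  # False: low confidence
print(engagement_permitted(Target(True, 0.99, 9, 3)))  # False: disproportionate
```

The design point is that such rules sit outside the targeting logic as non-overridable gates: the system can be as clever as it likes about finding targets, but a failed check vetoes the engagement unconditionally.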

Second, there is no great demand for free-thinking robot attack weapons. Such weapons serve a purpose only where taking human decision makers out of the loop offers a qualitative benefit that outweighs the risks of employing them. Meanwhile, simpler autonomous technology already extends human capabilities by providing indefatigable assistance to operators without degradation in efficiency. These systems will reduce the high cognitive load currently placed on human supervisors and allow them to delegate tasks that are more effectively carried out by computer.[11]

The United States military is pursuing autonomous capabilities, with plans to develop applications in everything from medicine to combat. On March 7, 2008, the Department of the Army’s Training and Doctrine Command (TRADOC) addressed the future use of autonomous robots in combat casualty care in a report titled “Force Operating Capabilities.” The report states:

Future soldiers will utilize unmanned vehicles, robotics, and advanced standoff equipment to recover wounded soldiers from high-risk areas, with minimal exposure. These systems will facilitate immediate evacuation and transfer under even the most hazardous combat and environmental conditions, [and] provide en route care.[12]

Current Defense Department research in medical technology is focused on telemedicine and surgery, capabilities that would still be manually controlled by a remote human operator.[13] But autonomous systems that navigate battle spaces, identify and retrieve wounded soldiers, and provide life-saving medical care are the next step in combat medical technology.

Autonomous unmanned systems are not yet common in military operations. The technology has not advanced far enough to make them an efficient or affordable option, and the infrastructure needed to support them is only just being developed. However, the U.S. military already employs “near-autonomous” technologies. In July 2013, the semi-autonomous X-47B Unmanned Combat Air System became the first unmanned aircraft to make an arrested landing aboard an aircraft carrier at sea,[14] demonstrating its potential to be fully integrated into operational platforms. The U.S. Navy intends to develop the X-47B to take off, land, and search for targets without manual human guidance; the X-47B will serve as the test blueprint for a class of carrier-borne stealth drones that can perform long-range surveillance and strike missions under the control of a single operator.[15]

While the U.S. military embraces the prospect of fully autonomous technology, it emphasizes that human operators will remain responsible for the management and oversight of these systems. In November 2012, the Department of Defense released a policy directive establishing standards for the development and use of autonomous technologies; its principles are echoed in the foundational Unmanned Systems Integrated Roadmap FY 2013–2038.[16] The directive states that autonomous systems must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.[17] Furthermore:

Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.[18]

Autonomous systems will be designed to keep human operators “on the loop” rather than “in the loop.”[19] Today, human beings remotely operate unmanned vehicles and directly control their actions; moving forward, humans will transition to a supervisory role, monitoring autonomous technologies in the field without directing their every action. At present, however, there are few combat activities for which fully autonomous weapons are justified or required.
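The directive’s lost-communications requirement quoted above can be read as a simple invariant: a system may continue against targets a human previously authorized, but it must never add targets on its own once the command link is degraded or lost. The toy model below sketches that invariant; the class and method names are illustrative assumptions, not the directive’s implementation.

```python
from enum import Enum, auto

class CommsState(Enum):
    NOMINAL = auto()
    DEGRADED = auto()
    LOST = auto()

class SemiAutonomousWeapon:
    """Toy model of the lost-communications constraint."""

    def __init__(self):
        self.authorized_targets = set()  # IDs pre-selected by a human operator
        self.comms = CommsState.NOMINAL

    def authorize(self, target_id: str):
        """Only a human operator calls this, over a live command link."""
        if self.comms is CommsState.NOMINAL:
            self.authorized_targets.add(target_id)

    def may_engage(self, target_id: str) -> bool:
        """Engagement is restricted to previously authorized targets.

        On degraded or lost comms, the system may continue against
        targets a human already selected, but it can never add new
        ones, because authorize() requires a nominal link.
        """
        return target_id in self.authorized_targets

weapon = SemiAutonomousWeapon()
weapon.authorize("track-042")
weapon.comms = CommsState.LOST
print(weapon.may_engage("track-042"))  # True: previously human-selected
weapon.authorize("track-077")          # ignored: no command link
print(weapon.may_engage("track-077"))  # False: never autonomously added
```

This is the “on the loop” posture in miniature: the machine retains tactical freedom within a set of human-drawn boundaries, and losing contact with the supervisor shrinks, rather than expands, what it is permitted to do.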

Third, in all likelihood, autonomous machines will dominate everyday life long before they become ubiquitous on battlefields. There will likely be strong market demand for cognitive systems when they are ready for prime time. Automobile manufacturers are already experimenting with increasingly autonomous features for cars.[20]

By the time autonomous weapons are ready for the battlefield, the world will likely already have a great deal of experience living with free-thinking machines, and the rules of what is and is not appropriate will be fairly well defined for future militaries. In the United States, the Defense Department has already established a clear policy: It reserves the use of lethal force for human operators and requires safety mechanisms built into autonomous systems to keep them from independently selecting and targeting humans. In warfare, all autonomous systems must observe the laws of war and will be prohibited from uses that could increase violent conflict or civilian casualties.

At the international level, autonomous technology in military operations has been addressed by the United Nations. In May 2014, experts convened in Geneva under the Convention on Certain Conventional Weapons (CCW) to address the issue of lethal autonomous systems, or “killer robots.”[21] Supporters of developing autonomous technology argued that current legal and ethical standards adequately address possible abuses, while detractors warned that increased lethal autonomy would encourage conflict by lowering the costs of war and increasing opportunities for accidental engagement.[22] The outcomes of these discussions will be submitted to the formal CCW conference in November 2014, where possible next steps on autonomous weapons will be discussed.

Some legal experts have suggested treating actively deployed autonomous systems as agents under the law.[23] In this capacity, autonomous systems would be considered enforcers acting on behalf of an individual or entity; in the event of system abuse or malfunction, the authority that deployed the weapon would be accountable for damage, whether for actively granting permission for undue use of lethal force, neglecting its supervisory role, or failing to take due precautions. For autonomous systems with faulty code or security features, established product liability law would apply to most foreseeable issues. These seem like straightforward ways to absorb autonomous technology into the established laws governing armed conflict. They may be challenging to implement, however: Attributing fault and determining responsible parties could prove difficult in practice.

Fourth, armed forces will probably never face a single yes-or-no decision on autonomous systems. Rather than disruptive weapons that appear in the ranks one day, like flintlocks on a battlefield full of spears, autonomous features will likely be integrated into systems over time. The capability to independently identify, target, and attack specific humans will likely come only at the very end of a long series of innovation and adoption in autonomous technologies.

Going Forward

Autonomous technology is a new area of capability for the military. With proper research and development, it will lessen the cognitive burden on human operators and perform certain functions with greater speed, reliability, and precision. Congress can encourage this process through a few select initiatives. Congress should:

  • Insist that military developments be based on suitable, feasible, and morally acceptable mission requirements and a realistic appreciation of the levels of technology available.
  • Fund the research and development of autonomous technology. The potential capabilities that autonomous technology offers have not been fully explored, and it would be detrimental to curtail research at this time. Specific focus should be given to fully automating and integrating current operational assets, such as the X-47B, which have proved to be of tactical and strategic value.
  • Review the legislative framework for addressing the current and potential problems of autonomous technology. As autonomous systems become increasingly common, their ambiguous legal status may become a more prominent issue. A sufficient legislative framework should include standards for manufacturing and employing these systems, and identify the legally responsible parties in the event of system error or abuse. Investigations of a number of these issues have already been published.[24] While autonomous technology is new, many of the laws that will govern its use and abuse are well established.

Autonomous technology is a promising area of development, and has the potential to greatly increase U.S. military capacities. Congress should encourage research for autonomous technologies by providing adequate funding, creating clear policies for autonomous capabilities and use, and supporting a flexible but sufficiently rigorous legal framework to govern employment.

—James Jay Carafano, PhD, is Vice President for the Kathryn and Shelby Cullom Davis Institute for National Security and Foreign Policy, and the E. W. Richardson Fellow, at The Heritage Foundation. Irene Dana, a research assistant in the Davis Institute, contributed to this Backgrounder.

[1] James Jay Carafano, “Say What You Want About Drones—They’re Perfectly Legal,” The Atlantic, August 26, 2013, http://www.theatlantic.com/international/archive/2013/08/say-what-you-want-about-drones-theyre-perfectly-legal/278740/ (accessed June 27, 2014).

[2] Steven Groves, “Drone Strikes: The Legality of U.S. Targeting Terrorists Abroad,” Heritage Foundation Backgrounder No. 2788, April 10, 2013, http://www.heritage.org/research/reports/2013/04/drone-strikes-the-legality-of-us-targeting-terrorists-abroad.

[3] Patrick Lin, George Bekey, and Keith Abney, “Autonomous Military Robotics: Risk, Ethics, and Design,” California Polytechnic State University, December 20, 2008, http://ethics.calpoly.edu/ONR_report.pdf (accessed June 16, 2014).

[4] For a discussion of potential civil and criminal legal issues posed by the deployment of autonomous military technologies, see Benjamin Kastan, “Autonomous Weapons Systems: A Coming Legal ‘Singularity’?” Journal of Law, Technology & Policy (2013), pp. 54–62, http://illinoisjltp.com/journal/wp-content/uploads/2013/05/Kastan.pdf (accessed July 8, 2014). Liability could be established for violation of an accepted standard of care for employment of autonomous military technologies. Ibid., pp. 66–70. The Federal Tort Claims Act, Foreign Claims Act, Military Claims Act, and Alien Tort Statute, among other United States laws, may govern the question of liability for harm due to such technologies. Ibid., pp. 70–76. Finally, civilian and military personnel who design, produce, or deploy autonomous weapons technologies may face potential criminal liability for involuntary manslaughter or negligence, in certain situations. Ibid., pp. 78–81.

[5] Defense Science Board, “Task Force Report: The Role of Autonomy in DoD Systems,” July 2012, http://www.acq.osd.mil/dsb/reports/AutonomyReport.pdf (accessed June 17, 2014).

[6] Capt. Michael W. Byrnes, USAF, “Nightfall: Machine Autonomy in Air-to-Air Combat,” Air & Space Power Journal (May/June 2014), http://www.airpower.maxwell.af.mil/digital/pdf/articles/2014-May-Jun/F-Byrnes.pdf (accessed July 10, 2014).

[7] Ibid.

[8] Daniel Howlader, “Moral and Ethical Questions for Robotics Public Policy,” Synesis: A Journal of Science, Technology, Ethics, and Policy (2011), http://www.synesisjournal.com/vol2_g/2011_2_G1-6_Howlader.pdf (accessed March 22, 2014).

[9] Jason Mick, “Military Contractors Take a Step Forward Towards Autonomous Killer Robot Swarm,” Daily Tech, March 5, 2013, http://www.dailytech.com/Military+Contractors+Take+a+Step+Forward+Towards+Autonomous+Killer+Robot+Swarm/article30047.htm (accessed June 18, 2014).

[10] Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Georgia Institute of Technology, undated, http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf (accessed June 8, 2014).

[11] Defense Science Board, “Task Force Report: The Role of Autonomy in DoD Systems.”

[12] U.S. Army Training and Doctrine Command (TRADOC) Pamphlet No. 525-66, “Force Operating Capabilities,” March 7, 2008, http://www.tradoc.army.mil/tpubs/pams/p525-66.pdf (accessed June 16, 2014).

[13] The Telemedicine & Advanced Technology Research Center, 2009 Annual Report, http://www.tatrc.org/docs/TATRC_report_2009.pdf (accessed June 20, 2014).

[14] Mass Communication Specialist 3rd Class Brandon Vinson, “X-47B Makes First Arrested Landing at Sea,” United States Navy, July 10, 2013, http://www.navy.mil/submit/display.asp?story_id=75298 (accessed June 6, 2014).

[15] John Reed, “Semi-Autonomous Killer Drones from Around the Globe,” Foreign Policy, May 29, 2013, http://complex.foreignpolicy.com/posts/2013/05/29/killer_drones_from_around_the_globe (accessed July 11, 2014).

[16] Office of the Undersecretary of Defense for Acquisition, Technology & Logistics, Unmanned Systems Integrated Roadmap: FY2013-2038, Section 3.3.2, http://www.defense.gov/pubs/DOD-USRM-2013.pdf (accessed June 30, 2014).

[17] Department of Defense, “Autonomy in Weapon Systems,” Directive No. 3000.09, November 21, 2012, http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf (accessed June 7, 2014).

[18] Ibid.

[19] Dan De Luce, “The Next Wave in Robotic War: Autonomous Drones,” Cosmos, September 28, 2012, http://cosmosmagazine.com/news/the-next-wave-robotic-war-autonomous-drones/ (accessed May 27, 2014).

[20] Neil Winton, “Autonomous Cars Like The Google May Be Viable in Less than 10 Years,” Forbes, June 6, 2014, http://www.forbes.com/sites/neilwinton/2014/06/06/autonomous-cars-like-the-google-may-be-viable-in-less-than-10-years/ (accessed June 7, 2014).

[21] U.N. News Centre, “UN Meeting Targets ‘Killer Robots,’” May 14, 2014, http://www.un.org/apps/news/story.asp?NewsID=47794#.U6rmgvldV8E (accessed July 10, 2014).

[22] “‘Killer Robots’ to Be Debated at UN,” BBC, May 9, 2014, http://www.bbc.com/news/technology-27343076 (accessed July 10, 2014).

[23] Ibid.

[24] Lin, Bekey, and Abney, “Autonomous Military Robotics: Risk, Ethics, and Design.”
