Regulating Autonomous Weapons on the Battlefield

A recent meeting in Geneva on the implementation of the Convention on Certain Conventional Weapons focused on regulating autonomous weapons. Autonomous weapons are systems that decide to deploy lethal force without direct human control. Imagine, for instance, drones guided by sensors and preprogrammed algorithms that would choose for themselves the time and place to release their deadly missiles.

There was substantial sentiment at the meeting for banning such weapons, but a ban would prove an enormous mistake. It would harm the interests of the United States and make for a less peaceful world.

The first problem with such a ban is that compliance would be difficult, if not impossible, to verify. First, autonomous systems depend on AI programs, which, unlike nuclear weapons, are very easy to hide. Second, autonomy is a matter of degree: limited human oversight would be hard to distinguish from full autonomy. The lack of verifiability would empower rogue nations in the arms race that has characterized military competition from the beginning of civilization. In the world of tomorrow, that arms race will be paced by robotics and machine intelligence.

Second, because of its technological superiority, the West in general, and the United States in particular, has an advantage in developing these weapons. Robotic weapons, even if not yet autonomous, have been among the most successful in the fight against Al-Qaeda and other groups waging asymmetrical warfare against the United States. The Predator, for instance, has successfully targeted terrorists throughout Afghanistan and Pakistan, and more technologically advanced versions are being rapidly developed. This advantage may grow as weapons become increasingly infused with the latest developments in artificial intelligence.

If the United States is the best enforcer of rules of conduct that make for a peaceful and prosperous world, this development must also be counted as an advantage. And there are reasons other than national pride for this belief. The United States is both a flourishing commercial republic benefitting from global peace and a hegemon uniquely capable of supplying that public good. Because it gains a greater share than other nations of the prosperity afforded by peace, it has incentives to shoulder the burdens of maintaining world security. Thus, we should be very hesitant to curtail the military reach conferred on the United States by applying advances in AI to the battlefield.

The better course would be to apply the laws of war to autonomous weapons. They should be as liable as humans for indiscriminate killing. In the long run, since they are driven by sensors and dispassionate software, autonomous weapons should be able to discriminate better than weapons under human control. They could then be held to a higher standard for the avoidance of civilian deaths. Because they are robots, attacks on them should elicit a less substantial response than if the attack were on humans, thus often decreasing the levels of force deemed proportionate on the battlefield. This course may well result in better outcomes for civilians as well as for civilization.

The lesson here is a more general one. Technological advances, even in war, have benefits as well as costs. Complete bans on such technologies are often based on fear of the unknown and will rarely be the right way to balance their costs and benefits.

About the Author

John O. McGinnis is the George C. Dix Professor in Constitutional Law at Northwestern University. His recent book, Accelerating Democracy, was published by Princeton University Press in 2012. McGinnis is also the co-author, with Mike Rappaport, of Originalism and the Good Constitution, published by Harvard University Press in 2013. He is a graduate of Harvard College, Balliol College, Oxford, and Harvard Law School. He has published in leading law reviews, including the Harvard, Chicago, and Stanford Law Reviews and the Yale Law Journal, and in journals of opinion, including National Affairs and National Review.

Comments

  1. gabe says

    My Dear Professor:

    You obviously are not watching this season’s showing of “24”!!!!

    “They could then be held to a higher standard for the avoidance of civilian deaths. Because they are robots, attacks on them should elicit a less substantial response than if the attack were on humans, thus often decreasing the levels of force deemed proportionate on the battlefield. This course may well result in better outcomes for civilians as well as for civilization.”

    What are you arguing here? A machine will be held accountable? and,
    we can then “punish” it by attacking it – heck, this option is already open to all with the technology, and,
    this will lead to better outcomes? I thought earlier you asserted that the use of drones was going to lead to better outcomes – so now we are saying that destroying drones will lead to better outcomes – which shall it be?

  2. R Richard Schweitzer says

    Western Civilization has just completed a century of conflict that is only now slowly ebbing to more limited disturbances on its periphery.

    While the movements of peoples and the blending and ending of cultures into the Western Experience may be indicative of the formation of some successive form of civilization, there is also the possibility of renewed extensive conflicts during that blending process arising from conflicting human motivations.

    The scholars who have studied and reported on the effects of the changes in the technologies of warfare and the applications of violence in the development of civilizations and their destruction have never minimized the importance of human motivation.

    The concept of weaponry that can filter out human motivation will prove to be invalid. So-called artificial intelligence is an attempt to mimic human mental procedures without the moderating effects of motivations. In these considerations of new technologies of weaponry, it will not be the details of technology – the devil will be in the applications.
