
Lethal Autonomous Weapons

The Oxford Series in Ethics, National Security, and the Rule of Law

Series Editors

About the Series

The Oxford Series in Ethics, National Security, and the Rule of Law is an interdisciplinary book series designed to address abiding questions at the intersection of national security, moral and political philosophy, and practical ethics. It seeks to illuminate both ethical and legal dilemmas that arise in democratic nations as they grapple with national security imperatives. The synergy the series creates between academic researchers and policy practitioners seeks to protect and augment the rule of law in the context of contemporary armed conflict and national security.

The book series grew out of the work of the Center for Ethics and the Rule of Law (CERL) at the University of Pennsylvania. CERL is a nonpartisan interdisciplinary institute dedicated to the preservation and promotion of the rule of law in twenty-first century warfare and national security. The only Center of its kind housed within a law school, CERL draws from the study of law, philosophy, and ethics to answer the difficult questions that arise in times of war and contemporary transnational conflicts.

Lethal Autonomous Weapons

Re-Examining the Law and Ethics of Robotic Warfare

Edited by Jai Galliott, Duncan MacIntosh & Jens David Ohlin

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2021

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data

Names: Galliott, Jai, author. | MacIntosh, Duncan (Writer on autonomous weapons), author. | Ohlin, Jens David, author.

Title: Lethal autonomous weapons : re-examining the law and ethics of robotic warfare / Jai Galliott, Duncan MacIntosh & Jens David Ohlin.

Description: New York, NY : Oxford University Press, [2021]

Identifiers: LCCN 2020032678 (print) | LCCN 2020032679 (ebook) | ISBN 9780197546048 (hardback) | ISBN 9780197546062 (epub) | ISBN 9780197546055 (UPDF) | ISBN 9780197546079 (Digital-Online)

Subjects: LCSH: Military weapons (International law) | Military weapons—Law and legislation— United States. | Weapons systems—Automation. | Autonomous robots—Law and legislation. | Uninhabited combat aerial vehicles (International law) | Autonomous robots—Moral and ethical aspects. | Drone aircraft—Moral and ethical aspects. | Humanitarian law.

Classification: LCC KZ5624 .G35 2020 (print) | LCC KZ5624 (ebook) | DDC 172/.42—dc23

LC record available at https://lccn.loc.gov/2020032678

LC ebook record available at https://lccn.loc.gov/2020032679

DOI: 10.1093/oso/9780197546048.001.0001

9 8 7 6 5 4 3 2 1

Printed by Integrated Books International, United States of America

Note to Readers

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is based upon sources believed to be accurate and reliable and is intended to be current as of the time it was written. It is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If legal advice or other expert assistance is required, the services of a competent professional person should be sought. Also, to confirm that the information has not been affected or changed by recent developments, traditional legal research techniques should be used, including checking primary sources where appropriate.

(Based on the Declaration of Principles jointly adopted by a Committee of the American Bar Association and a Committee of Publishers and Associations.)

You may order this or any other Oxford University Press publication by visiting the Oxford University Press website at www.oup.com.

LIST OF CONTRIBUTORS

Bianca Baggiarini is a Political Sociologist and Senior Lecturer at UNSW, Canberra. She obtained her PhD (2018) in sociology from York University in Toronto. Her research is broadly concerned with the sociopolitical effects of autonomy in the military. To that end, she has previously examined the figure of the citizen-soldier considering high-technology warfare, security privatization, neoliberal governmentality, and theories of military sacrifice. Her current work is focused on military attitudes toward autonomous systems.

Deane-Peter Baker is an Associate Professor of International and Political Studies and Co-Convener (with Prof. David Kilcullen) of the Future Operations Research Group in the School of Humanities and Social Sciences at UNSW Canberra. A specialist in both the ethics of armed conflict and military strategy, Dr. Baker conducts research that straddles philosophy, ethics, and security studies.

Dr. Baker previously held positions as an Assistant Professor of Ethics in the Department of Leadership, Ethics and Law at the United States Naval Academy and as an Associate Professor of Ethics at the University of KwaZulu-Natal in South Africa. He has also held visiting research fellow positions at the Triangle Institute for Security Studies at Duke University, and the US Army War College’s Strategic Studies Institute. From 2017 to 2018, Dr. Baker served as a panelist on the International Panel on the Regulation of Autonomous Weapons.

Steven J. Barela is an Assistant Professor at the Global Studies Institute and a member of the law faculty at the University of Geneva. He has taught at the Korbel School of International Studies at Denver University and lectured for l'Université Laval (Québec), Sciences Po Bordeaux, UCLA, and the Geneva Academy of International Humanitarian Law and Human Rights. In addition to his PhD in law from the University of Geneva, Dr. Barela holds three master's degrees: MA degrees in Latin American Studies and International Studies, along with an LLM in international humanitarian law and human rights. Dr. Barela has published in respected journals. Finally, Dr. Barela is a series editor for "Emerging Technologies, Ethics and International Affairs" at Ashgate Publishing and published an edited volume on armed drones in 2015.

M.L. (Missy) Cummings received her BS in Mathematics from the US Naval Academy in 1988, her MS in Space Systems Engineering from the Naval Postgraduate School in 1994, and her PhD in Systems Engineering from the University of Virginia in 2004. A naval pilot from 1988 to 1999, she was one of the US Navy's first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department, and the Director of the Humans and Autonomy Laboratory. She is an AIAA Fellow and a member of the Defense Innovation Board and the Veoneer, Inc. Board of Directors.

S. Kate Devitt is the Deputy Chief Scientist of the Trusted Autonomous Systems Defence Cooperative Research Centre and a Social and Ethical Robotics Researcher at the Defence Science and Technology Group (the primary research organization for the Australian Department of Defence). Dr. Devitt earned her PhD, entitled "Homeostatic Epistemology: Reliability, Coherence and Coordination in a Bayesian Virtue Epistemology," from Rutgers University in 2013. Dr. Devitt has published on the ethical implications of robotics and biosurveillance, robotics in agriculture, epistemology, and the trustworthiness of autonomous systems.

Nicholas G. Evans is an Assistant Professor of Philosophy at the University of Massachusetts Lowell, where he conducts research on national security and emerging technologies. His recent work on assessing the risks and benefits of dual-use research of concern has been widely published. In 2017, Dr. Evans was awarded funding from the National Science Foundation to examine the ethics of autonomous vehicles.

Prior to his appointment at the University of Massachusetts, Dr. Evans completed postdoctoral work in medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania. Dr. Evans has conducted research at the Monash Bioethics Centre, The Centre for Applied Philosophy and Public Ethics, Australian Defence Force Academy, and the University of Exeter. In 2013, he served as a policy officer with the Australian Department of Health and Australian Therapeutic Goods Administration.

Jai Galliott is the Director of the Values in Defence & Security Technology Group at UNSW @ The Australian Defence Force Academy; Non-Residential Fellow at the Modern War Institute at the United States Military Academy, West Point; and Visiting Fellow in The Centre for Technology and Global Affairs at the University of Oxford. Dr. Galliott has developed a reputation as one of the foremost experts on the socio-ethical implications of artificial intelligence (AI) and is regarded as an internationally respected scholar on the ethical, legal, and strategic issues associated with the employment of emerging technologies, including cyber systems, autonomous vehicles, and soldier augmentation. His publications include Big Data & Democracy (Edinburgh University Press, 2020); Ethics and the Future of Spying: Technology, National Security and Intelligence Collection (Routledge, 2016); Military Robots: Mapping the Moral Landscape (Ashgate, 2015); Super Soldiers: The Ethical, Legal and Social Implications (Ashgate, 2015); and Commercial Space Exploration: Ethics, Policy and Governance (Ashgate, 2015). He acknowledges the support of the Australian Government through the Trusted Autonomous Systems Defence Cooperative Research Centre and the United States Department of Defense.

Natalia Jevglevskaja is a Research Fellow at the University of New South Wales at the Australian Defence Force Academy in Canberra. As part of the collaborative research group “Values in Defence & Security Technology” (VDST) based at the School of Engineering & Information Technology (SEIT), she is looking at how social value systems interact and influence research, design, and development of emerging military and security technology. Natalia’s earlier academic appointments include Teaching Fellow at Melbourne Law School, Research Assistant to the editorial work of the Max Planck Commentaries on WTO Law, and Junior Legal Editor of the Max Planck Encyclopedia of Public International Law.

Armin Krishnan is an Associate Professor and the Director of Security Studies at East Carolina University. He holds an MA degree in Political Science, Sociology, and Philosophy from the University of Munich, an MS in Intelligence and International Relations from the University of Salford, and a PhD in the field of Security Studies, also from the University of Salford. He was previously a Visiting Assistant Professor at the University of Texas at El Paso's Intelligence and National Security Studies program. Krishnan is the author of five books on new developments in warfare, including Killer Robots: The Legality and Ethicality of Autonomous Weapons (Routledge, 2009).

Alex Leveringhaus is a Lecturer in Political Theory in the Politics Department at the University of Surrey, United Kingdom, where he co-directs the Centre for International Intervention (cii). Prior to coming to Surrey, Alex held postdoctoral positions at Goethe University Frankfurt; the Oxford Institute for Ethics, Law and Armed Conflict; and the University of Manchester. Alex’s research is in contemporary political theory and focuses on ethical issues in the area of armed conflict, with special reference to emerging combat technologies as well as the ethics of intervention. His book  Ethics and Autonomous Weapons was published in 2016 (Palgrave Pivot).

Rain Liivoja is an Associate Professor at the University of Queensland, where he leads the Law and the Future of War Research Group. Dr. Liivoja’s current research focuses on legal challenges associated with military applications of science and technology. His broader research and teaching interests include the law of armed conflict, human rights law and the law of treaties, as well as international and comparative criminal law. Before joining the University of Queensland, Dr. Liivoja held academic appointments at the Universities of Melbourne, Helsinki, and Tartu. He has served on Estonian delegations to disarmament and arms control meetings.

Duncan MacIntosh is a Professor of Philosophy at Dalhousie University. Professor MacIntosh works in metaethics, decision and action theory, metaphysics, philosophy of language, epistemology, and philosophy of science. He has written on desire-based theories of rationality, the relationship between rationality and time, the reducibility of morality to rationality, modeling morality and rationality with the tools of action and game theory, scientific realism, and a number of other topics.

He has published research on autonomous weapon systems, morality, and the rule of law in leading journals, including Temple International and Comparative Law Journal, The Journal of Philosophy, and Ethics.

Bertram F. Malle is a Professor of Cognitive, Linguistic, and Psychological Sciences and Co-Director of the Humanity-Centered Robotics Initiative at Brown University. Trained in psychology, philosophy, and linguistics at the University of Graz, Austria, he received his PhD in psychology from Stanford University in 1995. He received the Society of Experimental Social Psychology Outstanding Dissertation award in 1995, a National Science Foundation (NSF) CAREER award in 1997, and is past president of the Society of Philosophy and Psychology. Dr. Malle's research focuses on social cognition, moral psychology, and human-robot interaction. He has published his work in 150 scientific publications and several books. His lab page is at http://research.clps.brown.edu/SocCogSci.

Tim McFarland is a Research Fellow in the Values in Defence & Security Technology group within the School of Engineering and Information Technology of the University of New South Wales at the Australian Defence Force Academy. Prior to earning his PhD, Dr. McFarland also earned a Bachelor of Mechanical Engineering (Honors) and a Bachelor of Economics (Monash University). Following the completion of a Juris Doctor degree and graduate diplomas of Legal Practice and International Law, Dr. McFarland was admitted as a solicitor in the state of Victoria in 2012.

Dr. McFarland’s current work is on the social, legal, and ethical questions arising from the emergence of new military and security technologies, and their implications for the design and use of new military systems. He is also a member of the Program on the Regulation of Emerging Military Technologies (PREMT) and the Asia Pacific Centre for Military Law (APCML).

Jens David Ohlin is the Vice Dean of Cornell Law School. His work stands at the intersection of four related fields: criminal law, criminal procedure, public international law, and the laws of war. Trained as both a lawyer and a philosopher, his research has tackled diverse, interdisciplinary questions, including the philosophical foundations of international law and the role of new technologies in warfare. His latest research project involves foreign election interference.

In addition to dozens of law review articles and book chapters, Professor Ohlin is the sole author of three recently published casebooks, a co-editor of the Oxford Series in Ethics, National Security, and the Rule of Law, and a co-editor of the forthcoming Oxford Handbook on International Criminal Justice.

Donovan Phillips is a first-year PhD Candidate at The University of Western Ontario, by way of an MA from Dalhousie University (2019) and a BA from Kwantlen Polytechnic University (2017). His main interests fall within the philosophy of language and philosophy of mind, and concern propositional attitude ascription, theories of meaning, and accounts of first-person authority. More broadly, the ambiguity and translation of law, as both a formal and practical exercise, is a burgeoning area of interest that he plans to pursue further during his doctoral work.

Avery Plaw is a Professor of Political Science at the University of Massachusetts, Dartmouth, specializing in Political Theory and International Relations with a particular focus on Strategic Studies. He studied at the University of Toronto and McGill University, previously taught at Concordia University in Montreal, and was a Visiting Scholar at New York University. He has published a number of books, including The Drone Debate: A Primer on the U.S. Use of Unmanned Aircraft Outside of Conventional Armed Conflict (Rowman and Littlefield, 2015), co-written with Matt Fricker and Carlos Colon; and Targeting Terrorists: A License to Kill? (Ashgate, 2008).

Sean Rupka is a Political Theorist and PhD Student at UNSW Canberra working on the impact of autonomous systems on contemporary warfare. His broader research interests include trauma and memory studies; the philosophy of history and technology; and themes related to postcolonial violence, particularly as they pertain to the legacies of intergenerational trauma and reconciliation.

Matthias Scheutz is a Professor of Computer and Cognitive Science in the Department of Computer Science at Tufts University and Senior Gordon Faculty Fellow in the Tufts School of Engineering. He earned a PhD in Philosophy from the University of Vienna in 1995 and a Joint PhD in Cognitive Science and Computer Science from Indiana University Bloomington in 1999. He has over 300 peer-reviewed publications on artificial intelligence, artificial life, agent-based computing, natural language processing, cognitive modeling, robotics, human-robot interaction, and foundations of cognitive science. His research interests include multi-scale agent-based models of social behavior and complex cognitive and affective autonomous robots with natural language and ethical reasoning capabilities for natural human-robot interaction. His lab page is at https://hrilab.tufts.edu.

Jason Scholz is the Chief Executive for the Trusted Autonomous Systems Defence Cooperative Research Centre, a not-for-profit company advancing industry-led, game-changing projects and activities for Defense and dual use with $50m Commonwealth funding and $51m Queensland Government funding.

Additionally, Dr. Scholz is a globally recognized research leader in cognitive psychology, decision aids, decision automation, and autonomy. He has produced over fifty refereed papers and patents related to trusted autonomous systems in defense. Dr. Scholz is an Innovation Professor at RMIT University and an Adjunct Professor at the University of New South Wales. A graduate of the Australian Institute of Company Directors, Dr. Scholz also possesses a PhD from the University of Adelaide.

Austin Wyatt is a Political Scientist and Research Associate at UNSW, Canberra. He obtained his PhD (2020), entitled “Exploring the Disruptive Impact of Lethal Autonomous Weapon System Diffusion in Southeast Asia,” from the Australian Catholic University. Dr. Wyatt has previously been a New Colombo Plan Scholar and completed a research internship in 2016 at the Korea Advanced Institute of Science and Technology.

Dr. Wyatt’s research focuses on autonomous weapons, with a particular emphasis on their disruptive effects in Asia. His latest published research includes “Charting Great Power Progress toward a Lethal Autonomous Weapon System Demonstration Point,” in the journal Defence Studies 20 (1), 2020.

Introduction

An Effort to Balance the Lopsided Autonomous Weapons Debate

The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems—defined by the United States as systems that, once activated, can select and engage targets without further intervention by a human operator, or known, in more hyperbolic terms, by the dysphemism "killer robots"—has preoccupied government actors, academics, and proponents of a global arms-control regime for the better part of a decade. Many civil-society groups claim that there is consistently growing momentum in support of a ban on lethal autonomous weapon systems, and frequently tout the number of (primarily second world) nations supporting their cause. However, to objective external observers, the way ahead appears elusive, as the debate lacks any kind of broad agreement, and there is a notable absence of great power support. Instead, the debate has become characterized by hyperbole aimed at capturing or alienating the public imagination.

Part of this issue is that the states responsible for steering the dialogue on autonomous weapon systems initially proceeded quite cautiously, recognizing that few understood what it was that some were seeking to outlaw with a preemptive ban. In the resulting vacuum of informed public opinion, nongovernmental advocacy groups shaped what has now become a very heavily one-sided debate.

Some of these nongovernment organizations (NGOs) have contended, on legal and moral grounds, that militaries should act as if somehow blind and immune to the progress of automation and artificial intelligence evident in other areas of society. As an example, Human Rights Watch has stated that:

Killer robots—fully autonomous weapons that could select and engage targets without human intervention— could be developed within 20 to 30 years . . .  Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) believe that such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict (IHRC 2012).

The Campaign to Stop Killer Robots (CSKR) has echoed this sentiment. The CSKR is a consortium of nongovernment interest groups whose supporters include over 1,000 experts in artificial intelligence, as well as science and technology luminaries such as Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, and Google DeepMind co-founder Demis Hassabis. The CSKR expresses their strident view of the “problem” of autonomous weapon systems on their website:

Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war. Replacing human troops with machines could make the decision to go to war easier, which would shift the burden of armed conflict further onto civilians. The use of fully autonomous weapons would create an accountability gap as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians and victims would be left unsatisfied that someone was punished for the harm they experienced. (Campaign to Stop Killer Robots 2018)

While we acknowledge some of the concerns raised by this view, the current discourse around lethal autonomous weapons systems has not admitted any shades of gray, despite the prevalence of mistaken assumptions about the role of human agents in the development of autonomous systems. Furthermore, while fears about nonexistent sentient robots continue to stall debate and halt technological progress, one can see in the news that the world continues to struggle with real ethical and humanitarian problems in the use of existing weapons. A gun stolen from a police officer and used to kill, guns used for mass shootings, and vehicles used to mow down pedestrians—all are undesirable acts that could potentially have been averted through the use of technology. In each case, there are potential applications of Artificial Intelligence (AI) that could help mitigate such problems. For example, "smart" firearms lock the firing pin until the weapon is presented with the correct fingerprint or RFID signal. Likewise, specific coding could be embedded in the guidance software of self-driving cars to inhibit the vehicle from striking civilians or entering a designated pedestrian area.

Additionally, it is unclear why AI and related technologies should not also be leveraged to prevent the bombing of a religious site, a guided-bomb strike on a train bridge as an unexpected passenger train passes over it, or a missile strike on a Red Cross facility. The fact that autonomous weapons are military weapons does not preclude their affirmative use to save lives. It does not seem unreasonable to question why advanced symbol-recognition capabilities could not, for example, be embedded in autonomous systems to identify a symbol of the Red Cross and abort an ordered strike. Similarly, the locations of protected sites of religious significance, schools, or hospitals might be programmed into weapons to constrain their actions. Nor does it seem unreasonable to question why the main concerns with autonomous systems cannot be addressed within existing international weapons review standards.1
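To make the idea concrete, the kind of constraint described above can be sketched in a few lines of ordinary code. The following fragment is purely illustrative: the function names, data structures, and thresholds are invented for this example and do not describe any fielded or proposed system; the point is only that emblem- and location-based abort rules are expressible as explicit, reviewable software.

```python
from dataclasses import dataclass
from math import hypot

# Illustrative sketch only: abort an engagement if a protected emblem is
# recognized or if the aim point falls inside a pre-programmed protected zone.
PROTECTED_EMBLEMS = {"red_cross", "red_crescent", "red_crystal"}

@dataclass
class ProtectedZone:
    name: str        # e.g., a hospital, school, or religious site
    x: float         # zone centre in a local grid (same frame as the aim point)
    y: float
    radius_m: float  # exclusion radius in metres

def should_abort(detections, aim_point, zones, target_confidence, threshold=0.7):
    """Return (abort, reason). Inputs are assumed to come from upstream
    perception and mission-planning components (hypothetical here)."""
    # 1. Abort on any sufficiently confident protected-emblem detection.
    for emblem, confidence in detections:
        if emblem in PROTECTED_EMBLEMS and confidence >= threshold:
            return True, f"protected emblem recognized: {emblem} ({confidence:.2f})"
    # 2. Abort if the aim point lies inside any pre-programmed protected zone.
    x, y = aim_point
    for zone in zones:
        if hypot(x - zone.x, y - zone.y) <= zone.radius_m:
            return True, f"aim point inside protected zone: {zone.name}"
    # 3. Abort if the target cannot be discriminated with enough confidence.
    if target_confidence < threshold:
        return True, "insufficient confidence to discriminate target"
    return False, "no constraint triggered"

# Example with invented values: a Red Cross emblem is detected with high confidence.
zones = [ProtectedZone("field hospital", x=120.0, y=80.0, radius_m=200.0)]
print(should_abort([("red_cross", 0.91)], (300.0, 40.0), zones, target_confidence=0.95))
```

The sketch is meant only to show that such constraints take the form of inspectable rules, which is precisely what makes them amenable to the existing weapons review processes mentioned above.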

In this volume, we bring together some of the most prominent academics and academic-practitioners in the lethal autonomous weapons space and seek to return some balance to the debate. In this effort, we advocate a societal investment in hard conversations that tackle the ethics, morality, and law of these new digital technologies and understand the human role in their creation and operation.

This volume proceeds on the basis that we need to progress beyond framing the conversation as "AI will kill jobs" and the "robot apocalypse." The editors and contributors of this volume believe in a responsibility to tell more nuanced and somewhat more complicated stories than those conveyed by governments, NGOs, industry, and the news media in the hope of capturing the public's fleeting attention. We also have a responsibility to ask better questions ourselves, to educate and inform stakeholders in our future in a fashion that is more positive and potentially beneficial than is envisioned in the existing literature. Reshaping the discussion around this emerging military innovation requires a new line of thought and a willingness to move past the easy seduction of the killer robot discourse.

We propose a solution for those asking themselves the more critical questions: What is the history of this technology? Where did it come from? What are the vested interests? Who are its beneficiaries? What logics about the world is it normalizing? What is the broader context into which it fits? And, most importantly, given the tendency to demonize technology and overlook the role of its human creators, how can we ensure that we use and adapt our already very robust legal and ethical normative instruments and frameworks to regulate the role of human agents in the design, development, and deployment of lethal autonomous weapons?

Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare therefore focuses on exploring the moral and legal issues associated with the design, development, and deployment of lethal autonomous weapons. The volume collects its contributions around a four-section structure. In each section, the contributions look for new and innovative approaches to understanding the law and ethics of autonomous weapons systems.

The essays collected in the first section of this volume offer a limited defense of lethal autonomous weapons through a critical examination of the definitions, conceptions, and arguments typically employed in the debate. In the initial chapter, Duncan MacIntosh argues that it would be morally legitimate, even morally obligatory, to use autonomous weapons systems in many circumstances: for example, where pre-commitment is advantageous, or where diffusion of moral responsibility would be morally salutary. This approach runs contrary to those who think that, morally, there must always be full human control at the point of lethality. MacIntosh argues that what matters is not that weapons be under the control of humans but that they are under the control of morality, and that autonomous weapons systems could sometimes be indispensable to this goal. Next, Deane-Peter Baker highlights that the problematic assumptions utilized by those opposed to the employment of "contracted combatants" in many cases parallel or are the same as the problematic assumptions that are embedded in the arguments of those who oppose the employment of lethal autonomous weapons. Jai Galliott and Tim McFarland then move on to consider concerns about the retention of human control over the lethal use of force. While Galliott and McFarland accept the premise that human control is required, they dispute the sometimes unstated assertion that employing a weapon with a high level of autonomous capability means ceding to that weapon control over the use of force. Overall, Galliott and McFarland suggest that machine autonomy, by its very nature, represents a lawful form of meaningful human control.

Jason Scholz and Jai Galliott complete this section by asserting that while autonomous systems are likely to be incapable of carrying out actions that could lead to the attribution of moral responsibility to them, at least in the near term, they can autonomously execute value decisions embedded in code and in their design, meaning that autonomous systems are able to perform actions of enhanced ethical and legal benefit. Scholz and Galliott advance the concept of a Minimally-Just AI (MinAI) for autonomous systems. MinAI systems would be capable of automatically recognizing protected symbols, persons, and places, tied to a data set, which in turn could be used by states to guide and quantify compliance requirements for autonomous weapons.

The second section contains reflections on the normative values implicit in international law and common ethical theories. Several of this section's essays are informed by empirical data, ensuring that the rebalancing of the autonomous weapons debate is grounded in much-needed reality. Steve Barela and Avery Plaw utilize data on drone strikes to consider some of the complexities pertaining to distinguishing between combatants and noncombatants, and address how these types of concerns would weigh against hypothetical evidence of improved precision. To integrate and address these immense difficulties as mapped onto the autonomous weapons debate, they assess the value of transparency in the process of discrimination as a means of ensuring accurate assessment, both legally and ethically. Next, Matthias Scheutz and Bertram Malle provide insights into the public's perception of LAWs. They report the first results of an empirical study that asked when ordinary humans would find it acceptable for autonomous robots to use lethal force in military contexts. In particular, they examined participants' moral expectations and judgments concerning a trolley-type scenario involving an autonomous robot that must decide whether to kill some humans to save others. In the following chapter, Natalia Jevglevskaja and Rain Liivoja draw attention to the phenomenon by which proponents of both sides of the lethal autonomous weapons debate utilize humanitarian arguments in support of their agenda and arguments, often pointing to the lesser risk of harm to combatants and civilians alike. They examine examples of weapons with respect to which such contradictory appeals to humanity have occurred and offer some reflections on the same. Next, Jai Galliott examines the relevance of civilian principle sets to the development of a positive statement of ethical principles for the governance of military artificial intelligence, distilling a concise list of principles for potential adoption by international armed forces. Finally, joined by Bianca Baggiarini and Sean Rupka, Galliott then interrogates data from the world's largest study of military officers' attitudes toward autonomous systems and draws particular attention to how socio-ethical concerns and assumptions mediate an officer's willingness to work alongside autonomous systems and fully harness combat automation.

The third section contains reflections on the correctness of action tied to the use and deployment of autonomous systems. Donovan Phillips begins the section by considering the implications of the fact that new technologies will involve the humans who make decisions to take lives being utterly disconnected from the field of battle, and of the fact that wars may be fought more locally by automata, and how this impacts jus ad bellum. Recognizing that much of the lethal autonomous weapons debate has been focused on what might be called the "micro-perspective" of armed conflict, whether an autonomous robot is able to comply with the laws of armed conflict and the principles of just war theory's jus in bello, Alex Leveringhaus then draws attention to the often-neglected "macro-perspective" of war, concerned with the kind of conflicts in which autonomous systems are likely to be involved and the transformational potential of said weapons. Jens Ohlin then notes a conflict between what humans will know about the machines they interact with, and how they will be tempted to think and feel about these machines. Humans may know that the machines are making decisions on the basis of rigid algorithms. However, Ohlin observes that when humans interact with chess-playing computers, they must ignore this knowledge and ascribe human thinking processes to machines in order to strategize against them. Even though humans will know that the machines are deterministic mechanisms, Ohlin suggests that humans will experience feelings of gratitude and resentment toward allied and enemy machines, respectively. This factor must be considered in designing machines and in circumscribing the roles we expect them to play in their interaction with humans. In the final chapter of this section, Nicholas Evans considers several possible relations between AWSs and human cognitive aptitudes and deficiencies. Evans then explores the implications of each for who has responsibility for the actions of AWSs. For example, suppose AWSs and humans are roughly equivalent in aptitudes and deficiencies, with AWSs perhaps being less akratic due to having emotionality designed out of them, but still prone to mistakes of, say, perception, or of cognitive processing. Then responsibility for their actions would lie more with the command structure in which they operate since their aptitudes and deficiencies would be known, and their effects would be predictable, which would then place an obligation on commanders when planning AWS deployment. However, another possibility is that robots might have different aptitudes and deficiencies, ones quite alien to those possessed by humans, meaning that there are trade-offs to deploying them in lieu of humans. This would tend to put more responsibility on the designers of the systems since human commanders could not be expected to be natural experts about how to compensate for these trade-offs.

The fourth section of the book details how technical and moral considerations should inform the design and technological development of autonomous weapons systems. Armin Krishnan first explores the parallels between biological weapons and autonomous systems, advocating enforced transparency in AI research and the development of international safety standards for all real-world applications of advanced AI because of the dual-use problem and because the dangers of unpredictable AI extend far beyond the military sphere. In the next chapter of this volume, Kate Devitt addresses the application of higher-order design principles based on epistemic models, such as virtue and Bayesian epistemologies, to the design of autonomous systems with varying degrees of human involvement in the loop. In the following chapter, Austin Wyatt and Jai Galliott engage directly with the question of how to effectively limit the disruptive potential of increasingly autonomous weapon systems through the application of a regional normative framework. Given the effectively stalled progress of the CCW-led process, this chapter calls for state and nonstate actors to take the initiative to develop technically focused guidelines for the development, transparent deployment, and safe de-escalation of AWS at the regional level. Finally, Missy Cummings explains the difference between automated and autonomous systems before presenting a framework for conceptualizing the human-computer balance for future autonomous systems, both civilian and military. She then discusses specific technology and policy implications for weaponized autonomous systems.

NOTE

1. This argument is a derivative of the lead author’s chapter where said moral-benefit argument is more fully developed and prosecuted: J. Scholz and Jai Galliott, “Military.” In Oxford Handbook of Ethics of AI, edited by M. Dubber, F. Pasquale, and S. Das. New York: Oxford University Press, 2020.

1 Fire and Forget: A Moral Defense of the Use of Autonomous Weapons Systems in War and Peace

1.1: INTRODUCTION

While Autonomous Weapons Systems—AWS—have obvious military advantages, there are prima facie moral objections to using them. I have elsewhere argued (MacIntosh 2016) that there are similarities between the structure of law and morality on the one hand and of automata on the other, and that this, plus the fact that automata can be designed to lack the biases and other failings of humans, requires us to automate the administration and enforcement of law as much as possible.

But in this chapter, I want to argue more specifically (and contra Peter Asaro 2016; Christof Heyns 2013; Mary Ellen O'Connell 2014; and others) that there are many conditions where using AWSs would be appropriate not just rationally and strategically, but also morally.1 This will occupy section I of this chapter. In section II, I deal with the objection that the use of robots is inherently wrong or violating of human dignity.2

1.2: SECTION I: OCCASIONS OF THE ETHICAL USE OF AUTONOMOUS FIRE-AND-FORGET WEAPONS

An AWS would be a "fire-and-forget" weapon, and some see such weapons as legally and morally problematic. For surely a human and human judgment should figure at every point in a weapon's operation, especially where it is about to have its lethal effect on a human. After all, as O'Connell (2014) argues, that is the last reconsideration moment, and arguably to fail to have a human doing the deciding at that point is to abdicate moral and legal responsibility for the kill. (Think of the final phone call to the governor to see if the governor will stay an execution.) Asaro (2016) argues that it is part of law, including International Humanitarian Law, to respect public morality even if it has not yet been encoded into law, and that part of such morality is the expectation that there be meaningful human control of weapons systems, so that this requirement should be formally encoded into law. In addition to there being a public morality requirement of meaningful human control, Asaro suspects that the dignity of persons liable to being killed likewise requires that their death, if they are to die, be brought about by a human, not a robot.

The positions of O'Connell and Asaro have an initial plausibility, but they have not been argued for in depth; it is unclear what does or could premise them, and it is doubtful, I think, whether they will withstand examination.3 For example, I think it will prove false that there must always be meaningful human control in the infliction of death. For, given a choice between control by a morally bad human who would kill someone undeserving of being killed and a morally good robot who would kill only someone deserving of being killed, we would pick the good robot. What matters is not that there be meaningful human control, but that there be meaningful moral control, that is, that what happens be under the control of morality, that it be the right thing to happen. And similar factors complicate the dignity issue—what dignity is, what sort of agent best implements dignity, and when the importance of dignity is overridden as a factor, all come into play. So, let us investigate more closely.

Clarity requires breaking this issue down into three sub-issues. When an autonomous weapon (an AWS) has followed its program and is now poised to kill:

i) Should there always be a reconsideration of its decision at least in the sense of revisiting whether the weapon should be allowed to kill?

ii) In a given case, should there be reconsideration in the sense of reversing the decision to kill?

iii) And if there is to be either or both, what sort of agent should do the reconsidering, the AWS or a human being?

It might be thought that there should always be reconsideration by a human in at least the revisiting sense, if not necessarily the reversing. For what could it cost? And it might save us from making a moral mistake.

But there are several situations where reconsideration would be inappropriate. In what follows, I assume that the agent deciding whether to use a fire-and-forget weapon is a rational agent with all-things-considered morally approvable goals seeking therefore to maximize moral expected utility. That is, in choosing among actions, she is disposed to do that action which makes as high as possible the sum of the products of the moral desirability of possible outcomes of actions and the probability of those outcomes obtaining given the doing of the various actions available. She will have considered the likelihood of the weapon's having morally good effects given its design and proposed circumstance of use. If the context is a war context, she would bear in mind whether the use of the weapon is likely to respect such things as International Humanitarian Law and the Laws of War. So she would be seeking to respect the principles of distinctness, necessity, and proportionality. Distinctness is the principle that combatants should be targeted before civilians; necessity, the principle that violence should be used only to attain important military objectives; and proportionality is the principle that the violence used to attain the objective should not be out of proportion to the value of the objective. More generally, I shall assume that the person considering using an AWS would bear in mind whether the weapon can be deployed in such a way as to respect the distinction between those morally liable to being harmed (that is, those whom it is morally permissible or obligatory to harm) and those who are to be protected from harm. (Perhaps the weapon is able to make this distinction, and to follow instructions to respect it. Failing that, perhaps the weapon's use can be restricted to situations where only those morally liable to harm are likely to be targeted.) The agent deciding whether to use the weapon would proceed on the best information available at the time of considering its activation.
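In standard decision-theoretic notation (ours, not the author's, and offered only as a gloss on the rule just described), maximizing moral expected utility can be written as:

\[
a^{*} = \arg\max_{a \in A} \mathrm{EMU}(a),
\qquad
\mathrm{EMU}(a) = \sum_{o \in O} P(o \mid a)\, V_{m}(o),
\]

where \(A\) is the set of available actions, \(O\) the set of possible outcomes, \(P(o \mid a)\) the probability that outcome \(o\) obtains given that action \(a\) is performed, and \(V_{m}(o)\) the moral desirability of outcome \(o\).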

Among the situations in which activating a fire-and-forget weapon by such an agent would be rationally and morally legitimate would be the following.

1.2.1: Planning Scenarios

One initially best guesses that it is at the moment of firing the weapon (e.g., activating the robot) that one has greatest informational and moral clarity about what needs to be done, estimating that to reconsider would be to open oneself to fog of war confusion, or to temptations one judges at the time of weapon activation that it would be best to resist at the moment of possible recall. So one forms the plan to activate the weapon and lets it do its job, then follows through on the plan by activating and then not recalling the weapon, even as one faces temptations to reconsider, reminding one’s self that one was probably earlier better placed to work out how best to proceed back when one formed the plan.4

1.2.2: Short-Term versus Long-Term Consequences Cases

One initially best judges that one must not reconsider if one is to attain the desired effect of the weapon. Think of the decision to bomb Nagasaki and Hiroshima in hopes of saving, by means of the deterrent effect of the bombing, more lives than those lost from the bombing, this in spite of the horror that must be felt at the immediate prospect of the bombing.5 Here one should not radio the planes and call off the mission.

1.2.3: Resolute Choice Cases

One expects moral benefit to accrue not from allowing the weapon to finish its task, but from the consequence of committing to its un-reconsidered use should the enemy not meet some demand.6 The consequence sought will be available only if one can be predicted not to reconsider; and refraining from reconsidering is made rational by the initial expected benefit and so rationality of committing not to reconsider. Here, if the enemy does not oblige, one activates the weapon and lets it finish.

It may be confusing what distinguishes these first three rationales. Here is the distinction: the reason one does not reconsider in the case of the first rationale is because one assumes one knew best what to do when forming the plan that required non-reconsidering; in the case of the second because one sees that the long-term consequences of not reconsidering exceed those of reconsidering; and in the case of the third because non-reconsideration expresses a strategy for making choices whose adoption was expected to have one do better, even if following through on it would not, and morality and rationality require one to make the choices dictated by the best strategy— one decides the appropriateness of actions by the advantages of the strategies that dictate them, not by the advantages of the actions themselves. Otherwise, one could not have the advantages of strategies.

This last rationale is widely contested. After all, since the point of the strategy was, say, deterrence, and deterrence has failed so that one must now fulfill a threat one never really wanted to have to fulfill, why still act from a strategy one now knows was a failure? To preserve one’s credibility in later threat scenarios? But suppose there will be none, as is likely in the case of, for example, the threat of nuclear apocalypse. Then again, why fulfill the threat? By way of addressing this, I have (elsewhere) favored a variant on the foregoing rationale: in adopting a strategy, one changes in what it is that one sees as the desired outcome of actions, and then one refrains from reconsidering because refraining now best expresses one’s new desires— one has come to care more about implementing the strategy, or about the expected outcome of implementing it, than about what first motivated one to adopt the strategy. So one does not experience acting on the strategy as going against what one cares about.7

1.2.4: Un-Reconsiderable Weapons Cases

One’s weapon is such that, while deploying it would be expected to maximize moral utility, reconsidering it at its point of lethality would be impossible so that, if a condition on the permissible use of the weapon were to require reconsideration at that point, one could never use the weapon. (For example, one cannot stop a bullet at the skin and rethink whether to let it penetrate, so one would have to never use a gun.)

A variant on this case would be the case of a weapon that could be made able to be monitored and recalled as it engages in its mission, but giving it this feature would put it at risk of being hacked and used for evil. For to recall the device would require that it be in touch by, say, radio, and so liable to being communicated with by the enemy. Again, if the mission has high moral expected utility as it stands, one would not want to lower this by converting the weapon into something recallable and therefore able to be perverted. (This point has been made by many authors.)

By hypothesis, being disposed to reconsider in the cases of the first four rationales would have lower moral expected utility than not. And so being disposed to reconsider would nullify any advantage the weapon afforded. No, in these situations, one should deliberate as long as is needed to make an informed decision given the pressure of time. Then one should activate the weapon.

Of course, in all those scenarios one could discover partway through that the facts are not what one first thought, so that the payoffs of activating and not reconsidering are different. This might mean that one would learn it was a mistake to activate the weapon, and should now reconsider and perhaps abort. So, of course, it can be morally and rationally obligatory to stay sensitive to these possibilities. This might seem to be a moot point in the fourth case, since there recalling the weapon is impossible. If the weapon will take a long time to impact, however, it might become rational and morally obligatory to warn the target if one has a communication signal that can travel faster than the speed of one's kinetic weapon.

It is a subtle matter which possibilities are morally and rationally relevant to deciding to recall a weapon. Suppose one rationally commits to using a weapon and also to not reconsidering even though one knows at the time of commitment that one’s compassion would tempt one to call it off later. Since this was considered at the outset, it would not be appropriate to reconsider on that ground just before the weapon’s moment of lethality.

Now suppose instead that it was predictable that there would be a certain level of horror from use of the weapon, but one then discovers that the horror will be much worse, for example, that many more people will die than one had predicted. That, of course, would be a basis for reconsideration.

But several philosophers, including Martha Nussbaum, in effect, think as follows (Nussbaum 1993, especially pp. 83–92): every action is both a consequence of a decision taking into account moral factors and a learning moment where one may get new information about moral factors. Perhaps one forms a plan to kill someone, thinking justice requires this, then finds one cannot face actually doing the deed, and decides that justice requires something different, mercy perhaps, as Nussbaum suggests— one comes to find the originally intended deed more horrible, not because it will involve more deaths than one thought, but because one has come to think that any death is more horrible than one first thought. Surely putting an autonomous robot in the loop here would deprive one of the possibilities of new moral learning?

It is true that some actions can be learning occasions, and either we should not automate those actions so extremely as to make the weapons unrecallable, or we should figure out how to have our automata likewise learn from the experience and adjust their behaviors accordingly, perhaps self-aborting.

But some actions can reasonably be expected not to be moral learning occasions. In these cases, we have evidence of there being no need to build in the possibility of moral experiencing and reconsideration. Perhaps one already knows the horror of killing someone, for example. (There is, of course, always the logical possibility that the situation is morally new. But that is different from having actual evidence in advance that the situation is new, and the mere possibility by itself is no reason to forego the benefits of a disposition to non-reconsideration. Indeed, if that were a reason, one could never act, for upon making any decision one would have to reconsider in light of the mere logical possibility that one’s decision was wrong.)

Moreover, there are other ways to get a moral learning experience about a certain kind of action or its consequence than by building a moment of possible experience and reconsideration into the action. For example, one could reflect after the fact, survey the scene, do interviews with witnesses and relatives of those affected, study film of the event, and so on, in this way getting the originally expected benefit of the weapon, but also gaining new information for future decisions. This would be appropriate where one calculates that there would be greater overall moral benefit to using the weapon in this case and then revisiting the ethics of the matter, rather than the other way around, because one calculates that one is at risk of being excessively squeamish until the mission is over and that this would prevent one from doing a morally required thing.

There is also the possibility that not only will one not expect to get more morally relevant experience from the event, but one may expect to be harmed in one’s moral perspective by it.

1.2.5: Protection of One’s Moral Self Cases

Suppose there simply must be some people killed to save many people—there is no question that this is ethically required. But suppose too that if a human were to do the killing, she would be left traumatized in a way that would constitute a moral harm to her. For example, she would have crippling PTSD and a tendency toward suicidality. Or perhaps the experience would leave her coarsened in a way that makes her more likely to do evil in the future. In either eventuality, it would then be harder down the road for her to fulfill her moral duties to others and to herself. Here, it would be morally and rationally better that an AWS do the killing—the morally hard but necessary task gets done, but the agent has her moral agency protected. Indeed, even now there are situations where, while there is a human in the decision loop, the role the human is playing is defined so algorithmically that she has no real decision-making power. Her role could be played by a machine. And yet her presence in the role means that she will have the guilt of making hard choices resulting in deaths, deaths that will be a burden on her conscience even where they are the result of the right choices. So, again, why not just spare her conscience and take her out of the loop?

It is worth noting that there are a number of ways of getting her out of the loop, and a number of degrees to which she could be out. She could make the decision that someone will have to die, but a machine might implement the decision for her. This would be her being out of the loop by means of delegating implementation of her decision to an AWS. An even greater degree of removal from the loop might be where a human delegates the very decision of whether someone has to die to a machine, one with a program so sophisticated that it is in effect a morally autonomous agent. Here the hope would be that the machine can make the morally hard choices, and that it will make morally right choices, but that it will not have the pangs of conscience that would be so unbearable for a human being.

There is already a precedent for this in military contexts where a commander delegates decisions about life and death to an autonomous human with his own detailed criteria for when to kill, so that the commander cannot really say in advance who is going to be killed, how, or when. This is routine in military practice and part of the chain of command and the delegation of responsibility to those most appropriately bearing it—detailed decisions implementing larger strategic policy have to be left to those closest to battle.

Some people might see this as a form of immorality. Is it really OK for a commander to have a less troubled conscience by virtue of having delegated morally difficult decisions to a subordinate? But I think this can be defended, not only on grounds of its being militarily necessary—there really is no better way of warfighting—but on grounds, again, of distributing the costs of conscience: commanders need to make decisions that will result in loss of life over and over again, and can only escape moral fatigue if they do not also have to make the detailed decisions about whom exactly to kill and when.

And if these decisions are delegated to a morally discerning but conscienceless machine, we have the additional virtue that the moral offloading—the offloading of morally difficult decisions—is done onto a device that will not be morally harmed by the decisions it must make.8,9

1.2.6: Morally Required Diffusion of Responsibility Cases

Relatedly, there are cases of a firing-squad sort, where many people are involved in performing the execution so that there is ambiguity about who had the fatal effect, precisely in order to spare the conscience of each squad member. But again, this requires that one not avail one's self of opportunities to recall the weapon. Translated to robotic warfare, imagine the squad is a group of drone operators, all of whom launch their individual AWS drones at a target, and who, if given the means to monitor the progress of their drones and the authority to recall them if they judged this for the best, could figure out pre-impact whose drone is most likely to be the fatal one. This might be better not found out, for it may result in a regress of yank-backs, each operator recalling his drone as it is discovered to be the one most likely to be fatal, with the job left undone; or in the job getting done by the last person to clue in, too late, who then faces the guilt alone; or in its getting done by one of the operators who deliberately continues even knowing his will be the fatal drone, but who then, again, must face the crisis of conscience alone.

1.2.7: Morally Better for Being Comparatively Random and Non-Deliberate Killing Cases

These are cases where the killing would be less morally problematic the more random and free of deliberate intention each aspect of the killing was. What is morally worse: throwing a grenade into a room of a small number of people who must be stopped to save a large number of people; or moving around the room at super speed with a sack full of shrapnel, pushing pieces of shrapnel into people's bodies? You have to use all the pieces to stop everyone, but the pieces are of different sizes: some so large that using them will kill; others will only maim; yet others, only temporarily injure; and you have to decide which piece goes into which person. The effect is the same—it is as if a blast kills some, maims others, and leaves yet others only temporarily harmed. But the second method is morally worse. Better to delegate to an AWS. Sometimes, of course, the circumstance might permit the use of a very stupid machine, for example, in the case of an enclosed space, literally a hand grenade, which will produce a blast whose effect on a given person is determined by what is in effect a lottery. But perhaps a similar effect needs to be attained over a large and open area, and, given limited information about the targets and the urgency of the task, the effect is best achieved by using an AWS that will attack targets of opportunity with grenade-like weapons. Here it is the delegating to an AWS, plus the very randomness of the grenade-like method, plus the fact that only one possibly morally questionable decision need be made in using the weapon—the decision to delegate—that makes it a morally less bad event. Robots can randomize and so democratize violence, and so make it less bad, less inhumane, less monstrous, less evil.

Of course, other times the reverse judgment would hold. In the preceding examples, I in effect assumed everyone in the room, or in the larger field, was morally equal as a target with no one more or less properly morally liable to be killed, so that, if one chose person by person whom to kill, one would choose on morally arbitrary and therefore problematic, morally agonizing grounds. But in a variant case, imagine one knows this man is a father; that man, a psychopath; this other man, unlikely to harm anyone in the future. Here, careful individual targeting decisions are called for—you definitely kill the psychopath, but harm the others in lesser ways just to get them out of the way.

1.2.8: Doomsday Machine Cases

Sometimes what is called for is precisely a weapon that cannot be recalled—this would be its great virtue. The weapons in mutually assured destruction are like this—they will activate on provocation no matter what, and so are the supreme deterrent. This reduces to the case of someone's being morally and rationally required to be resolute in fulfilling a morally and rationally recommended threat (item 1.2.3, above), if we see the resolute agent as a human implementation of a Doomsday Machine. And if we doubted the rationality or morality of a free agent fulfilling a threat it was morally maximizing to make but not to keep, arguably we could use the automation of the keeping of the threat to ensure its credibility; for arguably it can be rational and moral to arrange the doing of things one could not rationally or morally do one's self. (This is not the case in 1.2.4, above, where we use an unrecallable weapon because it is the only weapon we have and we must use some weapon or other. In the present case, only an unrecallable weapon can work, because of its effectiveness in threatening.)

1.2.9: Permissible Threats of Impermissible Harms Cases

These are related to the former cases. Imagine there is a weapon with such horrible and indiscriminate power that it could not actually be used in ways compatible with International Humanitarian Law and the Laws of War, which require that the use of weapons respect distinction, necessity, and proportionality, and must not render large regions of the planet uninhabitable for long periods. Even given this, arguably the threat of its use would be permissible, both morally and by the foregoing measures, provided issuing the threat was likely to have very good effects, and provided the very issuing of the threat made the necessity of following through fantastically unlikely. The weapon's use would be so horrible that the threat of its use is almost certain to deter the behavior against which it is a threat. But even if this is a good argument for making such a threat, arguably the threat is permissible only if the weapon is extremely unlikely to be accidentally activated, used corruptly, or misused through human error. And it could be that, given the complexity of the information that would need to be processed to decide whether a given situation was the one for which the weapon was designed, given the speed with which the decision would have to be made, and given the potential for the weapon to be abused were it under human control, it ought instead to be put under the control of an enormously sophisticated artificial intelligence.

Obviously, the real-world case of nuclear weapons is apposite here. Jules Zacher (2016) has suggested that such weapons cannot be used in ways respecting the strictures of international humanitarian law and the law of war, not even if their control is deputized to an AWS. For again, their actual use would be too monstrous. But I suggest it may nonetheless be right to threaten to do something it would be wrong to actually do, a famous paradox of deterrence identified by Gregory Kavka (1978). Arguably we have been living in this scenario for seventy years: most people think that massive nuclear retaliation against attack would be immoral. But many think the threat of it has saved the world from further world wars, and that it is therefore morally defensible.

Let us move on. We have been discussing situations where one best guesses in advance that certain kinds of reconsideration would be inappropriate. But now to the question of what should do the deciding at the final possible moment of reconsideration, when it can be expected that reconsideration in either of our two senses is appropriate. Let us suppose we have a case where there should be continual reconsideration sensitive to certain factors. Surely this should be done by a human? But I suggest it matters less what makes the call and more that it be the right call. And because of all the usual advantages of robots—their speed, inexhaustibility, etc.—we may want the call to be made by a robot, but one able to detect changes in the moral situation and to adjust its behaviors accordingly.

1.2.10: Robot Training Cases

This suggests yet another sort of situation where it would be preferable to have humans out of the loop. Suppose we are trying to train a robot to make better moral decisions, and the press of events has forced us to beta test it in live battle. The expected moral utility of letting the robot learn may exceed that of affording an opportunity for a human to acquire or express a scruple by putting the human in a reconsideration loop. For once the robot learns to make good moral decisions we can replicate its moral circuit in other robots, with the result of having better moral decisions made in many future contexts.

Here are some further cases and rationales for using autonomous weapons systems.

1.2.11: Precision in Killing Cases

Sometimes, due to the situations the device is to be used in, or due to the advanced design of the device, an AWS may provide greater precision in respecting the distinction between those morally liable and those not liable to being killed—something that would be put at risk by the reconsideration of a clumsy human operator (Arkin 2013). An example of the former would be a device tasked to kill anything in a region known to contain only enemies who need killing—there are no civilians in the region who stand at risk, and none of the enemies in the region deserve to survive. Here the AWS might be more thorough than a human. Think of an AWS defending an aircraft carrier, tasked with shooting anything out of the sky that shows up on radar, prioritizing things that are large in size, moving at great speed, very close, and that do not self-identify with a civilian transponder response when queried. Nothing needs to be over an aircraft carrier, and anything there is an enemy. An example of the latter—of an AWS being more precise than a human by virtue of its design—might be where the AWS is better at detecting the enemy than a human, for example, by means of metal detectors able to tell who is carrying a weapon and is, therefore, a genuine threat. Again, only those needing killing get killed.
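To make the shape of the rule in the aircraft-carrier example concrete, here is a minimal, purely illustrative sketch in Python of the kind of filter-and-rank logic just described. The RadarContact fields, weights, and example values are hypothetical choices made for illustration only; they are not drawn from any real defense system or from the sources cited in this chapter.

```python
from dataclasses import dataclass

@dataclass
class RadarContact:
    size: float            # estimated radar cross-section (arbitrary units)
    speed: float           # closing speed in m/s
    distance: float        # range to the carrier in metres
    civilian_squawk: bool  # True if the contact answered an IFF query with a civilian code

def engagement_priority(c: RadarContact) -> float:
    """Score a contact: larger, faster, closer, non-civilian contacts rank higher."""
    if c.civilian_squawk:
        return 0.0  # never prioritize contacts that self-identify as civilian
    # Hypothetical weights: closeness dominates, then speed, then size.
    return (1.0 / max(c.distance, 1.0)) * 1000.0 + 0.5 * c.speed + 0.1 * c.size

def choose_targets(contacts: list[RadarContact]) -> list[RadarContact]:
    """Return the engageable contacts, most urgent first."""
    hostile = [c for c in contacts if not c.civilian_squawk]
    return sorted(hostile, key=engagement_priority, reverse=True)

if __name__ == "__main__":
    # Two fast inbound contacts and one airliner squawking a civilian code.
    contacts = [
        RadarContact(size=4.0, speed=300.0, distance=8_000.0, civilian_squawk=False),
        RadarContact(size=2.0, speed=250.0, distance=3_000.0, civilian_squawk=False),
        RadarContact(size=50.0, speed=220.0, distance=20_000.0, civilian_squawk=True),
    ]
    for contact in choose_targets(contacts):
        print(contact)
```

The only point of the sketch is that the carrier example relies on a fixed filter-and-rank procedure, which is exactly the sort of rule a machine can apply faster, more consistently, and more tirelessly than a human operator could.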

1.2.12: Speed and Efficiency Cases

Use of an AWS may be justified by its being vastly more efficient in a way that, again, would be jeopardized by less-efficient human intervention (Arkin 2013)—if the weapon had to pause while the operator approved each proposed action, the machine would have to go more slowly, and fewer of the bad people would be killed, fewer of the good people, protected.

The foregoing, then, are cases where we would not want a human operator "in the loop," that is, a human playing the role of giving final approval to each machine decision to kill, so that the machine will not kill unless authorized by a human in each case. This would only result in morally inferior outcomes. Neither would we want a human "on the loop," where the machine will kill unless vetoed, but where the machine's killing process is slowed down to give a human operator a moment to decide whether to veto. For again, we would have morally inferior outcomes.

Other cases involve factors often used in arguments against AWSs.

1.3: SECTION II: OBJECTIONS FROM THE SUPPOSED INDIGNITY OF ROBOT-INFLICTED DEATH

Some think death by robot is inherently worse than death by human hand: that it is somehow more bad, more wrong, or more undignified, or that it fails in a special way to respect the rights of persons—that it is wrong in itself, mala in se, as the phrase used by Wendell Wallach (2013) in this connection has it.

I doubt this, but even if it were true, that would not decide the matter. For something can be bad in itself without being such that it should never be incurred or inflicted. Pain is always bad in and of itself. But that does not mean you should never incur it—maybe you must grab a hot metal doorknob to escape a burning building, and that will hurt, but you should still do it. Maybe you will have to inflict a painful injury on someone in self-defense, but that does not mean you must not do it. Similarly, even if death by robot were an inherent wrong, that does not mean you should never inflict it or be subject to it. For sometimes it is the lesser evil, or the means to a good thing that outweighs the inherent badness of the means.

Here are cases that show either that death by robot is not inherently problematic, or that, even if it is, it could still be morally called for. One guide is how people would answer certain questions.

Dignity Case 1: Saving Your Village by Robotically Killing Your Enemy

Your village is about to be over-run by ISIL; your only defense is the autosentry. Surely you would want to activate it? And surely this would be right, even if it metes out undignified robot death to your attackers?

Dignity Case 2: Killing Yourself by Robot to Save Yourself from a Worse Death from a Man

You are about to be captured and killed; you have the choice of a quick death by a Western robot (a suicide machine available when the battle is lost and you face capture), or slow beheading by a Jihadist. Surely you would prefer death by robot? (It will follow your command to kill you where you could not make yourself kill yourself. Or it might be pre-programmed to consider all factors and decide to kill you quickly and painlessly should it detect that all hope is lost.)

A person might prefer death by the AWS robot for any of several reasons. One is that an AWS may afford a greater dignity to the person to be killed precisely by virtue of its isolation from human control. In some cases, it seems worse to die at human than at robot hands. For if it is a human who is killing you, you might experience not only the horror of your impending death, but also anguish at the fact that, even though they could take pity on you and spare you, they will not—they are immune to your pleading and suffering. I can imagine this being an additional harm. But with a machine, one realizes there is nothing personal about it; there is no point in struggle or pleading; there is no one in whose gaze you are seen with contempt or as being unworthy of mercy. It is more like facing death by geological forces in a natural disaster, and more bearable for that fact. Other cases might go the other way, of course. I might want to be killed gently, carefully, and painlessly by a loving spouse trying to give me a good death, preferring this to death by an impersonal euthanasia machine.

If you have trouble accepting that robot-inflicted death can be OK, think about robot-conferred benefits and then ask why, if these are OK, their opposite cannot be. Would you insist on benefits being conferred on you by a human rather than a robot? Suppose you can die of thirst or drink from a pallet of water bottles parachuted to you by a supply drone programmed to provide drink to those in the hottest part of the desert. You would take the drink, not scrupling about there being any possible indignity in being targeted for help by a machine. Why should it be any different when it comes to being harmed? Perhaps you want the right to try to talk your way out of whatever supposed justice the machine is to impose upon you. Well, a suitably programmed machine might give you a listen, or set you aside for further human consideration; or it might just kill you. And in these respects, matters are no different than if you faced a human killer.

And anyway, the person being killed is not the only person whose value or dignity is in play. There is also what would give dignity to that person’s victims, and to anyone who must be involved in a person’s killing.

Dignity Case 3: Robotic Avenging of the Dignity of a Victim

Maybe the dignity of the victim of a killer (or of the victim’s family) requires the killer’s death, and the only way to get the killer is by robot.
