and much broader data sets will make adversarial attacks even harder to detect. This threat is so significant that the US Intelligence Advanced Research Projects Activity is funding a project to detect trojan AI attacks on a completed system.16 The concern is that governments could unknowingly operate an AI system that produces “correct” behaviour until a scenario arises in which a “trigger” is present. During deployment, for example, an adversary could attack a system yet cause a catastrophic failure to occur only much later. These kinds of attacks could affect image, text, audio and game-playing AI systems.
Global Governance on AI
Just as adversarial examples can be used to fool AI systems, they can be incorporated into the training process to make those systems more robust against attacks. Training the most important national security AI systems on both clean and adversarial data — either by labelling the examples accordingly or by instructing the model to distinguish between them — offers greater defence. But sophisticated adversaries could likely evade this method on its own, and defence in depth using additional tactics will be necessary.
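The mechanics of adversarial training can be illustrated with a minimal sketch. The paper does not specify an attack or model; here the Fast Gradient Sign Method (FGSM) stands in as the attack, a tiny logistic-regression classifier stands in for a real system, and the data and epsilon value are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge each input in the direction
    that increases the model's cross-entropy loss."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def train(x, y, epochs=200, lr=0.1, eps=0.0):
    """Gradient-descent training; if eps > 0, augment each step's batch
    with FGSM examples crafted against the current weights."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        xt, yt = x, y
        if eps > 0:                      # adversarial training
            xt = np.vstack([x, fgsm(x, y, w, b, eps)])
            yt = np.concatenate([y, y])
        p = sigmoid(xt @ w + b)
        w -= lr * (xt.T @ (p - yt)) / len(yt)
        b -= lr * np.mean(p - yt)
    return w, b

def acc(w, b, xs, ys):
    return np.mean((sigmoid(xs @ w + b) > 0.5) == ys)

# Toy two-class data standing in for a real model's inputs.
x = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w0, b0 = train(x, y)                     # standard training
w1, b1 = train(x, y, eps=0.5)            # adversarial training

print(f"clean model, clean inputs: {acc(w0, b0, x, y):.2f}")
print(f"clean model, FGSM inputs:  {acc(w0, b0, fgsm(x, y, w0, b0, 0.5), y):.2f}")
print(f"robust model, FGSM inputs: {acc(w1, b1, fgsm(x, y, w1, b1, 0.5), y):.2f}")
```

The attack reliably degrades the standard model; how much the adversarially trained model improves depends on the setting, which is why the text treats this as only one layer in a defence in depth.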
The speed and scope of data-driven warfare suggest that we are entering a new era in which the potential for LAWS — with or without humans in the loop — could dramatically alter the global balance of power. From killer drones and human-machine teaming to augmented military decision making (Slayer 2020), AI technologies will force-multiply the capacity of the world’s militaries to project power. The ongoing weaponization of AI also overlaps with the weaponization of space (The Economist 2019) as low-Earth orbit (LEO) increasingly becomes an operating environment for military surveillance, remote sensing,17 communications, data processing (Turner 2021) and ballistic weapons (Sevastopulo and Hille 2021).18
GANs have a wide variety of use cases, from creating deepfakes to cancer prognosis (Kim, Oh and Ahn 2018). They may also be used to defend against adversarial attacks (Short, Le Pay and Ghandi 2019), with a generator creating adversarial examples and a discriminator determining whether a given input is real or fake. An added benefit is that using GANs as a defence may also improve the original model’s performance by normalizing data and preventing “overfitting” (IBM Cloud Education 2021).
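A heavily simplified sketch of the discriminator half of this defence: a linear classifier trained to flag perturbed inputs. Everything here is an assumption for illustration — the perturbation is a fixed shift rather than a learned generator's output, and a real GAN-based defence would train the generator and discriminator jointly.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: "real" inputs, plus perturbed copies standing in for a
# generator's adversarial examples (a constant shift; a GAN would learn this).
real = rng.normal(0.0, 1.0, (300, 4))
fake = real + 1.0                        # hypothetical attack perturbation

x = np.vstack([real, fake])
y = np.concatenate([np.zeros(300), np.ones(300)])   # 1 = adversarial

# Discriminator: logistic regression fit by gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = sigmoid(x @ w + b)
    w -= 0.1 * (x.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

detect_rate = np.mean((sigmoid(x @ w + b) > 0.5) == y)
print(f"discriminator accuracy: {detect_rate:.2f}")
```

Inputs the discriminator flags as adversarial can then be rejected or cleaned before they reach the protected model.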
The rise of AI in conjunction with LEO and LAWS represents a critical turning point in the nature of global security. For this reason, academic researchers, technology entrepreneurs and citizens around the world have raised concerns about the dangers associated with militarizing AI. As they rightly point out, the lack of international consensus on the norms and laws regulating the responsible development and use of AI risks future crises.
Benchmarking adversarial attacks and defence models — such as the use of GANs — provides a comprehensive baseline against which AI systems can be compared. This method offers a quantitative basis for developing and meeting security standards and allows for assessment of the capabilities and limits of AI systems. As part of this testing and evaluation process, game theory may be useful for modelling the behaviour of adversaries in order to identify possible defence strategies.
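As a toy illustration of the game-theoretic framing, an attacker-defender interaction can be modelled as a 2x2 zero-sum game and solved in closed form for the optimal mixed strategies. The payoff matrix below is entirely hypothetical; the solution formula is the standard result for 2x2 zero-sum games without a saddle point.

```python
# Hypothetical payoffs: entries are the probability the defence blocks
# the attack (the defender's payoff). Rows = defence tactics, cols = attack types.
A = [[0.8, 0.2],
     [0.3, 0.7]]

def solve_2x2(A):
    """Closed-form mixed-strategy equilibrium of a 2x2 zero-sum game
    with no saddle point (standard game-theory result)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom                  # probability defender plays tactic 1
    q = (d - b) / denom                  # probability attacker plays attack 1
    value = (a * d - b * c) / denom      # expected block rate at equilibrium
    return p, q, value

p, q, value = solve_2x2(A)
print(f"defender mix: ({p:.2f}, {1 - p:.2f}), game value: {value:.2f}")
# → defender mix: (0.40, 0.60), game value: 0.50
```

Against a rational adversary, no single defence tactic achieves the equilibrium block rate of 0.5; only the randomized mix does — a simple argument for layered, varied defences.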
Laws of War
Defence and Countermeasures
As AI systems cannot be “patched” in the traditional information security sense, the risk of adversarial attack against national security AI systems should be meticulously analyzed before deployment and regularly reviewed. Additionally, trained models — especially those trained on classified data and with the most sensitive applications — should be carefully protected

16 See www.iarpa.gov/index.php/research-programs/trojai.
CIGI Papers No. 263 — March 2022 • Daniel Araya and Meg King
Beyond the exaggerated portrayals of AI we often see in science fiction, it is important to establish appropriate checks and balances to limit the concentration of power that AI technologies could provide. Common international rules and
17 See https://earthdata.nasa.gov/learn/backgrounders/remote-sensing.

18 In 2019, the United States established Space Force (see www.airforce.com/spaceforce), a new branch of the US military, with the purpose of securing US interests in space. Alongside the United States, Russia, the European Union, India, Japan and China are all investing in advanced space programs with military applications. In 2007, China successfully tested a ballistic missile-launched anti-satellite weapon; more recently, India shot down a satellite in LEO using an anti-satellite weapon during an operation code-named Mission Shakti (Still, Ledur and Levine 2019).