Publications

Research articles and other publications by the AutoNorms team

Report on AI in military decision support systems

Anna Nadibaidze, Ingvild Bode & Qiaochu Zhang •  4 November 2024

A new report published by the AutoNorms project reviews developments and debates related to AI-based decision support systems (AI DSS) in military decision-making on the use of force.

Written by Anna Nadibaidze, Ingvild Bode and Qiaochu Zhang, this report contributes to ongoing discussions on AI DSS in the military domain by providing a review of 1) the main developments in relation to AI DSS, focusing on specific examples of existing systems; and 2) the main debates about opportunities and challenges related to various uses of AI DSS, with a focus on issues of human-machine interaction in warfare.

Download the full report here.


Special feature on geopolitics and AI technologies

Ingvild Bode & Tom Watts •  November 2024

The AutoNorms team contributed two articles to a special feature examining the impact of AI technologies on world politics in the RUSI Journal (volume 169, issue 5).

In her contribution, Ingvild Bode reviews the literature on AI in international relations. She finds that scholarship on AI in IR can look back at a longer-than-expected trajectory and centres on four key themes: the balance of power; disinformation; governance; and ethics. Read Ingvild’s article here.

In his contribution, Tom Watts argues that IR scholars should approach AI with caution and avoid generating new analytical frameworks based on hype rather than reality. Read Tom’s article here.

ICRC Humanitarian Law & Policy Blog

Ingvild Bode •  3 September 2024

Writing for the ICRC Humanitarian Law & Policy blog, Ingvild Bode and Ishmael Bhila (PhD researcher at Paderborn University) unpack the problem of algorithmic bias with reference to AI-based decision support systems (AI DSS). They examine three categories of algorithmic bias – preexisting bias, technical bias, and emergent bias – across four lifecycle stages of an AI DSS, concluding that stakeholders in the ongoing discussion about AI in the military domain should consider the impact of algorithmic bias on AI DSS more seriously.

Read the blog post here.

Book chapter on autonomous drones

Ingvild Bode & Anna Nadibaidze •  21 August 2024

In a contribution to the De Gruyter Handbook of Drone Warfare (edited by James Patton Rogers), Ingvild Bode and Anna Nadibaidze write about drones integrating autonomous technologies. Militaries are increasingly interested in developing and acquiring loitering munitions, swarming technologies, and larger models of autonomous drones. This chapter explores developments in each of these areas. It considers the perceived advantages of autonomy in drones and the challenges associated with practices in relation to weapon systems incorporating autonomous and AI technologies, especially in terms of human-machine interaction and the quality of human control over the use of force in warfare.

Blog post on visuals of AI in the military

Anna Nadibaidze •  15 July 2024

Anna Nadibaidze wrote a blog post for the Better Images of AI blog, where she argues for the need to discuss and find alternatives to images of humanoid ‘killer robots’. She provides an overview of the main themes in visual communication about AI in international security and warfare, discusses why some of these visuals raise concerns, and calls for more critical reflection on the types of imagery used by various actors in the debate on AI in the military.

Book chapter on the geopolitics of AI in warfare

Ingvild Bode & Guangyu Qiao-Franco •  June 2024

Ingvild Bode and Guangyu Qiao-Franco contributed to the Handbook on Public Policy and Artificial Intelligence published by Edward Elgar and edited by Regine Paul, Emma Carmel, and Jennifer Cobbe. Their chapter is entitled “The Geopolitics of AI in Warfare: Contested Conceptions of Human Control”.

See more about the book here.

Transcending the fog of war? US military ‘AI’, vision, and the emergent post-scopic regime

Hendrik Huelss •  30 May 2024

In a new article for the European Journal of International Security, Hendrik Huelss analyses how AI technologies change military ‘observation’. The article suggests an imminent era of de-visualisation in the military – a deliberate relinquishment of human control for perceived military efficiency and effectiveness. This de-visualisation marks a transformative shift, urging nuanced consideration of the profound impact of ‘AI’ technologies on warfare dynamics.

Read the article open-access here.

Opinio Juris Symposium on Military AI

Ingvild Bode, Anna Nadibaidze & Guangyu Qiao-Franco •  April 2024

Ingvild Bode and Anna Nadibaidze wrote a contribution for Opinio Juris’ Symposium on Military AI and the Law of Armed Conflict, convened by Lena Trabucco and Magda Pacholska. Their article, entitled “Human-machine Interaction in the Military Domain and the Responsible AI Framework”, offers a preliminary examination of the extent to which the Responsible AI framework addresses challenges attached to changing human-machine interaction in the military domain.

Guangyu Qiao-Franco also contributed to the symposium, writing about challenges of governing dual-use AI technologies in times of geopolitical rivalries together with Mahmoud Javadi.

Technology in the Quest for Status: The Russian Leadership's AI Narrative

Anna Nadibaidze •  16 March 2024

In an article published with the Journal of International Relations and Development, Anna Nadibaidze examines the mismatch between the Russian leadership’s AI narrative and the country’s technological capabilities via the lens of Russia’s quest for great power status and ontological security. She shows the need to scrutinise narratives surrounding technology, especially AI technologies and their associated ambiguities, as part of how states deal with the constant uncertainty about recognition of their self-perceived identity. Based on an analysis of textual and visual documents collected via open-access sources, the article finds that the Russian official AI narrative embeds three of the elements forming Russia’s conception of a great power, namely the ability to compete, modernise, and attain technological sovereignty. Although the official rhetoric does not match the reality of Russian capabilities, the narrative is used as a cognitive tool in the quest for identity during times of uncertainty.

ICRC Humanitarian Law & Policy Blog

Ingvild Bode •  14 March 2024

In a short piece for the ICRC Humanitarian Law & Policy blog, Ingvild Bode argues that bias is as much a social as a technical problem and that addressing it therefore requires going beyond technical solutions. She holds that the risks of algorithmic bias need to receive more dedicated attention as the work of the Group of Governmental Experts (GGE) on LAWS turns towards operationalisation. These arguments are based on Ingvild’s presentation at the GGE side event “Fixing Gender Glitches in Military AI: Mitigating Unintended Biases and Tackling Risks” organised by the United Nations Institute for Disarmament Research (UNIDIR) on 6 March 2024.

Article in Security Dialogue

Guangyu Qiao-Franco •  31 January 2024

Security Dialogue published the article “Insurmountable Enemies or Easy Targets? Military-themed Videogame ‘Translations’ of Weaponized Artificial Intelligence” by Guangyu Qiao-Franco, co-authored with Paolo Franco (Radboud University, the Netherlands). 

International relations scholarship has long emphasized that popular culture can impact public understandings and political realities. This article explores these potentials in the context of military-themed videogames and their portrayals of weaponized artificial intelligence (AI). Within paradoxical videogame representations of AI weapons both as ‘insurmountable enemies’ that pose existential threats to humankind in narratives and as ‘easy targets’ that human protagonists routinely overcome in gameplay, the authors identify distortions of human–machine interaction that contradict real-world scenarios. By leveraging the Actor-Network Theory concept of ‘translation’, the authors explain how these distorted portrayals of AI weapons are produced by entanglements between heterogeneous human and non-human actors that aim to make videogames mass-marketable and profitable. 

Special issue on communities of practice in Global Studies Quarterly

Ingvild Bode & Guangyu Qiao-Franco • 27 January 2024

Ingvild Bode and Guangyu Qiao-Franco each published an article in the special issue “International Communities of Practices and Social Ordering” in Global Studies Quarterly 4(1), edited by Emanuel Adler, Niklas Bremberg, and Maïka Sondarjee.

In her article “Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems”, Ingvild draws on practice theories, science and technology studies, and critical norm research. She argues that a constellation of communities of practice shapes the public debate about LAWS.

Meanwhile, Guangyu’s article “An Emergent Community of Cyber Sovereignty: The Reproduction of Boundaries?” probes the boundary-work of Communities of Practice by examining China’s active efforts in advancing a state-centric approach in managing cyberspace in the international arena. 

Introduction to Special Issue on Algorithmic Warfare 

AutoNorms •  8 January 2024

In a new article published in Global Society, Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, Guangyu Qiao-Franco, and Tom Watts take stock of the ongoing debates on algorithmic warfare in the social sciences.

The article “Algorithmic Warfare: Taking Stock of a Research Programme” seeks to equip scholars in International Relations and beyond with a critical review of both the empirical context of algorithmic warfare and the different theoretical approaches to studying practices related to the integration of algorithms into international armed conflict. The review focuses on discussions about (1) the implications of algorithmic warfare for strategic stability, (2) the morality and ethics of algorithmic warfare, (3) how algorithmic warfare relates to the laws and norms of war, and (4) popular imaginaries of algorithmic warfare.

This article serves as the introduction to a Special Issue on the Algorithmic Turn in Security and Warfare, published in Global Society 38(1) and edited by Ingvild Bode and Guangyu Qiao-Franco.

Written Submission to the UN Office of the Secretary General’s Envoy on Technology

Ingvild Bode, Hendrik Huelss, Anna Nadibaidze & Tom Watts •  28 September 2023

The AutoNorms team has submitted a written contribution to the United Nations Office of the Secretary General’s Envoy on Technology. In preparation for the first meeting of the Multi-stakeholder High-level Advisory Body on Artificial Intelligence, the Office issued a call for papers on global AI governance. AutoNorms’ submission touches upon the issue of global governance of AI technologies in the military domain. Read it in full here.

Article published in Cooperation and Conflict

Tom Watts & Ingvild Bode •  23 September 2023

Cooperation and Conflict has published Tom Watts’ and Ingvild Bode’s article “Machine guardians: The Terminator, AI narratives and US regulatory discourse on lethal autonomous weapons systems”. References to the Terminator films are central to Western imaginaries of Lethal Autonomous Weapons Systems (LAWS). The puzzle of whether references to the Terminator franchise have featured in the United States’ international regulatory discourse on these technologies nevertheless remains underexplored.

Bringing the growing study of AI narratives into a greater dialogue with the International Relations literature on popular culture and world politics, this article unpacks the repository of different stories told about intelligent machines in the first two Terminator films. Through an interpretivist analysis of this material, Watts and Bode examine whether these AI narratives have featured in the US written contributions to the international regulatory debates on LAWS at the United Nations Convention on Certain Conventional Weapons in the period between 2014 and 2022. Their analysis highlights how hopeful stories about ‘machine guardians’ have been mirrored in these statements: LAWS development has been presented as a means of protecting humans from physical harm, enacting the commands of human decision makers and using force with superhuman levels of accuracy. This suggests that, contrary to existing interpretations, the various stories told about intelligent machines in the Terminator franchise can be mobilised to both support and oppose the possible regulation of these technologies.

Blog contribution

Ingvild Bode & Tom Watts •  29 June 2023

In a contribution to the ICRC Humanitarian Law & Policy Blog, Ingvild Bode and Tom Watts highlight the need for legally binding rules on AWS based on their research about the development and use of loitering munitions. They write, “we do not need to go to dystopian sci-fi narratives to imagine potential problems associated with AWS. There are already problems at hand in how states design and use weapon systems integrating autonomous technologies in targeting in particular ways”.

Loitering Munitions and Unpredictability

Ingvild Bode & Tom Watts • 7 June 2023

A new report published by the Center for War Studies, University of Southern Denmark and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called.

Written by Dr. Ingvild Bode and Dr. Tom F.A. Watts, the “Loitering Munitions and Unpredictability” report examines whether the use of automated, autonomous, and artificial intelligence (AI) technologies as part of the global development, testing, and fielding of loitering munitions since the 1980s has impacted emerging practices and social norms of human control over the use of force. It questions the common assumption that the challenges generated by the weaponization of autonomy will only materialize in the near to medium-term future.

The report’s central argument is that whilst most existing loitering munitions are operated by a human who authorizes strikes against system-designated targets, the integration of automated and autonomous technologies into these weapons has created worrying precedents deserving of greater public scrutiny.

Read the full report here.

Article in Heidelberg Journal of International Law

Ingvild Bode •  May 2023

Ingvild Bode’s article “Contesting Use of Force Norms through Technological Practices” has been published in the Heidelberg Journal of International Law (HJIL) as part of a symposium on the contestation of the laws of war. This article examines the practice of targeted killing in the context of jus contra bellum and the emerging norm of ‘meaningful’ human control in jus in bello. It combines norm research with scholarship across critical international law, practice theories, and science and technology studies to examine the emergence of contested areas in between the international normative and legal orders. Read the article here.

How can the EU regulate military AI?

Ingvild Bode & Hendrik Huelss •  29 May 2023

Writing in The Academic, Ingvild Bode and Hendrik Huelss analyse the EU’s ambivalent stance as a hesitant regulator of military AI. They argue that the EU’s position results in two significant consequences, both of which favour a specific type of technical, corporate expertise. Firstly, the EU’s modest attempts at establishing rules on military AI attract technical and corporate experts to contribute their proficiency as part of advisory panels. Secondly, the EU finds itself becoming a rule-taker, as its member states utilise military applications of AI that embody design choices made by these technical and corporate experts.

Written evidence submitted to the House of Lords Select Committee on AI in Weapon Systems

The AutoNorms team •  4 May 2023

The AutoNorms team has submitted written evidence to the UK House of Lords AI in Weapon Systems Select Committee as part of its enquiry on AI in weapon systems.

Read the evidence submitted by Ingvild Bode, Hendrik Huelss, and Anna Nadibaidze here.

Read the evidence submitted by Tom Watts here.

The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses

Anna Nadibaidze • 26 April 2023

In their article published in the Journal for Peace and Nuclear Disarmament, Anna Nadibaidze and Nicolò Miotto argue that the relationship between AI and strategic stability is not only given through the technical nature of AI, but also constructed by policymakers’ beliefs about these technologies and other states’ intentions to use them. Adopting a constructivist perspective, they investigate how decision-makers from the United States and Russia talk about military AI by analyzing US and Russian official discourses from 2014–2023 and 2017–2023, respectively.

Nadibaidze and Miotto conclude that both sides have constructed a threat out of their perceived competitors’ AI capabilities, reflecting their broader perspectives on strategic stability, as well as a social context characterized by distrust and feelings of competition. Their discourses fuel a cycle of misperceptions which could be addressed via confidence-building measures. However, this competitive cycle is unlikely to improve due to ongoing tensions following the Russian invasion of Ukraine. The article was published as part of a Special Issue on Strategic Stability in the 21st Century, edited by Ulrich Kühn.

Article published in European Journal of International Relations

Ingvild Bode • 10 April 2023

In the article “Practice-based and public-deliberative normativity: retaining human control over the use of force”, published in the European Journal of International Relations, Ingvild Bode theorises how practices of designing, of training personnel for, and of operating weapon systems integrating autonomous technologies have shaped normativity/normality on human control at sites unseen. She traces how this normativity/normality interacts with public deliberations at the Group of Governmental Experts (GGE) on LAWS by theorising potential dynamics of interaction. Bode argues that the normativity/normality emerging from practices performed in relation to weapon systems integrating autonomous technologies assigns humans a reduced role in specific use of force decisions and understands this diminished decision-making capacity as ‘appropriate’ and ‘normal’. 

Analysis of Russia’s narratives on military AI and autonomy

Anna Nadibaidze • 3 March 2023

In an article for the Network for Strategic Analysis (NSA), Anna Nadibaidze analyses how Russia’s ‘low-tech’ war on Ukraine discredited its military modernization narrative, of which drones and AI have been a key element. She argues, “Russia’s full-scale invasion of Ukraine revealed the mismatch between the narrative Moscow has been promoting and the reality of Russian military technological capabilities”.

The article is also available in French on the website of Le Rubicon.

Article in Journal of European Public Policy

Ingvild Bode & Hendrik Huelss • 14 February 2023

The Journal of European Public Policy has published “Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union” by Ingvild Bode and Hendrik Huelss. This article is part of a Special Issue on the Regulatory Security State in Europe, co-edited by Andreas Kruck and Moritz Weiss.

The article investigates how the EU as a multi-level system aims at regulating military artificial intelligence (AI) based on epistemic authority. It suggests that the EU acts as a rule-maker and a rule-taker of military AI predicated on constructing private, corporate actors as experts. As a rule-maker, the EU has set up expert panels such as the Global Tech Panel to inform its initiatives, thereby inviting corporate actors to become part of its decision-making process through the front door. But the EU is also a rule-taker in that its approach to regulating military AI is shaped through the back door by how corporate actors design AI technologies. These observations signal an emerging hybrid regulatory security state based on ‘liquid’ forms of epistemic authority that empowers corporate actors but also denotes a complex mix of formal political and informal expert authority.

The need for and nature of a normative, cultural psychology of weaponized AI

Ingvild Bode • 6 February 2023

Ingvild Bode co-authored the article “The need for and nature of a normative, cultural psychology of weaponized AI (artificial intelligence)” with Rockwell Clancy and Qin Zhu from the Department of Engineering Education, Virginia Polytechnic Institute and State University. The article was published in Ethics and Information Technology as part of the collection on Responsible AI in Military Applications.

This position piece describes the motivations for and sketches the nature of a normative, cultural psychology of weaponized AI. The motivations for this project include the increasingly global, cross-cultural, and international nature of technologies, and the counter-intuitive nature of normative thoughts and behaviors. The nature of this project consists in developing standardized measures of AI ethical reasoning and intuitions, coupled with questions exploring the development of norms, administered and validated across different cultural groups and disciplinary contexts. The goal of this piece is not to provide a comprehensive framework for understanding the cultural facets and psychological dimensions of weaponized AI but, rather, to outline in broad terms the contours of an emerging research agenda.

Article in Ethics and Information Technology

Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, Guangyu Qiao-Franco & Tom Watts • 3 February 2023

The AutoNorms team’s article “Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices” argues for the necessity to adopt legal norms on the use and development of autonomous weapon systems (AWS). Without a framework for global regulation, state practices in using weapon systems with AI-based and autonomous features will continue to shape the norms of warfare and affect the level and quality of human control in the use of force. By examining the practices of China, Russia, and the United States in their pursuit of AWS-related technologies and participation at the UN CCW debate, we acknowledge that their differing approaches make it challenging for states parties to reach an agreement on regulation, especially in a forum based on consensus. Nevertheless, we argue that global governance on AWS is not impossible. It will depend on the extent to which an actor or group of actors would be ready to take the lead on an alternative process outside of the CCW, inspired by the direction of travel given by previous arms control and weapons ban initiatives.

The article has been published in Ethics and Information Technology as part of the collection on Responsible AI in Military Applications.

Article in The Chinese Journal of International Politics 

Guangyu Qiao-Franco & Ingvild Bode • 9 January 2023

In the article “Weaponised Artificial Intelligence and Chinese Practices of Human–Machine Interaction”, published in the Chinese Journal of International Politics, Guangyu Qiao-Franco and Ingvild Bode unpack China’s understanding of human–machine interaction. Despite repeatedly supporting a legal ban on lethal autonomous weapons systems (LAWS), China simultaneously promotes a narrow understanding of these systems that excludes them from what it deems “beneficial” uses of AI. This article offers an understanding of this ambivalent position by investigating how it is constituted through Chinese actors’ competing practices in the areas of economy, science and technology, defence, and diplomacy. Such practices produce normative understandings of human control and machine autonomy that pull China’s position on LAWS in different directions. Qiao-Franco and Bode contribute to scholarship at the intersection of norm research and international practice theories by examining how normativity originates in and emerges from diverse domestic contexts within competing practices. They also aim to provide insights into possible approaches to achieving consensus in debates on regulating LAWS, which at the time of writing have reached a stalemate.

Article published in Journal of Contemporary China

Guangyu Qiao-Franco • 1 December 2022

The article “China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice”, by Guangyu Qiao-Franco and Rongsheng Zhu from Tsinghua University, has been published in the Journal of Contemporary China. Extant literature has not fully accounted for the changes underway in China’s perspectives on the ethical risks of artificial intelligence (AI). This article develops a community-of-practice (CoP) approach to the study of Chinese policymaking in the field of AI. It shows that the Chinese approach to ethical AI emerges from the shared practices of a relatively stable group of actors from three domains: the government, academia, and the private sector. This Chinese CoP is actively cultivated and led by government actors. The paper draws attention to CoP configurations during collective situated learning and problem-solving among its members that inform the evolution of Chinese ethical concerns about AI. In so doing, it demonstrates how a practice-oriented approach can contribute to interpreting Chinese politics on AI governance.

Publication of analysis piece

Anna Nadibaidze • 8 September 2022

Writing for the Foreign Policy Research Institute (FPRI) blog, Anna Nadibaidze analyses the Russian leadership’s narrative on technological sovereignty. She argues, “The fact that Russia’s leadership is pushing this narrative suggests that the goal is, instead, to provide a sense of ontological security and intensify the belief in Russia’s identity as a great power”. Read the full piece here.

Publication in La Vanguardia

Anna Nadibaidze • 9 June 2022

Anna Nadibaidze contributed to Dossier, a quarterly publication by the Barcelona-based newspaper La Vanguardia. Her text “Weaponized Artificial Intelligence in the Nuclear Domain” (translated into Spanish) appeared in Dossier #84, entitled “Nuclear Rearmament”.

Article published in Contemporary Security Policy

Anna Nadibaidze • 19 May 2022

Anna Nadibaidze’s article “Great power identity in Russia’s position on autonomous weapons systems”, published in Contemporary Security Policy, proposes an identity-based analysis of the Russian position in the global debate on AWS. Based on an interpretation of Russian written and verbal statements submitted to the United Nations Convention on Certain Conventional Weapons (CCW) meetings from 2014 to 2022, Nadibaidze finds that two key integral elements of Russian great power identity—the promotion of multipolarity and the recognition of Russia’s equal participation in global affairs—guide its evolving position on the potential regulation of AWS. The analysis makes an empirical contribution by examining one of the most active participants in the CCW discussion, an opponent to any new regulations of so-called “killer robots,” and a developer of autonomy in weapons systems. It highlights the value of a more thorough understanding of the ideas guiding the Russian position, assisting actors who seek a ban on AWS in crafting their responses and strategies in the debate.

Online publication in Le Rubicon

Anna Nadibaidze • 3 May 2022

In an online piece (in French) published in Le Rubicon, Anna Nadibaidze explores the different pathways available for the regulation of autonomous weapons. She notes the importance of moving forward in the AWS discussion, whether at the UN or as part of an independent process.

Publication of analytical piece in German

Ingvild Bode & Anna Nadibaidze • April 2022

Ingvild Bode and Anna Nadibaidze contributed the article “Von wegen intelligent: Autonome Drohnen und KI-Waffen im Ukraine-Krieg” (Not really intelligent: Autonomous Drones and Weaponised AI in the Ukraine War) to the magazine Ct Magazin für Computertechnik.

Read the article in German here.

Book publication

Ingvild Bode & Hendrik Huelss • January 2022

Autonomous Weapons Systems and International Norms, by Ingvild Bode and Hendrik Huelss, has been published by McGill-Queen’s University Press.

In Autonomous Weapons Systems and International Norms, Ingvild Bode and Hendrik Huelss present an innovative study of how testing, developing, and using weapons systems with autonomous features shapes ethical and legal norms, and how standards manifest and change in practice. Autonomous weapons systems are not a matter for the distant future – some autonomous features, such as in air defence systems, have been in use for decades. They have already incrementally changed use-of-force norms by setting emerging standards for what counts as meaningful human control. As UN discussions drag on with minimal progress, the trend towards autonomizing weapons systems continues.

A thought-provoking and urgent book, Autonomous Weapons Systems and International Norms provides an in-depth analysis of the normative repercussions of weaponizing artificial intelligence.

Report on Russian perceptions of military AI, automation, and autonomy

Anna Nadibaidze • 27 January 2022

In a report published by the Foreign Policy Research Institute (FPRI), Anna Nadibaidze provides an overview of the different conceptions and motivations that have been guiding Russian political and military leaderships in their ambitions to pursue weaponised AI. 

The report is available on the FPRI website.

Publication of essay by the GCSP

Anna Nadibaidze • 18 January 2022

Anna Nadibaidze’s essay “Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security” was published by the Geneva Centre for Security Policy (GCSP). The essay received first prize ex-aequo in the 2021 OSCE-IFSH Essay Competition on Conventional Arms Control and Confidence- and Security-Building Measures in Europe.

Publication of analysis in E-International Relations

Tom Watts • 15 December 2021 

Tom Watts co-authored the article “Remote Warfare: A Debate Worth the Buzz?” with Rubrick Biegon and Vladimir Rauta. The piece, published online by E-International Relations, explores the different meanings of remote warfare and implications of this analytical concept for future scholarship.

Read it here.

Publication of special issue on remote warfare

Tom Watts • November 2021 

Tom Watts co-edited the “Remote Warfare and Conflict in the Twenty-First Century” issue of Defence Studies (Volume 21, Issue 4) along with Rubrick Biegon and Vladimir Rauta. He also co-authored two articles within the special issue: “Remote Warfare – Buzzword or Buzzkill?” (with Rubrick Biegon and Vladimir Rauta) and “Revisiting the Remoteness of Remote Warfare: US Military Intervention in Libya During Obama’s Presidency” (with Rubrick Biegon).

Written contribution to the UN CCW Group of Governmental Experts on LAWS 

AutoNorms • September 2021

The AutoNorms team submitted a written contribution to the Chair of the Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS), in preparation for the GGE’s second session which took place 24 September – 1 October 2021. The contribution addressed one of the Chair’s guiding questions, “How would the analysis of existing weapons systems help elaborate on the range of factors that should be considered in determining the quality and extent of human-machine interaction/human control/human judgment?”

Read the contribution here.

Opinion piece in TheArticle

Anna Nadibaidze • 15 September 2021

In an opinion piece for TheArticle, Anna Nadibaidze argues that while the debate on the potential regulation of lethal autonomous weapons systems at the UN is stalling, interested states parties will continue to pursue the development of weaponised artificial intelligence, further contributing to the multi-dimensional challenges brought by these technologies.

Read the piece here.

Publication of analysis in the German-language Ct Magazin für Computertechnik

Ingvild Bode & Tom Watts • September 2021

In a piece published with the German-language magazine Ct Magazin für Computertechnik, Ingvild Bode and Tom Watts examine the role and technical capabilities of some of the drone technologies used by the United States as part of the war in Afghanistan.

The German-language version of the text can be accessed here, and a longer English-language version has also been made available on the AutoNorms website.

Written evidence submitted to the Foreign Affairs Committee enquiry on “Tech and the future of UK foreign policy”

Ingvild Bode, Anna Nadibaidze, Hendrik Huelss & Tom Watts • June 2021 

The AutoNorms team has submitted written evidence to the UK House of Commons Foreign Affairs Committee as part of its enquiry on “Tech and the future of UK foreign policy”. The written evidence made a series of recommendations for how the UK Government should act to shape and directly influence AI governance norms. These included calling for the UK to clarify its stance on the role and quality of human control it considers appropriate in the use of force and acknowledging that setting a positive obligation for maintaining human control in specific use of force situations is a crucial step in regulating weaponised AI.

Read the written evidence here.

Analytical essay in Global Cooperation Research – A Quarterly Magazine 

Ingvild Bode • April 2021

In this piece Ingvild Bode examines practice theories as an evolving theoretical programme in the discipline of International Relations. She argues that practice theories have much to gain from remaining diverse in their groundings and actively expanding that diversity beyond the current “canon”. She considers engagements with critical security studies, critical norm research, and Science and Technology Studies particularly useful. Bode also argues for a deeper theorisation of how both verbal and non-verbal practices produce and shape norms.

Read the article here.

Analysis in the Bulletin of the Atomic Scientists 

Ingvild Bode & Tom Watts • 21 April 2021

This analysis piece by Ingvild Bode and Tom Watts summarises their research on air defence systems in the context of the debate on lethal autonomous weapons systems (LAWS). They argue that looking at such historic and currently employed systems illustrates pertinent risks associated with their use.   

Read the article here.

Publication of a policy report on air defence systems

Ingvild Bode & Tom Watts • February 2021 

The policy report “Meaning-less Human Control”, written by Ingvild Bode and Tom Watts and published in collaboration with Drone Wars UK, argues that decades of using air defence systems with automated and autonomous features have incrementally diminished meaningful human control over specific use of force situations. The report argues that this process shapes an emerging norm, a standard of appropriateness, among states. This norm assigns humans a diminished role in specific use of force decisions. However, the international debate on LAWS is yet to acknowledge or scrutinize this norm. If this continues, potential international efforts to regulate LAWS through codifying meaningful human control will be undermined.

Read the report here. The catalogue on automation and autonomy in air defence systems can be accessed here.

Book chapter on AI, weapons systems, and human control

Ingvild Bode & Hendrik Huelss • 16 February 2021 

Ingvild Bode and Hendrik Huelss contributed to the book Remote Warfare: Interdisciplinary Perspectives, edited by Alasdair McKay, Abigail Watson and Megan Karlshøj-Pedersen, and published by E-International Relations. Their chapter, “Artificial Intelligence, Weapons Systems and Human Control”, discusses the impact that increasingly autonomous features in weapons systems can have on human decision-making in warfare. 

Read the chapter here.

Publication of analysis in The Conversation 

Ingvild Bode • 15 October 2020 

Writing after the September 2020 discussions of the GGE on LAWS, Ingvild Bode examines the extent to which CCW states parties agree on retaining meaningful human control over the use of force. She argues that many states champion a distributed perspective which considers how human control is present across the entire life-cycle of the weapons. Acknowledging that this reflects operational reality, Ingvild’s analysis also presents drawbacks of this perspective: it runs the risk of making human control more nebulous and distracting from how human control is exerted in specific use of force situations.

Read the article here.

Publication of project description in The Project Repository Journal 

Ingvild Bode • July 2020

In this piece Ingvild Bode maps out the research agenda for the ERC-funded AutoNorms project. The article offers a short overview of AutoNorms’ research background and objectives, as well as the envisaged contribution that the project intends to make over the next five years (pp. 140-143).

Read the article here.

List of publications by the AutoNorms team

Books

Bode, I. and Huelss, H. (2022). Autonomous Weapons Systems and International Norms. McGill-Queen’s University Press.

Book chapters

Bode, I. and Qiao-Franco, G. (2024). “AI Geopolitics and International Relations”. In Handbook on Public Policy and AI, edited by Paul, R., Carmel, E., and Cobbe, J. Cheltenham: Edward Elgar, 281-294.

Bode, I. and Nadibaidze, A. (2024). “Autonomous Drones”. In The De Gruyter Handbook of Drone Warfare, edited by Rogers, J. Berlin: De Gruyter, 369-384. 

Bode, I. and Huelss, H. (2021). “The Future of Remote Warfare? Artificial Intelligence, Weapons Systems and Human Control.” In Remote Warfare: Interdisciplinary Perspectives, edited by McKay, A., Watson, A. and Karlshøj-Pedersen, M. Bristol: E-International Relations Publishing, 218–33.

Peer-reviewed articles

Bode, I. (2024). AI Technologies and International Relations: Do We Need New Analytical Frameworks? The RUSI Journal 169(5). https://doi.org/10.1080/03071847.2024.2392394.

Huelss, H. (2024). Transcending the fog of war? US military ‘AI’, vision, and the emergent post-scopic regime. European Journal of International Security. https://doi.org/10.1017/eis.2024.21.

Nadibaidze, A. (2024). Technology in the quest for status: The Russian leadership’s artificial intelligence narrative. Journal of International Relations and Development 27, 117-142. https://doi.org/10.1057/s41268-023-00322-1.

Qiao-Franco, G. and Franco, P. (2024). Insurmountable enemies or easy targets? Military-themed videogame ‘translations’ of weaponized artificial intelligence. Security Dialogue 55(1), 81-102. https://doi.org/10.1177/09670106231218829.

Bode, I. (2024). Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems. Global Studies Quarterly 4(1). https://doi.org/10.1093/isagsq/ksad073.

Qiao-Franco, G. (2024). An Emergent Community of Cyber Sovereignty: The Reproduction of Boundaries? Global Studies Quarterly 4(1). https://doi.org/10.1093/isagsq/ksad077.

Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G. and Watts, T.F.A. (2024). Algorithmic Warfare: Taking Stock of a Research Programme. Global Society 38(1). https://doi.org/10.1080/13600826.2023.2263473.

Watts, T.F.A. and Bode, I. (2024). Machine guardians: The Terminator, AI narratives and US regulatory discourse on lethal autonomous weapons systems. Cooperation and Conflict 59(1), 107-128. https://doi.org/10.1177/00108367231198155.

Bode, I. (2023). Contesting Use-of-Force Norms through Technological Practices. Heidelberg Journal of International Law 83(1), 39-64. https://www.nomos-elibrary.de/10.17104/0044-2348-2023-1-39.pdf.

Nadibaidze, A. and Miotto, N. (2023). The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses. Journal for Peace and Nuclear Disarmament 6(1), 47-67. https://doi.org/10.1080/25751654.2023.2205552.

Bode, I. (2023). Practice-based and public-deliberative normativity: retaining human control over the use of force. European Journal of International Relations 29(4), 990-1016. https://doi.org/10.1177/13540661231163392.

Bode, I., and Huelss, H. (2023). Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union. Journal of European Public Policy 30(7), 1230-1254. https://doi.org/10.1080/13501763.2023.2174169.

Clancy, R., Bode, I., and Zhu, Q. (2023). The need for and nature of a normative, cultural psychology of weaponized AI. Ethics and Information Technology 25(6). https://doi.org/10.1007/s10676-023-09680-3.

Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G. and Watts, T.F.A. (2023). Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices. Ethics and Information Technology 25(5). https://doi.org/10.1007/s10676-023-09678-x.

Qiao-Franco, G. and Bode, I. (2023). Weaponised Artificial Intelligence and Chinese Practices of Human–Machine Interaction. The Chinese Journal of International Politics 16(1), 106-128. https://doi.org/10.1093/cjip/poac024.

Qiao-Franco, G. and Zhu, R. (2022). China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice. Journal of Contemporary China 33(146), 189-205. https://doi.org/10.1080/10670564.2022.2153016.

Nadibaidze, A. (2022). Great Power Identity in Russia’s Position on Autonomous Weapons Systems. Contemporary Security Policy 43(3), 407-435. https://doi.org/10.1080/13523260.2022.2075665.

Biegon, R. and Watts, T.F.A. (2022). Remote Warfare and the Retooling of American Primacy. Geopolitics 27(3), 948-971. https://doi.org/10.1080/14650045.2020.1850442.

Biegon, R., Rauta, V. and Watts, T.F.A. (2021). Remote Warfare – Buzzword or Buzzkill? Defence Studies 21(4), 427-446. https://doi.org/10.1080/14702436.2021.1994396.

Watts, T.F.A. and Biegon, R. (2021). Revisiting the Remoteness of Remote Warfare: US Military Intervention in Libya During Obama’s Presidency. Defence Studies 21(4), 508-527. https://doi.org/10.1080/14702436.2021.1994397.

Huelss, H. (2020). Norms Are What Machines Make of Them: Autonomous Weapons Systems and the Normative Implications of Human-Machine Interactions. International Political Sociology 14(2), 111-128. https://doi.org/10.1093/ips/olz023.

Bode, I. and Huelss, H. (2019). Introduction to the Special Section: The Autonomisation of Weapons Systems: Challenges to International Relations. Global Policy 10(3), 327-330. https://doi.org/10.1111/1758-5899.12704.

Bode, I. (2019). Norm‐making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems. Global Policy 10(3), 359-364. https://doi.org/10.1111/1758-5899.12684.

Huelss, H. (2019). Deciding on Appropriate Use of Force: Human‐machine Interaction in Weapons Systems and Emerging Norms. Global Policy 10(3), 354-358. https://doi.org/10.1111/1758-5899.12692.

Bode, I. and Huelss, H. (2018). Autonomous Weapons Systems and Changing Norms in International Relations. Review of International Studies 44(3), 393-413. https://doi.org/10.1017/S0260210517000614.

Huelss, H. (2017). After Decision-Making: The Operationalization of Norms in International Relations. International Theory 9(3), 381-409. https://doi.org/10.1017/S1752971917000069.

Reports

Nadibaidze, A., Bode, I., and Zhang, Q. (2024). AI in Military Decision Support Systems: A Review of Developments and Debates. Center for War Studies.

Bode, I. and Watts, T.F.A. (2023). Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control. Center for War Studies & Royal Holloway Centre for International Security.

Nadibaidze, A. (2022). Russian Perceptions of Military AI, Automation, and Autonomy. Foreign Policy Research Institute.

Nadibaidze, A. (2022). Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security. Geneva Centre for Security Policy.

Bode, I. and Watts, T.F.A. (2021). Meaning-Less Human Control: Lessons from Air Defence Systems on Meaningful Human Control for the Debate on AWS. Drone Wars UK & Center for War Studies, University of Southern Denmark.

Other publications

Bode, I. and Bhila, I. (2024, 3 September). The problem of algorithmic bias in AI-based military decision support systems. ICRC Humanitarian Law & Policy Blog.

Nadibaidze, A. (2024, 15 July). Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images? Better Images of AI Blog.

Bode, I. and Nadibaidze, A. (2024, 4 April). Human-machine Interaction in the Military Domain and the Responsible AI Framework. Opinio Juris.

Qiao-Franco, G. and Javadi, M. (2024, 3 April). Navigating the Governance of Dual-Use Artificial Intelligence Technologies in Times of Geopolitical Rivalries. Opinio Juris.

Bode, I. (2024, 14 March). Falling under the radar: the problem of algorithmic bias and military applications of AI. ICRC Humanitarian Law & Policy Blog.

Bode, I. and Watts, T.F.A. (2023). Loitering munitions: flagging an urgent need for legally binding rules for autonomy in weapon systems. ICRC Humanitarian Law & Policy Blog.

Bode, I., Huelss, H., and Nadibaidze, A. (2023). Kunstig intelligens i krig (Artificial Intelligence in Warfare). Jysk Fynske Medier i Erhverv+. [in Danish]

Nadibaidze, A. (2023). La guerre « low-tech » de la Russie contre l’Ukraine a discrédité son récit de modernisation militaire (Russia’s ‘low-tech’ war on Ukraine discredited its military modernization narrative). Le Rubicon. [in French]

Nadibaidze, A. (2022). Understanding Russia’s Efforts at Technological Sovereignty. Foreign Policy Research Institute Blog.

Nadibaidze, A. (2022). “La inteligencia artificial militarizada en el ámbito nuclear” (Weaponized Artificial Intelligence in the Nuclear Domain). La Vanguardia Dossier No. 84. [in Spanish]

Nadibaidze, A. (2022). Russian Great Power Identity in the Debate on ‘Killer Robots’. Contemporary Security Policy Blog.

Nadibaidze, A. (2022). Quel futur pour le débat international sur les systèmes d’armes autonomes? (What future for the international debate on autonomous weapons systems?). Le Rubicon. [in French]

Bode, I., and Nadibaidze, A. (2022). “Von wegen intelligent: Autonome Drohnen und KI-Waffen im Ukraine-Krieg” (Not really intelligent: Autonomous Drones and Weaponised AI in the Ukraine War). Ct Magazin für Computertechnik 10/2022. [in German]

Biegon, R., Rauta, V., and Watts, T.F.A. (2021). Remote Warfare: A Debate Worth the Buzz? E-International Relations.

Nadibaidze, A. (2021). The AI Arms Race: How Can We Control the Use of Killer Robots? TheArticle.

Bode, I. and Watts, T.F.A. (2021). “Vereitelte Drohnenaufklärung: Die USA halten Opferzahlen von Drohneneinsätzen zurück” (Drones in Afghanistan: Not a Technological ‘Silver Bullet’). Ct Magazin für Computertechnik [in German].

Bode, I. (2021). Practice Theories and Critical Security Studies. Global Cooperation Research: A Quarterly Magazine 1/2021, 8-10.

Bode, I. and Watts, T.F.A. (2021). Worried about the Autonomous Weapons of the Future? Look at What’s Already Gone Wrong. Bulletin of the Atomic Scientists.

Bode, I. (2020). The Threat of ‘Killer Robots’ is Real and Closer than You Might Think. The Conversation.

Bode, I. (2018). AI has Already Been Weaponised – And It Shows Why We Should Ban ‘Killer Robots’. The Conversation.

Bode, I. and Huelss, H. (2017). Why ‘Stupid’ Machines Matter: Autonomous Weapons and Shifting Norms. Bulletin of the Atomic Scientists.