AutoNorms

Weaponised Artificial Intelligence, Norms, and Order

An International Research Project

Funded by the European Research Council (ERC)
Hosted by the Centre for War Studies (CWS) at the University of Southern Denmark (SDU)

Human-Machine Interaction

HuMach

The Distributed Agency of Humans and Machines in Military AI

An International Research Project

Funded by the Independent Research Fund Denmark (DFF)
Hosted by the Centre for War Studies (CWS) at the University of Southern Denmark (SDU)

An International Research Project

Weaponised artificial intelligence (AI) in the form of autonomous weapons systems (AWS) could diminish the role of meaningful human decision-making in warfare. 

Today, states already use more than 130 systems that can autonomously track targets. But future autonomous weapons will increasingly include AI in their critical functions, meaning that machines, rather than humans, will make life-or-death decisions. This development is likely to change international norms governing the use of force. 

The EU-funded AutoNorms project will develop a new theoretical approach to study how norms, understood as standards of appropriateness, manifest and change in practices. It will investigate norm emergence and change across four contexts of practices (military, transnational political, dual-use and popular imagination) in the US, China, Japan and Russia. It will also review the impact AWS could have on the current international security order.

Funding

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852123).

Weaponised Artificial Intelligence, Norms, and Order

Weapon systems that integrate an increasing number of automated or autonomous features raise the question of whether humans will remain in direct, meaningful control of the use of force. Concerns relate, in particular, to weapon systems with autonomy in their critical functions – that is, systems that select and engage targets through sensors and algorithms without human input. Such autonomous features take many different forms: we find them, inter alia, in loitering munitions, aerial combat vehicles, stationary sentries, counter-drone systems, air defence systems, surface vehicles, and ground vehicles. While diverse, these systems are captured by the catch-all category of autonomous weapons systems (AWS) because they weaponise artificial intelligence (AI). 

Many states consider applying force without any human control unacceptable. But there is less consensus about the various, complex forms of human-machine interaction along a spectrum of autonomy, and about the precise point(s) at which human control stops being meaningful. To illustrate, can human control that merely executes decisions based on indications received from a computer be considered meaningful, given that “black-boxed” algorithmic processing is not accessible to human reasoning? Faced with these questions in the transnational debate at the UN Convention on Certain Conventional Weapons (CCW), states reach different conclusions: some states, supported by civil society organisations, advocate introducing new legal norms to prohibit so-called fully autonomous weapons, while other states prefer to leave the field open in order to increase their room for manoeuvre.  

As discussions drag on with little substantial progress, the operational trend towards including automated and autonomous features in weapon systems continues. A majority of the top 10 arms exporters, such as the USA, China, and Russia, are developing or planning to develop some form of AWS. Such dynamics point to a potentially influential trajectory: AWS may change what is considered appropriate in the use of autonomously applied force. Technologies have always shaped and altered warfare, and therefore how force is used and perceived. Yet the role that technology plays should not be conceived in deterministic terms. Rather, technology is ambivalent, making how it is used in international relations, and especially in warfare, a political question.

Learn more about AutoNorms

Read more about the questions driving our research, meet our team and explore our latest findings.

The AutoNorms research is organised into seven research themes. We regularly publish blogposts on our website that explore these key topics. Find our latest entries below, or click through to explore all research themes. 

Latest blogposts

Japanese ‘Robot Culture’ and the Military Domain: Fact or Fiction?

Imagine that you are sitting in a restaurant somewhere in central Tokyo. You have just ordered your lunch using the tablet provided. Suddenly, coming from the direction of the kitchen, you hear a jingle playing. Next, a little white robot turns the corner and drives in your direction. As it …

AI Summits and Declarations: Symbolism or Substance?

The UK’s AI Safety Summit, held on 1-2 November 2023 at Bletchley Park, has generated different types of responses from experts and commentators. Some praise it as a “major diplomatic breakthrough” for Prime Minister Rishi Sunak, especially as he managed to get 28 signatures, including those of China, the EU, and …

Loitering Munitions Report Online Launch Event

On 8 December 2023, 13:00-14:15 (CET) / 12:00-13:15 (GMT), an expert panel (including Laura Bruun, Stockholm International Peace Research Institute) will discuss the major findings of the “Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control” report published earlier this year. You can register to attend this online …

The Creator of New Thinking On AI? Popular Culture, Geopolitics, and Why Stories About Intelligent Machines Matter

Whilst the depiction of weaponised artificial intelligence (AI) technologies in popular culture is often highly inaccurate and dramatised, Hollywood blockbusters provide the starting point from which many members of the public begin to develop their thinking about these technologies. For instance, news articles discussing AI are often accompanied by images …
