Loitering Munitions Report Online Launch Event
On 8 December 2023, 13.00-14.15 (CET) / 12.00-13.15 (GMT), an expert panel (including Laura Bruun, Stockholm International Peace Research Institute) will discuss the major findings of the “Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control” report published earlier this year. You can register to attend this online event here. Co-authored by Dr. […]
The Creator of New Thinking On AI? Popular Culture, Geopolitics, and Why Stories About Intelligent Machines Matter
Whilst the depiction of weaponised artificial intelligence (AI) technologies in popular culture is often highly inaccurate and dramatised, Hollywood blockbusters provide the starting point from which many members of the public begin to develop their thinking about these technologies. For instance, news articles discussing AI are often accompanied by images of metallic silver skulls with […]
Global Governance of AI in the Military Domain
The AutoNorms team has submitted the short paper below to the United Nations Office of the Secretary-General’s Envoy on Technology. In preparation for the first meeting of the Multi-stakeholder High-level Advisory Body on AI, the Office issued a call for papers on global AI governance. AutoNorms’ submission, written by Ingvild Bode, Hendrik Huelss, Anna […]
Five Questions We Often Get Asked About AI in Weapon Systems and Our Answers
By Anna Nadibaidze and Ingvild Bode
The ongoing integration of artificial intelligence (AI) and autonomous technologies in weapon systems raises many questions across a variety of fields, including ethics, law, philosophy, and international security. As part of the AutoNorms project, we have contributed to many of these discussions over the past three years, including through […]
Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control
Download the report here
By Ingvild Bode and Tom Watts
A new report published by the Center for War Studies, University of Southern Denmark, and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called. Written by Dr. Ingvild Bode and Dr. […]
A Question of Trust? New US Initiatives to Tackle the Human Control Problem
A lack of, or a substantially diminished quality of, human control is often understood as the major problem associated with military AI. The US Department of Defense (DoD) ‘Directive 3000.09’, released in 2012 as one of the first political documents on autonomy in weapon systems, for example, states in its updated version from […]
‘Responsible AI’ in the Military Domain: Implications for Regulation
This blog post is based on the regulation subpanel of the Realities of Algorithmic Warfare breakout session, held at the REAIM Summit 2023. Watch the full breakout session here. The global debate on military applications of artificial intelligence (AI) and autonomy is gradually expanding beyond autonomous weapon systems (AWS) towards the concept of ‘Responsible AI’. Proponents […]
AutoNorms at the UN GGE on LAWS in March 2023
The AutoNorms team regularly participates in meetings of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). The GGE meetings take place in Geneva and bring together delegations of states parties to the UN Convention on Certain Conventional Weapons (CCW), as well as representatives […]
Has REAIM “Re-aimed” AI Applications in the Military Domain?
A positive step towards regulating the development and growing use of artificial intelligence (AI) in warfare was taken at a two-day conference in The Hague in February 2023: the Global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM). As an initiative of the Dutch government in partnership with the Republic of Korea, […]
Consequences of Using AI-Based Decision-Making Support Systems for Affected Populations
The following essay builds on remarks delivered by Ingvild Bode as part of the Expert Workshop “AI and Related Technologies in Military Decision-Making on the Use of Force”, organised by the International Committee of the Red Cross (ICRC) & Geneva Academy Joint Initiative on the Digitalization of Armed Conflict on 8 November 2022. I want […]