Research Article


The New AutoPractices Project: Toward Governing AI Technologies in Military Decision-Making from the Bottom Up

"Non-image" by Adrien Limousin.

On 1 June 2024, the AutoNorms team started a new policy-oriented project called AutoPractices. The purpose of AutoPractices is to initiate and accompany a process of social innovation to govern autonomous and AI technologies (AIT) in the military domain from the bottom up. The project does this by addressing the practices of designing, training personnel for, and using AIT that diverse stakeholders perform.

AutoPractices takes key findings of the ERC AutoNorms project as its starting point. The AutoNorms project argues that practices of designing, training personnel for, and using autonomous weapon systems shape a social ‘norm’, or shared ‘understanding of appropriateness’, of what counts as the requisite form of human control over the use of force. AutoNorms research has revealed that this emerging norm accepts a diminished form of human control when interacting with AIT as ‘normal’ and ‘appropriate’. This emerging norm of diminished human control is both a societal challenge and a public policy problem because it undercuts the exercise of human agency over the use of force.

Recognising that practices of use make norms, especially in the absence of specific legal regulation, is an important insight and the starting point for the AutoPractices project. It means that stakeholders could shape a positive norm of human control/agency through changing their practices of designing, training personnel for, and using AIT in targeting decision-making. AutoPractices presents a bottom-up approach to governing AIT in targeting decision-making because it starts from stakeholders in technical, military, and political spaces changing their daily practices. This process of social innovation can recontextualise the emerging norm of human control/agency into a positive version through stakeholders enacting different practices from the bottom up.

The AutoPractices project follows a four-stage pathway to social innovation:

1. Idea generation: we co-develop, together with AutoPractices collaborators, a working definition of positive human control/agency in targeting decision-making incorporating AIT.
2. Research: through qualitative surveys and interviews, AutoPractices raises awareness among stakeholders about the differential effect that the practices they perform have on the emerging human control/agency norm in the military domain.
3. Implementation: at three multi-stakeholder workshops, AutoPractices co-creates a best practices toolkit, based on the information gathered in stages 1-2, to sustain and strengthen human control/agency in targeting decision-making integrating AIT.
4. Spread: AutoPractices diffuses the best practices toolkit strategically through multipliers, such as influential stakeholders and the media.

The best practices toolkit allows stakeholders involved in AutoPractices, and those beyond it, to revisit and reflect on the practices they perform. In this way, the toolkit can be a source for sustaining and diffusing a positive norm of human control/agency in targeting decision-making incorporating AIT from the bottom up.

A positive norm of human control/agency in warfare benefits three sets of stakeholders: (1) governmental security and defence policymakers, because it reduces the humanitarian, legal, security, and ethical risks associated with the emerging norm of diminished human control/agency; (2) actors designing and using AIT in targeting decision-making, because it offers operational guidance on how and where humans have to be involved along the lifecycle of weapon systems to ensure human control/agency and the attribution of responsibility/accountability; and (3) wider society, because it demarcates clear technical and political-ethical guardrails to ensure that crucial decisions in warfare remain the prerogative of humans.

AutoPractices is a Proof of Concept (PoC) project funded by the European Research Council. For AutoPractices, the AutoNorms team will collaborate with Alexander Blanchard at the Stockholm International Peace Research Institute (SIPRI), Shimona Mohan at the United Nations Institute for Disarmament Research (UNIDIR, in her personal capacity), and Ariel Conn (expert member, Global Commission on Responsible AI in the Military Domain). AutoPractices runs from June 2024 until December 2025.

We will post regular updates about the progress of the AutoPractices project on the AutoNorms website.

Four-Stage Social Innovation Model of AutoPractices
Featured image credit: Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0
