Research Article

AI in Military Decision Support Systems: A Review of Developments and Debates

By Anna Nadibaidze, Ingvild Bode, and Qiaochu Zhang

A new report published by the Center for War Studies at the University of Southern Denmark reviews developments and debates related to AI-based decision support systems (AI DSS) in military decision-making on the use of force.

Written by Anna Nadibaidze, Ingvild Bode, and Qiaochu Zhang, this report contributes to ongoing discussions on AI DSS in the military domain by reviewing 1) the main developments related to AI DSS, focusing on specific examples of existing systems, and 2) the main debates about the opportunities and challenges raised by various uses of AI DSS, with particular attention to human-machine interaction in warfare.

While acknowledging that the development of AI DSS is a global and long-standing trend, the report maps and analyses specific examples across three main cases: the United States’ Project Maven, the Russia-Ukraine war (2022-), and the Israel-Hamas war (2023-). These cases are treated as indicative of possible uses of AI DSS and as representative of some of the varied opportunities and challenges such systems raise in military decision-making on the use of force.

Potential opportunities of AI DSS include increased speed, scale, and efficiency of decision-making, which might yield strategic or humanitarian advantages in a battlefield context. With increased speed and scale, however, also come various risks and concerns about how humans interact with AI DSS in military decision-making on the use of force.

The report argues that the challenges raised by the use of AI DSS in decision-making on the use of force are often linked to human-machine interaction and to the agency distributed between humans and machines, raising legal, ethical, and security concerns. It highlights the following interrelated challenges: dynamics of human-machine interaction in AI DSS, issues related to trust, targeting doctrines and rules of engagement, data-related issues and technical malfunctions, legal challenges, and ethical challenges.

To push the debates on AI DSS further, the report recommends that stakeholders in the global debate about military applications of AI focus on questions of human-machine interaction and work towards addressing the challenges associated with distributed agency in warfare. It offers the following recommendations, aimed particularly at challenges stemming from human-machine interaction in the military domain:

1. Ensuring context-appropriate human involvement in the use of force by:

  • strengthening and sustaining human oversight, involvement, and the exercise of agency throughout the entire lifecycle of AI DSS
  • providing human operators with protocols, training, and guidance that allow them to 1) critically assess systems’ outputs and 2) assess a decision’s legal, strategic, and humanitarian impacts
  • limiting the speed and pace of use-of-force decision-making in particular contexts of AI DSS use, such as urban, populated areas
  • engaging in rigorous, continuous testing and regular audits to assess the quality of human involvement in human-machine interaction
  • considering how the social, political, institutional, and technical aspects interact in different contexts of use, and with what potential implications


2. Adopting a multistakeholder approach to sustain and strengthen the human role in decision-making on the use of force, which includes:

  • establishing regulations on human-machine interaction in use-of-force decision-making
  • pursuing the discussion about human-machine interaction in warfare in a space that brings together diverse stakeholders beyond state representatives
  • intensifying the bottom-up process of setting operational standards

Funding

Research for the report was supported by funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852123, the AutoNorms project).
