Research Article

Topics: Political Process

AutoNorms at the UN GGE on LAWS

In 2021-2022, the AutoNorms team participated in three meetings of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). The GGE meetings take place in Geneva and bring together delegations of states parties to the UN Convention on Certain Conventional Weapons (CCW), as well as representatives of civil society, non-governmental organisations, and academia.

 

August 2021

On 13 August 2021, Anna Nadibaidze delivered the following verbal statement on behalf of AutoNorms:

Thank you Mr. Chair,
I take the floor as a team member of the AutoNorms project, an international research project hosted by the Center for War Studies at the University of Southern Denmark.

Please allow me to make some remarks about the [chair’s] guiding questions, which we received yesterday. Perhaps my comments will be useful to delegations as they further reflect upon the chair’s paper in the coming weeks.

These comments concern issue number 2 (human-machine interaction and human control) and specifically the last question within that section, about the analysis of existing weapons systems [How would the analysis of existing weapons systems help elaborate on the range of factors that should be considered in determining the quality and extent of human-machine interaction/human control/human judgment?].

Mr. Chair, our team believes that an examination of existing weapons systems with automated and autonomous features is crucial to understanding the changing nature of human-machine interaction and determining the quality and extent of human control needed.

Let us look at the example of air defence systems, which our project examines in depth in a previous research report called “Meaning-less human control”. Automated and autonomous features have been integrated into critical functions of air defence systems, such as targeting, for decades. They are often considered unproblematic because, in theory, states can limit how and when they are deployed. However, our study reveals problematic practices that have contributed to an emerging norm diminishing the quality of human control over specific targeting decisions.

As the role of human operators in air defence systems has changed from that of active controllers to that of passive supervisors, they have lost both situational awareness and a functional understanding of how algorithmic systems make targeting decisions. While human operators often formally retain the final decision, in practice this decision is frequently meaningless.

The complexities of human-machine interaction do not always allow for situational awareness, do not provide human operators with the knowledge necessary to understand and question the system’s logic, and do not give them the time to engage in meaningful deliberation. In current air defence systems, humans can, in some situations, be unintentionally set up to fail as meaningful operators. As the integration of automation and autonomy into the critical functions of air defence systems has widened and accelerated, a standard of appropriateness attributing a diminished role to humans has emerged and gradually become normalized over time.

What this example shows is that existing weapons systems with both automated and autonomous features can obstruct the exercise of human control because they increase the complexity of the system’s operations. This problematic operational quality of human-machine interaction needs to be scrutinized because there is a risk that it could undermine deliberative efforts to codify human control as a vital component of international law, such as the efforts of many delegations here at the GGE.

Based on our report, we make a number of recommendations, including:

  • Scrutinising current practices of how states operate weapon systems with automated and autonomous features in specific use of force situations.

  • Conducting more in-depth studies of the emerging standards for human control produced by the use of existing weapon systems with automated and autonomous features beyond air defence systems. We support calls by stakeholders, such as the ICRC and SIPRI (Stockholm International Peace Research Institute), for the detailed study of existing autonomous weapons systems – this includes all “weapon systems that select targets on the basis of sensor input”. For example, loitering munitions are one such weapon system that should be closely studied.

In conclusion, Mr. Chair and distinguished delegates, analysing existing weapon systems and how their development, testing, and use are already changing the nature of human-machine interaction is crucial to understanding and determining the factors that should be considered for the human control element of LAWS. Our analysis of existing air defence systems shows that the operational quality of human-machine interaction is the decisive element in ensuring that control remains “meaningful”. Thank you for your attention.

 

September 2021

In September 2021, the AutoNorms team submitted a written statement to the GGE. It is available here.

 

July 2022

On 28 July 2022, Ingvild Bode delivered the following verbal statement on behalf of AutoNorms. The recording of the statement is available here.

Distinguished chair, distinguished delegates,
I would like to make some general comments about the reference to good practices related to human-machine interaction, as mentioned in paragraph 22 [of the draft report circulated by the Chair]. I am speaking on behalf of the Center for War Studies at the University of Southern Denmark.

While examining good practices in relation to existing weapon systems with autonomous features can certainly be a useful exercise, we can also learn what kinds of problems and challenges such practices create. I would like to illustrate this briefly by examining one such weapon system with autonomous features: loitering munitions. Allow me to make three points here:

First, loitering munitions demonstrate that proliferation concerns raised by several delegations with regard to autonomous weapon systems are well-founded. While larger types have been operational since the mid-2000s, we have seen a trend towards lighter, smaller, and portable systems that are designed to target personnel. Such systems are proliferating at speed, as their use in various armed conflicts over the past few years demonstrates.

Second, as part of the shift towards smaller loitering munitions since the 2010s, manufacturers now specifically design and advertise their use in urban warfare. This comes with a potentially greater risk of civilian harm. Technical AI experts have long warned about the relative stupidity of AI when it comes to distinguishing between civilians and combatants. Applying the legal categories of civilians and combatants alone requires deliberative, context-bound human judgement. This challenge is even more severe in the cluttered space of an urban environment.

Third, the precise quality of human control exercised by the operators of loitering munitions in specific use of force situations is uncertain. Most manufacturers of loitering munitions characterize their platforms as “human-in-the-loop” systems. At the same time, loitering munitions proliferating today have the technical functionality to identify, track, and attack targets without direct human intervention. In other words, they could be used autonomously.

We can also raise questions about the quality of human control exercised. As such systems are typically used in close proximity to war-fighting, what is the realistic decision-making space of their operators? Under combat conditions, there is a strong likelihood that humans will trust the system’s outputs rather than scrutinizing them. This is what research on automation bias, including our own work on air defence systems, has shown.

Many manufacturers of loitering munitions are also working on their platforms’ swarm capabilities. This puts significant strain on the human-in-the-loop guarantee. It also alters the overall predictability of the system, as swarm behaviour may be neither foreseeable nor understandable to the human in charge.

To conclude, considering existing practices related to human-machine interaction in greater detail can add significantly to our understanding, not only in the form of good practices, but also by identifying potentially problematic practices in the uses of autonomy in weapon systems that warrant regulation. Thank you.

Featured image credit: Guangyu Qiao-Franco
