Research Article


AI Summits and Declarations: Symbolism or Substance?

The UK’s AI Safety Summit, held on 1-2 November 2023 at Bletchley Park, has generated a range of responses from experts and commentators. Some praise it as a “major diplomatic breakthrough” for Prime Minister Rishi Sunak, especially as he secured 28 signatures, including those of China, the EU, and the US, on the Bletchley Declaration—the main output of the event. Others see it as a first step towards potential future discussions on global AI governance, albeit one with “limited progress”. But there are also reasons to be sceptical about what kind of progress these events and declarations on AI safety can achieve.

For instance, the summit’s overwhelming focus on ‘existential risk/threat’ to humanity and ‘frontier AI’ (an ambiguous term, drawn from industry language, that seems to designate advanced general-purpose AI models) overshadowed the existing, present-day harms of AI technologies. Some civil society representatives criticized the summit for “giving a platform to alarmist narratives”, although it is worth noting that the Bletchley Declaration ended up including both optimistic and pessimistic ways of thinking about AI, which also reflect the ‘fearful’ and ‘hopeful’ AI narratives featured in popular culture and the media debate. The Declaration, described by the Financial Times’ John Thornhill as “worthy, but toothless”, starts off by mentioning the opportunities of AI, but then highlights different types of risks—ranging from those related to AI applications in “daily life” to the “catastrophic harm” associated with ‘frontier AI’ models.

Participants were mostly government officials (in restricted delegations) and Big Tech players, with few representatives of startups, civil society, human rights organisations, and academia, especially from the social sciences and ethics. The ‘interview’ that Sunak held with Tesla CEO Elon Musk reinforced the impression that Big Tech actors are prioritized in the debate about AI safety. As Sky News’ deputy political editor Sam Coates suggested, Sunak appeared to be “selling Britain” to Musk as if it were a product. Some organisations and institutes hosted AI fringe events in London and around the UK to complement the Summit and discuss AI safety in a more representative fashion, beyond the focus on industry and frontier models.

Moreover, the non-transparent invitation process and the lack of representation from the Global South were highlighted as problematic for an event claiming to address global developments. The summit was also criticized for not being more concrete on regulation, with a Nature editorial stressing that regulation need not be framed in opposition to innovation, as Sunak and UK officials are currently framing it.

In a WIRED piece entitled “Britain’s Big AI Summit Is a Doom-Obsessed Mess”, Peter Guest notes that both the summit’s venue (Bletchley Park is considered the birthplace of modern computing) and its content suggest that “symbolism triumphed over substance”. Do these AI summits and declarations reflect substantive engagement with AI safety and real progress towards regulation that would address those concerns, or should they be seen more as a symbol—in particular, a symbol of status in what seems to be an emerging global “race to AI regulation”?

AI development and summits as symbols of international status

There is a well-established body of scholarship in International Relations exploring the ways in which states seek status, most often understood as standing in the international hierarchy, membership of a club, relative standing within a club, or a mix of all these elements. To win recognition from others of their desired place in the international system, states try to acquire and display symbols, or markers, of that status. As status scholar Pål Røren writes, these are “things, attributes, privileges, or reputations that actors acquire, embody, or practice to signal their preferred social status” (p. 13).

Examples of status symbols include artefacts that states try to obtain and demonstrate to others, such as nuclear weapons, space technologies, or aircraft carriers. They can also be activities, such as hosting the Olympic Games or the FIFA World Cup, participating in peacekeeping operations, sending humanitarian aid, or being active in global environmental politics. In other words, states do ‘things’ that are valued in relation to the standing they want to occupy in the global arena.

Often, they also engage in ‘conspicuous consumption’ in pursuit of status. Just as individuals consume items that confer prestige in society (certain brands of automobiles or clothing, for instance), governments invest considerable sums in projects that may not be strategically important or militarily or economically beneficial. They invest in these symbols because displaying them to others helps institutionalize a country’s place in the international hierarchy. As Lilach Gilady writes in the book The Price of Prestige, “policy decisions are not only a means for achieving specific material goals but also a gesture to be observed by other peers” (p. 2).

In the modern international system, the pursuit of advanced AI systems has come to be considered a symbol of status, prestige, and technological prowess. Significant investments in AI have become synonymous with what it “means to be a modern and technologically advanced nation”. They are what Gilady calls ‘Big Science’ status symbols, associated with “advanced technological capabilities, industrialization, and modernity, which are valued by contemporary international society” (pp. 126-127). Certain countries integrate AI into their branding: AI technologies, for instance, are part of the “national mantra” of Singapore’s digital transformation agenda. States seem to believe that demonstrating cutting-edge AI development is what marks them as modern in the current international hierarchy. This belief, and the value attributed to AI, fuels the ongoing global competition in AI, often called the “AI (arms) race”.

In recent months, however, the competition seems to be not only about developing AI but also about pursuing symbols that demonstrate acknowledgement of its risks, such as hosting AI summits and issuing declarations of concern about AI, especially generative AI since the launch of OpenAI’s ChatGPT in 2022. There is a competition not only in AI innovation but also in symbolic AI regulation.

In recent weeks alone, the US has set up its AI Safety Institute (as has the UK) and published its executive order on AI safety, the Group of Seven has released a statement advancing the Hiroshima AI Process, and the EU has been making progress in the negotiations over its AI Act—which European Commission President Ursula von der Leyen called a “blueprint for the whole world”. In the military domain, this year has seen the first edition of the REAIM Summit, co-hosted by the Netherlands and the Republic of Korea; several conferences on autonomous weapon systems (in Luxembourg, Costa Rica, and Trinidad and Tobago, with one upcoming in the Philippines); and declarations such as the REAIM Call to Action and the US political declaration on responsible military use of AI and autonomy, which 45 states have endorsed as of 13 November 2023. These initiatives also involve different types of participants: they can be focused on the global or the regional level, and can be more inclusive (involving civil society and academia, for instance) or more exclusive (only for states, or mostly for states and industry).

Many actors want to show that they are active in the field of AI regulation, including by hosting such events or branding themselves as leaders in this area. Some of these initiatives can be seen as part of a process of status-seeking, and the UK’s summit is a good illustration. Hosting the summit is perhaps more about the UK’s search for its post-Brexit role in the global arena than about acting upon specific policies in relation to AI safety. The current UK government has expressed scepticism towards new regulation in the near future (as Sunak said, “there is no rush to regulate”), while pursuing its goal of making the UK a “science and technology superpower” or “an AI-friendly island of innovation”, i.e., trying to balance its ambitions to belong to both the AI competition club and the regulatory competition club. However, with this approach, and especially if the momentum on concrete policies is not maintained, “the Summit risks being a flashy spectacle for a government on the wane”, according to Seán Ó hÉigeartaigh from the Leverhulme Centre for the Future of Intelligence. In other words, a flashy summit as a symbol of the UK’s search for itself and its post-Brexit status.

The fact that two more AI safety summits are already planned, in the Republic of Korea and France, suggests that other states want to “take up the baton” and use such summits in their own quests for status and recognition in the AI regulation sphere, although the Korean and French approaches may differ, depending on how each perceives itself in the global hierarchy and the AI regulatory competition.

Moving from symbolism towards substance

Some of the initiatives we have seen in recent months may be mostly symbolic, representative of many actors wanting to be part of the AI regulatory club. This carries the risk of lagging on substance, even though there seems to be consensus that more progress is needed on that front. As the Guardian’s Dan Milmo and Kiran Stacey wrote about the UK’s summit:

Every delegation at Bletchley was keen to claim preeminence in AI regulation, from European diplomats noting they had started the regulatory process four years ago to Americans talking up the power of their new AI safety institute… But most agree on the importance of international summits such as this one, not least to help define the problem different countries are trying to tackle.

We are likely to see more of these AI summits and declarations in the near future, before any legally binding norms emerge, as legal expert Nathalie Smuha writes. Although they are welcome steps because they put AI on the global agenda, for them to be effective on the path towards governing AI, symbolism will need to give way to substance sooner rather than later.

There is still no global definition of what exactly it means to develop AI “in such a way as to be human-centric, trustworthy and responsible”, as the Bletchley Declaration puts it. The document, like many other similar statements, does not detail the types of policies needed to ensure responsible development and use of AI. More concrete discussions are needed on the operationalisation of these principles and on what they imply in practice. Some efforts might be going in this direction, for instance the forthcoming recommendations of the UN High-Level Advisory Body on AI or, in the military domain, operational guidelines from the IEEE Standards Association research group on AI and autonomy in defence.

While summits and non-binding declarations are important steps towards setting global norms on AI development and use, including in the military domain, there is a risk that they are too symbolic and not substantive enough to move the debate forward. Several processes to ensure ‘safe and responsible’ AI development are under way at the same time. Many actors want to invest their time and money in demonstrating their willingness to be part of the club, but it remains unclear which of these initiatives will really bring the operational substance that the global conversation on AI needs right now.

Featured image credit: T S on Unsplash 

