Research Article


“Mission Impossible”? Talking Popular Culture at the REAIM 2024 Summit

The simultaneous release of the films Barbie and Oppenheimer on 21 July 2023 was a cultural phenomenon. “Barbenheimer”—as this event was popularly called—captured global attention. It also invited reflection on how the public comes to think about the politics of nuclear weapons and what role popular culture plays in this process. Released a week prior to Barbie and Oppenheimer, the Hollywood blockbuster Mission: Impossible – Dead Reckoning Part One explores the hypothetical risks associated with a group of technologies that Christopher Nolan describes as facing its own “Oppenheimer moment”: AI. According to Nolan, who directed Oppenheimer, there are “very strong parallels” between the current state of AI research and the creation of the atomic bomb.

The plot of Mission: Impossible – Dead Reckoning Part One centers on the attempts of various factions to control a rogue, superintelligent AI called the Entity. Drawing on a common science-fiction trope associated with “machine uprisings”, the Entity becomes sentient and escapes the control of its human creators. According to Associated Press reports, President Joe Biden watched the film during a weekend visit to Camp David. Deputy White House chief of staff Bruce Reed has claimed that if Biden “…hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about”. Alongside a range of other preexisting considerations, journalists linked Biden’s viewing of Mission: Impossible – Dead Reckoning Part One to his administration’s issuing of a new executive order on AI in October 2023 that introduced new safety standards for these technologies.

Mission: Impossible – Dead Reckoning Part One lacked Barbenheimer’s cultural impact, underperforming at the global box office. Nevertheless, the film is worth discussing as it provides important insights into the potential scope of popular culture’s impact on the ongoing regulatory debates on AI in the military domain.

Building upon my previous research on the 2023 film The Creator, the first section of this piece reflects on the Entity’s cinematic depiction as a sentient and ghost-like AI superintelligence in Mission: Impossible – Dead Reckoning Part One and how this relates to American politics and security. Inspired by the observations made by the journalist Jared Keller, the second section examines the direct influence that the 1983 film WarGames appears to have had on President Ronald Reagan’s concerns about cybersecurity. Against the backdrop of the upcoming Responsible AI in the Military Domain (REAIM) Summit to be held in South Korea in September 2024, the third and final section calls on the various stakeholder groups participating in this process to reflect on popular culture’s potential capacity to shape perceptions of “(in)appropriate” AI usage at the highest levels of decision-making.

Source: Paramount Pictures, public domain, via Wikimedia Commons

Mission: Impossible – Dead Reckoning Part One and the spectral depiction of the Entity

The Entity is described in the film’s script as an “active learning defense system” that, following its creation by US intelligence services, came to possess “multiple personalities, at times behaving like a computer virus, then a tapeworm, then a botnet”. At the film’s start, the Entity is shown orchestrating the destruction of a next-generation Russian submarine (the Sevastopol). This action serves two purposes: first, to create the impression that the Entity was destroyed alongside the submarine; and second, to hide the location of the “source-code” that human actors could use to gain control over the Entity. Beginning with news and social media sites, the Entity then sets about “[d]istorting any and all digital information with which it comes in contact”. After absorbing the “top secret active-learning AI” software being developed by Saudi Arabia’s General Intelligence Directorate, the Entity becomes “sentient” and hacks into the “major defense, finance and infrastructure systems” of all the world’s most politically significant states as well as the “world’s intelligence networks”. In these and other ways, the Entity reflects many common myths about AI becoming an all-knowing, sentient threat to humanity.

A disembodied superintelligence more akin to Skynet than the Terminator, the Entity does not take a humanoid form. It principally works through human agents who have either been co-opted to its cause or are seeking to control the system for their own political purposes. The Entity’s ultimate goals are never made explicit. As one of the human characters acting on behalf of the system describes it:

You have no idea the power I represent. Thousands of quadrillions of computations per millisecond, subtly manipulating the minds of billions, while parsing every possible cause and effect, every scenario, however implausible, into a very real map of the most probable next. And, with only a few changes to the present, the future is all but assured.

I first watched Mission: Impossible – Dead Reckoning Part One on a transatlantic flight to Washington D.C. What immediately jumped out to me about the depiction of the Entity was how closely it reflects the current technological zeitgeist—more specifically, the hype that has increasingly been attached to AI technologies. The Silver and Orange lines of the Washington D.C. metro system, for instance, are adorned with advertisements for AI and cybersecurity products. Mission: Impossible – Dead Reckoning Part One bundles together these different technologies. At one point in the film, for instance, the Entity is described as “[a] self-aware, self-learning, truth-eating digital parasite infesting all of cyberspace”.

Example of AI advertisements found in Washington D.C. subway. Source: Tom Watts, 2024

When reflecting on how the Entity was depicted in the film, I was similarly struck by its spectral and free-floating character. As the computer scientist Erik J. Larson notes, the Entity is a “mashup of villainy, and at times is a shadow unseen, like a looming Poltergeist, soon to terrify by entering the physical world”. Throughout most of the film, the Entity is simultaneously everywhere but nowhere; infallible but impotent; safe in its capacity to predict and manipulate the future but vulnerable to the dare-and-do of Ethan Hunt, the James Bond-esque agent played by Tom Cruise.

To my mind, this was analogous to the description of AI as being akin to earlier inventions such as electricity. This metaphor has some value. It benchmarks how widely the perception has taken hold that these technologies are set to transform not only the global economy but almost every aspect of society. At the same time, however, much like the Entity’s presence in Mission: Impossible – Dead Reckoning Part One, the AI-as-electricity metaphor inadvertently frames AI as an unobservable but ubiquitous presence. Not only does this risk fuelling the unrealistic expectations that many hold about the technical sophistication of these technologies, but it also risks erasing the role that human institutions and agents play in driving their continued development.

Third, Mission: Impossible – Dead Reckoning Part One reflects concerns that the basket of technologies associated with AI will be used to disseminate political disinformation and distort the democratic process. As Gabriel—the Entity’s principal human agent—puts it: “Whoever controls the Entity controls the truth”. Throughout the film, the Entity is shown generating fake audio and text files as well as manipulating live video feeds. These actions align with the concerns expressed by American intelligence agencies about the potential use of generative AI to create “deepfakes” optimized to undermine the democratic process. In January 2024, for instance, a fake version of President Biden’s voice was used in automated robocalls aimed at discouraging Democrats from participating in the New Hampshire primary elections. Russian interference in the 2016 presidential election has had a lasting impact on American politics. Unsurprisingly, therefore, this anxiety has come to be expressed in popular culture.

WarGames, popular culture, and the Reagan administration

Whilst it is not possible to know how much influence the film had on the president’s perception of these technologies, it has been widely reported that Mission: Impossible – Dead Reckoning Part One reinforced Biden’s existing concerns about AI risks. According to Deputy White House chief of staff Bruce Reed, in the months leading up to the White House’s issuing of a new executive order on AI in October 2023, the president held a series of meetings with his science advisory council and cabinet in which he discussed AI. During this time, Biden reportedly “saw fake AI images of himself, of his dog” and learned more about the “incredible and terrifying technology of voice cloning”. In Reed’s assessment, if Biden had not already been “concerned about what could go wrong with AI before that movie, he saw plenty more to worry about”.

The reported influence that Mission: Impossible – Dead Reckoning Part One had in reinforcing President Biden’s thinking about the potential risks associated with AI has interesting parallels with the role that popular culture played in Ronald Reagan’s support for nuclear disarmament measures during the closing decades of the Cold War. Prior to his election as Governor of California and later President of the United States, Reagan had been an actor and had served as President of the Screen Actors Guild. During his presidency, Reagan would regularly watch films with a small group of staffers and, in early July 1983, this included WarGames.

Released a year prior to James Cameron’s highly influential The Terminator, WarGames tells the story of a young computer hacker played by Matthew Broderick, who hacks into a US military supercomputer programmed to fight a nuclear war against the Soviet Union. The War Operation Plan Response (WOPR) system, as it was called, had been created for the North American Aerospace Defense Command after some Air Force personnel displayed hesitance in authorizing retaliatory nuclear strikes during training exercises. WOPR was thereafter delegated control of the US strategic nuclear arsenal and had been programmed with the ability to refine its capabilities by simulating nuclear wars with the Soviet Union.

As described by its writers, one of the core messages that WarGames aimed to communicate to audiences was to not let “humans out of the loop” of the decision to launch nuclear weapons. This premise is presented in the film through WOPR’s initial inability to distinguish between a real nuclear attack on the United States and one inadvertently simulated by Broderick’s character. During the film’s climax, WOPR ultimately concludes that, much like Tic-Tac-Toe played against an experienced opponent, nuclear war is a “strange game” in which “the only winning move is not to play”. Prior to this realization, however, the computer system had almost triggered World War Three.

According to the author Fred Kaplan’s account, during a meeting with national security advisors and Congressional representatives to discuss nuclear arms control in the week after having watched WarGames, Reagan asked all attendees if they had also seen the movie. After summarizing the film’s plot, Reagan then inquired with General John W. Vessey, Jr.—Chairman of the Joint Chiefs of Staff—whether “something like this could really happen?” A week later, General Vessey returned to the White House to inform Reagan that, in his opinion, “the problem is much worse than you think”. This admission set in motion a series of actions that culminated in the issuing of national security directive NSDD-145 in September 1984.

NSDD-145 begins from the understanding that, given recent advancements in information and communication technologies of the kind fictionalized in WarGames, the “traditional distinctions between telecommunications and automated information systems [had] begun to disappear”. Foreshadowing a set of narratives that has been applied to AI during the 21st century, these technological advancements were understood to promise “greatly improved efficiency and effectiveness” but pose “significant security challenges”. To this end, NSDD-145 institutionalized the understanding that “[t]elecommunications and automated information processing systems are highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation, as well as other dimensions of the hostile intelligence threat”. To counter this risk, the directive outlined a series of policies and bureaucratic reforms aimed at strengthening the protection of sensitive information.

Many aspects of NSDD-145 would ultimately not be implemented due to, amongst other things, privacy concerns. As Kaplan nevertheless notes, Reagan’s viewing of WarGames contributed toward “the first time that an American president, or a White House directive, discussed what would come to be called ‘cyber warfare’”. WarGames also shaped wider social perceptions of computing. Speaking at Google’s anniversary celebrations in 2008, the company’s cofounder Sergey Brin described the film as “a key movie of a generation, especially for those of us who got into computing”.

Reagan in the White House, 1982. Source: White House Photographic Collection, public domain, via Wikimedia Commons
Source: United Artists, public domain, via Wikimedia Commons

Popular culture and the “responsible” use of military AI—is the only winning move not to play?

To recap: whilst lost amidst the “Barbenheimer” phenomenon that swept much of the world in July 2023, the depiction of the malign superintelligence known as the Entity in Mission: Impossible – Dead Reckoning Part One has reportedly reinforced President Biden’s perceptions of the risks associated with AI. This reported influence of popular culture at the highest level of decision-making has historical precedent. During the Cold War, changes to the Reagan administration’s policies on telecommunications and cybersecurity have been traced to the president’s viewing of WarGames.

To be sure: not all works of popular culture are likely to directly influence how policymakers assess the risks and potential opportunities associated with emerging technologies. What the academic researcher David Kirby has described as the “WarGames effect”—the capacity of some works of popular culture to directly influence real-world policymaking by raising awareness of an outcome to be avoided—is comparatively rare. As previous research has shown, the depiction of AI in popular culture is often highly dramatized and can distract from the various ways these technologies have, in some cases, already negatively impacted human behaviour. For related reasons, leading figures in the global regulatory debates on the weaponization of AI, such as the UC Berkeley computer scientist Stuart Russell, have previously called on journalists to stop constantly drawing attention to the Terminator franchise.

At the same time, the role that popular culture may play in orientating how policymakers perceive and communicate the risks and opportunities associated with military AI should not be dismissed as unimportant. Speaking at the United Nations in 2019, for instance, then UK Prime Minister Boris Johnson asked whether advances in AI technologies would culminate in “Helpful robots washing and caring for an aging population? Or pink-eyed Terminators sent back from the future to cull the human race?”. In 2016, then US Deputy Secretary of Defense Robert Work referenced the JARVIS software used by Iron Man as an approximate template for his vision of using AI to augment human decision-making. Senior US military officials have similarly described the ethical and legal challenges associated with the use of autonomous weapon systems as the “Terminator Conundrum”.

As Charli Carpenter has previously argued, since the inception of the global governance debates on lethal autonomous weapon systems in the early 2010s, stakeholder groups have referenced science-fiction to, amongst other things, start conversations and raise awareness of issues. In our previous research, Ingvild Bode and I have argued that policymakers have invoked popular culture to both support and oppose the possible regulation of autonomous weapon systems. To provide just one example, Arnold Schwarzenegger plays both “good” and “bad” versions of the Terminator in the Terminator franchise, being shown both protecting and hunting human characters.

For these and other reasons, the various stakeholder groups participating in the upcoming REAIM Summit, scheduled for 9-10 September in South Korea, should not avoid nuanced conversations about the role films, television shows, video games, and other forms of popular culture could play in shaping what policymakers perceive to be the “(ir)responsible” military uses of AI. Stuart Russell and others, including speakers at the 2023 REAIM Summit, rightly argue that references to the Terminator draw unwarranted attention to the hypothetical risks associated with sentient AI. This distracts attention from how AI is already being used by global militaries to conduct a range of data processing and decision support tasks. As we approach the 40th anniversary of The Terminator’s release, the popular perception of the risks associated with AI must shift from Skynet toward the very real and very immediate ethical and legal challenges associated with the growing use of these technologies in the military domain.

At the same time, there is an important difference between popular culture references that warp the global regulatory process and those that serve as a springboard for reflecting on how policymakers may have formed opinions about the challenges and opportunities produced by the real-world development and use of AI technologies. Policymakers watch movies too and will be aware of the different types of stories told about military AI in popular culture.

For the REAIM summit to fully meet its intended aim of providing a “global platform for multi-stakeholder dialogue on responsible application of AI in the military domain”, participants should therefore be invited to reflect on what influence, if any, popular culture may have had on their thinking about these issues. Unlike in WarGames, the only winning move in this instance could be to play.

This research was funded by a Leverhulme Trust Early Career Research Grant (ECF-2022-135). The views expressed in this piece are the author’s and do not reflect those of his funder or host institution.
Featured image credit: Jake Hills on Unsplash
