
Google’s Brave New World? Big Tech, Military AI, and the Trump Effect

In recent days, Google’s update of its AI principles, which, in contrast to the 2018 version, avoids clear ethical pledges, has gained attention. While this could be seen as a major policy shift by a big AI player, I argue in this post that it rather underlines an intensification of business activities that were already part of Google’s portfolio (and of others such as Amazon and Microsoft). The changing US policy landscape thereby acts as an amplifier of the trend toward private companies becoming involved in military applications of AI.

The inauguration of the 47th US President Donald J. Trump on 20 January at the Capitol Rotunda in Washington D.C. visually underlined the importance that the incoming Trump administration intends to give to leading actors in the AI/technology sector. Seated behind the President’s wife, Facebook founder Mark Zuckerberg, Amazon founder Jeff Bezos, Google CEO Sundar Pichai, and Elon Musk, CEO of Tesla, SpaceX, and X, followed the ceremony. Trump has repeatedly emphasized that the promotion of “AI” is of major strategic importance for his administration, for example by announcing the launch of the $500bn “Stargate” project, meant to build AI infrastructure in the US, on the day following his inauguration.

Trump’s focus on AI can be seen in the context of deeper dynamics in the Republican party, ranging from a general distaste for government regulation and a push for an innovation-friendly environment to a more ideological view of AI/technology. The 2024 Republican Platform, released on 8 July of last year, states on “AI”: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”

Indeed, Biden’s Executive Order 14110 of 30 October 2023, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, was revoked soon after Trump’s inauguration, an act that received less public attention but was well noted by the industry. Trump’s Executive Order 14179, entitled “Removing Barriers to American Leadership in Artificial Intelligence”, was signed on 23 January 2025 with the purpose of maintaining AI “leadership” by “develop[ing] AI systems that are free from ideological bias or engineered social agendas”. As outlined here, “[t]his order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence”.

The practical impact of Executive Order 14179 in terms of implementing the outlined policy is not yet clear. At the same time, there is already a move in the Big Tech sector to pre-emptively self-deregulate by updating corporate policies and procedures, as illustrated by the changes to Facebook’s “Hateful Conduct” policy.

In the context of AI in the military domain, these developments and the political signalling attached to them are likely to further reinforce the perception of a window of unprecedented business opportunities for corporations and venture capital firms investing heavily in the development of military applications of AI. This trend had already unfolded significantly during the Biden administration. The growing interest of venture capital in funding start-ups that offer their products and services to US military and security departments and agencies may increase the pressure on established Big Tech corporations to rethink their business policies. While different factors contribute to this trend, socio-political and economic perceptions of the importance of “AI” have strongly increased in the past two to three years.

While some of the smaller but influential companies such as Palantir and Anduril have been outspoken in promoting their projects, products, and, as in the case of Anduril’s founder Palmer Luckey, also their financial and ideological support of the Trump campaign and the Republican party, other major players in the sector have a more troubled and secretive relationship with US military contracts.

Google LLC made headlines in 2018 when employees protested against its leading role in the US military project Maven, meant to develop a machine-learning computer vision algorithm for image recognition and, ultimately, target recognition purposes. Launched in 2017, Maven has been well covered in media and academia (as well as here, here, and here) in the past few years. Such coverage also illustrated the functional shortcomings of military AI, as reported by Bloomberg in 2024: “While humans at the 18th Airborne Corps can correctly identify a tank 84% of the time, Maven gets it closer to 60%. And experts say that number goes down to 30% on a snowy day.”

Google left the project Maven collaboration in 2018, a move that was considered at the time a direct outcome of the criticism aimed at Google’s corporate ethics. Less in the focus of media attention, however, was the fact that industry competitors of Google such as Amazon and Microsoft reportedly entered into contractual relationships with the Pentagon to continue developing Maven after Google left the project.

But ending its contribution to project Maven was not the only visible sign at that time of Google underlining its ethical-normative commitment. Google CEO Sundar Pichai, who, as mentioned above, was seated prominently on Inauguration Day, outlined key objectives and responsibilities of Google’s stance towards AI in a June 2018 blog post, “AI at Google: our principles”. The central pledge can be found in the following paragraph (direct citation from the blog post):

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

The updated version of Google’s AI principles from 4 February 2025, however, does not reproduce this explicit normative approach of 2018. This has been relatively widely covered by various media outlets, while a Google blog post published the same day by James Manyika, SVP for Research, Labs, Technology & Society, and Demis Hassabis, CEO and Co-Founder of Google DeepMind, provided what is arguably the new framing of Google’s approach. The post states, inter alia: “There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

While this move is noteworthy, Google has, in fact, not ceased to work under US military contracts in the years since the Maven engagement. In 2022, Google Cloud was among the companies awarded a contract under the DoD’s Joint Warfighting Cloud Capability (JWCC) procurement, with a ceiling of $9bn. The purpose of the JWCC is to provide a single cloud computing solution that is “critical in creating a global, resilient, and secure information environment that enables warfighting and mission command, resulting in improved agility, greater lethality, and improved decision-making at all levels”, as the DoD notes. The extent to which such projects have the potential to enable “overall harm”, “injury to people”, or use in “surveillance” is certainly debatable.

Further and notably, a letter signed by nearly 200 workers at Google’s DeepMind in May 2024 received much less attention than the Google employees’ protest against project Maven. The letter reportedly referred to an April 2024 article in Time Magazine, which highlighted that Google Cloud held a contract (together with Amazon) with the Israeli government and military for the cloud computing Project Nimbus.

Apart from subscribing to the general Google AI principles of 2018, DeepMind is one of the signatories of the “Lethal Autonomous Weapons Pledge” by the Future of Life Institute, in which, inter alia, “the undersigned agree that the decision to take a human life should never be delegated to a machine”. Remarkably, Google and Amazon workers had already protested in 2021 against the Project Nimbus contracts, which remain opaque in their scope and purpose.

Given these trajectories, I argue that Google’s updated AI principles do not actually signal a significant policy shift that now puts an end to previously ethical-normative corporate behaviour. Rather, the update is merely the visible and final confirmation that Google seems intent on extensively widening its already existing work on military applications.

This development can be attributed to expectations that the Trump administration will remove regulations, allowing for unrestricted development, testing, and use of AI across all sectors, including the military and security sectors. Big Tech players such as Google do not seem willing to miss out on business opportunities by upholding even minimal, let alone introducing serious, principles for responsible AI/technology development. Google might consider it more beneficial to show more openly its willingness to offer products and services to the military. In the US context, the Trump AI effect likely represents a major setback for attempts to develop and introduce government and industry standards for AI.

Featured image credit: Photo by Caleb Perez on Unsplash
