Research Article

Shortening the Kill Chain with Artificial Intelligence

This post has been guest-written by Jennifer Rooke. Jennifer’s author information is included at the end of this post.

In a speech at the Air Force Association’s annual Air, Space & Cyber Conference held on 20 September 2021, US Secretary of the Air Force Frank Kendall stated (at 20:50 in the embedded video) that:

This year, the [Air Force’s] chief architect’s office deployed AI algorithms for the first time to a live operational kill chain at the Distributed Common Ground [System (DCGS)] and an air operations center for automated target recognition. In this case, moving from experimentation to real military capability in the hands of operational warfighters significantly reduced the manpower-intensive tasks of manually identifying targets—shortening the kill chain and accelerating the speed of decision-making.

Secretary Kendall made these remarks within a couple of days of a rare Department of Defense concession. After three weeks of defending the 29 August drone strike in Kabul as morally “righteous” in “eliminating an imminent ISIS-K threat” against its forces, General McKenzie—the commander of U.S. Central Command, which oversees military operations across the region—admitted that “Clearly our intelligence was wrong on this particular white Toyota Corolla.” He referred to the drone strike that killed Zemari Ahmadi and nine others (three of his children, Zamir, 20, Faisal, 16, and Farzad, 10; his cousin Naser, 30; three nephews, Arwin, 7, Benyamin, 6, and Hayat, 2; and two nieces, Malika and Somaya) as “a tragic mistake.” He defended the procedures that were followed by emphasizing that an imminent threat to US forces precluded taking the time, in this instance, to further “develop” the target: “We did not have the luxury of time to develop pattern of life and to do a number of other things. We struck under the theory of reasonable certainty. Probably our strikes in Afghanistan going forward will be under a higher standard.” This was, of course, not the first US drone strike that misidentified and killed so-called “unintended” targets. A summary of the disparity among, and incompleteness of, compiled statistics for drone strikes in Afghanistan is available elsewhere on the AutoNorms website.

While impossible for me to verify, it is plausible that AI algorithms were also employed in this particular “live operational kill chain” event. A US Air Force (USAF) spokesperson wrote in an email exchange with an Air Force Magazine journalist following Secretary Kendall’s speech that:

These AI algorithms were employed in operational intelligence toolchains, meaning integrated into the real-time operational intel production pipeline to assist intelligence professionals in the mission to provide more timely intelligence. The algorithms are available at any [DCGS site] and via the [DCGS] to any [air operations center] whenever needed, so they’re not confined to a particular location.

Earlier this year, Air Force Magazine also reported on an experimentation-phase demonstration directed from Ramstein Air Base in Germany that must have been a prelude to introducing these AI algorithms into a “live operational kill chain.” The US military relies on such demonstrations to test, within a scenario as close to a live operational environment as possible, the feasibility and value of new technologies, operational concepts, and procedures. In this instance, US Air Forces in Europe-Air Forces Africa (USAFE-AFAFRICA) demonstrated their ability, along with a few NATO coalition partners—the Netherlands, Poland, and the United Kingdom—to integrate new targeting technology into intelligence, surveillance and reconnaissance (ISR) and command and control operations that involved the local DCGS site (known as Distributed Ground Station (DGS)-4). It very likely also included the associated European Partner Integration Enterprise (EPIE) organization, given that several NATO partners fully participated in the event. The EPIE is a hub of coalition partners (including the Eurodrone program members France, Spain, and Italy, as well as Belgium, the Netherlands, and a few others) that collaborate on the use of various ISR products, such as video derived from drone platforms. Their participation in that demonstration would be significant because it would indicate that they are not only sharing intelligence information but also actively collaborating in the development of these ISR techniques and procedures.

If these AI algorithms are now being used in the ways described above, it would be highly relevant to current regulatory debates on Lethal Autonomous Weapon Systems (LAWS) that coalesce around definitions of “meaningful human control.” I argue that debates focused on the point of acquiring a target and firing a weapon preclude a discussion about human control at the outset of the targeting process, when “threats” get defined; that definition then sets in motion a diffuse range of human-machine interactions leading to those lethal decision points. Algorithms and automation have long played a role in the targeting process. The important point here is that AI algorithms are now being employed to further accelerate the speed at which the “kill chain” is executed. This drive toward faster, machine-assisted decision-making uncritically assumes that out-pacing adversaries is the decisive factor in maintaining a competitive advantage, without evaluating what dynamics this accelerated speed introduces into one’s own actions across the battlespace and what so-called “tragic mistakes” and “unintended” consequences might result from its pursuit.

In the remainder of this blogpost, I will translate some of this military language for a wider audience and introduce some of the DCGS’s standard procedures, which is particularly important given the central role the system plays in USAF targeting operations. I begin with a general overview of this weapon system and provide hyperlinks to documents for more details. Next, I present a socio-historical narrative, based upon my experiences as a former USAF intelligence officer, that explains how this weapon system has developed since the First Gulf War. I describe the logics within the broader US military bureaucratic culture focused on offensive maneuver warfare, which has transitioned over the past thirty years from hunting Scud missile systems, to hunting improvised explosive devices (IEDs), to hunting humans. Those logics have directly shaped the use and development of the DCGS weapon system. Finally, I highlight plans already underway to incorporate AI technology into USAF ISR operations in order to accelerate the speed at which lethal decisions are made.

The DCGS weapon system

Much research has focused on the role of drone pilots and sensor operators in contemporary warfare, since they are the ones who control unmanned aircraft, their sensors, and their weapons in what has come to be known as “remote-split” operations. However, they only do so by interacting with a globally distributed socio-technical assemblage that includes not only those drones, but also manned aircraft and a multitude of other platforms; the sensors they transport; an array of datalinks, communications, and computer systems required to transmit and make useable the data collected by those sensors; the people who collate and interpret the data; and the reports they generate that then get distributed, acted upon, and stored for potential future access and re-use across this networked system of systems. Most existing research has also predominantly focused on the “engage” phase of the “kill chain” process, largely ignoring the activity leading up to “hostile” target confirmation and destruction—a process that begins with defining the “threats” to be targeted.

Such narrow framing of this targeting process obscures the systemic structures, bureaucratic institutions, and cultural biases behind the intensive ISR production that first defines “threats” and then directs these platforms and weapons toward those targeted for elimination. In the words (as quoted in Air Force Magazine) of retired Lieutenant General David Deptula, primary planner of the air campaign in the First Gulf War and a key architect of modern USAF aerial surveillance systems:

Everyone focuses on this little piece of fiberglass flying around called an unmanned aerial vehicle, but it’s just a host for sensors that provide data to this vast analytic enterprise we call the Distributed Common Ground System [DCGS], which turns the data into information and hopefully knowledge.

The USAF officially describes this DCGS as the “primary intelligence, surveillance and reconnaissance (ISR) planning and direction, collection, processing and exploitation, analysis and dissemination (PCPAD) weapon system” (italics added for emphasis) used for both manned and unmanned aircraft, consisting of at least 27 regionally aligned and globally networked sites. The long stream of words abbreviated in the acronym PCPAD alludes to a process that profiles and tracks those deemed suspicious and threatening. Details on the inner workings of this process are publicly available on the USAF doctrine website, within Air Force Doctrine Publication 2-0 Globally Integrated ISR Operations (currently undergoing revision).

Figure 1. Air Force Distributed Common Ground System (USAF 2015, 32)
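
To make the PCPAD stages easier to picture for readers outside this world, here is a minimal sketch in Python of an intelligence production cycle modeled as a pipeline. The stage names come from the doctrine cited above; everything else (the function names, data shapes, and the toy tasking) is my own hypothetical simplification, not the actual system.

```python
from typing import Callable

# Each PCPAD stage is modeled as a function whose output feeds the next stage.
def plan_and_direct(requirement: str) -> dict:
    # Planning & direction: turn a commander's requirement into sensor tasking.
    return {"requirement": requirement, "sensors": ["FMV", "SIGINT"]}

def collect(tasking: dict) -> list:
    # Collection: stand-in for raw data arriving from the tasked platforms.
    return [{"sensor": s, "raw": f"<{s} data>"} for s in tasking["sensors"]]

def process_and_exploit(raw: list) -> list:
    # Processing & exploitation: convert raw sensor output into usable products.
    return [{"product": r["raw"].strip("<>"), "source": r["sensor"]} for r in raw]

def analyze(products: list) -> dict:
    # Analysis: fuse single-source products into an all-source assessment.
    return {"assessment": [p["product"] for p in products]}

def disseminate(assessment: dict) -> str:
    # Dissemination: package the assessment as a report for distribution.
    return f"REPORT: {', '.join(assessment['assessment'])}"

stages: list[Callable] = [plan_and_direct, collect, process_and_exploit, analyze, disseminate]
product = "track vehicle of interest"   # toy requirement, purely illustrative
for stage in stages:
    product = stage(product)
print(product)   # -> REPORT: FMV data, SIGINT data
```

The structural point of the sketch is that each stage narrows and reshapes what the next stage can see, which is why choices made early in the cycle constrain everything downstream.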

This algorithm-based forensic monitoring, a “sort of militarized rhythm analysis, even a weaponized time-geography” as Derek Gregory (2011) describes it, separates data from its context. This process has been described as “perpetual policing” (Holmqvist-Jonsäter 2010), comparable to violent US domestic policing surveillance activities. Former NSA General Counsel Stewart Baker has proclaimed, “Metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content,” and retired General Michael Hayden, former director of the NSA and CIA, has confirmed, “We kill people based on metadata.” The algorithms that generate such “actionable intelligence” are shaped and controlled by those within this sensing and analytical infrastructure (Andrejevic and Burdon 2015), with this DCGS weapon system at its core.

The USAF boasts on its website (last updated in 2015) of a daily DCGS enterprise operational tempo at that time of “more than 50 ISR [aircraft] sorties exploited, over 1,200 hours of motion imagery reviewed, approximately 3,000 Signals Intelligence (SIGINT) reports produced, 1,250 still images exploited and 20 terabytes of data managed.” Given that space constraints preclude a detailed discussion of the daily routines involved with this intelligence production enterprise, I will instead direct the reader to a series of previously published articles that provide additional insight into some aspects of DCGS operations.

A journalist with Air & Space magazine was afforded access in 2016 to the DCGS headquarters center at Joint Base Langley-Eustis in Virginia and reported his observations here. Several years earlier, in 2009, General Deptula co-authored a Joint Forces Quarterly journal article, available here, that provided an example of the mix of distributed technology and human interaction employed across the DCGS weapon system network during an 18-hour Predator mission over Afghanistan. It is within these standard procedures that AI algorithms are now being employed. These procedures do not translate data into any genuine knowledge about particular “threats,” nor even into an accurate portrayal of situational awareness, because the data collected and processed by this socio-technical assemblage is inherently biased from the outset. These procedures extract collected data from its social contexts, thus reducing human subjects to mere data points to be analyzed for discernable patterns and connections deemed to be threatening. The interpretation of that data is also predicated on a belief that a more efficient, automated means of identifying such complex patterns and connections will lead to a more accurate understanding of the relationships among them, even though the data have already been stripped of their specific contexts and meanings within the daily lives of those being surveilled. This underlying assumption, and the standard procedures employed in service of its aims, undoubtedly contributed directly to the mischaracterization of Mr. Ahmadi’s rather mundane routine of filling water bottles for his family and transporting aid-work colleagues in his nondescript white Toyota Corolla as an “imminent threat” to US forces at Kabul airport. That mischaracterization led to the decision to conduct a pre-emptive lethal strike and to assert that it was in “self-defense.” It could be argued, rather, that the decision to conduct a pre-emptive lethal strike shaped the frame through which the data was interpreted and the standard procedures were set in motion.

Complexity and chaos in offensive maneuver warfare

In order to better comprehend the logics driving these globally integrated and distributed ISR operations, it is useful to describe the ideological and theoretical underpinnings that currently shape US military—and hence, also NATO—doctrine, strategy, operational planning, and tactics. The aim is to explain why and how these militaries intentionally strive to create and exploit chaos across their networked battlespaces. They desire to preserve an asymmetric competitive advantage over their adversaries through the application of concepts founded in nonlinear science (systems thinking, chaos and complexity theories). Bousquet (2008) explains that through the application of these theories, these militaries style themselves as “complex adaptive systems” that behave through, for example, self-synchronization and swarming techniques. Instead of trying to minimize chaos, these militaries aim to generate it themselves, assuming that, in so doing, they can better understand and, therefore, control it. To aid in this, they apply schematic models that are intended to help make sense of the data that flows across their networks by condensing complexity into data points that can be sifted for patterns and correlations. Bousquet (2008, 173) further explains that through this attempt to “separate regularities from randomness in the raw data flow,” these militaries think that they “can constitute a description of an observed system, predict events, or create prescriptions for [their] own behaviour.”

Much credit for the “shock and awe” of the 1991 air campaign in the First Gulf War has been attributed to the application of such theoretical concepts to strategies of offensive maneuver warfare, commonly associated with a warfighter’s four-step decision-making loop known as OODA—an acronym for Observe-Orient-Decide-Act—espoused by retired USAF Colonel John Boyd, a Korean War fighter pilot (for details see Osinga 2007; Bousquet 2008). It models a rational decision-making process that has become a key component of network-centric warfare. It assumes the centrality of information superiority in all operations. This superiority does not aim at any genuine understanding of an adversary, though; it simply targets and disrupts an adversary’s perceived decision-making OODA loop before that adversary can influence one’s own. It dictates a mentality of pre-emption to avert risks to one’s own OODA loop and, therefore, to one’s competitive advantage. It demands speed—the faster opponent wins. One need only ponder the title of the current USAF Chief of Staff’s latest internal guidance, Accelerate Change, or Lose, to begin understanding this.
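
Since the OODA loop is, at bottom, an iterated decision cycle, a short sketch can make its logic, and the model’s “faster opponent wins” claim, concrete. The sketch below is a minimal Python toy under my own assumptions: the stage names are Boyd’s, but the class, the tempo figures, and the competition rule are hypothetical illustrations, not any fielded implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """A decision-maker modeled as an OODA cycle with a fixed tempo."""
    name: str
    cycle_time: float               # seconds per Observe-Orient-Decide-Act pass
    clock: float = 0.0              # cumulative time spent cycling
    actions: list = field(default_factory=list)

    def observe(self, environment: dict) -> dict:
        # Observe: gather raw data from the shared environment.
        return dict(environment)

    def orient(self, observations: dict) -> dict:
        # Orient: filter observations through prior models and assumptions,
        # the stage Boyd considered decisive, and where bias enters.
        return {k: v for k, v in observations.items() if k != "noise"}

    def decide(self, picture: dict) -> str:
        # Decide: choose an action from the oriented picture.
        return "act_on:" + ",".join(sorted(picture))

    def act(self, decision: str, environment: dict) -> None:
        # Act: acting changes the environment the opponent must next observe.
        environment["last_actor"] = self.name
        self.actions.append(decision)

    def run_cycle(self, environment: dict) -> None:
        self.act(self.decide(self.orient(self.observe(environment))), environment)
        self.clock += self.cycle_time

# The model's core claim: the actor with the shorter cycle acts more often,
# repeatedly changing the environment before the slower actor finishes a pass.
env = {"contact": "unidentified vehicle", "noise": "clutter"}
fast, slow = Actor("fast", cycle_time=2.0), Actor("slow", cycle_time=5.0)
for _ in range(10):
    fast.run_cycle(env)
    if fast.clock >= slow.clock + slow.cycle_time:
        slow.run_cycle(env)
print(len(fast.actions), "fast actions vs", len(slow.actions), "slow actions")
# -> 10 fast actions vs 4 slow actions
```

The point of the toy is only structural: shaving time off any stage multiplies how often the whole loop fires, which is precisely the acceleration the AI algorithms discussed above are meant to deliver.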

One might think that, after thirty years of continuous military operations across Southwest Asia and into Africa, the US military and its coalition allies would have developed an understanding of their perceived adversaries’ motivations and intentions, but that has never been the real aim of these or any foreseeable future operations. They are waged to maintain a Western global hegemony that seeks to dominate and eliminate so-called “threats” to the international order that underpins its assertions of superiority and the universalism of its claims. The OODA-loop model is part of a “cultural tool kit” (see Swidler 1986) that projects Western militaries’ own particular ways of thinking about vulnerabilities and strengths onto their adversaries and negates recognition of any alternative perspectives. That is quite different from an acknowledgment of diversity in thought—in this case, calculative decision-making—and any genuine attempt to understand their adversaries’ likely motivations and intentions that have been shaped within a context of mutually intertwined relationships.

The following traces how this form of offensive maneuver warfare has evolved since the 1980s. This socio-historical narrative serves to highlight continuities and disruptions in the US military’s risk-averse mindset and operations that propel this modern form of remote warfare. Many trace the onset of the “Global War on Terror” to President Bush’s declarations after the attacks of 11 September 2001, but its history reaches back at least two decades prior, to when President Reagan declared state-sponsored “terrorism” to be “acts of war” that could be countered in national self-defense. That discourse posited “terrorism” as a “threat” that operates outside boundaries acceptable under normative moralizing laws of “just war,” thereby delegitimizing those violent acts while authorizing a militarized “just war” response. It pushed prosecution of political violence labeled “terrorism” beyond covert operations and criminal juridical channels and into the realm of open warfare (Stampnitzky 2013). Former Libyan leader Colonel Muammar Gaddafi became a specific target since he had covertly sponsored attacks against deployed US and Israeli forces during the Cold War as part of his well-vocalized resistance to their violent interventions throughout Africa and the Middle East.

I began my university studies at the US Air Force Academy just months before President Reagan authorized airstrikes over Tripoli in 1986 against Gaddafi, which he justified by invoking Article 51 of the UN Charter, in retaliation for sponsorship of the bombing of a discotheque in Berlin that injured 229 people and killed two American soldiers and a civilian. I graduated three years later and was commissioned as a second lieutenant only a few months before the fall of the Berlin Wall, a period pundits would describe as the end of the Cold War and even, jubilantly, as “the end of history.” Within only a year, President George H. W. Bush characterized the expulsion of Iraqi forces from Kuwait—justified and legitimized under the umbrella of UN Security Council resolutions—as a “rare opportunity to move toward an historic period of cooperation. Out of these troubled times, our fifth objective—a new world order—can emerge: a new era—freer from the threat of terror, stronger in the pursuit of justice, and more secure in the quest for peace.”

Amidst this victorious and euphoric rhetoric, I deployed as a newly trained intelligence officer. I served as officer-in-charge of the forward-deployed surveillance and warning downlink station for the RC-135 Rivet Joint reconnaissance aircraft that flew signals intelligence (SIGINT) collection missions during the First Gulf War along the contested border demarcating Saudi Arabia, Kuwait, and Iraq. Communications technology at that time restricted data transmission between the aircraft and its ground support station to a visual horizon unhindered by hills or mountains, referred to as line-of-sight radio frequency propagation; thus, the surveillance and warning station was erected within that communications footprint on the outskirts of Riyadh in Saudi Arabia. All this forward presence required a significant logistical capacity. These heavy, manned, and unarmed aircraft—referred to as High-Value Airborne Assets (HVAA) in military lingo—served as the “Ears of the Storm” and flew continuously at a relatively safe standoff distance from the lethal Iraqi air defense system, accompanied by in-flight refueling aircraft and a robust patrol of F-15 fighter aircraft to protect them against potential hostile fire.
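
To give a rough sense of why the station had to sit inside that footprint: the line-of-sight radio horizon grows only with the square root of altitude. Using the standard 4/3-effective-Earth-radius approximation common in radio engineering (the constants and the illustrative altitude below are my own back-of-the-envelope figures, not mission specifics):

$$d \approx \sqrt{2kRh} \approx 4.12\sqrt{h}\ \text{km}, \qquad h \text{ in metres}, \quad k = \tfrac{4}{3}, \quad R \approx 6371\ \text{km}$$

An aircraft orbiting at, say, 10,000 metres could therefore reach a ground antenna out to roughly $4.12 \times \sqrt{10000} \approx 412$ km; beyond that horizon the downlink simply received nothing, which is what forced ground stations so far forward.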

RC-135 aircrews had been accustomed to flying strategic intelligence collection flights along the borders of the Soviet Union during the Cold War—officially termed Peacetime Aerial Reconnaissance Program (PARPRO) missions—that had resulted in a total of 170 US military aircrew casualties between 1946 and 1991 (Hall 1997). One such mission had also tragically contributed to the mistaken shootdown of Korean Air Lines Flight 007 in 1983, which killed 269 passengers and crew: the Soviet air defense system erroneously identified the airliner as an RC-135 reconnaissance aircraft that had been loitering in the area near Sakhalin Island and directed an Su-15 pilot to fire upon the commercial aircraft. Aware of this history, Gulf War planners anticipated losing several of these aircraft in combat operations and designed a robust air campaign to achieve air superiority and minimize risk to their vulnerable air and ground forces as quickly as possible.

Yet, US forces faced an adversary we knew very little about, and we significantly underestimated and misjudged Iraqi Scud missile capabilities and tactics. Intelligence personnel had erroneously assumed the Iraqis would rigidly adhere to the standardized Soviet doctrine and procedures with which they had long become familiar while flying those PARPRO missions. However, the Iraqis had adapted their operations so that they could quickly launch their missiles and then elusively dart away to hide from coalition aircraft. That confounded coalition efforts to find and “fix” them—in other words, to immobilize and destroy them—in what would become a preoccupation with an intensive but ultimately unsuccessful “Scud-hunting” strategy that expended 2,493 aircraft sorties with no confirmed destruction of Scud missiles or launchers (see Keaney and Cohen 1993).

These manned RC-135 SIGINT reconnaissance aircraft, along with others, such as the U-2 imagery intelligence (IMINT) reconnaissance aircraft that also played a significant role in that effort, have since remained continuously employed across Southwest Asia for over thirty years. Their associated ground stations, like the one I led, would eventually become the backbone of today’s DCGS networked architecture. What were once independently designed, forward-deployed, line-of-sight intelligence collection systems have been cobbled together into a networked architecture that processes data from thousands of interdependent, synchronized ISR missions across the world each year.

This transformation from vulnerable, forward-deployed ground stations to what is now referred to as “reachback operations” began in the aftermath of the First Gulf War. During this time, Cold War-era intelligence collection operations, procedures, and infrastructure were adapted on the fly to support changing doctrine, strategy, plans, and the new “Scud-hunting” targeting mission. On the technology front, it is helpful to realize that the average American household today has more bandwidth available through its broadband internet connection than the entire US military had during the Gulf War. Technological advancements enabled smaller forward-deployed footprints that reduced presence, and therefore risk, to US and coalition forces, which also served to maintain Western public support for these overseas operations. Drones, initially used as surveillance-only assets in the 1990s, were at first integrated into military targeting operations apart from this DCGS architecture before eventually becoming part of it, capable of carrying both sensors and weapons and providing closer, persistent surveillance of suspected “threats.” Technological advancements also exponentially increased the speed at which the OODA-loop-driven targeting process occurs, from a period of days during the First Gulf War to a matter of only minutes in its current iteration. AI algorithms will further compress that timeline.

A decade after the First Gulf War, the 9/11 attacks and the ensuing “Global War on Terror” spurred Western military force deployments to Afghanistan and then Iraq, with justifications framed again within UN Charter Article 51 “self-defense” and UN Security Council resolution language. The invasion of Iraq was legitimized as a “just war” to prevent an illegitimate “rogue” regime—part of an “axis of evil”—from attaining weapons of mass destruction that could too easily fall into the hands of “terrorists.” Those combat operations generated more on-the-fly adaptations in new force protection missions reminiscent of the older, futile “Scud-hunting” missions. This time it was an “IED-hunting” mission that attempted to find and fix IEDs emplaced along routes traversed by coalition forces before they could detonate and kill any of those forces operating in the vicinity. Much like the wasted resources expended to target Scuds before they could launch, the US military spent billions of dollars trying to locate IEDs before they could detonate. As recounted by a former commander of the DCGS weapon system: “Units found some success in countering IEDs, for example, by refocusing ISR from locating the devices to understanding the insurgent network behind them. To meet the ends of protecting troops from IED attack, ISR planners adjusted the ways from threat warning to targeting, and the means from route scans to manhunting.” He stated unequivocally that the primary motivation of protecting coalition forces lay behind the so-called “success” achieved in transitioning from hunting weapons to pre-emptively killing suspected human “threats” before they could emplace and employ any weapons that might endanger those lives deemed worth saving. Two philosophers in particular have analyzed the logics behind this hunting analogy. Grégoire Chamayou (2015, 33-34) posits that this type of hunting warfare has fundamentally shifted the character of war away from its more traditional expression as a “duel” in which two fighting forces face each other on the battlefield. It has become a game of “hide-and-seek” in which the hunter seeks domination over its “prey,” which tries to evade and survive. Judith Butler (2004, 2009) also offers insightful reflections on the political paranoia feeding these narratives, which amount to articulations of supremacy that determine who counts as human, whose lives count as lives, and what makes them grievable.

As I have attempted to convey, this quest for pre-emptive dominance over perceived “threats” to US military forces intervening abroad over the past thirty years has generated an insatiable appetite for data collection and analysis to feed what essentially amounts to racial profiling. Targets are demonized and imbued with intent based upon whom they are perceived to interact with and what they are perceived to do. While I have written elsewhere—as part of a masters-level academic group project on Islamophobia—about my personal reflections on such racialized aspects of US military interventions, an array of 30 respected international scholars has thoroughly and critically assessed this dimension in the recently released “Terror Trap” report.

DCGS “Next Generation”

The exponential increase in data collection has driven an unsustainable, labor-intensive processing cycle: the DCGS weapon system simply collects more data than it can exploit in its current configuration. The USAF refers to its DCGS “Next Generation” plans, already underway, as a culture change that will transform the platform-centric ISR operations of yesteryear into a problem-centric focus. The plans are advertised as delivering not just improved situational awareness but deeper situational understanding, which is supposed to eliminate the so-called “unintended consequences” and “tragic mistakes” that have led to the killings of the Ahmadi family and untold others. This is driven by a desire to better utilize finite human resources to exploit the glut of data that continuously flows across the network, which the USAF intends to achieve through more automation. However, since the DCGS weapon system architecture has been forged over the years from legacy “silos,” efforts to integrate the collected data in its entirety, so that it can be available to anyone at any time to support any inquiry, have proven cumbersome. Thus, the weapon system is transforming into a new cloud-based open architecture with a single common hardware and software infrastructure. The USAF is working with defense technology research and development companies to apply AI so that the platform-centric PCPAD data analytics work performed by people can be accomplished onboard the data-collecting sensors through neuromorphic processors and edge computing. General Atomics has already flight-tested the Agile Condor wide-area surveillance drone pod, an AI-driven targeting computer designed to “automatically detect, categorize, and track potential items of interest.”

The desire is that humans will no longer spend hours staring at drone video footage waiting for patterns to emerge, since Project Maven has created algorithms that classify that video data, curate and then label its contents to create suitable datasets for machines to process, and alert humans when certain patterns emerge. A deep-learning capability backed by extreme computing processing power, called Artificial Intelligence Discovery and Exploitation (AIDE), will supposedly sort through all data injected into the DCGS network for the information most relevant to an analyst’s request for information, and it will learn over time to push tailored notices to her. In other words, it will provide her the biased results she seeks. Some ISR personnel will also be trained as data scientists. The DCGS intelligence production cycle model, previously referred to as PCPAD, has even been given a new acronym: the Sense, Identify, Attribute, and Share (SIAS) process. DCGS intelligence personnel now refer to themselves as “sense makers” who will converge on problems as teams with a swarm mindset. All of this is intended to free humans to assess information compiled from the data and make decisions in increasingly tighter time loops, faster and faster inside the perceived adversary’s decisional OODA loop.
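
To give non-specialist readers a sense of what “classify, curate, label, and alert” means in machine-learning terms, here is a minimal sketch of such a detection-and-alert loop. It is a toy under my own assumptions: the detector is a random stub standing in for a trained model, and every label, threshold, and function name is hypothetical, not Project Maven’s or AIDE’s actual code.

```python
import random
from collections import Counter

LABELS = ["vehicle", "person", "structure", "unknown"]

def detect_objects(frame_id: int) -> list[tuple[str, float]]:
    # Stub standing in for a trained object-detection model: returns
    # (label, confidence) pairs for one video frame.
    rng = random.Random(frame_id)
    return [(rng.choice(LABELS), rng.random()) for _ in range(3)]

def label_frames(n_frames: int, min_conf: float = 0.6) -> list[dict]:
    # Curate detections into a labeled dataset, discarding low-confidence hits.
    # This curation step is where the threshold-setter's bias silently enters.
    dataset = []
    for f in range(n_frames):
        kept = [(lbl, c) for lbl, c in detect_objects(f) if c >= min_conf]
        dataset.append({"frame": f, "detections": kept})
    return dataset

def alert_on_pattern(dataset: list[dict], label: str,
                     window: int = 30, count: int = 8) -> bool:
    # Alert a human when a label recurs often within a sliding window of
    # frames; the "pattern" here is bare frequency, stripped of all context.
    for start in range(len(dataset) - window + 1):
        hits = Counter(lbl for d in dataset[start:start + window]
                       for lbl, _ in d["detections"])
        if hits[label] >= count:
            return True
    return False

data = label_frames(300)
print("analyst alerted:", alert_on_pattern(data, "vehicle"))
```

The sketch also shows where the essay’s critique bites: nothing in alert_on_pattern knows, or can know, whether the recurring “vehicle” is a threat or a man delivering water to his family.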

Senior military officers have approached their observations of lethal drone operations during the Azerbaijan-Armenia war over Nagorno-Karabakh from within this OODA-loop perspective: time will be of the essence to disrupt adversaries’ decision cycles before they can disrupt one’s own, and at much faster speeds in hostile combat environments than have been required in the permissive environment that has enabled largely uncontested impunity throughout the “War on Terror.” They look to proceed along their path of ever-expanding militarized competitive advantage toward what they claim is a new paradigm shift and revolution in military affairs. Yet, they apply the same time-worn mode of operations that views adversaries as existential “threats” that must be overcome and contained. This quest for dominance, however, also creates blind spots that lead to an erroneous belief that technological superiority will assure a competitive advantage. Adversaries will continue to develop low-tech operational tactics to counter the US’ asymmetrical power advantage.

“Unintended consequences”

The multitude of actors composing this socio-technological assemblage work to determine who deserves to live or die, while negating the threat of the lethal violence they themselves pose and denying their victims the rule of law that the West purports to defend. These dehumanizing calculations disregard human diversity and the historical contexts of our relational and interdependent vulnerabilities and precarity. This characterization of dangerous “threats” is additionally authorized and justified through rhetorical narratives that marshal the customary international humanitarian law language of “self-defense” (distinction, necessity and proportionality) in support of pre-emptive strikes to kill anyone who appears suspect before they might attack Western forces. In his latest memoir, A Promised Land, former President Obama reflected on making decisions to take the lives of countless young men over the course of his presidency, in what he characterized as “more targeted, nontraditional warfare”:

I wanted somehow to save them – send them to school, give them a trade, drain them of the hate that had been filling their heads. And yet the world they were a part of, and the machinery I commanded, more often had me killing them instead. (Obama 2020, 353)

This amounts to an argument of resignation and inevitability, and therefore an abdication of responsibility for taking the lives of others. I argue—in line with Judith Butler’s research—that it is neither necessary nor inevitable and that there are alternative lenses through which to view this reality. She explains that these frames of war “work both to preclude certain kinds of questions, certain kinds of historical inquiries, and to function as a moral justification for retaliation” (2004, 4). I have attempted to reframe this official narrative about why and how the US conducts its mode of network-centric warfare, in order to expose its rhetoric as part of a public conditioning for its necessity. It can be argued, however, that this drive to employ accelerating predictive analytics and pre-emptive lethal actions, in order to control risks and uncertainties that might challenge an asymmetric competitive advantage, is far more detrimental than beneficial in the long run to its aims of assuring a favorable international normative order.

Author Information: Jennifer Rooke served for 24 years as an intelligence officer in the United States Air Force, from her graduation from the Air Force Academy in 1989 until her military retirement at the rank of colonel in 2013. She was involved in resource allocation for, and operational oversight of, several different organizations across the Distributed Common Ground System (DCGS) weapon system. While serving as Deputy Director and then Director of Intelligence for Headquarters US Air Forces in Europe at Ramstein Air Base in Germany, she oversaw the training program for Project Crossbow, the joint US-UK program that integrated UK intelligence analysts into the DCGS network. That program became the catalyst for today’s European Partner Integration Enterprise which, according to public statements by her successors, now resembles a pared-down version of a DCGS team. She is currently a student in the Cultural Anthropology and Development Studies advanced master program within the Department of Social and Cultural Anthropology at the Catholic University of Leuven (KU Leuven) in Belgium.
