CIGI Paper No. 310 discusses the implications of generative AI in next-generation information warfare, emphasizing its potential to democratize and industrialize disinformation tactics. The paper outlines how the convergence of AI with other technologies complicates the landscape of warfare, blurring the lines between military and civilian targets and raising concerns about the manipulation of public perception. It calls for a comprehensive response to address the security challenges posed by these developments, highlighting the need for legal frameworks and societal resilience against emerging threats.


CIGI Paper No. 310 — December 2024

Preparing for Next-Generation Information Warfare with Generative AI

Eleonore Pauwels
About CIGI

The Centre for International Governance Innovation (CIGI) is an independent, non-partisan think tank whose peer-reviewed research and trusted analysis influence policy makers to innovate. Our global network of multidisciplinary researchers and strategic partnerships provide policy solutions for the digital era with one goal: to improve people’s lives everywhere. Headquartered in Waterloo, Canada, CIGI has received support from the Government of Canada, the Government of Ontario and founder Jim Balsillie.

Credits

Managing Director and General Counsel Aaron Shull
Director, Program Management Dianna English
Program Manager and Research Associate Kailee Hilt
Senior Publications Editor Jennifer Goyder
Publications Editor Christine Robertson
Graphic Designer Sepideh Shomali


Copyright © 2024 by the Centre for International Governance Innovation

The opinions expressed in this publication are those of the author and do not
necessarily reflect the views of the Centre for International Governance Innovation
or its Board of Directors.

For publications enquiries, please contact [email protected].

The text of this work is licensed under CC BY 4.0. To view a copy of this licence,
visit http://creativecommons.org/licenses/by/4.0/.

For reuse or distribution, please include this copyright notice. This work may
contain content (including but not limited to graphics, charts and photographs)
used or reproduced under licence or with permission from third parties.
Permission to reproduce this content must be obtained from third parties directly.

Centre for International Governance Innovation and CIGI are registered trademarks.

67 Erb Street West
Waterloo, ON, Canada N2L 6C2
www.cigionline.org
Table of Contents
vi About the Author

1 Executive Summary

1 Introduction

2 Framing Section

7 Technical Section

12 Targeting Primarily Civilian Populations

16 Targeting Military Personnel and Operations

18 Scenario: Information Warfare on Biological Threats

20 Legal Section and Concluding Thoughts

26 Works Cited
About the Author
Eleonore Pauwels is an international expert
in the security, societal and governance
implications generated by the convergence of
artificial intelligence (AI) with other dual-use
technologies, including cybersecurity, genomics
and neurotechnologies. Eleonore provides expertise
to the World Bank, the United Nations and the
Global Center on Cooperative Security in New
York. She also works closely with governments and
private sector actors on AI cyberthreat prevention,
the changing nature of conflict, foresight and
global security. In 2018 and 2019, she served as
research fellow on emerging cybertechnologies for
the United Nations University’s Centre for Policy
Research. At the Woodrow Wilson International
Center for Scholars, she spent 10 years within
the Science and Technology Innovation Program,
leading the Anticipatory Intelligence Lab. She is also
part of the Scientific Committee of the International
Association for Responsible Research and
Innovation in Genome Editing. Eleonore regularly
testifies before US and European authorities,
including the US Department of State, the National
Intelligence Council, the US Bipartisan Commission
on Biodefense, the Council of Europe, the European
Commission and the United Nations. She writes
for Nature, The New York Times, The Guardian,
Scientific American, Le Monde, Slate, UN News, The
UN Chronicle and the World Economic Forum.

vi CIGI Paper No. 310 — December 2024 • Eleonore Pauwels


Executive Summary

Modern conflicts involve the weaponization of information and the manipulation of human behaviours. Artificial intelligence (AI) and its integration into individuals’ daily lives promises to augment, accelerate, but also complicate these trends. Two important shifts will help us understand this emerging warfare for what it truly is: an attack on humanity itself.

AI is making information warfare more powerful and more accessible. Generative AI combined with data capture provides new techniques to industrialize the offensive use of disinformation. In addition, the integration of generative AI with other powerful technologies complexifies the potential of information warfare. What is at stake is the weaponization of dual-use knowledge itself. Generative AI is already learning to democratize military and civilian expertise in technological domains as complex as AI, neuro-, nano- and biotechnology. Such capacity will provide both state and non-state actors with access to knowledge and mentorship related to impactful technologies. This diffusion of power will change the nature of information and physical warfare, increasing dual-use knowledge asymmetries between threat actors in conflicts. There is an urgent need to prepare for misuse scenarios that harness technological convergence. New converging risks will bring collective security challenges that are not well understood or anticipated globally.

Introduction

The world has entered a complex and dangerous decade. As new and old threats converge and challenge the multilateral order, one of the most seismic shifts is taking place at the intersection of war, technology and cyberspace. Modern conflicts — whether they are declared, contested or waged in the grey zone — are amplified by a technological revolution that is inherently dual use. These conflicts merge physical and digital fronts, invading cities and factories, homes and everyday devices, and producing new targets and victims in their wake. Frontiers between peace and war, offence and defence, civilian and military technologies, and state forces and cyberproxies are fading.

Modern conflicts increasingly involve the weaponization of information and the manipulation of human behaviours and perceptions. The rapid development of artificial intelligence (AI) and its integration into individuals’ daily lives and societies’ inner structures promises to not only augment and accelerate, but also complicate these trends. This paper aims to demonstrate two important shifts that will help us to recognize and understand this emerging warfare for what it truly is: an attack on humanity itself.

First, AI is making information warfare both more powerful and more accessible by acting as a catalyst. The development of generative AI, combined with diverse forms of data capture, provides new techniques to drastically improve, tailor, scale up and even industrialize the offensive use of disinformation. For instance, personalized AI assistants and chatbots now have the capability to engage users in seemingly authentic conversations, subtly injecting manipulative content tailored to the user’s psychological profile and preferences. With persuasive narratives, sophisticated bot networks can deeply influence individual and group beliefs. These battles for influence, supercharged by algorithms, are being waged for the control of people’s emotions and attitudes and are the prevailing means of undermining social cohesion and trust. In times of conflict, these tools impact critical elements of civilian protection and civilian decision making for survival, causing direct and indirect harms to populations (United Nations General Assembly 2022). For marginalized populations and vulnerable groups, such as women and youth, they may increasingly condition and limit notions of self-determination, and could continue to do so with future generations to come.

Second, the integration of AI, including generative AI, with other powerful technologies is a radical shift as this convergence broadens and complexifies the potential of information warfare. What is at stake is the weaponization of dual-use knowledge itself, including possibly all forms of dual-use expertise developed by human civilization.


Generative AI is already learning to democratize1 strategic military and civilian expertise and tacit knowledge in technological domains as complex as AI, neuro-, nano- and biotechnology. Such capacity will provide a diversity of both state and non-state actors with access to sensitive knowledge and mentorship related to impactful technologies. This diffusion of power will change not only the scale, but also the nature of both information and physical warfare, increasing dual-use knowledge asymmetries between threat actors involved in conflicts. There is an urgent need to prepare for adversarial use or misuse scenarios that could harness the convergence of what were presented as primarily civilian, beneficial technologies. This new confluence of risks will bring collective security challenges that are not well understood or anticipated globally.

The stakes are high. As AI and generative AI systems reshape how knowledge, expertise and information are used and potentially manipulated in conflict and in the grey zone between war and peace, now is the time to think forward and assess risks, vulnerabilities and forms of resilience. While there will be specific implications for military forces and strategic thinking, prevention and resilience will depend on a whole-of-society response.

The strategic goals of this paper are twofold. First, by providing in-depth analysis on how AI, in convergence with other technologies, can be used to amplify information warfare, the paper aims to inform military authorities and strategic thinkers, policy makers and legal experts, and civil society and multilateral institutions about the emerging strategies at play that have the potential to threaten and weaken societies while escaping accountability. Second, by analyzing how international law applies to emerging forms of information warfare, this paper aims to identify legal gaps and ambiguities as well as potential entry points to support governance and policy processes at national and multilateral levels.

The paper opens with a framing section to define the topic of concern and rapidly review how recent conceptualizations of information warfare have combined with trends related to digital transformation and the evolving conflict landscape (see Box 1). The technical section sheds light on the two major shifts mentioned above, demonstrating how AI not only democratizes information warfare, but also complexifies and broadens its ramifications. The technical section will therefore cover specific uses of AI in information warfare, including psychological operations; the implications for military forces and civilian populations; a few recent real-world manifestations; and AI’s future potential and convergence with other technologies. A detailed scenario follows, demonstrating how, at a time of protracted armed conflict, AI-led information warfare could harness dual-use knowledge and sophisticated techniques in biotechnology to undermine public authorities and psychologically destabilize civilian populations. The legal section reviews the protective measures afforded in the international legal framework, as well as the legal gaps and ambiguities that may inhibit effective protection and accountability. This final section closes with highlighting the need for civil-military synergies and elements of a whole-of-society response to strengthen prevention and resilience.

Framing Section

Information warfare is not a new phenomenon, and conflicts of the past have involved deception. This paper does not aim to retrace the history of war propaganda and review extensive literature but instead builds on interviews2 with technical and legal experts to analyze recent trends and their implications. The goal is to show how the integration of new technologies into our networked world is changing not only the scale, but also the nature and power of war and information warfare.

Transformative shifts are taking place at the intersection of war, technology and cyberspace. To understand how these shifts impact and shape information warfare, we need to look at a series of converging trends. They concern the

1 In this context, the term “democratize” implies that generative AI is helping to spread and share advanced knowledge and capabilities that were previously confined to a select group of experts in the military or specialized civilian fields. This process allows more individuals and organizations to leverage high-level strategic insights and tools, thereby allowing outsourcing to a diversity of actors across different sectors.

2 In 2022–2023, the author conducted a series of interviews with experts in AI and cybersecurity, security implications of emerging technologies, civilian protection in conflict, policy and international law. This research paper builds on the insights, signals and foresight discussed during those interviews.


global revolution that constitutes the digital transformation of our societies; multiple ways to harness information warfare in a multipolar environment; and the evolving nature of conflicts.

The Digital Revolution

The internet has become a laboratory for information warfare, a new theatre of war where information itself is weaponized (United Nations General Assembly 2022). In their excellent monograph, LikeWar: The Weaponization of Social Media, Peter Warren Singer and Emerson T. Brooking demonstrate how social media has created a new global environment for conflict, blurring distinctions between civilian and military functions and actions in the digital and physical realms (Singer and Brooking 2018). The rapid digital transformation of our societies has increasingly merged civilian and military technologies, creating new dependencies between the digital architectures that power private, public and national security systems. On this internet battlefield that defies the control of military forces and governments, supremacy is achieved through the command of attention and pervasive forms of psychological and algorithmic influencing. In Singer and Brooking’s words, “because virality can overwhelm truth, what is known can be reshaped” (ibid., 22).

Despite its adaptive nature, several critical trends that have characterized information warfare in the past are still relevant today. Back in 2014, the year the Russian Federation annexed Crimea, Peter Pomerantsev and Michael Weiss published The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money, shedding light on how, at its core, the Russian system of information manipulation relies on nihilism about the existence of objective truth (Pomerantsev and Weiss 2014). Reminiscent of what William Hutchinson wrote nearly two decades ago, we realize that, in modern conflicts, information can be both “targeted” and “weaponized” and that psychological influence has become ever more critical to control populations at home and across borders through global reach (Hutchinson 2006). In the near future, the advent of immersive digital spaces could amplify the ways that every one of us is involved in modern conflicts, with the potential to blur even further the line between reality and deception and to mobilize large swaths of populations, resources and weapons around deceiving narratives.

A Multipolar Cyberspace

In the current multipolar geopolitical landscape, authoritarian regimes such as Russia, China and Iran have relied on proxies and increasingly employed sophisticated information warfare tactics to advance their strategic interests. Leveraging disinformation, these states aim to undermine public trust in democratic institutions, manipulate global narratives and destabilize their adversaries. The COVID-19 pandemic and subsequent vaccination efforts provided fertile ground for these information operations, highlighting their capability to exploit crises for geopolitical gains. The confluence of information warfare and biological threats is particularly relevant to this paper’s scenario.

Russia has a long history of using disinformation as a tool of statecraft, aiming to create confusion and weaken the resolve of its adversaries. During the COVID-19 pandemic, Russian actors disseminated false information about the origins and impacts of the virus. State-controlled media and proxy websites promoted conspiracy theories, suggesting that the virus was a bioweapon developed by the United States (Mouton, Lucas and Guest 2023; Moy and Gradon 2023). This narrative was designed to sow distrust and exacerbate tensions between the United States and its allies. In addition, Russia’s disinformation efforts targeted the vaccination campaigns of Western countries. Russian media spread misleading information about the safety and efficacy of Western vaccines such as Pfizer and Moderna, while promoting its own Sputnik V vaccine as a superior alternative (Schafer et al. 2021). These efforts aimed to undermine public confidence in Western vaccines, thereby slowing vaccination rates and prolonging the pandemic’s impact on Western societies (MacDonald and Ratcliffe 2023; Mouton, Lucas and Guest 2023; Whiskeyman and Berger 2021).

China has also been at the forefront of combining information warfare with AI innovation to influence global perceptions and advance its strategic objectives (Beauchamp-Mustafaga 2024). During the pandemic, Chinese state media and online platforms disseminated positive narratives about China’s handling of the outbreak, contrasting its success with the perceived failures of Western countries (MacDonald and Ratcliffe 2023; Beauchamp-Mustafaga 2024; Moy and Gradon 2023; Whiskeyman and Berger 2021). This was part of a broader strategy to deflect blame and position China as a global leader in pandemic management. China’s disinformation campaigns


extended to vaccine diplomacy. Chinese state media cast doubt on the safety and effectiveness of Western vaccines, while promoting Chinese-made vaccines such as Sinovac and Sinopharm (Schafer et al. 2021). This narrative was aimed at enhancing China’s soft power and expanding its influence in regions such as Africa, Latin America and Southeast Asia, where vaccine diplomacy could translate into geopolitical leverage.

Iran has utilized information warfare to target both regional rivals and the broader international community. Iranian state media circulated conspiracy theories suggesting that the COVID-19 virus was an American biological weapon, aiming to stoke anti-American sentiment and divert attention from Iran’s own domestic challenges (Mouton, Lucas and Guest 2023). Iranian disinformation also targeted vaccination efforts: false information about the dangers of Western vaccines was spread by Iranian media, contributing to vaccine hesitancy and undermining public health efforts (Schafer et al. 2021; Whiskeyman and Berger 2021). This strategy was part of a larger effort to portray Iran as resilient and self-sufficient, capable of managing the pandemic without Western assistance (MacDonald and Ratcliffe 2023; Mouton, Lucas and Guest 2023).

What we also see emerging is how nation-states are increasingly engaging in various types of knowledge and technological transfer (such as AI) with proxy actors to target civilian populations alongside traditional political, economic and information aspects of advanced geopolitical conflicts. As mentioned by Wesley R. Moy and Kacper T. Gradon, “from reconnaissance activities and the profiling of target audiences to the weaponization of distorted or fake information and psychological operations, AI broadens the potential of information operations” (Moy and Gradon 2023, 57).

The Evolving Nature of Conflicts

Civilian populations are increasingly targeted in wartime disinformation and suffer enduring harms. Manipulation of information in conflict is increasingly used to legitimate direct acts of violence against civilians and recruit youth into offensive operations, leading to highly traumatic physical and mental harms (Katz 2021; United Nations General Assembly 2022). Another disturbing form of adversarial manipulation harnessed by parties to conflict is the distortion of information that is related to humanitarian and medical efforts and vital to secure human needs (Katz 2021; Morris 2024). The brunt of these evolving practices of information warfare has been suffered by civilian populations, with a particularly vivid impact on women and children (United Nations General Assembly 2022; see Box 3).

What we see materializing before our eyes is a polymorphous type of warfare that merges cyberattacks and information operations and is waged by states or their proxies, sometimes in hostile situations that do not clearly meet the legal threshold of an armed conflict, in the “grey zone” between war and peace (Pauwels 2024). We have entered a new era of hybrid warfare where non-military tactics coexist or are coordinated with kinetic warfare, and target both military forces and civilians (Burt 2023; Khan 2023). There is also an increasing risk of facing a privatization of information warfare. The world has been forced to cope with surrogates and mercenaries in the past, when outsourcing war depended largely on arms trade and trafficking; proxies today, however, thrive on the intangible transfer of dual-use knowledge and democratized access to related technologies. Recent research has pointed to the harmful merger between two growing industries — those that trade cyberweapons and those that industrialize cybercrime — and the offensive proxy capacities these industries bring to an increasing number of nation-states and violent actors (Pauwels 2024). Nation-states and their proxies have the potential to harness the integration of AI and converging technologies in information warfare and pose new systemic risks in conflicts.


Box 1: Primary Strategic Terms and Scope

There is no internationally agreed conceptualization of what constitutes information warfare and how it manifests in armed conflict. Top-down definitions of information warfare vary
between tech-leading nations such as the United States, Russia and China. Even within a
national context, military institutions and doctrines might not necessarily share the exact
same concepts and scope. For instance, the US Navy approaches information as purely raw
data or digital signals, while the US Department of Defense also considers narratives that can
influence human perceptions and behaviour (Bingle 2023).

Yet there is a common language or matrix of key terms and concepts that has been defined by
scholarship and refined by humanitarian practitioners (a synthesis has been provided in Box
2). This matrix is important for applying policy and international legal frameworks, yet it also
reflects the complexity for practitioners on the ground to recognize and label different types
of operations that manipulate information at a time of conflict (International Committee of
the Red Cross [ICRC] 2021).

For the purpose of this paper, which focuses on situations of conflict, “information warfare”
is defined as the collection, dissemination, manipulation, corruption and degradation
of information with the goal of achieving strategic advantage over a conflict party and/
or its population (Marlatt 2008; Prier 2017). Another comprehensive and more recent
conceptualization of information warfare is “a struggle to control or deny the confidentiality,
integrity, and availability of information in all its forms, ranging from raw data to complex
concepts and ideas” (Bingle 2023, 6). As Morgan Bingle explains, “Offensively, information
warfare occurs when one side within a conflict seeks to impose their desired information
state on their adversary’s information and affect how target individuals or populations
interpret or learn from the information they possess or are collecting” (ibid.). Bingle stipulates
that “the offensive actor can target at either the information itself or the individuals and
larger group forming their target audience” (ibid.).

In this paper, “information” or “influence operations” are defined as “the strategic and
calculated use of information and information-sharing systems to influence, disrupt, or
divide society,” for instance by involving “the collection of intelligence on specific targets,
disinformation and propaganda campaigns, or the recruitment of online influencers” (Spink
2023, 48). Psychological warfare can be framed as “the planned use of propaganda and other
psychological operations to influence the opinions, emotions, attitudes, and behavior of
opposition groups” (ibid.). A useful definition of “adversarial information operations” is the
one proposed by the “Oxford Statement on International Law Protections in Cyberspace:
The Regulation of Information Operations and Activities” as “any coordinated or individual
deployment of digital resources for cognitive purposes to change or reinforce attitudes or
behaviours of the targeted audience.”3

3 See www.elac.ox.ac.uk/the-oxford-process/the-statements-overview/the-oxford-statement-on-the-regulation-of-information-operations-and-activities/.



Box 2: Glossary
Definitions in this textbox are adapted from those of the ICRC, FP Analytics and Microsoft’s
Digital Front Lines report (2023) and the RAND Corporation, a non-profit global policy think
tank that provides research and analysis to help improve policy and decision making. For terms
related to technologies and their military uses, the author referred to research provided by
Microsoft and RAND. The author used ICRC’s insights for terms related to conflict, particularly
the 2021 ICRC report Harmful Information — Misinformation, Disinformation and Hate Speech in
Armed Conflict and Other Situations of Violence (ICRC 2021).

Misinformation: False information that is spread by individuals who believe the information to be
true or who have not taken the time to verify it.

Disinformation: False information that is fabricated or disseminated with malicious intent. This can
include terms such as propaganda and information operations.

Propaganda: Information, often inaccurate or misleading, that is used to promote a specific viewpoint or influence a target audience. It might include elements of truth but
presents them in a biased way to undermine the credibility or reputation of an opponent. When
digital advertising, social media algorithms or other exploitative tactics are employed to spread
propaganda, it becomes known as computational propaganda. This form of propaganda can also be
used to target, recruit, radicalize and coordinate activities among potential supporters of extremist
ideologies, a process commonly referred to as online radicalization and recruitment.

Hate speech: All forms of expression, including text, images, audio or video, that incite, promote
or justify hatred and violence based on intolerance toward identity traits such as gender, religion,
ethnicity or sexual orientation. This speech often blends misinformation, disinformation and
rumours, and is manipulated by its perpetrators to fuel animosity. Utilizing both traditional and
digital communication channels, hate speech exacerbates tensions between groups and can incite
violence against individuals based on their identity.

Dual-use technologies: Technologies that have a primary civilian and commercial application, but
also have the potential to be weaponized or used for military applications.

AI: The simulation of human intelligence in machines that are programmed to think and learn like
humans. These machines can perform tasks that typically require human cognitive functions, such as
visual perception, speech recognition, decision making and language translation.

Generative AI: AI systems that use advanced algorithms, such as generative adversarial networks
and transformer models, to create new, realistic content, such as text, images and audio, that is often
indistinguishable from human-generated content.

Foundational models: Large-scale AI models trained on broad data sets that can be fine-tuned for
a variety of specific tasks. These models serve as a base or “foundation” upon which specialized
models for specific applications can be built. The term has become prominent with the rise of large
language models (LLMs) such as GPT-4, which are pre-trained on vast amounts of text data and can
be adapted for tasks ranging from language translation to sentiment analysis with relatively little
additional training.

Deepfake: An image or recording that has been convincingly altered and manipulated to
misrepresent a person as doing or saying something that was not actually done or said.

Grey-zone tactics: The acts of state parties in relation to a dispute that maintain high-level
diplomatic relations while interacting antagonistically below the threshold of war.

Hybrid warfare: The use of non-military tactics alongside conventional kinetic warfare to achieve
foreign policy goals.



Technical Section

The technical section will analyze the potential of AI technologies, in particular generative AI, to influence and shape human behaviour at both the individual and population levels in situations of armed conflict and advanced geopolitical confrontations. This section aims to answer the following questions: What are the core converging AI capabilities and what is the critical leap achieved by generative AI? How can the confluence of these AI techniques act as a catalyst to amplify information warfare? How can these emerging trends in AI-led information warfare be harnessed by threat actors in conflict situations? And what are the potential impacts and reverberating effects on civilian populations, as well as on military forces and combatant strategies?

Core Converging AI Capabilities: What Are Converging AI Capabilities and What Is the Critical Leap Achieved by Generative AI?

The current AI revolution builds on a confluence of techniques and capabilities. Foundational AI models are systems that can learn to optimize large-scale data analysis; identify and classify patterns, structures and anomalies in vast data troves; and turn those insights into representations and predictions (Moy and Gradon 2023; Feldstein 2023). They are called "foundational" models because they serve as the groundwork for a wide range of AI applications, providing general-purpose representations of data that can be fine-tuned or extended for specific tasks. Combined with an array of data-capture and sensing technologies, these models can be used to analyze a broad range of features and variations across heterogeneous data sets, from general image and text/language data to more precise features such as biometrics, human emotions and actions (Pauwels 2020b).

Generative AI models leverage the representations and features learned by foundational models to generate new content (text, images, narratives, videos and even music) that exhibits characteristics and patterns similar to those of the training data. For example, a generative AI model trained on images of human faces can generate new, photorealistic faces that closely resemble those in the training data set. LLMs are specifically designed to generate human-like text by analyzing vast amounts of language data.

Generative AI models have the potential to become increasingly autonomous, functioning as AI personal assistants and learning from different domains of human experience and expertise. For instance, LLMs have demonstrated the capacity to support laboratory work by providing options for biological designs and outsourcing complex tasks to adequate bio-foundries (Sandbrink 2023; Carter et al. 2023). In cybersecurity, generative AI models can learn from vast amounts of historical data on cyber incidents and predict future threats (Stanham 2023). Generative AI models also power AI decision-support systems that optimize data analysis and provide recommendations and predictions to aid decision making in war (Stewart and Hinds 2023).

Generative AI models are not only converging with dual-use expertise and other emerging technologies, but are also merging with our daily experiences, monitoring how humans live, move and feel. By progressively learning and simulating human inputs and behaviours, generative AI systems promise to develop dynamic content and sustain interactions that imitate the features of human conversations and, to some extent, relationships (Feldstein 2023; Hiebert 2024). With generative AI, the critical leap forward will likely come from both its increased autonomy and its new capacity to capture, simulate and interact with human behaviours (Marcellino et al. 2023). These trends may amplify dual-use potential and unpredictability, with resulting consequences that are difficult to anticipate, mitigate and control.
Preparing for Next-Generation Information Warfare with Generative AI 7

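The "learn the patterns of the training data, then generate new content exhibiting similar patterns" loop described in this section can be illustrated at toy scale with a word-level Markov chain. This is a deliberately crude stand-in — GANs and transformer LLMs learn far richer representations — and the corpus and names below are purely illustrative.

```python
import random
from collections import defaultdict

# "Training": record which word follows which in the corpus. The learned
# transition table is this toy model's entire representation of the data.
def train(corpus):
    words = corpus.split()
    transitions = defaultdict(list)
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

# "Generation": repeatedly sample a continuation from the learned
# transitions, producing new sequences that mirror patterns in the
# training data without copying it verbatim.
def generate(transitions, start, length, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:        # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("information operations target civilian populations and "
          "information operations manipulate public opinion and "
          "public opinion shapes civilian behaviour")
model = train(corpus)
print(generate(model, "information", 6))
```

A transformer LLM replaces this lookup table with a neural network that predicts a probability distribution over the next token, but the train-then-sample principle is the same.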

Box 3: Information Operations and Their Harmful Impact on Specific Groups

The interplay between armed conflict and disinformation is intricately intertwined with
existing grievances, amplifying human suffering, stoking hatred and disproportionately
impacting vulnerable groups (United Nations General Assembly 2022).

Studies have shown the disproportionate impact of disinformation on women, children, and lesbian, gay, bisexual, transgender and questioning persons. For instance, women
and children can suffer both psychological and physical harm from being targets of
misinformation, disinformation and hate speech (United Nations Human Rights Commission
2018a, 2018b). Information operations can contribute to physical harm, including sexual
violence — for example, when hate speech incites violent attacks against children of a
minority group (Ridout et al. 2019). They can also lead to psychological and social harm through
online harassment and sexual abuse as well as through digital hate speech and geo-targeted
threats (when hate speech includes exact information about where women, children and
sexual and gender minority populations live) (ICRC 2021, 9). Threat actors may resort to online
information manipulation in order to target women and children, who are isolated from their
families and in need of humanitarian help, and lure them to specific locations for trafficking.

Minorities and marginalized racial and ethnic groups often bear the brunt of the destructive
effects of information warfare. For instance, in conflicts in Myanmar (United Nations
Human Rights Commission 2018a, 2018b) and Ethiopia (Jackson, Kassa and Townsend 2020),
combatants have exploited mass communication platforms to incite hatred, dehumanize
opponents and trigger violations of human rights. In past conflicts in Kenya, Nigeria and
South Africa, political leaders have employed divisive and inflammatory rhetoric to deny
established facts, escalate tensions and incriminate national, ethnic and religious groups
(Pauwels 2020a). Refugees, internally displaced persons and migrants are frequently
depicted as threats to national security or social cohesion, fuelling hatred against them.

Generative AI: A Revolution for Information Warfare? How Can the Confluence of AI Generative Techniques Act as a Catalyst to Amplify Information Warfare?

AI technologies, in particular generative AI models, are making information warfare both more powerful and more accessible. The capacity of generative AI, merged with diverse forms of human behavioural data capture, provides more impactful techniques to drastically improve, tailor, scale up and even industrialize the offensive use of disinformation.

Behavioural profiling and influencing

We have entered a technological era where our private and collective experiences have become free material for behavioural surveillance (Zuboff 2019). Our "patterns of life" — our conversations and emotions, biometric features and behaviours — can now be turned into predictive insights to fuel information warfare. The vast amount of digital information now generated by populations means that more of these routine behaviours can be understood through AI computing. A confluence of AI functions and techniques makes it increasingly possible to analyze, classify, profile and, to some extent, predict and influence human behaviour (Pauwels 2020b). The global AI industry posits that significant amounts of raw information about human experience can be turned into actionable intelligence; in other words, a critical mass of behavioural insights allows for individuals to be influenced remotely. For instance, targeted advertisements and content can exploit psychological triggers to influence purchasing decisions, voting behaviours or social interactions, creating an environment where individual
autonomy is significantly compromised by externally engineered stimuli. In 2018, the revelation that Facebook and Meta platforms made the private data of about 87 million of its users available to the Trump campaign fuelled new levels of public anxiety about the ability of tech giants to exploit or monetize personal information (Raymond 2022).

By accessing a myriad of human insights within our digital networks, generative AI will learn to profile crowds, classify sentiment and preferences, and simulate expertise, emotions and authentic behaviours, thereby crafting content that can be tailored, personalized and evolved over time (Marcellino et al. 2023; Beauchamp-Mustafaga 2024; Feldstein 2023; Hiebert 2024). Today, an industry flourishes around digital personas that use forged speech and videos to impersonate individuals — including deceased ones — with the goal of furthering personal relationships (Yang 2024; Carballo 2023). Think of chatbots that harness people's social media footprints and biometrics in order to become life partners or online ghosts.

Engineered reality and authentic human-AI relationships

The technological leap brought by generative AI will increasingly blur the distinction between real and synthetic content, and between authentic interactions and impersonations, challenging both human perception and machine detection.

LLMs enable the creation of large amounts of unique, long-form, higher-quality deceptive messages that go beyond short texts to news stories and public discourses, marking an incremental improvement over previous methodology. Deepfake technology is another example, in which image and video generation relies on the convergence of various algorithmic architectures, including deep residual networks that can, with surprising accuracy, read human lips, synthesize speech and simulate facial expressions and bodily movements (Mubarak et al. 2023). Its capabilities include altering facial features and expressions, gait and biometrics, as well as simulating behaviours on video in real time. If an individual has a digital footprint that includes, for example, talks and podcasts, deep residual networks are also able to reproduce a synthetic version of their voice. On the eve of an election, deepfake videos could falsely portray public officials as being involved in criminal or unsavoury behaviours. For example, in October 2023, just two days before Slovakia's elections, a Facebook post featured an audio recording purportedly capturing a conversation between Michal Šimečka, leader of the liberal Progressive Slovakia party, and journalist Monika Tódová from the newspaper Denník N (Zuidijk 2023). The voices on the recording seemed to discuss plans to manipulate the election, including buying votes from the country's marginalized Roma community. The deepfake was intended to discredit a liberal candidate and bolster support for more conservative and populist factions. Despite quick interventions to expose the fabrication, the rapid circulation of the recording likely contributed to shifting public sentiment and shaping election outcomes in favour of populist forces.

Public panic could also be sown by videos warning of non-existent epidemics, health safety scandals or widespread cyberattacks. In April 2023, the US Republican National Committee released a 30-second video featuring AI-generated images of President Joe Biden and Vice President Kamala Harris celebrating an election night victory (Dorn 2023). The video then depicted simulated scenes of chaos, including explosions in Taiwan, police in tactical gear patrolling San Francisco, an influx of migrants at the southern US border and deserted buildings on Wall Street. Such forged incidents could potentially lead to international political or military escalations. With the proliferation of sophisticated deepfake videos, combined with deepfake backstories and cover-ups, even qualified news reporters, decision makers and diplomats will increasingly struggle to parse propaganda and disinformation from real news. Already, lawmakers across the globe are being targeted for their positioning on geostrategic competitions and conflicts. In 2024, The New York Times reported that an Israeli political consulting firm called STOIC received US$2 million from Israel's Ministry of Diaspora Affairs to influence Democratic members of the US Congress to ensure their support for Israel, at a time when many of these members are questioning continued American military support to Israel amid rising civilian casualties and suffering in Gaza (Jingnan 2024).

However, what truly sets generative AI apart is the potential for vast bot networks to convincingly mimic spontaneous human behaviour. Such automated networks can generate text, images and soon, in all likelihood, video and audio, bolstering the credibility of the messenger and the persuasiveness of the interaction (Marcellino
et al. 2023; Feldstein 2023; Brandt 2023). Individual customization is promised as the next breakthrough, with AI assistants increasingly mimicking genuine interpersonal relationships and potentially replacing or competing with human social bonds (Hiebert 2024). Increasingly, generative AI models will learn to create personalized content in real time through individual interactions with chatbots, leveraging granular population and user data to craft tailored messages that resonate with specific personas. As Kyle Hiebert has eloquently written, this trend could result in "digital siloed forms of existence," nihilism in relation to objective truth, political apathy and "erosion of civic engagement and social capital" (ibid.).

As they infiltrate our routines and daily lives, generative AI models will develop an emerging capacity for decision making, which could be used in dynamic relationships to progressively influence and take control over both targeted and larger audiences. This capacity to influence public opinion with misleading simulations and to mobilize large swaths of populations around aggressive narratives could have powerful long-term implications for maintaining peace and security.

In general, the deployment of generative AI forgery technology will drastically alter the relationship between evidence and truth across journalism, criminal justice, conflict investigations, political mediation and diplomacy. By eroding the sense of truth and trust between citizens and the state — and indeed among states — generative AI's misuse and abuse could become deeply corrosive to democracies, global expertise and international governance systems.

Industrialization and privatization of information warfare

Through sustained campaigns relying on human-like interactions, generative AI can be used to automate content dissemination at low cost and on a large industrial scale.

Large private sector groups and governments are already ramping up investments to develop current and more refined generative AI systems. Leading tech nations will have an unrivalled advantage, as they already power global networks such as Meta, Instagram and TikTok, and can exploit massive sources of behavioural surplus about populations and subgroups. Yet, increasingly, data troves, including routine and sensitive information about civilians, are monetized and acquired by private sector offensive actors, proxies and cybercriminal groups (Pauwels 2024). Previous examples of unregulated, irresponsible innovation have shown the potential risks. For example, Clearview AI, the controversial facial recognition company, has developed a powerful facial recognition algorithm capable of identifying individuals from images taken from the internet. The company claims to have amassed a database of billions of images sourced from social media platforms, websites and other online sources (Hart 2022). The technology works by comparing facial images from these sources against those in its database to generate potential matches, along with links to the source images. This capability has raised serious concerns about privacy, civil liberties and human rights. Techniques such as algorithm and data exploitation leading to misuse and abuse are afforded to both state and non-state actors.

Non-state actors and proxies have increasingly gained access to some generative AI capacities through decentralized Web3 platforms, as well as through acquisition in other ungoverned markets. Similarly to trends in cybercrime and the cyber arms race, a number of generative AI systems are being customized, repurposed through open-source platforms and acquired within dark web communities and underground marketplaces (Pauwels 2024). Open-source AI research boomed in 2023, with AI-related GitHub projects increasing by nearly 60 percent compared to 2022 (Maslej et al. 2023). Meta has published its generative AI model (called "Llama 3") as open source, which means that the model's source code can be modified and repurposed. Dubbed "WormGPT" and "FraudGPT," open-source imitations of OpenAI's GPT models are monetized on the dark web and may already have been repurposed in cyberattacks and fraudulent hacks (Wirtschafter 2024).

Past research on the "industrialization" of cyber offence highlights what experts have detected on the ground: increased forms of trading, collaboration and outsourcing between threat actors, including state proxies, mercenaries and cybercrime groups. Similar dynamics could accelerate and amplify an already emerging trend: the industrialization and privatization of information warfare (McGuire 2021; Pauwels 2024).

The implications of rapidly expanding and unregulated markets for information warfare will be corrosive to international peace and
security, with a potential rise in information operations leveraged by mercenary and terrorist groups. The availability of targeted influencing or large-scale disinformation to anyone who can afford it is already transforming how contemporary conflicts are fought. Both state and non-state actors are drastically empowered through information warfare, but the power relationship between these parties becomes less asymmetrical, with an increased diffusion of power. As a result, the potential beneficiaries of the industrialization of information operations may include private mercenary groups, terrorist groups, transnational illicit networks and proxy forces involved in conflict.

Such diffusion of cyber power could rapidly reach increasing numbers of private sector offensive actors and private groups associated with mercenary activity (Agranovich 2023). For instance, the former Wagner Group was actively involved in spreading global disinformation campaigns and leveraging influence operations (Marten 2022). Harmful implications have already been seen through information operations waged in different countries across the globe. Terrorist organizations may acquire, exploit or outsource services to support their offensive agendas, resulting in an increasing correlation between criminal accessibility and mercenary and terrorist capability. Violent extremist groups and criminal organizations, from Hamas and Boko Haram to Mexican drug cartels, have relied on cyberespionage to infiltrate governments and collect private information about intelligence personnel (Wirtschafter 2024).

Waging Information Warfare: How Can These Emerging Trends in AI-Led Information Warfare Be Harnessed by Threat Actors in Conflict Situations? And What Are the Potential Impacts on Civilian Populations and Military Forces and Their Combatant Strategies?

Information operations have emerged as a powerful threat that aims to undermine the resilience of civilian populations in conflict situations — even in combination with kinetic attacks — and to manipulate public opinion within and beyond national borders. The development of generative AI models means that information operations will likely become more adaptive, interactive and manipulative, waged with both precision and iteration at personal, local and global scales. While these attacks can be aimed at both military forces and civilian populations, a recent body of research shows that civilians are increasingly targeted by hostile operations that manipulate information critical for their survival, influencing their behaviours to the point of causing harm and undermining their security and well-being (Katz 2021; Morris 2024; Burt 2023; Khan 2023; Lahmann 2020; Feldstein 2023; United Nations General Assembly 2022; see Box 3).

Targeting Primarily Civilian Populations

Psychological Operations Undermining Civilian Security in Conflict

In conflict zones, access to reliable information plays a crucial role in civilian protection. In the words of Irene Khan, the UN Special Rapporteur on the promotion and protection of freedom of opinion and expression, "the freedom of opinion and expression, including the right to seek, receive and disseminate diverse sources of information must be upheld by States in times of crises and armed conflict as a precious 'survival right' on which people's lives, health, safety, security and dignity depend" (United Nations General Assembly 2022, art. 19(2), paras. 1 and 5). Individuals affected by conflict are particularly susceptible to the harmful effects of disinformation (ibid.) due to their dire living conditions, pervasive confrontations with violence and limited access to reliable information. Conflict zones have become powerful incubators for disinformation, and hostile state and non-state actors have effectively used disinformation to shape the narratives behind conflicts and impact civilians. Intense social fragmentation and the weakening of public institutions in these settings amplify the impact of disinformation, creating "digital siloed forms of existence" that reinforce the rapid and endemic tactics of information warfare (Hiebert 2024).

In this context, a troubling use of information and psychological warfare adopted by conflict parties involves the calculated manipulation of information critical to meeting human security needs, with
the goal of influencing the behaviours, emotional states and well-being of civilians. Local-level information and psychological operations have had some of the most harmful consequences for civilians — obscuring frontline developments, sowing panic, preventing or hindering civilian efforts to evacuate from conflict-affected areas and deceiving civilians about the availability and functioning of critical infrastructures and emergency services (Spink 2023). Combined with predictive behavioural and emotional analysis, the use of generative AI models could transform such operations into a pervasive and persuasive form of psychological warfare. We have witnessed in recent conflict situations cyber operations that aim to destroy or manipulate the integrity of strategic data sets and the industrial control systems related to cities' infrastructures (Pauwels 2024). Such destructive cyber capacity could be used in combination with targeted forms of information and psychological operations to undermine citizens' security and their trust in needed critical systems.

These operations can have long-term implications for the mental health of civilians by inciting terror, high levels of anxiety and other distressing emotions or mental states. Targeted psychological operations can lead to paranoia, conspiratorial thinking, the constant apprehension that basic human security and family needs are not being met, and a pervasive anxiety about death or injury (Katz 2021). These psychological harms, though harder to document, can induce long-term trauma. By engineering "trust disorders" with public institutions or humanitarian organizations, hostile actors also aim to incite dissent, destabilize society, exacerbate situations of emergency, and undermine the reputation of enemy institutions, including those serving civilians.

For example, in the war of aggression against Ukraine, Russian-affiliated actors have conducted hyper-local disinformation campaigns that have merged with military offensives, targeting specific geographic areas with precision (Giles 2023). In the lead-up to and aftermath of Russia's full-scale invasion, interviews with experts conducted by the Center for Civilians in Conflict revealed that networks of Telegram profiles and channels emerged at the community and oblast levels and that pro-Russian operatives infiltrated local Viber groups, spreading disinformation tailored to these communities (Spink 2023). This hyper-local approach poses significant challenges for Ukrainian officials and civil society to monitor and counteract. Unlike broader strategic narratives, which can be more easily detected, localized disinformation blends seamlessly with the chaotic flow of information during active conflict. This makes it difficult to discern deliberate disinformation from the misinformation that naturally arises in such volatile environments. The synchronization of these localized information operations with kinetic military actions has significantly amplified their impact (Burt 2023; Fedorov 2023). This strategy heightens confusion and panic among civilians precisely when they need to make rapid life-or-death decisions, exacerbating the already dire circumstances of those caught in the conflict.

In the first weeks of Russia's invasion of Ukraine, Russian actors launched a wave of disinformation about frontline developments, manipulating insights about areas under Russian control, including troop numbers and their movements (Spink 2023). Civilians were frequently misled with claims that local authorities had abandoned their roles or capitulated. Furthermore, these actors propagated alarming but fabricated reports of impending offensives, including fictitious threats of nuclear attacks and strikes on nuclear power plants, meant to induce widespread public fear and terror (ibid.).

Russian and Russian-affiliated actors have heavily focused their information operations on manipulating population movements, with the aim of influencing Ukrainian civilians to either stay in Russian-occupied areas or flee toward regions under Russian control (ibid.). These efforts often contribute to forced displacement, a violation of international humanitarian law (IHL) that encompasses coercion, fear and psychological pressure.4 Central to these operations has been manipulating information about options for protection and roads for evacuation. By sowing doubt, these actors seek to hinder civilian attempts to escape, effectively trapping them in areas under Russian influence and exacerbating the humanitarian crisis. To a lesser extent, Russian disinformation efforts have also aimed at undermining access to or the delivery of life-saving services by claiming that hospitals were overwhelmed and that food and electricity would be unavailable in certain zones. Finally, other targeted operations, often using video propaganda, were aimed at persuading Ukrainian parents in occupied

4 See https://ihl-databases.icrc.org/en/customary-ihl/v1/rule129.
territories to send their children to Russia, as part of a broader strategy to relocate as many Ukrainian children as possible to Russia, underscoring the civilian impact of Russia's information campaign in the conflict zone (Spink 2023).

The conflict between Israel and Hamas is another vivid illustration of the harmful impact on civilians that can come from tech-led and social media-driven disinformation, with platforms such as TikTok, X (formerly Twitter) and Instagram being inundated with AI-generated posts (Morris 2024). Social media users have unwittingly dispersed misinformation across multiple platforms, to the extent that even journalists have reported on the conflict basing their sources on manipulated information. Supporters of both Israel and Hamas have accused one another of victimizing vulnerable civilians by spreading forged images that picture the dead bodies of babies and children in order to produce emotional reactions (Klepper 2023). In some instances, pictures from previous conflicts or emergency disaster situations have been modified and presented as current; in others, generative AI programs have synthesized images from scratch, including one of a baby crying amid bombing wreckage that went viral in the conflict's earliest days.

The rationale behind a sophisticated and large-scale disinformation architecture is to immerse citizens in a virtual siloed reality in which they themselves become the producers of information and emotional manipulation. Interestingly, this tactic muddies who is supposed to carry the burden of intent behind waging information warfare.

In conflict situations, the consequences are corrosive, as pervasive forms of disinformation undermine the reliability of all available information, creating chaos and hindering civilians' ability to make safe decisions (Morris 2024). For instance, disinformation campaigns about areas of Hamas operations misled civilians about safe areas, while similar tactics targeting the availability of essential supplies disrupted long-term survival planning. Tamer Morris explains that "when the Israeli government advised civilians to flee certain areas, no other information was provided to ensure safe evacuation, for example safe areas and corridors, implementing an atmosphere of chaos…this particular situation was further exacerbated as Israeli forces cut all telecommunications preventing civilians from sharing immediate information regarding safe passages or shelters, incapacitating their ability to make decisions" (ibid.). Humanitarian efforts, including the UN Relief and Works Agency and Médecins Sans Frontières, have also become the targets of disinformation, eroding public trust, complicating aid delivery and endangering workers.

Behavioural Control in Repressive Regimes and Intrastate Violence

Authoritarian states — sometimes in concert with private sector actors in the global security industry — may misuse and abuse AI and civilian data sets for social surveillance and control and for ethnic and racial profiling, with the ultimate goal of amplifying propaganda efforts and manipulating populations. An increasing number of countries, including repressive regimes, are relying on AI and population data sets to monitor behaviours, implement social control and strengthen their surveillance apparatus (Feldstein 2023). There is a growing "tech assemblage" or "internet of bodies and minds" that can be harnessed to capture people's "behavioural surplus," involving internet and communication monitoring, mobile device hacking, computer interference, financial and geo-tracking, facial recognition, mobile biometric devices and "below-the-skin" technologies such as DNA sampling (Pauwels 2020b).

Adding generative AI to these converging data-capture techniques would allow for the direct targeting of population subgroups — such as partisans, dissenters, youth, women, ethnic majorities or minorities — with tailored, persuasive and interactive forms of propaganda and even psychological engineering. The goals of such targeting may be to inflame existing tensions and incite violence between ethnic and socioeconomic groups; track, deceive and silence opponents; recruit youth into information warfare and armed forces; subdue and repress human rights and women's rights efforts; and anticipate and manipulate subpopulations' movements during protests or social unrest.

Illiberal regimes and their proxies, as well as other violent actors, may combine behavioural monitoring with generative AI models to enhance the persuasive power and credibility of psychological operations that could use realistic impersonations and interactions to deceive specific population subgroups, including dissenters and human rights defenders. As witnessed in cybercrime, the combination of psychometric
tools and emotional engineering using personal data sets can help craft attacks so subtle that they are hardly recognizable as such (Pauwels 2020b).

Behavioural surveillance through algorithmic and cyber techniques is already a pervasive reality of intrastate violence and contemporary conflicts. These practices are reminiscent of the Syrian conflict, which involved several cyber proxy groups, most prominently the Syrian Electronic Army (SEA), which acted in support of the government and President Bashar al-Assad. A 2021 report by cybersecurity and legal experts exposes how, “in conjunction with actively monitoring their own citizens, the Syrian regime, together with third party groups, is hacking websites and individuals critical of the regime” (The UIC John Marshall Law School International Human Rights Clinic [IHRC] and Access Now 2021, 1). The report continues, “Through ‘phishing’ operations, social engineering, malware downloads, and gaining access to passwords and networks through security force intimidation, the SEA and the Assad regime have used these practices to monitor and track down activists and human rights defenders in Syria, who are then tortured and killed” (ibid.).

In 2013, SEA members extracted from a standard messaging application the personal information (phone numbers, email addresses and contact details) of millions of people and leaked the data sets to the Syrian government (Kastrenakes 2013). Other attacks targeting social media platforms and messaging applications led to further breaches of civilians’ sensitive data, including people’s birthdays, personal serial numbers, ID cards, CVs and blood types. The report by IHRC and Access Now claimed that “the monitoring and hacking of devices are suspected to have informed kinetic operations that have cost the lives of many and undermined the crucial work being done by doctors and human rights defenders” (2021, 21). The deceptive tactics used by the SEA include social engineering and impersonation to manipulate anti-Assad activists into revealing the identities of dissidents and meeting locations. In the wake of such pervasive surveillance practices, surgeons and doctors have been advised not to provide medical mentorship over the internet to colleagues in Syria for fear of revealing the location of sheltered and underground hospitals (Baraniuk 2018).

Scaled-Up Information and Influence Operations in Ethnic Conflict

Hostile states and their proxies, violent extremists and other threat actors may increasingly rely on influence operations to increase political polarization and sow social unrest and ethnic conflict. Combined with predictive behavioural monitoring, generative AI models could identify the emotional triggers that push subgroups to violence and tailor disinformation campaigns and psychological manipulation techniques to be harnessed by factions in conflict, from ruling elites and political parties to terrorist groups. Lack of safeguards in social media networks and immersive digital spaces could enable state and non-state actors alike to manipulate individuals’ deepest fears, hatreds and prejudices. For instance, violent extremist groups may spread false claims of violence committed by their enemies to inflame tensions and gain sympathy for their cause. When the Islamic State (IS) increased its power and visibility through social media, its violent propaganda, which used doctored videos and AI bots to magnify messaging, resulted in a wave of online emotional warfare (Ward 2018; Alfifi, Kaghazgaran and Caverlee 2018). The violent anti-Islamic backlash that followed was then instrumentalized for the group’s recruiting strategies.

Applying generative AI to population behavioural data could drastically enhance methods and techniques in influence operations by supercharging the strategic communication environments in which conflicts play out. Both state and non-state actors can already feed their own narratives and mis- and disinformation to their constituents both within and across borders. Russian troll factories outsourced business to trolls in Ghana and Nigeria working to foment racial tensions around police brutality in the United States ahead of the 2020 election (Ward et al. 2020). In India’s West Bengal region, Rohingya refugees have been demonized by the same kinds of extreme threats and online hate mongering that caused them to flee Myanmar (Goel and Rahman 2019). In Kenya and South Africa, disinformation and hate speech, manufactured in part by political elites, inflamed the racial and socioeconomic divisions that have plagued both countries for decades (Segal 2018). With AI technologies that can synthesize media from scratch, including graphically violent video propaganda, the art of emotional manipulation could become ever more powerful

14 CIGI Paper No. 310 — December 2024 • Eleonore Pauwels


and has the potential to inflict harm on specific ethnic groups and other vulnerable communities.

For example, since February 2022, pro-Russian social media outlets have propagated narratives aimed at inflaming social and linguistic tensions between population subgroups living in the western and eastern parts of Ukraine (Spink 2023). Information operations have raised concerns surrounding Russian-speaking internally displaced persons, alleging attacks, exorbitant rental fees and challenges in accessing education in western Ukraine. Additionally, stories have emerged claiming a disproportionate conscription of individuals from eastern regions of Ukraine into the military and unfair electricity rationing between western and eastern areas of the country. These narratives can significantly impact civilian well-being and social cohesion, perpetuating or worsening societal divisions, discrimination and violence.

Weakening Global Alliances and Public Support to Conflict Parties

In present and future information warfare, authoritarian and hostile states have a strategic interest in influencing large public audiences and distorting global perceptions of a conflict. Their goals include degrading access to trustworthy information, manipulating narratives, persuading global audiences of the futility of the fight and weakening strategic political alliances and public support for a conflict party. For instance, influence operations backed by Russia’s Federal Security Service and other state-affiliated proxies have relied on “flooding social media platforms with misleading messages around the need for the ‘denazification’ of Ukraine and accusing the United States of creating bioweapons in clandestine laboratories in Ukraine” (Burt 2023, 15). As Annie Fixler underlines, “Russia has adjusted video evidence to deny war crimes, deployed operators on social media to create fake personas and news sites, and hacked user accounts to promulgate disinformation” (Fixler 2023, 11).

Beyond supercharging these global battles of influence, the development of generative AI models could accelerate and amplify synthetic data and forgeries to obfuscate criminal and state responsibility in the conduct of kinetic war and potential IHL violations. The industrialization of information warfare could result from generative AI’s trends, including democratization, automation and outsourcing (with information warfare becoming a global and partially to fully automated “cybercrime as service”). Ultimately, all of these trends will make it increasingly complicated to trace the source and the supply chains of information operations, establish evidence and truth, and obtain material proof of instructions, directions or control in order to attribute state and/or criminal responsibility across jurisdictions.

Russia’s strategic use of information operations to undermine Western support for Ukraine has been particularly evident in Germany, a key player in the European response to the conflict (Watts 2022). By exploiting societal divisions and economic fears, as well as leveraging a sophisticated blend of disinformation and propaganda via both traditional and social media, Russia has sought to erode the resolve of one of Ukraine’s key European allies. Germany’s significant Russian-speaking population, a legacy of historical migration patterns, has been the Russian media’s key target, with tailored content that reinforces pro-Kremlin viewpoints and disseminates disinformation directly aligned with Moscow’s strategic goals. This approach is intended not only to bolster support for Russia within this community, but also to create a potential internal pressure group that can influence broader public opinion and political discourse in Germany.

Targeting Military Personnel and Operations

Adversarial Information Manipulation to Thicken the Fog of War

In military domains, generative AI models are used to pioneer new ways to synthesize intelligence data and support human decision making, provide high-level strategic recommendations and new problem-solving techniques, generate different plans of attack and organize the jamming of enemy communications (Feldstein 2023; Stewart and Hinds 2023). Integrated into intelligence, surveillance and reconnaissance scenarios, generative AI models could support target tracking in drone missions. These AI capacities will also likely enhance the operation of a new wave of low-cost, adaptive and modular autonomous weapon systems that are designed to kill on a rapid scale (Federspiel et al. 2023). If misused



and abused in emerging forms of hybrid warfare that do not respect the rules of engagement by targeting non-military objectives, the number of resulting civilian harms could be unprecedented.

Military, legal and humanitarian experts have drawn attention to the promises of these kinds of AI decision support systems if used with a human-centred approach (Stewart and Hinds 2023; ICRC 2021): they could be used to foster the protection of civilians (by recognizing distinctive emblems and alerting forces about the presence of civilian populations), increase situational awareness and accelerate decision-making cycles. But these experts have also highlighted the potential problems and limitations of these generative AI systems, such as a greater reliance on rapid AI-generated analysis detached from battlefield observation and human experience; a subsequent accidental or intended escalation; the unpredictable, risk-prone properties of these systems; and the challenges for humans interacting with AI reasoning (in particular, the problem of automation bias). All of these challenges show that for military forces, assessing and trusting the application of generative AI models will be a complicated decision.

In strategic military situations, one corrosive use of information warfare could be to wage adversarial attacks on generative AI models via the manipulation of data and signals. Such attacks could both poison the training data sets or the flow of insights captured into the system and manipulate its functioning, performance and predictive value. For example, ICRC experts posit “adversarial techniques [that] could conceivably be used in conflict to affect a targeting assistance system’s source code such that it identifies school buses as enemy vehicles, with devastating consequences” (Stewart and Hinds 2023, 2). Adversarial attacks could virtually alter the performance and reliability of generative AI models in their different military configurations, from strategic and logistic planning and training and decision making, to intelligence, surveillance and reconnaissance, command and control, cyberoperations and autonomous weaponry. With adversarial attacks, the increasing dependence of military forces on generative AI could thicken the fog of war, undermining in-depth human understanding and situational awareness, and compromising decision making, alternative options and on-the-ground operations.

What could ultimately be engineered for large-scale harm is the entire “intelligence life cycle” irrigating military operations, from inside tacit military expertise to large collected data sets, including extremely sensitive information about civilian populations and critical targets and infrastructure. Failure by military institutions to prevent adversarial attacks or the misuse of generative AI could be exploited subsequently by the enemy in further waves of information warfare targeted at destroying trust in local and global audiences.

The merging of the cyber arms and cybercrime industries is leading to a proliferation of dual-use expertise that can be harnessed to repurpose and re-engineer existing cybersecurity and AI systems. In this context, adversarial attacks could be performed by malicious actors with relatively sophisticated AI and cybersecurity skills or those able to acquire this knowledge, and could increasingly integrate the offensive tool kit of state and non-state violent actors, advanced persistent threats and cybercrime groups acting as proxies (Pauwels 2024).

Psychological Operations Undermining Resilience of Military Forces

The integration of generative AI into psychological operations offers unprecedented avenues to manipulate military forces in ways that can profoundly impact combat strategies, disrupt command and control and undermine resilience through emotional engineering. By simulating authentic human communication patterns and producing deepfake audio and video, AI can fabricate convincing messages purportedly from military leaders or trusted sources, potentially causing chaos and eroding resilience within opposing forces (Byman et al. 2023; Fecteau 2021).

AI-driven disinformation could be used to mislead enemy forces about strategic decisions, impacting combatant strategies. For instance, generative AI could create convincing but manipulated intelligence reports or communications that suggest a non-existent troop movement or supply route. By hacking a combination of personal and official communication channels and feeding this fabricated information to enemy analysts, military forces might be misdirected to either defend or attack the wrong locations, thereby compromising their operational effectiveness. Additionally, AI-generated deepfake videos or audio messages from supposed high-ranking



officers could order troops to execute defective strategies, leading to failures on the battlefield.

Generative AI can be instrumental in creating chaos within the command structure of an enemy force. By producing forged communications that appear to come from legitimate military sources, AI can sow confusion and mistrust among commanders and their subordinates (Fecteau 2021). For example, a deepfake video of a commanding officer issuing contradictory orders could lead to paralysis and indecision among troops. Furthermore, AI can generate false alerts about imminent threats, causing units to constantly reposition or retreat, thereby exhausting resources and morale. Automated bots and generative AI assistants could flood communication networks with manipulated reports, overwhelming the command’s ability to process real-time information and making it difficult to execute coordinated manoeuvres (Lahmann 2020).

Generative AI can also tailor psychological operations to exploit specific vulnerabilities in an adversary’s cultural or social fabric, undermining its resilience. By analyzing vast amounts of data from social media and other digital footprints, AI systems could identify key psychological triggers to craft personalized propaganda and emotional engineering strategies to manipulate soldiers. Generative AI could also create realistic but forged video or audio messages and social media posts from family members suggesting personal crises, health emergencies or threats at home. For example, a soldier might receive a highly convincing deepfake video call from a loved one, fabricated to appear as if they are in immediate danger, prompting distraction, distress and a compromised focus on their duties. Military units could also reveal their positions by attempting to connect with audiences at home. Additionally, AI-driven misinformation campaigns could spread rumours about widespread threats to soldiers’ families, leading to heightened anxiety and decreased morale across the ranks.

By deploying AI and generative AI in these targeted and sophisticated ways, adversaries can significantly degrade the operational effectiveness, cohesion and resilience of military forces (Byman et al. 2023). These psychological operations exploit the very fabric of human trust and communication, making them a powerful tool in modern warfare.

Behavioural Engineering to Recruit Youth in Proxy Forces

Combining generative AI with the profiling of population data could lead to new forms of behavioural engineering that could become pathways to recruitment into cyber and information warfare, as well as into kinetic warfare waged by armed forces and non-state armed groups. Russia’s ongoing war of aggression toward Ukraine confirms the proliferation of proxy groups that have engaged local and foreign remote hackers in offensive cyber and information operations on behalf of both parties to the conflict (Pauwels 2024). Cybersecurity experts have talked about the increase in young, cyber-skilled populations available for deployment by cyber proxy groups and states (McGuire 2021, 7). When adolescents are recruited or engaged in offensive cyber operations, their status may convert to that of an active combatant and they may become legitimate targets for retaliation. They may also unwittingly be part of conduct that involves war crimes.

The advent of generative AI, and its trends toward hyper-personalization and mentorship, could act as a catalyst to recruit different demographic groups, including youth, into information warfare, further blurring the lines between civilian and military functions and complicating responses by military forces (see Box 5). In the context of youth recruitment, socio-technical pressures could arise from the capacity for violent actors to exploit young users’ data profiles and spheres of communication. The convergence of generative AI and cyber surveillance could amplify the ease with which these actors are able to reach out to vulnerable groups and scale both their digital involvement and physical enlistment with non-state armed actors. Through generative AI techniques in social networks and immersive digital spaces, young users could become psychologically isolated from traditional support systems and the victims of emotional targeting based on viral video propaganda, impersonations and group pressure on networking platforms.

Evidence from the field confirms that the phenomenon of online recruitment has continued to grow, contaminating rising tech platforms, such as TikTok, and reaching ever younger generations (Pandith and Ware 2021; Meisenzahl 2019). In recent years, governments and intelligence services have gathered evidence that gaming platforms that incorporate voice-to-voice video conferencing, chat and messaging services are incubators for non-state actors and violent groups to communicate propaganda and groom young recruits across different regions and cultures (Concentric 2019). Exploiting online gaming platforms is a method reportedly used by a diversity of actors, including IS, the Lebanese group Hezbollah and white supremacist groups across Europe and the United States. Security firms have reported that mentorship on how to maximize gaming platforms to profile and enlist new members is part of strategic recruiting discussions on IS’s deep web forum (ibid.). As Joseph Guay and his co-authors write, “Social media can also be a vehicle to facilitate both kinetic and digitally derived forms of violence in which cyber militias have engaged in online defamation campaigns and have weaponized rumours and false information to incite panic and/or violence” (Guay et al. 2019, 52).

Singer and Brooking have powerfully summarized how the weaponization of social media has “represented a momentous development in the history of conflict” (Singer and Brooking 2018, 9). Young internet users have been instrumentalized in online “Twitter wars” that could help shape their perceptions on real conflicts taking place on the ground. When IS increased its visibility and reach through social media, its violent propaganda, which sometimes featured children perpetrating executions and other acts of violence, was further exploited to fundraise as well as recruit and train new members (Almohammad 2018). With youth constituting a specific demographic that has been increasingly targeted as future perpetrators of disinformation and hate speech, network effects and immersive digital spaces could then augment the risk of adolescents becoming increasingly active in recruiting others to participate in information warfare and physical acts of violence. In the near future, enhanced by mentorship and personalized interactions brought by generative AI, these recruitment dynamics would result in both individual violations of rights and collective harms, and would have potentially corrosive implications for military forces’ plans.

Scenario: Information Warfare on Biological Threats

An even more powerful and radical shift will come from the convergence of generative AI with other dual-use technologies and its integration within infrastructures that are critical to national security. For instance, generative AI is used in cybersecurity to improve threat detection and response and predict future polymorphic attacks. But the same AI models could also help conceptualize and plan how to modify, deliver and disseminate biological agents (Sandbrink 2023). In the near future, it is also likely that generative AI models will merge different types of “live scientific mentorship” (text, audio, immersive video) that could increasingly support lab work for less sophisticated threat actors.

An increasing number of threat actors could therefore access and process dual-use knowledge that could subsequently be used in information warfare to credibly impact both military and civilian populations. What is at stake is the weaponization of dual-use knowledge itself, and possibly all forms of dual-use expertise developed by human civilization. This is particularly salient in the case of AI and biotechnologies, for which there is a false dichotomy of dual use, as almost all aspects of both of these technologies that are deployed in service of human security can also be subverted for misuse by hostile actors.

The convergence of generative AI and biotechnology could be weaponized to create disinformation campaigns about biological threats, tailored to destabilize civilian populations and undermine trust in public health systems and governing institutions (Gisselsson 2022). With these aims in mind, information warfare would not be waged to impact military contingents and achieve kinetic advantage, but instead to cause a massive psychological impact on civilian populations and weaken allied countries’ support. The below hypothetical scenario explores how such tactics might reach strategic effects in a situation of protracted conflict between two states (see Box 4).

Imagine a conflict in which a hostile state leverages generative AI to craft a sophisticated disinformation campaign, exploiting advances in biotechnology.



The adversary aims to create panic and chaos within the civilian population of a targeted nation by fabricating threats of biological attacks.

Phase 1 — Crafting the narrative: Relying on the expertise and mentorship of a generative AI lab assistant, the hostile actors access critical, but vulgarized, knowledge about biotechnology and learn enough about how to conceptualize and plan a simulated release of biological agents. Then, using LLMs, the hostile actors create detailed and realistic scenarios involving the release of a genetically engineered pathogen. These scenarios are carefully tailored to resonate with existing fears and vulnerabilities within the target population. Generative AI models synthesize a series of fabricated news articles, social media posts and manipulated scientific reports, all suggesting that a lethal, highly contagious virus has been released in key urban centres. For example, AI-generated content might describe a supposed outbreak of this novel virus in a major city, combined with deepfake videos quoting medical experts and government officials, showing overwhelmed hospitals and quarantine zones. The content would then be distributed through a variety of channels, including social media platforms, fake news websites and even hacked legitimate news outlets.

Phase 2 — Amplifying the disinformation: To ensure that the disinformation spreads rapidly, the adversary employs bots and trolls to share and comment on the AI-generated content. These automated operatives flood social media with alarming posts, creating trending topics and hashtags that draw widespread attention. Enhanced by generative AI, the bots engage in personal and group discussions using scientific and vulgarized arguments, counteracting attempts to stop disinformation with formal investigation and posing as concerned citizens or medical/expert whistle-blowers, sometimes even impersonating individuals’ close contacts: all tactics designed to amplify a sense of urgency and panic. In tandem, the adversary hacks into local news stations and inserts forged breaking news segments about the outbreak. These segments are crafted to look as authentic as possible, with realistic graphics and credible-sounding reports, further blurring the line between reality and fiction.

Phase 3 — Exploiting AI and biotechnology: To lend credence to their claims, the hostile state uses a generative AI lab assistant and accesses AI-led bio-design tools to create real but non-lethal and unfamiliar strains of biological agents. These agents are released in select locations, causing noticeable symptoms similar to those described in the disinformation campaign. When people in these areas start experiencing symptoms, it fuels the belief that the fabricated outbreak is real. Moreover, the adversary plants and releases (using micro-drone technologies) manipulated biological samples in hospitals and research labs, showing the presence of the engineered pathogen. These samples are designed to be detected by standard testing methods, leading to false positives that further corroborate the forged reports. Cyber proxies of the hostile state conduct a series of sophisticated adversarial attacks on health and biotech infrastructures to disrupt the production of prophylactic medicines, as well as to suppress or manipulate the content of data sets used in medical and epidemiological reporting.

Phase 4 — Medical and psychological impact on civilians: As news of the outbreak spreads, public trust in health authorities begins to erode. People flock to hospitals, overwhelming health-care systems that are already strained from the protracted conflict. Pharmacies run out of basic medical supplies as panicked citizens try to stockpile what remains of medications and personal protective equipment.
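The “false positives that further corroborate the forged reports” in Phase 3 rest on simple base-rate arithmetic: when an agent is genuinely rare, even a good test yields mostly spurious positives, so a handful of planted samples blend into a fog of misleading results. The sketch below uses purely illustrative numbers (no real test, pathogen or outbreak is modelled):

```python
# Base-rate (Bayes) arithmetic behind the "false positives" effect.
# All figures are hypothetical, chosen only to illustrate the reasoning.

def p_true_positive(prevalence, sensitivity, false_positive_rate):
    """Probability that a positive test result reflects a real infection."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A rare agent (1 in 10,000 people) screened with a seemingly good test
# (99% sensitivity, 1% false-positive rate):
share_real = p_true_positive(prevalence=1e-4, sensitivity=0.99,
                             false_positive_rate=0.01)
print(f"{share_real:.1%} of positive results reflect a real infection")
# prints: 1.0% of positive results reflect a real infection
```

In other words, at these illustrative rates roughly 99 out of 100 positive results would be spurious, which is why forged lab reports can appear “confirmed” by standard testing once mass screening of a rare agent begins.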



Box 4: Reverberating Impacts on Vulnerable Groups

For groups in situations of vulnerability, including children, pregnant women and persons with disabilities, access to specific medical, child and disability care services could be strictly limited and the subsequent health and psychological impact harmful. For instance, in contexts where disinformation would blur the nature and scale of a biological attack, pregnant women and children may not benefit from appropriate medical countermeasures or may undergo unnecessary stressful procedures. For children and women in situations of vulnerability, the reverberating implications of a public health crisis — caused by information warfare around a biological event — may also include hindering access to food support services, schools and other needed resources, exacerbating the harmful psychological impact.

The disinformation campaign also targets specific communities with tailored messages, exploiting existing socioeconomic and ethnic tensions. For instance, messages in predominantly immigrant neighbourhoods might suggest that the outbreak is being used as a pretext to enforce repressive controls and commit violence. In rural areas, the disinformation might claim that urban elites are being prioritized for treatment and vaccines. The resulting chaos hampers the government’s ability to respond effectively. Emergency services are stretched thin, and misinformation about safe practices spreads rapidly, undermining public health efforts. Civil unrest begins to simmer as people demand answers and accountability from their leaders. Public health institutions struggle to regain their authority. General vaccination rates drop as anti-vaccine sentiments, bolstered by the disinformation campaign, take root. The societal divisions exacerbated by the tailored messages deepen, making it harder for the nation to recover and heal. In allied countries of the victim state, levels of political and public support drastically drop as paralysis is fuelled by fears of a spreading epidemic amid a lack of understanding on the origin of the biological threat.

These converging security risks would have corrosive implications for every country, but particularly those that have poor and outdated medical, biotech and cyber infrastructure or those that have a limited capacity to protect their vulnerable populations from the weaponization of pandemic and technological threats in situations of protracted conflict. The COVID-19 pandemic has provided state and non-state hostile actors with a real-time window into societies’ strengths and weaknesses in emergency situations. The pandemic has shown how a biological threat could break down hospitals and food supply chains, shatter citizens’ trust in public institutions and bring social unrest, disinformation and even violence. In a similar way, information operations leveraging access to dual-use expertise and mentorship provided by generative AI and threatening large-scale casualties could be used to multiply the threat in hybrid warfare scenarios. The most enduring harm would be to civilian resilience and trust: trust in public health institutions, emergency data systems, laboratories, hospitals and critical infrastructures. As generative AI learns to democratize strategic military and civilian expertise and tacit knowledge in complex technological domains, such capacity will change not only the scale, but also the nature and power of information warfare, impacting dual-use knowledge asymmetries between threat actors involved in conflicts.



Legal Section and Concluding Thoughts

Information warfare has become a powerful business and a pervasive threat, global in scope with real and unprecedented ramifications for the security and survival of civilian populations in armed conflict.

Three sobering observations can be made based on how information warfare is waged in modern conflicts. First, with Russia’s extensive use of hybrid warfare techniques in the war in Ukraine, we witness how information and psychological operations can be combined with cyberattacks and integrated within kinetic warfare.

Second, to a more pervasive extent than ever before, the manipulation of information is purportedly designed to significantly impair civilians’ decision-making process for self-protection and survival (Morris 2024; Katz 2021; United Nations General Assembly 2022). When information is weaponized that way, it impacts critical elements of civilian protection, causing direct and indirect harm to populations, in particular vulnerable groups such as women, youth/children as well as humanitarian and emergency personnel. As highlighted by Khan, modern armed conflicts and information and psychological operations, including those inciting violence, increasingly target civilian populations rather than military forces (United Nations General Assembly 2022). It follows from this deeply worrisome trend that the core doctrine regulating armed conflicts within IHL — the protection of the “person” — is disregarded and now often violated.

Third, the advent of generative AI will rapidly lead to an industrialization of information warfare, giving states and their proxies, non-state armed groups and other violent actors, enhanced, adaptive, self-refining techniques to influence and manipulate human behaviour in conflict. This new diffusion of power will first manifest through dynamic, interactive and persuasive ways to influence populations captured in digitally siloed forms of existence (Hiebert 2024). In future conflicts, there could be even more techniques enhanced by generative AI to manipulate people’s ability for self-determination, self-protection and decision making in emergencies. Yet, as shown in the scenario, another wave of implications will come from the convergence of generative AI with other sensitive technologies and subsequent democratized access to dual-use knowledge and expertise, which could be exploited in future information warfare.

Increasingly, international legal experts recognize the need and urgency of clarifying the existing rules imposed by IHL on offensive information operations (Gisel, Rodenhäuser and Dormann 2020; FP Analytics 2023). It is equally urgent to weigh whether the current international legal framework adequately captures the humanitarian and civilian protection needs that arise from waging information warfare in conflict. These goals go beyond the purpose of this paper. Yet, it proposes to succinctly review some of the protective measures afforded in IHL and some of the legal ambiguities and critical protection gaps.

IHL

Manipulating information is not a new phenomenon of warfare. As long as they infringe no rule of international law applicable in armed conflict, deceptive information and psychological operations have been allowed in past hostilities, including misinformation as a ruse of war and the use of propaganda targeting civilians (Rodenhäuser and D’Cunha 2023; Katz 2021; Lahmann 2020). Yet, as eloquently underlined by Morris, “while there is no doubt that ruses of war and propaganda are permissible under IHL, this does not mean that all deceptive conduct is legal” and “this does not consequentially permit all information warfare in the future” (Morris 2024, 2). In the words of ICRC legal experts, “we must recall that there is a red line between an information operation that complies with IHL and one that violates it” (Rodenhäuser and D’Cunha 2023, 1). Since IHL is fundamentally aimed at protecting civilians, its provisions should be interpreted with this principle in mind. Therefore, information warfare must be governed by conflict parties’ obligations to civilians under IHL: these obligations bind both state and non-state parties, including proxy actors, political parties or other forms of civilian leadership.

The prohibition on encouraging unlawful violence entails that “civilian or military leadership of a party to an armed conflict must not order or encourage IHL violations by their own forces” or by groups of civilians when the commission of such violence is foreseeable (ibid.). To illustrate, a state party to conflict would violate its obligations under IHL if it conducted information and/or psychological operations to incite combatants or civilians to attack and harm other civilians and civilian objects
(Lahmann 2020). Information operations that incite or intently lead to violent attacks against humanitarian organizations are also prohibited under IHL. Conflict situations that involve inter-ethnic tensions, the proliferation of proxies and the tacit reliance on armed groups as surrogates for attacks could be prone to information warfare as incitement to violence.

A limited range of severely harmful types of information and psychological operations could fall under the protective reach of IHL if they amount to prohibited acts or threats of violence, the primary purpose of which is to spread terror among the civilian population. Two considerations may limit the effective application of this rule to modern information warfare (Lahmann 2020). First, information or psychological operations would not meet the threshold if they do not come with an actual or threatened act of violence, which might disqualify many operations even if they result in extreme fear among the civilian population. Second, legal experts would need to demonstrate that the main purpose of the act or threat of violence is to spread terror and that no other motives or objectives are predominant. The case study described in this paper presents a situation where the spreading of fear and terror related to biological threats is exploited for civilian destabilization and for weakening their trust in experts and governing institutions. In regard to the two above thresholds, it remains unclear whether such a scenario would clearly qualify as an act or threat of violence to terrorize the population, even if its impact could cause harm to civilians. While most instances of information warfare may not be as clear-cut as in the following example, ICRC legal experts provide an illustration involving a cyber intrusion into digital networks that “propagate false air raid alarms” to “keep inhabitants in a state of terror, or to displace them” (Rodenhäuser and D’Cunha 2023, 3). When the target of spreading terror is military forces, IHL also provides certain limits, including that “threatening to kill, rape, torture, or ill-treat captured or wounded soldiers is a violation of IHL” (ibid.).

Legal ambiguities remain as to whether a certain level and type of information operations may meet the “attack” threshold under IHL, subjecting it to the rules on targeting, such as the principles of distinction, proportionality and precaution (Lahmann 2020). It is relevant to compare with existing discussions on what constitutes a cyberattack in IHL. Important technical questions persist about how to define and qualify — in the context of an armed conflict — technical terms such as “attack” when they rely exclusively on cyber and digital means. Yet, it is increasingly recognized that cyberoperations designed to bring physical destruction or death meet the attack threshold. In its 2020 position paper, the ICRC underlines the importance of considering “harm due to the foreseeable direct and indirect (or reverberating) effects of an attack, for example, the death of patients in intensive care-units caused by a cyberoperation on an electricity network that results in cutting off a hospital’s electricity supply” (Gisel, Rodenhäuser and Dormann 2020, 313). Consensus among states is still lacking as to whether cyberoperations that would not cause physical damage but would result in disruption and loss of essential services, or in erosion of public trust in critical systems, would qualify as an attack and thus violate IHL. Such discussion is relevant to this paper’s case study, in which information operations are used in a conflict to manipulate knowledge and information about purported biological threats to the extent of destabilizing civilians, leading them not to seek or trust medical care and pushing them to endanger their health.

While the issue is still debated, some experts argue that “just like other types of military violence, if the causal nexus between an instance of disinformation and physical harm is sufficiently strong so as to render such operation an attack, it must respect the distinction, precaution, and proportionality triad” (Lahmann 2020, 1241). It is likely that legal ambiguities would remain as to the “causal nexus,” as well as to the matter of scale and effect and meeting thresholds.

Box 5: Recruiting Youth in Information Warfare

To prevent youth’s recruitment and use in information warfare, the CRC Optional Protocol on the Involvement of Children in Armed Conflict, relevant Security Council Resolutions and other related normative standards (Paris Principles) could serve as a basis for legal interpretation and protection of children and adolescents. In addition, the International Criminal Court (ICC) includes in its list of war crimes the active involvement of children in hostilities. Further legal interpretation would be needed to determine under which specific conditions recruitment of adolescents and children into information operations would be prohibited under the Optional Protocol and other mechanisms. In particular, legal experts would need to clarify whether recruiting children to become perpetrators of offensive information operations could constitute, in certain circumstances, direct participation in ongoing hostilities. To ensure that children are not recruited or used in conflicts, including armed conflicts, through cyberspace, the Committee on the Rights of the Child encourages states parties to better control, even criminalize and sanction, the forms of behavioural targeting and grooming of children that are enabled by digital technologies on social networking platforms and online games (United Nations Convention on Rights of a Child 2021).

International Criminal Law

A limited category of harmful cyber and information operations in situations of armed conflict may also be regulated under international criminal law, which applies to any natural person who commits an international crime. Under this regime, individuals or groups engaging in information warfare may be prosecuted for conducting information operations that would constitute war crimes, crimes against humanity and genocide. To prove individual responsibility for these international crimes, two elements have to be established: actus reus (the physical parts of the crime) and mens rea (the intent to commit the crime). The principle of command responsibility (article 28 of the Rome Statute), established in customary international law, stipulates that military commanders may be held criminally responsible for crimes committed by armed forces under their effective command and control.5

5 Command responsibility is a jurisprudential doctrine in international criminal law permitting the prosecution of military commanders for war crimes perpetrated by their subordinates. The first legal implementations of command responsibility are found in the Hague Conventions IV and X. See ICC (2021, art. 28).

For several reasons, discussions on how offensive cyberoperations could be regulated under the Rome Statute are relevant to information warfare. First, different definitions of information warfare coexist with several aspects of cyberoperations, including the manipulation of information, ranging from raw data and signals to complex concepts and ideas. Second, information and psychological operations might become increasingly combined with cyberattacks and even integrated within kinetic warfare. In practice, there is a permeability between offensive cyberspace tactics, where data exfiltration and monetization, cyber intrusion and espionage can support and merge with methods of information warfare. For instance, as shown in the technical section, a cyber intrusion would likely be needed to launch an adversarial attack that would compromise the functioning and the integrity of data within AI models in civilian or military systems. Under the impulse of convergence with generative AI, the permeability and overlap between cyber and information operations will likely increase. Third, in legal reasoning and proceedings, it might be more strategic to consider these “digital means” as having synergistic and cumulative impacts.

In 2019 and 2020, a Council of Advisers’ Report on the Application of the Rome Statute of the ICC to Cyberwarfare provided critical insights into how the ICC may regulate cyber and information operations that have the potential to cause grave suffering for the civilian population, including suffering equal to that caused by the most serious international crimes (Permanent Mission of Liechtenstein to the United Nations 2021). For instance, members of the Council of Advisers confirmed that an adversarial attack altering or
deleting civilian medical data may be considered a violation of IHL, and therefore possibly a war crime (ibid., 39). The council also specified conditions under which cyber and information operations could lead to crimes against humanity: for instance, by inflicting serious and systematic harm to the mental health of a targeted group to the extent that it would amount to torture or persecution (ibid., 65–67). In particular, the council agreed with the UN Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment that “cybertechnology can also be used to inflict, or contribute to, severe mental suffering while avoiding the conduit of the physical body, most notably through intimidation, harassment, surveillance, public shaming and defamation, as well as appropriation, deletion or manipulation of information” (Bowcott 2020). Regarding the crime of genocide, members of the council concluded that cyber and information operations may not only contribute to severe psychological and mental harm, but also help initiate and amplify physical acts of violence that could threaten the destruction of a specific minority (Permanent Mission of Liechtenstein to the United Nations 2021, 80–88).

Critical Civilian Protection Gaps

What surfaces through legal analysis is that existing international legal frameworks might not be adequate and comprehensive enough to address the emerging issues posed by information warfare and the converging AI and dual-use technologies it involves. While a few provisions in existing IHL impose constraints on information and psychological operations, these rules rely on definitions, criteria and thresholds that do not necessarily reflect the way that information warfare is integrated with hybrid tactics in cyberspace and waged in modern armed conflict (Lahmann 2020). With a rigid approach to the application of IHL, we will face legal ambiguities and grey areas that allow information and psychological operations to continue harming civilians (Katz 2021; United Nations General Assembly 2022).

Several key challenges and critical civilian protection gaps persist that will require attention in the coming years. First, when information and psychological operations are conducted in armed conflict, IHL provisions might not adequately cover instances of harm that are pervasive but difficult to attribute and qualify legally, such as exposure to foreseeable violence, the manipulation of information undermining civilian security and well-being, and mental suffering. The conduct of information warfare can increasingly be automated and waged remotely via outsourcing to proxies, yet the rules of war apply to areas that are controlled by conflict parties (Katz 2021). Because disinformation often causes harm indirectly, it is unlikely to be classified as an attack or act of violence under IHL, nor would it be considered incitement unless it explicitly advocates violence or hostility. Next, although disinformation can cause direct harm to mental health, these injuries are difficult to assess and document in real time and are not adequately or sufficiently considered in IHL. Finally, while adversarial information operations can impede the functioning of civilian infrastructure, they often do so in ways that do not constitute an attack under existing legal definitions.

Second, existing international legal doctrines do not cover an array of offensive information and psychological operations because they do not clearly qualify as mere instances to terrorize or incite violence, even if they aim to significantly degrade the integrity of the information ecosystem during armed conflict (Lahmann 2020). The goals of these operations are to systematically target civilian populations by weakening resilience and trust in governing institutions, undermining self-determination and decision-making processes and even exploiting the democratization of dual-use expertise to wage powerful forms of information warfare that could lead to large-scale destabilization and insecurity. The nature, scope and impact of these manipulative operations, along with their enduring divisive and corrosive effects on public trust and societal stability, underscore the need for greater scrutiny and attention, particularly when they are waged during armed conflict.

And in the case of information warfare that targets critical elements of civilian protection and infrastructures and integrates with kinetic military operations, there might be an argument to make about the need to assess its intensity and potential cumulative impact on resilience and survival and the subsequent physical and mental suffering it has imposed on the civilian population. Similar legal discussions about considering cumulative quantitative and qualitative impacts of offensive cyberoperations in armed conflict — and their qualification as “cyber war crimes” — are happening under the auspices of the ICC (Khan 2023). The prosecutor of the ICC highlights that,
“as states and other actors increasingly resort to operations in cyberspace, this new and rapidly developing means of statecraft and warfare can be misused to carry out or facilitate war crimes, crimes against humanity, genocide, and even the aggression of a state against another” (ibid., 50). Yet, ICC proceedings require very high evidentiary standards for attribution, and this is particularly relevant to the involvement of proxies in information warfare. How to qualify, document and attribute international crimes in the digital context and how to proceed across jurisdictions will continue to create legal ambiguities and challenges. We need a whole-of-society response for these types of attacks that affect us all.

Third, with the advent of generative AI and other dual-use technologies, as well as the weaponization of cyber capabilities, we face the prospect of a rapid proliferation, commoditization and privatization of information warfare. Already, nation-states are outsourcing information and psychological operations to a growing number of cyber proxy actors, including in armed conflict. At the same time, cyber proxy activity is becoming increasingly difficult to decrypt, trace and attribute (Pauwels 2024). The frameworks used to categorize forms of deputization in cyberspace do not adequately capture the increased permeability and intense knowledge and technical transfer that exist among non-state actors, as well as between state and non-state actors (ibid.). The polymorphous, multi-jurisdictional nature of cyber proxy activity therefore drastically complicates technical and, to an even greater degree, legal attribution of wrongful conduct in cyberspace. The consequence is that deniability remains more than ever a winning strategy for states using cyber proxies to wage information warfare and advance their geostrategic interests, particularly in the absence of an independent and multilaterally recognized attribution authority. The normative gap that will persist for the coming years in this regard — coupled with the potential involvement of decentralized private actors in the design, management and procurement of generative AI and other dual-use technologies — will allow new types of abuses to go unaddressed.

Fourth, complex accountability and compliance challenges should raise questions about the role of both military institutions and the defence and civilian private sector in strengthening responsible innovation and protecting governments, populations and industries. Private sector actors not only bear a major responsibility, but also are best placed to use oversight and foresight in the rapid development of generative AI, its convergence with other dual-use technologies and its integration within increasingly blurred military and civilian critical infrastructures. There is a need for the private sector to recognize its role in collaborating with military leadership and disarmament architectures to prepare for the diffusion of power in conflict that will come from the proliferation and democratization of AI and converging technologies.

Preliminary Recommendations

As AI and generative AI systems reshape how knowledge, expertise and information are used and potentially manipulated in conflict and the grey zone between war and peace, now is the time to think forward and assess risks, vulnerabilities and forms of resilience. While there will be specific implications for military forces and strategic thinking, prevention and resilience will depend on a whole-of-society response.

It is crucial to adopt a multi-stakeholder, collaborative strategy that includes the active participation of civil society, traditional media, governments and military forces, international entities and digital corporations. It is equally important that states dialogue with rights holders and civil society to forge a vision of how best to build social resilience against information manipulation. States will need to continuously map how these new deception tools influence public discourse and opinion. They will also need to foster cybersecurity and (bio)technological literacy in their civilian populations. As Khan emphasizes, “more attention should be given in fragile situations to media information and digital literacy, particularly for young people, women, the elderly and other marginalized groups, healthy community relations, community-based fact-checking, and education programs to counter hatred, violence and extremism.”

This paper does not aspire to provide exhaustive solutions for addressing every aspect and actor involved in preventing and mitigating information operations in conflict scenarios. For example, the legal analysis highlights the pressing need to clarify and reinforce the application of international humanitarian law to safeguard civilians and ensure their access to crucial survival information. In this regard, the 2022 report, Disinformation and
freedom of opinion and expression during armed conflicts, by Special Rapporteur Irene Khan offers far more comprehensive recommendations. Another important entry point for international collective action is the United Nations Global Principles for Information Integrity presented by the UN Secretary-General António Guterres on June 24, 2024. In the words of Guterres, “These five principles — [societal] trust and resilience; independent, free, and pluralistic media; healthy incentives; transparency and research; and public empowerment — are based on an overriding vision of a more humane [information] ecosystem” (United Nations General Assembly 2022).

Nonetheless, in the present context, by building on its technical analysis and scenario planning, this paper aims to demonstrate that fostering new collaborations and adopting anticipatory, foresight-based methods will be essential to driving meaningful change, particularly as current threats in sectors such as AI, cyber security and biosecurity are still being governed in silos.

Inclusive Foresight for Better Prevention and Mitigation

Close, effective and sustainable partnerships between civilian private sector actors, technology leaders, civil society organizations, governments and military institutions should be convened to conduct combined foresight analyses across technological domains, including generative AI, cyber security and biosecurity. With an aim toward understanding the convergence of generative AI and other dual-use technologies with high-impact biological events, such groups could define a shared approach to prevention and mitigation. This paper’s scenario demonstrates how disinformation campaigns about biological threats could be tailored to destabilize civilian populations and undermine trust in health systems and governing institutions, producing even more strategic effects and harmful impacts in situations of protracted conflict between two states.

It is urgent that governments collaborate with the private sector to create more efficient early warning systems to detect and analyze the sources, actors and modus operandi behind the information and psychological operations that target civilian populations. As shown in the paper’s scenario, closer collaborations among policy makers, the defence sector, health-care providers, the commercial biotech industry and medical research institutions (including a network of expert scientists and health-care professionals for crisis mobilization) are crucial. These partnerships are essential for developing countermeasures to combat the information operations that might accompany or exploit the threat of a biological attack.

Foresight efforts should include cooperation with states affected by conflict: experts in conflict prevention should partner with private sector actors and civil society to better tailor prevention strategies to the specific threats and ethical needs of vulnerable communities. Such “inclusive foresight” could equip countries and agencies with the tools to articulate scenarios from which risk prioritization can emerge, particularly in conflict zones, as well as develop responsible approaches to leverage emerging technologies for prevention.

Works Cited

Agranovich, David. 2023. “Detect, Disrupt, Deter.” In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 32. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Alfifi, Majid, Parisa Kaghazgaran and James Caverlee. 2018. “Measuring the Impact of ISIS Social Media Strategy.” http://snap.stanford.edu/mis2/files/MIS2_paper_23.pdf.

Almohammad, Asaad. 2018. “ISIS Child Soldiers in Syria: The Structural and Predatory Recruitment, Enlistment, Pre-Training Indoctrination, Training, and Deployment.” International Centre for Counter-Terrorism Research Paper. February 19. https://doi.org/10.19165/2018.1.02.

Baraniuk, Chris. 2018. “Surgeon David Nott: Hack led to Syria air strike.” BBC, March 21. www.bbc.com/news/technology-43486131.

Beauchamp-Mustafaga, Nathan. 2024. “Exploring Catastrophe. Nuclear Threat Initiative. October.
the Implications of Generative AI for Chinese www.nti.org/analysis/articles/the-convergence-
Military Cyber-Enabled Influence Operations: of-artificial-intelligence-and-the-life-sciences/.
Chinese Military Strategies, Capabilities
and Intent.” Testimony presented before the Concentric. 2019. “E-Recruits: How Gaming is
US-China Economic and Security Review Helping Terrorist Groups Radicalize and Recruit
Commission on February 1. RAND Corporation. a Generation of Online Gamers.” March 17.
www.rand.org/content/dam/ www.concentric.io/blog/e-recruits-how-
rand/pubs/testimonies/CTA3100/ gaming-is-helping-terrorist-groups-radicalize-
CTA3191-1/RAND_CTA3191-1.pdf. and-recruit-a-generation-of-online-gamers.

Bingle, Morgan. 2023. “What is Information Dorn, Sara. 2023. “Republicans Launch Eerie
Warfare?” The Henry M. Jackson School AI-Generated Attack Ad On Biden.”
of International Studies, University Forbes, April 25. www.forbes.com/sites/
of Washington. September 25. saradorn/2023/04/25/republicans-launch-
https://jsis.washington.edu/news/ eerie-ai-generated-attack-ad-on-biden/.
what-is-information-warfare/.
Fecteau, Matthew. 2021. “The Deepfakes Are
Bowcott, Owen. 2020. “UN warns of rise of Coming.” War Room, April 23.
‘cybertorture’ to bypass physical ban.” The https://warroom.armywarcollege.
Guardian, February 21. www.theguardian.com/ edu/articles/deep-fakes/.
law/2020/feb/21/un-rapporteur-warns-of-rise-
Federspiel, Frederik, Ruth Mitchell, Asha Asokan,
of-cybertorture-to-bypass-physical-ban.
Carlos Umana and David McCoy. 2023. “Threats
Brandt, Jessica. 2023. “Propaganda, foreign by artificial intelligence to human health and
interference, and GenAI.” Brookings, human existence.” BMJ Global Health 8 (5):
November 8. www.brookings.edu/ e010435. https://doi.org/10.1136/bmjgh-
articles/propaganda-foreign- 2022-010435.
interference-and-generative-ai/.
Fedorov, Mykhailo. 2023. “Lessons from Ukraine in
Burt, Tom. 2023. “The Face of Modern Hybrid the Heat of an Ongoing Hybrid War.” In Digital
Warfare.” In Digital Front Lines: A sharpened Front Lines: A sharpened focus on the risks of, and
focus on the risks of, and responses to, hybrid responses to, hybrid warfare, a Special Report
warfare, a Special Report from FP Analytics from FP Analytics with support from Microsoft,
with support from Microsoft, 14–15. 12–13. https://digitalfrontlines.io/wp-content/
https://digitalfrontlines.io/wp-content/ uploads/sites/8/2023/08/digital-front-lines-
uploads/sites/8/2023/08/digital-front-lines- report-FP-analytics-microsoft-2023.pdf.
report-FP-analytics-microsoft-2023.pdf.
Feldstein, Steven. 2023. “The Consequences of
Byman, Daniel L., Chongyang Gao, Chris Meserole Generative AI for Democracy, Governance
and V. S. Subrahmanian. 2023. Deepfakes and War.” Survival 65 (5): 117–42. https://
and International Conflict. Foreign Policy at doi.org/10.1080/00396338.2023.2261260.
Brookings. January. www.brookings.edu/
Fixler, Annie. 2023. “Cyber-Resilience Helps
wp-content/uploads/2023/01/FP_20230105_
Democracies Prevail Against Authoritarian
deepfakes_international_conflict.pdf.
Disinformation.” In Digital Front Lines: A
Carballo, Rebecca. 2023. “Using AI To Talk to the sharpened focus on the risks of, and responses
Dead.” The New York Times, December 11. to, hybrid warfare, a Special Report from
www.nytimes.com/2023/12/11/technology/ FP Analytics with support from Microsoft,
ai-chatbots-dead-relatives.html. 11. https://digitalfrontlines.io/wp-content/
uploads/sites/8/2023/08/digital-front-lines-
Carter, Sarah R., Nicole E. Wheeler, Sabrina report-FP-analytics-microsoft-2023.pdf.
Chwalek, Christopher R. Isaac and Jaime Yassif.
2023. The Convergence of Artificial Intelligence
and the Life Sciences: Safeguarding Technology,
Rethinking Governance, and Preventing

Preparing for Next-Generation Information Warfare with Generative AI 27


FP Analytics. 2023. "Strategies for Reconciling International Humanitarian Law and Cyber Operations: A Q&A with Dr. Peter Maurer." In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 28–29. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Giles, Keir. 2023. "Russian cyber and information warfare in practice: Lessons observed from the war on Ukraine." Chatham House Research Paper. December 14. https://doi.org/10.55317/9781784135898.

Gisel, Laurent, Tilman Rodenhäuser and Knut Dörmann. 2020. "Twenty years on: International humanitarian law and the protection of civilians against the effects of cyber operations during armed conflicts." International Review of the Red Cross 102 (913): 287–334. https://doi.org/10.1017/S1816383120000387.

Gisselsson, David. 2022. "Next-Generation Biowarfare: Small in Scale, Sensational in Nature?" Health Security 20 (2): 182–86. https://doi.org/10.1089/hs.2021.0165.

Goel, Vindu and Shaikh Azizur Rahman. 2019. "When Rohingya Refugees Fled to India, Hate on Facebook Followed." The New York Times, June 14. www.nytimes.com/2019/06/14/technology/facebook-hate-speech-rohingya-india.html.

Guay, Joseph, Stephen Gray, Meghann Rhynard-Geil and Lisa Inks. 2019. The Weaponization of Social Media: How social media can spark violence and what can be done about it. Mercy Corps. November. www.mercycorps.org/sites/default/files/2020-01/Weaponization_Social_Media_FINAL_Nov2019.pdf.

Hart, Robert. 2022. "Clearview AI Fined $9.4 Million In U.K. For Illegal Facial Recognition Database." Forbes, May 23. www.forbes.com/sites/roberthart/2022/05/23/clearview-ai-fined-94-million-in-uk-for-illegal-facial-recognition-database/.

Hiebert, Kyle. 2024. "Generative AI Risks Further Atomizing Democratic Societies." Opinion, Centre for International Governance Innovation, February 26. www.cigionline.org/articles/generative-ai-risks-further-atomizing-democratic-societies/.

Hutchinson, William. 2006. "Information Warfare and Deception." Informing Science: The International Journal of an Emerging Transdiscipline 9: 213–23. https://doi.org/10.28945/480.

ICC. 2021. Rome Statute of the International Criminal Court. The Hague, The Netherlands: ICC. www.icc-cpi.int/sites/default/files/2024-05/Rome-Statute-eng.pdf.

ICRC. 2021. Harmful Information – Misinformation, Disinformation and Hate Speech in Armed Conflict and Other Situations of Violence. ICRC Initial Findings and Perspectives on Adapting Protection Approaches. July. https://shop.icrc.org/harmful-information-misinformation-disinformation-and-hate-speech-in-armed-conflict-and-other-situations-of-violence-icrc-initial-findings-and-perspectives-on-adapting-protection-approaches-pdf-en.html.

Jackson, Jasper, Lucy Kassa and Mark Townsend. 2022. "Facebook 'lets vigilantes in Ethiopia incite ethnic killing.'" The Guardian, February 20. www.theguardian.com/technology/2022/feb/20/facebook-lets-vigilantes-in-ethiopia-incite-ethnic-killing.

Jingnan, Huo. 2024. "How Israel tried to use AI to covertly sway Americans about Gaza." NPR, June 6. www.npr.org/2024/06/05/nx-s1-4994027/israel-us-online-influence-campaign-gaza.

Kastrenakes, Jacob. 2013. "Syrian Electronic Army alleges stealing 'millions' of phone numbers from chat app Tango." The Verge, July 22. www.theverge.com/2013/7/22/4545838/sea-giving-hacked-tango-database-government.

Katz, Eian. 2021. "Liar's war: Protecting civilians from disinformation during armed conflict." International Review of the Red Cross 102 (914): 659–82. https://doi.org/10.1017/S1816383121000473.

Khan KC, Karim A. A. 2023. "Technology Will Not Exceed Our Humanity." In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 50–51. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

28 CIGI Paper No. 310 — December 2024 • Eleonore Pauwels


Klepper, David. 2023. "Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI's power to mislead." AP News, November 28. https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47.

Lahmann, Henning. 2020. "Protecting the global information space in times of armed conflict." International Review of the Red Cross 102 (915): 1227–48. https://doi.org/10.1017/S1816383121000400.

MacDonald, Andrew and Ryan Ratcliffe. 2023. "Cognitive Warfare: Maneuvering in the Human Dimension." Proceedings 149 (4). The U.S. Naval Institute. www.usni.org/magazines/proceedings/2023/april/cognitive-warfare-maneuvering-human-dimension.

Marcellino, William, Nathan Beauchamp-Mustafaga, Amanda Kerrigan, Lev Navarre Chao and Jackson Smith. 2023. The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI. RAND Corporation, September 7. www.rand.org/pubs/perspectives/PEA2679-1.html.

Marlatt, Greta E. 2008. "Information Warfare and Information Operations (IW/IO): A Bibliography." Monterey, CA: Dudley Knox Library, Naval Postgraduate School.

Marten, Kimberly. 2022. "Russia's Use of the Wagner Group: Definitions, Strategic Objectives, and Accountability." Testimony before the Committee on Oversight and Reform Subcommittee on National Security, United States House of Representatives, Hearing on "Putin's Proxies: Examining Russia's Use of Private Military Companies," September 15. https://docs.house.gov/meetings/GO/GO06/20220921/115113/HHRG-117-GO06-Wstate-MartenK-20220921.pdf.

Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika et al. 2023. Artificial Intelligence Index Report 2023. Stanford, CA: Institute for Human-Centered AI, Stanford University. April. https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf.

McGuire, Mike. 2021. "Nation States, Cyberconflict and the Web of Profit." HP Threat Research Blog, April 8. https://threatresearch.ext.hp.com/web-of-profit-nation-state-report/.

Meisenzahl, Mary. 2019. "ISIS is reportedly using popular Gen Z app TikTok as its newest recruitment tool." Business Insider, October 21. www.businessinsider.com/isis-using-tiktok-to-target-teens-report-2019-10?r=US&IR=T.

Morris, Tamer. 2024. "Israel-Hamas 2024 Symposium — Information Warfare and the Protection of Civilians in the Gaza Conflict." The Lieber Institute, West Point. January 23. https://lieber.westpoint.edu/information-warfare-protection-civilians-gaza-conflict/.

Mouton, Christopher A., Caleb Lucas and Ella Guest. 2023. The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach. Research report, RAND Corporation. www.rand.org/pubs/research_reports/RRA2977-1.html.

Moy, Wesley R. and Kacper T. Gradon. 2023. "Artificial intelligence in hybrid and information warfare: A double-edged sword." In Artificial Intelligence and International Conflict in Cyberspace, edited by Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet and Aude Géry, 47–74. Abingdon, UK: Routledge. https://doi.org/10.4324/9781003284093-4.

Mubarak, Rami, Tariq Alsboui, Omar Alshaikh, Isa Inuwa-Dutse, Saad Khan and Simon Parkinson. 2023. "A Survey on the Detection and Impacts of Deepfakes in Visual, Audio, and Textual Formats." IEEE Access 11: 144497–529. https://doi.org/10.1109/ACCESS.2023.3344653.

Pandith, Farah and Jacob Ware. 2021. "Teen terrorism inspired by social media is on the rise. Here's what we need to do." NBC Think, March 22. www.nbcnews.com/think/opinion/teen-terrorism-inspired-socialmedia-rise-here-s-what-we-ncna1261307.



Pauwels, Eleonore. 2020a. The Anatomy of Information Disorders in Africa: Geostrategic Positioning & Multipolar Competition Over Converging Technologies. September 9. New York, NY: Konrad Adenauer Stiftung. www.kas.de/en/web/newyork/single-title/-/content/the-anatomy-of-information-disorders-in-africa.

———. 2020b. Artificial Intelligence and Data Capture Technologies in Violence and Conflict Prevention: Opportunities and Challenges for the International Community. Global Center on Cooperative Security Policy Brief. September. www.globalcenter.org/wp-content/uploads/GCCS_AIData_PB_H-1.pdf.

———. 2024. Regulating the Role and Involvement of Offensive Proxy Actors in Cyberconflict. March 27. New York, NY: Konrad Adenauer Stiftung. www.kas.de/en/web/newyork/veranstaltungsberichte/detail/-/content/regulating-the-role-and-involvement-of-offensive-proxy-actors-in-cyberconflict-1.

Permanent Mission of Liechtenstein to the United Nations. 2021. The Council of Advisers' Report on the Application of the Rome Statute of the International Criminal Court to Cyberwarfare. August. www.regierung.li/files/medienarchiv/The-Council-of-Advisers-Report-on-the-Application-of-the-Rome-Statute-of-the-International-Criminal-Court-to-Cyberwarfare.pdf.

Pomerantsev, Peter and Michael Weiss. 2014. The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money. Special Report presented by The Interpreter, a Project of the Institute of Modern Russia. https://imrussia.org/media/pdf/Research/Michael_Weiss_and_Peter_Pomerantsev__The_Menace_of_Unreality.pdf.

Prier, Jarred. 2017. "Commanding the Trend: Social Media as Information Warfare." Strategic Studies Quarterly 11 (4): 50–85.

Raymond, Nate. 2022. "Facebook parent Meta to settle Cambridge Analytica scandal case for $725 million." Reuters, December 23. www.reuters.com/legal/facebook-parent-meta-pay-725-mln-settle-lawsuit-relating-cambridge-analytica-2022-12-23/.

Ridout, Brad, Melyn McKay, Krestina Amon and Andrew Campbell. 2019. Mobile Myanmar: The impact of social media on young people in conflict-affected regions of Myanmar. Yangon, Myanmar: Save the Children Myanmar and The University of Sydney. www.savethechildren.es/sites/default/files/imce/docs/mobile_myanmar_report_short_final.pdf.

Rodenhäuser, Tilman and Samit D'Cunha. 2023. "Foghorns of war: IHL and information operations during armed conflict." Humanitarian Law and Policy (blog), October 12. https://blogs.icrc.org/law-and-policy/2023/10/12/foghorns-of-war-ihl-and-information-operations-during-armed-conflict/.

Sandbrink, Jonas B. 2023. "Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools." arXiv, June 24. https://doi.org/10.48550/arXiv.2306.13952.

Schafer, Bret, Nathan Kohlenberg, Amber Frankland and Etienne Soula. 2021. "Influence-enza: How Russia, China, and Iran Have Shaped and Manipulated Coronavirus Vaccine Narratives." Alliance for Securing Democracy. March 6. https://securingdemocracy.gmfus.org/russia-china-iran-covid-vaccine-disinformation/.

Segal, Dave. 2018. "How Bell Pottinger, P.R. Firm for Despots and Rogues, Met Its End in South Africa." The New York Times, February 4. www.nytimes.com/2018/02/04/business/bell-pottinger-guptas-zuma-south-africa.html.

Singer, Peter Warren and Emerson T. Brooking. 2018. LikeWar: The Weaponization of Social Media. Boston, MA: Houghton Mifflin Harcourt.

Spink, Lauren. 2023. When Words Become Weapons: The Unprecedented Risks to Civilians from the Spread of Disinformation in Ukraine. Center for Civilians in Conflict. October. https://civiliansinconflict.org/wp-content/uploads/2023/11/CIVIC_Disinformation_Report.pdf.

Stanham, Lucia. 2023. "Generative AI (GenAI) in Cybersecurity." CrowdStrike. November 26. www.crowdstrike.com/cybersecurity-101/secops/generative-ai/.



Stewart, Ruben and Georgia Hinds. 2023. "Algorithms of war: The use of artificial intelligence in decision making in armed conflict." Humanitarian Law and Policy (blog), October 24. ICRC. https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict/.

The UIC John Marshall Law School IHRC and Access Now, supported by Syrian Justice & Accountability Centre and MedGlobal. 2021. Digital Dominion: How the Syrian Regime's Mass Digital Surveillance Violates Human Rights. March. www.accessnow.org/cms/assets/uploads/2021/03/Digital-dominion-Syria-report.pdf.

United Nations Committee on the Rights of the Child. 2021. General comment No. 25 (2021) on children's rights in relation to the digital environment. CRC/C/GC/25. March 2. www.ohchr.org/en/documents/general-comments-and-recommendations/general-comment-no-25-2021-childrens-rights-relation.

United Nations General Assembly. 2022. Disinformation and freedom of opinion and expression during armed conflicts. A/77/288. August 12. www.ohchr.org/en/documents/thematic-reports/a77288-disinformation-and-freedom-opinion-and-expression-during-armed.

United Nations Human Rights Council. 2018a. Report of the independent international fact-finding mission on Myanmar. A/HRC/39/64. www.ohchr.org/sites/default/files/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_64.pdf.

———. 2018b. Report of the detailed findings of the Independent International Fact-Finding Mission on Myanmar. A/HRC/39/CRP.2. https://reliefweb.int/report/myanmar/report-detailed-findings-independent-international-fact-finding-mission-myanmar.

Ward, Antonia. 2018. "ISIS's Use of Social Media Still Poses a Threat to Stability in the Middle East and Africa." RAND, December 11. www.rand.org/blog/2018/12/isiss-use-of-social-media-still-poses-a-threat-to-stability.html.

Ward, Clarissa, Katie Polglase, Sebastian Shukla, Gianluca Mezzofiore and Tim Lister. 2020. "Russian election meddling is back — via Ghana and Nigeria — and in your feeds." CNN, April 11. https://edition.cnn.com/2020/03/12/world/russia-ghana-troll-farms-2020-ward/index.html.

Watts, Clint. 2022. "Preparing for a Russian cyber offensive against Ukraine this winter." Microsoft On the Issues (blog), December 3. https://blogs.microsoft.com/on-the-issues/2022/12/03/preparing-russian-cyber-offensive-ukraine/.

Whiskeyman, Andrew and Michael Berger. 2021. "Axis of Disinformation: Propaganda from Iran, Russia, and China on COVID-19." Policy Analysis. Washington, DC: The Washington Institute for Near East Policy. February 25. www.washingtoninstitute.org/policy-analysis/axis-disinformation-propaganda-iran-russia-and-china-covid-19.

Wirtschafter, Valerie. 2024. "The implications of the AI boom for non-state armed actors." Brookings. January 16. www.brookings.edu/articles/the-implications-of-the-ai-boom-for-nonstate-armed-actors/.

Yang, Zeyi. 2024. "Deepfakes of your dead loved ones are a booming Chinese business." MIT Technology Review, May 7. www.technologyreview.com/2024/05/07/1092116/deepfakes-dead-chinese-business-grief/.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London, UK: Profile Books.

Zuidijk, Daniel. 2023. "Deepfakes in Slovakia Preview How AI Will Change the Face of Elections." Bloomberg. October 4. www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections.



67 Erb Street West
Waterloo, ON, Canada N2L 6C2
www.cigionline.org
