About CIGI
The Centre for International Governance Innovation (CIGI) is an independent, non-partisan think tank whose peer-reviewed research and trusted analysis inspire policy makers to innovate. Through its global network of multidisciplinary researchers and strategic partnerships, CIGI provides policy solutions for the digital era with one goal: to improve people’s lives everywhere. Headquartered in Waterloo, Canada, CIGI is supported by the Government of Canada, the Government of Ontario and its founder, Jim Balsillie.
The opinions expressed in this publication are those of the author and do not
necessarily reflect the views of the Centre for International Governance Innovation
or its Board of Directors.
The text of this work is licensed under CC BY 4.0. To view a copy of this licence,
visit http://creativecommons.org/licenses/by/4.0/.
For reuse or distribution, please include this copyright notice. This work may
contain content (including but not limited to graphics, charts and photographs)
used or reproduced under licence or with permission from third parties.
Permission to reproduce this content must be obtained from third parties directly.
Executive Summary
Introduction
Framing Section
Technical Section
Works Cited
About the Author
Eleonore Pauwels is an international expert
in the security, societal and governance
implications generated by the convergence of
artificial intelligence (AI) with other dual-use
technologies, including cybersecurity, genomics
and neurotechnologies. Eleonore provides expertise
to the World Bank, the United Nations and the
Global Center on Cooperative Security in New
York. She also works closely with governments and
private sector actors on AI-cyberthreat prevention,
the changing nature of conflict, foresight and
global security. In 2018 and 2019, she served as
research fellow on emerging cybertechnologies for
the United Nations University’s Centre for Policy
Research. At the Woodrow Wilson International
Center for Scholars, she spent 10 years within
the Science and Technology Innovation Program,
leading the Anticipatory Intelligence Lab. She is also
part of the Scientific Committee of the International
Association for Responsible Research and
Innovation in Genome Editing. Eleonore regularly
testifies before US and European authorities,
including the US Department of State, the National
Intelligence Council, the US Bipartisan Commission
on Biodefense, the Council of Europe, the European
Commission and the United Nations. She writes
for Nature, The New York Times, The Guardian,
Scientific American, Le Monde, Slate, UN News, The
UN Chronicle and the World Economic Forum.
The world has entered a complex and dangerous decade. As new and old threats converge and challenge the multilateral order, one of the most seismic shifts is taking place at the intersection of war, technology and cyberspace. Modern conflicts — whether they are declared, contested or waged in the grey zone — are amplified by a technological revolution that is inherently dual use. These conflicts merge physical and digital fronts, invading cities and factories, homes and everyday devices, and producing new targets and victims in their wake. Frontiers between peace and war, offence and defence, are blurring.

Second, the integration of AI, including generative AI, with other powerful technologies is a radical shift, as this convergence broadens and complexifies the potential of information warfare. What is at stake is the weaponization of dual-use knowledge itself, including possibly all forms of dual-use expertise developed by human civilization.
Yet there is a common language or matrix of key terms and concepts that has been defined by scholarship and refined by humanitarian practitioners (a synthesis is provided in Box 2). This matrix is important for applying policy and international legal frameworks, but it also reflects how difficult it is for practitioners on the ground to recognize and label the different types of operations that manipulate information in times of conflict (International Committee of the Red Cross [ICRC] 2021).
For the purpose of this paper, which focuses on situations of conflict, “information warfare”
is defined as the collection, dissemination, manipulation, corruption and degradation
of information with the goal of achieving strategic advantage over a conflict party and/
or its population (Marlatt 2008; Prier 2017). Another comprehensive and more recent
conceptualization of information warfare is “a struggle to control or deny the confidentiality,
integrity, and availability of information in all its forms, ranging from raw data to complex
concepts and ideas” (Bingle 2023, 6). As Morgan Bingle explains, “Offensively, information
warfare occurs when one side within a conflict seeks to impose their desired information
state on their adversary’s information and affect how target individuals or populations
interpret or learn from the information they possess or are collecting” (ibid.). Bingle stipulates
that “the offensive actor can target at either the information itself or the individuals and
larger group forming their target audience” (ibid.).
In this paper, “information” or “influence operations” are defined as “the strategic and
calculated use of information and information-sharing systems to influence, disrupt, or
divide society,” for instance by involving “the collection of intelligence on specific targets,
disinformation and propaganda campaigns, or the recruitment of online influencers” (Spink
2023, 48). Psychological warfare can be framed as “the planned use of propaganda and other
psychological operations to influence the opinions, emotions, attitudes, and behavior of
opposition groups” (ibid.). A useful definition of “adversarial information operations” is the
one proposed by the “Oxford Statement on International Law Protections in Cyberspace:
The Regulation of Information Operations and Activities” as “any coordinated or individual
deployment of digital resources for cognitive purposes to change or reinforce attitudes or
behaviours of the targeted audience.”3
3 See www.elac.ox.ac.uk/the-oxford-process/the-statements-overview/the-oxford-statement-on-the-regulation-of-information-operations-and-activities/.
Misinformation: False information that is spread by individuals who believe the information to be
true or who have not taken the time to verify it.
Disinformation: False information that is fabricated or disseminated with malicious intent. This can
include terms such as propaganda and information operations.
Hate speech: All forms of expression, including text, images, audio or video, that incite, promote
or justify hatred and violence based on intolerance toward identity traits such as gender, religion,
ethnicity or sexual orientation. This speech often blends misinformation, disinformation and
rumours, and is manipulated by its perpetrators to fuel animosity. Utilizing both traditional and
digital communication channels, hate speech exacerbates tensions between groups and can incite
violence against individuals based on their identity.
Dual-use technologies: Technologies that have a primary civilian and commercial application, but
also have the potential to be weaponized or used for military applications.
AI: The simulation of human intelligence in machines that are programmed to think and learn like
humans. These machines can perform tasks that typically require human cognitive functions, such as
visual perception, speech recognition, decision making and language translation.
Generative AI: AI systems that use advanced algorithms, such as generative adversarial networks
and transformer models, to create new, realistic content, such as text, images and audio, that is often
indistinguishable from human-generated content.
Foundational models: Large-scale AI models trained on broad data sets that can be fine-tuned for a variety of specific tasks. These models serve as a base or “foundation” upon which specialized models for specific applications can be built. The term has become prominent with the rise of large language models (LLMs) such as GPT-4, which are pre-trained on vast amounts of text data and can be adapted for tasks ranging from language translation to sentiment analysis with relatively little additional training (a minimal code sketch after these definitions illustrates the pattern).
Deepfake: An image or recording that has been convincingly altered and manipulated to
misrepresent a person as doing or saying something that was not actually done or said.
Grey-zone tactics: The acts of state parties in relation to a dispute that maintain high-level
diplomatic relations while interacting antagonistically below the threshold of war.
Hybrid warfare: The use of non-military tactics alongside conventional kinetic warfare to achieve
foreign policy goals.
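To make the pre-train-then-adapt pattern described under “Foundational models” concrete for technical readers, the short sketch below loads a small, publicly available language model that has already been fine-tuned for one narrow task, sentiment analysis. This is a minimal illustration only: it assumes Python and the open-source Hugging Face transformers library, and the model name is an illustrative choice, not a tool discussed in this paper.

```python
# Minimal sketch of the "foundation model" pattern: a general-purpose
# pre-trained transformer, adapted to one narrow task with relatively
# little additional training. Assumes the open-source Hugging Face
# `transformers` library is installed (pip install transformers torch);
# the checkpoint named below is an illustrative public example.
from transformers import pipeline

# Load a small pre-trained model already fine-tuned for sentiment analysis.
# The same base architecture, fine-tuned differently, could instead
# translate, summarize or generate text.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Trusted information saves lives during a crisis."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The point of the sketch is the division of labour the glossary entry describes: the expensive general pre-training is done once, and each downstream task reuses it at marginal cost, which is also why such capabilities diffuse so quickly.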
Capabilities and What Is the Critical Leap Achieved

…technologies, but are also merging with our daily experiences, monitoring how humans live, move and feel. By progressively learning and simulating…
Armed conflict and disinformation are intricately intertwined with existing grievances, amplifying human suffering, stoking hatred and disproportionately impacting vulnerable groups (United Nations General Assembly 2022).
Minorities and marginalized racial and ethnic groups often bear the brunt of the destructive
effects of information warfare. For instance, in conflicts in Myanmar (United Nations
Human Rights Commission 2018a, 2018b) and Ethiopia (Jackson, Kassa and Townsend 2022),
combatants have exploited mass communication platforms to incite hatred, dehumanize
opponents and trigger violations of human rights. In past conflicts in Kenya, Nigeria and
South Africa, political leaders have employed divisive and inflammatory rhetoric to deny
established facts, escalate tensions and incriminate national, ethnic and religious groups
(Pauwels 2020a). Refugees, internally displaced persons and migrants are frequently
depicted as threats to national security or social cohesion, fuelling hatred against them.
Generative AI: A Revolution for Information Warfare? How Can the Confluence of AI Generative Techniques Act as a Catalyst to Amplify Information Warfare?

AI technologies, in particular generative AI models, are making information warfare both more powerful and more accessible. The capacity of generative AI, merged with diverse forms of human behavioural data capture, provides more impactful techniques to drastically improve, tailor, scale up and even industrialize the offensive use of disinformation.

Behavioural profiling and influencing

We have entered a technological era where our private and collective experiences have become free material for behavioural surveillance (Zuboff 2019). Our “patterns of life” — our conversations and emotions, biometric features and behaviours — can now be turned into predictive insights to fuel information warfare. The vast amount of digital information now generated by populations means that more of these routine behaviours can be understood through AI computing. A confluence of AI functions and techniques makes it increasingly possible to analyze, classify, profile and, to some extent, predict and influence human behaviour (Pauwels 2020b). The global AI industry posits that significant amounts of raw information about human experience can be turned into actionable intelligence; in other words, a critical mass of behavioural insights allows for individuals to be influenced remotely. For instance, targeted advertisements and content can exploit psychological triggers to influence purchasing decisions, voting behaviours or social interactions, creating an environment where individual…
Targeting Primarily Public Support to Conflict Parties

In present and future information warfare, authoritarian and hostile states have a strategic interest in influencing large public audiences and distorting global perceptions of a conflict. Their goals include degrading access to trustworthy information, manipulating narratives, persuading global audiences of the futility of the fight and weakening strategic political alliances and public support for a conflict party. For instance, influence operations backed by Russia’s Federal Security Service and other state-affiliated proxies have relied on “flooding social media platforms with misleading messages around the need for the ‘denazification’ of Ukraine and accusing the United States of creating bioweapons in clandestine laboratories in Ukraine” (Burt 2023, 15). As Annie Fixler underlines, “Russia has adjusted video evidence to deny war crimes, deployed operators on social media to create fake personas and news sites, and hacked user accounts to promulgate disinformation” (Fixler 2023, 11).

…Moscow’s strategic goals. This approach is intended not only to bolster support for Russia within this community, but also to create a potential internal pressure group that can influence broader public opinion and political discourse in Germany.

Beyond supercharging these global battles of influence, the development of generative AI models could accelerate and amplify synthetic data and forgeries to obfuscate criminal and state responsibility in the conduct of kinetic war and potential IHL violations. The industrialization of information warfare could result from generative AI’s trends, including democratization, automation and outsourcing (with information warfare becoming a global and partially to fully automated … and proxy forces involved in conflict).

Targeting Military Personnel and Operations

Adversarial Information Manipulation to Thicken the Fog of War

In military domains, generative AI models are used to pioneer new ways to synthesize intelligence data and support human decision making, provide high-level strategic recommendations and new problem-solving techniques, generate different plans of attack and organize the jamming of enemy communications (Feldstein 2023; Stewart and Hinds 2023). Integrated into intelligence, surveillance and reconnaissance scenarios, generative AI models could support target tracking in drone missions. These AI capacities will also likely enhance the operation of a new wave of low-cost, adaptive and modular autonomous weapon systems that are designed to kill on a rapid scale (Federspiel et al. 2023). If misused…
For groups in situations of vulnerability, including children, pregnant women and persons with disabilities, access to specific medical, child and disability care services could be strictly limited, with harmful health and psychological consequences. For instance,
in contexts where disinformation would blur the nature and scale of a biological attack,
pregnant women and children may not benefit from appropriate medical countermeasures
or may undergo unnecessary stressful procedures. For children and women in situations of
vulnerability, the reverberating implications of a public health crisis — caused by information
warfare around a biological event — may also include hindering access to food support
services, schools and other needed resources, exacerbating the harmful psychological impact.
The disinformation campaign also targets specific communities with tailored messages,
exploiting existing socioeconomic and ethnic tensions. For instance, messages in
predominantly immigrant neighbourhoods might suggest that the outbreak is being used as a
pretext to enforce repressive controls and commit violence. In rural areas, the disinformation
might claim that urban elites are being prioritized for treatment and vaccines. The resulting
chaos hampers the government’s ability to respond effectively. Emergency services are
stretched thin, and misinformation about safe practices spreads rapidly, undermining public
health efforts. Civil unrest begins to simmer as people demand answers and accountability
from their leaders. Public health institutions struggle to regain their authority. General
vaccination rates drop as anti-vaccine sentiments, bolstered by the disinformation
campaign, take root. The societal divisions exacerbated by the tailored messages deepen,
making it harder for the nation to recover and heal. In allied countries of the victim state,
levels of political and public support drastically drop as paralysis is fuelled by fears of a spreading epidemic amid a lack of understanding of the origin of the biological threat.
These converging security risks would have corrosive implications for every country, but
particularly those that have poor and outdated medical, biotech and cyber infrastructure
or those that have a limited capacity to protect their vulnerable populations from the
weaponization of pandemic and technological threats in situations of protracted conflict. The
COVID-19 pandemic has provided state and non-state hostile actors with a real-time window
into societies’ strengths and weaknesses in emergency situations. The pandemic has shown
how a biological threat could break down hospitals and food supply chains, shatter citizens’
trust in public institutions and bring social unrest, disinformation and even violence. In a
similar way, information operations leveraging access to dual-use expertise and mentorship
provided by generative AI and threatening large-scale casualties could be used to multiply the
threat in hybrid warfare scenarios. The most enduring harm would be to civilian resilience
and trust: trust in public health institutions, emergency data systems, laboratories, hospitals
and critical infrastructures. As generative AI learns to democratize strategic military and
civilian expertise and tacit knowledge in complex technological domains, such capacity
will change not only the scale, but also the nature and power of information warfare,
impacting dual-use knowledge asymmetries between threat actors involved in conflicts.
Works Cited

Bingle, Morgan. 2023. “What is Information Warfare?” The Henry M. Jackson School of International Studies, University of Washington. September 25. https://jsis.washington.edu/news/what-is-information-warfare/.

Bowcott, Owen. 2020. “UN warns of rise of ‘cybertorture’ to bypass physical ban.” The Guardian, February 21. www.theguardian.com/law/2020/feb/21/un-rapporteur-warns-of-rise-of-cybertorture-to-bypass-physical-ban.

Brandt, Jessica. 2023. “Propaganda, foreign interference, and GenAI.” Brookings, November 8. www.brookings.edu/articles/propaganda-foreign-interference-and-generative-ai/.

Burt, Tom. 2023. “The Face of Modern Hybrid Warfare.” In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 14–15. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Byman, Daniel L., Chongyang Gao, Chris Meserole and V. S. Subrahmanian. 2023. Deepfakes and International Conflict. Foreign Policy at Brookings. January. www.brookings.edu/wp-content/uploads/2023/01/FP_20230105_deepfakes_international_conflict.pdf.

Carballo, Rebecca. 2023. “Using AI To Talk to the Dead.” The New York Times, December 11. www.nytimes.com/2023/12/11/technology/ai-chatbots-dead-relatives.html.

Carter, Sarah R., Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac and Jaime Yassif. 2023. The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe. NTI. October.

Dorn, Sara. 2023. “Republicans Launch Eerie AI-Generated Attack Ad On Biden.” Forbes, April 25. www.forbes.com/sites/saradorn/2023/04/25/republicans-launch-eerie-ai-generated-attack-ad-on-biden/.

Fecteau, Matthew. 2021. “The Deepfakes Are Coming.” War Room, April 23. https://warroom.armywarcollege.edu/articles/deep-fakes/.

Federspiel, Frederik, Ruth Mitchell, Asha Asokan, Carlos Umana and David McCoy. 2023. “Threats by artificial intelligence to human health and human existence.” BMJ Global Health 8 (5): e010435. https://doi.org/10.1136/bmjgh-2022-010435.

Fedorov, Mykhailo. 2023. “Lessons from Ukraine in the Heat of an Ongoing Hybrid War.” In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 12–13. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Feldstein, Steven. 2023. “The Consequences of Generative AI for Democracy, Governance and War.” Survival 65 (5): 117–42. https://doi.org/10.1080/00396338.2023.2261260.

Fixler, Annie. 2023. “Cyber-Resilience Helps Democracies Prevail Against Authoritarian Disinformation.” In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 11. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Gisselsson, David. 2022. “Next-Generation Biowarfare: Small in Scale, Sensational in Nature?” Health Security 20 (2): 182–86. https://doi.org/10.1089/hs.2021.0165.

Goel, Vindu and Shaikh Azizur Rahman. 2019. “When Rohingya Refugees Fled to India, Hate on Facebook Followed.” The New York Times, June 14. www.nytimes.com/2019/06/14/technology/facebook-hate-speech-rohingya-india.html.

Guay, Joseph, Stephen Gray, Meghann Rhynard-Geil and Lisa Inks. 2019. The Weaponization of Social Media: How social media can spark violence and what can be done about it. Mercy Corps. November. www.mercycorps.org/sites/default/files/2020-01/Weaponization_Social_Media_FINAL_Nov2019.pdf.

Hart, Robert. 2022. “Clearview AI Fined $9.4 Million In U.K. For Illegal Facial Recognition Database.” Forbes, May 23. www.forbes.com/sites/roberthart/2022/05/23/clearview-ai-fined-94-million-in-uk-for-illegal-facial-recognition-database/.

Hiebert, Kyle. 2024. “Generative AI Risks Further Atomizing Democratic Societies.” Opinion, Centre for International Governance Innovation, February 26. www.cigionline.org/.

Jackson, Jasper, Lucy Kassa and Mark Townsend. 2022. “Facebook ‘lets vigilantes in Ethiopia incite ethnic killing.’” The Guardian, February 20. www.theguardian.com/technology/2022/feb/20/facebook-lets-vigilantes-in-ethiopia-incite-ethnic-killing.

Jingnan, Huo. 2024. “How Israel tried to use AI to covertly sway Americans about Gaza.” NPR, June 6. www.npr.org/2024/06/05/nx-s1-4994027/israel-us-online-influence-campaign-gaza.

Kastrenakes, Jacob. 2013. “Syrian Electronic Army alleges stealing ‘millions’ of phone numbers from chat app Tango.” The Verge, July 22. www.theverge.com/2013/7/22/4545838/sea-giving-hacked-tango-database-government.

Katz, Eian. 2021. “Liar’s war: Protecting civilians from disinformation during armed conflict.” International Review of the Red Cross 102 (914): 659–82. https://doi.org/10.1017/S1816383121000473.

Khan KC, Karim A. A. 2023. “Technology Will Not Exceed Our Humanity.” In Digital Front Lines: A sharpened focus on the risks of, and responses to, hybrid warfare, a Special Report from FP Analytics with support from Microsoft, 50–51. https://digitalfrontlines.io/wp-content/uploads/sites/8/2023/08/digital-front-lines-report-FP-analytics-microsoft-2023.pdf.

Klepper, David. 2023. “Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead.” AP News, November 28. https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47.

Lahmann, Henning. 2020. “Protecting the global information space in times of armed conflict.” International Review of the Red Cross 102 (915): 1227–48. https://doi.org/10.1017/S1816383121000400.

MacDonald, Andrew and Ryan Ratcliffe. 2023. “Cognitive Warfare: Maneuvering in the Human Dimension.” Proceedings 149 (4). The U.S. Naval Institute. www.usni.org/magazines/proceedings/2023/april/cognitive-warfare-maneuvering-human-dimension.

Marcellino, William, Nathan Beauchamp-Mustafaga, Amanda Kerrigan, Lev Navarre Chao and Jackson Smith. 2023. The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI. RAND Corporation, September 7. www.rand.org/pubs/perspectives/PEA2679-1.html.

Marlatt, Greta E. 2008. “Information Warfare and Information Operations (IW/IO): A Bibliography.” Monterey, CA: Dudley Knox Library, Naval Postgraduate School.

Marten, Kimberly. 2022. “Russia’s Use of the Wagner Group: Definitions, Strategic Objectives, and Accountability.” Testimony before the Committee on Oversight and Reform Subcommittee on National Security, United States House of Representatives, Hearing on “Putin’s Proxies: Examining Russia’s Use of Private Military Companies,” September 15. https://docs.house.gov/meetings/GO/GO06/20220921/115113/HHRG-117-GO06-Wstate-MartenK-20220921.pdf.

Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika et al. 2023. Artificial Intelligence Index Report 2023. Stanford, CA: Institute for Human-Centered AI, Stanford University. April. https://aiindex.stanford.edu/.

McGuire, Mike. 2021. “Nation States, Cyberconflict and the Web of Profit.” HP Threat Research Blog, April 8. https://threatresearch.ext.hp.com/web-of-profit-nation-state-report/.

Meisenzahl, Mary. 2019. “ISIS is reportedly using popular Gen Z app TikTok as its newest recruitment tool.” Business Insider, October 21. www.businessinsider.com/isis-using-tiktok-to-target-teens-report-2019-10?r=US&IR=T.

Morris, Tamer. 2024. “Israel-Hamas 2024 Symposium — Information Warfare and the Protection of Civilians in the Gaza Conflict.” The Lieber Institute, West Point. January 23. https://lieber.westpoint.edu/information-warfare-protection-civilians-gaza-conflict/.

Mouton, Christopher A., Caleb Lucas and Ella Guest. 2023. The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach. Research report, RAND Corporation. www.rand.org/pubs/research_reports/RRA2977-1.html.

Moy, Wesley R. and Kacper T. Gradon. 2023. “Artificial intelligence in hybrid and information warfare: A double-edged sword.” In Artificial Intelligence and International Conflict in Cyberspace, edited by Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet and Aude Géry, 47–74. Abingdon, UK: Routledge. https://doi.org/10.4324/9781003284093-4.

Mubarak, Rami, Tariq Alsboui, Omar Alshaikh, Isa Inuwa-Dutse, Saad Khan and Simon Parkinson. 2023. “A Survey on the Detection and Impacts of Deepfakes in Visual, Audio, and Textual Formats.” IEEE Access 11: 144497–529. https://doi.org/10.1109/ACCESS.2023.3344653.

Pandith, Farah and Jacob Ware. 2021. “Teen terrorism inspired by social media is on the rise. Here’s what we need to do.” NBC Think, March 22. www.nbcnews.com/think/opinion/teen-terrorism-inspired-socialmedia-rise-here-s-what-we-ncna1261307.

Pomerantsev, Peter and Michael Weiss. 2014. The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money. Special Report presented by The Interpreter, a Project of the Institute of Modern Russia. https://imrussia.org/media/pdf/Research/Michael_Weiss_and_Peter_Pomerantsev__The_Menace_of_Unreality.pdf.

Prier, Jarred. 2017. “Commanding the Trend: Social Media as Information Warfare.” Strategic Studies Quarterly 11 (4): 50–85.

Raymond, Nate. 2022. “Facebook parent Meta to settle Cambridge Analytica scandal case for $725 million.” Reuters, December 23. www.reuters.com/legal/facebook-parent-meta-pay-725-mln-settle-lawsuit-relating-cambridge-analytica-2022-12-23/.

Singer, Peter Warren and Emerson T. Brooking. 2018. LikeWar: The Weaponization of Social Media. Boston, MA: Houghton Mifflin Harcourt.

Spink, Lauren. 2023. When Words Become Weapons: The Unprecedented Risks to Civilians from the Spread of Disinformation in Ukraine. Center for Civilians in Conflict. October. https://civiliansinconflict.org/wp-content/uploads/2023/11/CIVIC_Disinformation_Report.pdf.

Stanham, Lucia. 2023. “Generative AI (GenAI) in Cybersecurity.” CrowdStrike, November 26. www.crowdstrike.com/cybersecurity-101/secops/generative-ai/.