Psychological Warfare in the Age of GenAI
This kind of manipulation may seem new because the tools are new. But the tactics—confusing, misleading, and emotionally manipulating audiences—are not.
In 2021, researchers at the Center for Naval Analyses (CNA) published two companion reports outlining how disinformation campaigns exploit ordinary psychological processes to spread falsehoods.
Their findings are now playing out on a global scale, as generative AI (GenAI) enables more actors to manipulate more people more cheaply and rapidly than ever before.
This article connects the dots between CNA’s psychological framework and a 2024 report from the University of British Columbia’s Centre for the Study of Democratic Institutions (CSDI).
By mapping recent cases of GenAI-enhanced disinformation onto CNA’s four core mechanisms of cognitive vulnerability, we show how disinformation’s psychological playbook has been upgraded—but not rewritten.
The Original Blueprint: CNA’s 2021 Psychology of Disinformation
In 2021, CNA released two companion studies that established a foundational understanding of the psychological mechanisms exploited by disinformation.
The first report, The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms, authored by Katerina Tsetsura, Lauren Dickey, and Alexi Drew, identified four recurring psychological processes that make individuals susceptible to false or misleading information.
These are not symptoms of irrationality or pathology—instead, they are normal cognitive shortcuts that help people process complex information in daily life.
These four mechanisms are:
- Initial Information Processing
Drawing on dual-process theory, CNA emphasizes that humans rely heavily on “fast thinking” (System 1), which is intuitive, automatic, and emotionally driven. This mode of thinking often governs how we engage with content on social media, where rapid consumption and minimal reflection are incentivized. Disinformation exploits this by presenting emotionally charged, simple messages that feel plausible and are easy to absorb without analytical scrutiny.
- Cognitive Dissonance
People experience psychological discomfort when new information conflicts with existing beliefs. To resolve this dissonance, they may reject the new information or reinterpret it to fit their worldview. Disinformation campaigns exploit this tendency by crafting narratives that affirm rather than challenge prior beliefs, making the falsehoods more palatable and persistent.
- Group Membership, Beliefs, and Novelty (GBN)
Individuals are more likely to accept and share information that aligns with in-group values or appears novel. Disinformation benefits from this through selective targeting—messages framed as “insider knowledge” or shared within trusted networks (e.g., partisan groups or ethnic communities) are more likely to be believed and circulated.
- Emotion and Arousal
Strong emotions such as anger, fear, or awe increase message retention and spread. Disinformation is frequently designed to provoke these emotions, bypassing rational skepticism. This emotional intensity also increases the likelihood that individuals will share the content, contributing to viral spread.
In the companion case studies report, The Psychology of (Dis)information: Case Studies and Implications, the authors applied this framework to analyze real-world disinformation campaigns, including Russia’s “Secondary Infektion,” China’s narratives about the Hong Kong protests, and Iran’s “Endless Mayfly” operation.
The takeaway across these examples was clear: disinformation is most effective when it leverages psychological shortcuts rather than presenting persuasive arguments.
These insights were forward-looking. Though written before the rise of ChatGPT and widespread generative AI adoption, the CNA framework remains highly applicable to how synthetic content is used to deceive and disrupt in the present day.
The 2024 Update: GenAI as a Force Multiplier
In Harmful Hallucinations: Generative AI and Elections, Spencer McKay, Chris Tenove, Nishtha Gupta, Jenina Ibañez, Netheena Mathews, and Heidi Tworek present one of the most comprehensive accounts to date of how generative artificial intelligence (GenAI) has reshaped the risks to electoral integrity.
Their central argument is measured but clear: GenAI doesn’t necessarily introduce new threats to democracy—it amplifies existing ones.
The report documents how GenAI has enabled a dramatic increase in the scale, speed, and personalization of disinformation campaigns.
Rather than requiring technical expertise or extensive resources, convincing synthetic content, including text, audio, images, and video, can now be created by almost anyone with access to basic tools.
As the authors state plainly: “AI lowers the cost for ordinary people to become propagandists” (McKay et al., 2024, p. 15).
The research is grounded in five case studies from 2023–2024 that illustrate how GenAI has been used to mislead voters, harass candidates, and pollute the information environment:
- Slovakia: Two days before the 2023 national election, a fake audio recording emerged in which opposition leader Michal Šimečka appeared to discuss rigging the vote. An earlier deepfake had depicted him proposing to raise beer prices, an emotionally resonant but false message. Both clips circulated widely and were timed to exploit Slovakia’s pre-election media blackout period, which left little opportunity for rebuttal.
- India: In the 2023 state election in Telangana, a manipulated video falsely showed a sitting minister endorsing his opponent. The video was initially circulated through unofficial WhatsApp groups before being shared by an official party account on X (formerly Twitter).
- United States: In January 2024, tens of thousands of voters in New Hampshire received AI-generated robocalls imitating President Biden’s voice. The message urged them not to vote in the Democratic primary. Though the incident was later linked to a political consultant and criminal charges were filed, the timing and reach of the call demonstrated how GenAI can be rapidly weaponized.
- United Kingdom: Ahead of the 2024 UK general election, over 400 deepfake pornographic images of women politicians were posted on a synthetic media website. Victims included high-profile figures from multiple parties. The images were widely viewed, but legal accountability remained limited.
- France: Content farms used GenAI to flood TikTok with nearly 10,000 low-quality videos related to the French election, including unlabelled synthetic images and AI-narrated misinformation. The content was not overtly persuasive but contributed to an environment of distrust and confusion.
These cases illustrate several key dynamics: synthetic content is often timed for maximum disruption, targeted at marginalized groups, and difficult to trace.
Despite public attention and platform policies, most of the material circulated without labelling or accountability.
As the authors note, “almost all near-term uses of GenAI are extensions of existing techniques to misinform, manipulate, or misdirect” (McKay et al., 2024, p. 2).
The report does not engage in alarmism. It cautions against exaggerating GenAI’s transformative potential, but calls for urgent attention to its real-world harms—particularly in undermining democratic goods such as accurate information, free electoral participation, and mutual respect.
Mechanism-by-Mechanism: Psychological Exploitation in Action
While generative AI has accelerated the production and spread of disinformation, the tactics it enables remain grounded in enduring psychological patterns.
CNA’s 2021 framework identified four such mechanisms, and each is clearly evident in the GenAI-enabled case studies documented by McKay et al. (2024).
Below, we revisit each mechanism through the lens of recent events.
Initial Information Processing: Fast Thinking Meets Synthetic Simplicity
CNA's research emphasized how people rely on intuitive, rapid “System 1” thinking when processing information—especially on fast-paced platforms like TikTok or Instagram.
GenAI content often mirrors the look and feel of authentic media, making it more likely to be accepted without scrutiny.
In France’s 2024 parliamentary election, thousands of short, AI-narrated videos were pushed out by content farms on TikTok. Although many were easily debunked, their familiar visual style and engaging format made them cognitively “sticky.”
Most users likely encountered this content while scrolling quickly, with minimal opportunity—or motivation—for verification.
Cognitive Dissonance: Reinforcing Belief Through Fabricated Evidence
Cognitive dissonance theory holds that people are more likely to believe disinformation when it aligns with pre-existing beliefs and reduces psychological discomfort.
In Slovakia, a fake audio recording of opposition leader Michal Šimečka discussing vote-rigging emerged two days before the 2023 election. The message reinforced narratives of political corruption already circulating among skeptical voters.
Even though the clip’s authenticity was questioned, the alignment with existing distrust likely made it more persuasive—and harder to dislodge.
Group Membership, Beliefs, and Novelty (GBN): In-Group Virality and False Familiarity
Disinformation spreads more effectively when it is shared within trusted groups or presented as novel insider information. CNA described this as the GBN effect—a potent combination of social belonging, belief congruence, and perceived exclusivity.
In the Indian state of Telangana, a deepfake video falsely depicted Minister KT Rama Rao endorsing the opposition. Initially shared within unofficial WhatsApp groups, the video gained credibility due to its source and apparent urgency.
The fact that it was circulated among supporters and sympathizers first amplified its reach and reduced critical scrutiny.
Emotion and Arousal: Manipulating Outrage and Fear for Engagement
Emotionally provocative content—especially that which incites fear, anger, or disgust—is more likely to be remembered and shared. Disinformation campaigns exploit this by crafting messages designed to provoke strong affective responses.
In the United Kingdom, over 400 deepfake pornographic images of women politicians appeared online in the months leading up to the 2024 election.
The content was explicitly designed to humiliate and intimidate, targeting women across party lines. As McKay et al. note, such tactics not only damage reputations but may discourage candidates from continuing in public life—disproportionately affecting already underrepresented groups.
These cases demonstrate that generative AI’s impact is not simply a matter of technological advancement. Rather, it lies in how effectively the technology can activate latent cognitive vulnerabilities—the same vulnerabilities CNA warned about in 2021.
The psychological playbook hasn’t changed. The tools to deploy it have just become cheaper, faster, and harder to trace.
Policy and Platform Responses: What’s Working, What’s Not
While the psychological vulnerabilities exploited by disinformation are deeply rooted, both CNA (2021) and McKay et al. (2024) emphasize that effective countermeasures do exist.
However, they require coordination across sectors—technology companies, governments, civil society, and the public—and must be tailored not only to the content but to the cognitive mechanisms being targeted.
From CNA (2021): Countering Cognitive Manipulation
CNA’s primer emphasized responses grounded in cognitive science. Rather than focusing exclusively on identifying and removing false content, their recommendations targeted the psychological conditions that allow disinformation to thrive:
- Inoculation Strategies: Much like a vaccine, “prebunking” techniques can build resistance to manipulation by exposing individuals to weakened forms of disinformation and explaining how the deception works. Research shows this can improve critical thinking and reduce susceptibility.
- Cognitive Friction: Introducing subtle speed bumps, such as nudges prompting users to verify before sharing, can shift people from fast, intuitive processing to slower, more analytical thinking. Even small moments of pause can disrupt the automatic spread of falsehoods (a minimal sketch of such a nudge follows this list).
- Media Literacy: Long-term resilience depends on building citizens’ understanding of how and why they are targeted. Educational initiatives should focus on emotional awareness, source verification, and narrative framing, not just fact-checking.
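To make the idea of cognitive friction concrete, here is a minimal sketch of how a sharing flow could insert a verification prompt before content spreads. It is an illustration only: the data fields, function names (such as ShareRequest and handle_share), and the nudge wording are hypothetical, not drawn from the CNA report or from any specific platform.

```python
# Minimal sketch of a "cognitive friction" nudge in a share flow.
# All names and prompts here are hypothetical illustrations; real
# platforms implement this inside their client applications.

from dataclasses import dataclass


@dataclass
class ShareRequest:
    user_id: str
    article_url: str
    opened_article: bool      # did the user actually open the link?
    flagged_synthetic: bool   # did detection tooling flag likely AI content?


def friction_prompts(req: ShareRequest) -> list[str]:
    """Return the pause-and-reflect prompts to show before sharing."""
    prompts = []
    if not req.opened_article:
        # "Read before you share" nudge: a brief interruption that invites
        # slower, more analytical processing.
        prompts.append("You haven't opened this link yet. Read it before sharing?")
    if req.flagged_synthetic:
        prompts.append("This content may be AI-generated. Share anyway?")
    return prompts


def handle_share(req: ShareRequest, confirm) -> bool:
    """Complete the share only if the user confirms past every prompt."""
    for prompt in friction_prompts(req):
        if not confirm(prompt):
            return False  # the user paused and decided not to share
    return True


if __name__ == "__main__":
    req = ShareRequest("u123", "https://example.com/story",
                       opened_article=False, flagged_synthetic=True)
    # Simulate a user who declines at the first prompt.
    shared = handle_share(req, confirm=lambda prompt: False)
    print("Shared." if shared else "Share cancelled after nudge.")
```

The design point is simply that a brief, well-placed interruption moves the user from fast System 1 processing to slower System 2 reflection at the exact moment a falsehood would otherwise propagate.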
From McKay et al. (2024): Responding to GenAI-Specific Risks
While echoing many of CNA’s recommendations, McKay and colleagues highlight the new operational realities created by GenAI. Their proposed countermeasures span four phases: model design, content generation, dissemination, and reception.
Key proposals include:
- Accountability for Synthetic Content: Many incidents in 2024 revealed gaps in enforcement. For example, while TikTok requires labelling of realistic AI content, synthetic election videos in France were shared without such disclosures. McKay et al. call for clearer, enforceable standards for labelling and provenance tracking.
- Legal Enforcement: In the U.S., the AI-generated robocalls using President Biden’s voice resulted in criminal charges and regulatory action only after public outcry. The authors stress that “voluntary commitments” from GenAI firms and platforms are insufficient. Regulatory clarity and active enforcement, especially from electoral management bodies, are essential.
- Cross-Sector Preparedness: Government agencies, civil society, and platforms must prepare for “high-risk scenarios,” such as deepfakes released during pre-election blackout periods (as in Slovakia). This includes red-teaming exercises, coordinated response protocols, and investments in trust and safety infrastructure.
- Provenance Infrastructure and Transparency Tools: Technical solutions like watermarking, content provenance metadata (e.g., C2PA standards), and model-side safeguards are critical but limited. These tools must be supported by detection capacity and journalist access to platform data for independent verification (a minimal presence-check sketch follows this list).
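To illustrate both the promise and the limits of provenance tooling, the sketch below performs a crude presence check for the “c2pa” label that typically appears in files carrying an embedded C2PA manifest. This is a heuristic written on that assumption, not verification: it does not parse the manifest or validate signatures, and a missing label proves nothing, since metadata is routinely stripped when content is re-uploaded.

```python
# Crude heuristic check for embedded C2PA provenance metadata.
# It only looks for the ASCII label "c2pa" that typically accompanies
# an embedded manifest store; it does NOT verify signatures or the
# authenticity of any provenance claim. Real verification requires a
# full C2PA validator.

import sys
from pathlib import Path


def may_contain_c2pa(path: Path, chunk_size: int = 1 << 20) -> bool:
    """Scan the file in chunks for the 'c2pa' manifest-store label."""
    marker = b"c2pa"
    tail = b""
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            # Keep a small overlap so the marker is not missed at a
            # chunk boundary.
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]
    return False


if __name__ == "__main__":
    for name in sys.argv[1:]:
        p = Path(name)
        status = ("may contain C2PA metadata" if may_contain_c2pa(p)
                  else "no C2PA label found (or metadata was stripped)")
        print(f"{p}: {status}")
```

That gap is exactly why McKay et al. pair provenance standards with detection capacity and researcher access to platform data, rather than treating watermarking or metadata alone as a fix.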
A Shared Emphasis: Building Resilient Information Ecosystems
Both CNA and McKay et al. converge on one major point: disinformation cannot be addressed solely at the level of individual content moderation.
The larger challenge is building epistemic and psychological resilience—ensuring that societies can absorb, contextualize, and respond to falsehoods without spiraling into distrust or manipulation.
This includes platform design, media ethics, citizen education, and institutional credibility. As McKay et al. conclude, "Policymakers should support a resilient information system with institutions capable of producing and disseminating accurate, trusted information... regardless of the communication technologies we use."
Conclusion: The Same Human Mind, Newer Tools
The landscape of disinformation is changing—but the underlying psychology remains remarkably stable. As early as 2021, CNA warned that the most effective disinformation campaigns do not rely on technical sophistication or factual distortion alone.
They succeed by manipulating how people think, feel, and connect with their social groups. Generative AI has not changed that reality—it has simply amplified it.
As the 2024 report by McKay, Tenove, and colleagues demonstrates, GenAI has reduced the cost of deception, increased the speed of dissemination, and blurred the line between amateur and professional propagandists.
From AI-generated robocalls impersonating political leaders to deepfakes designed to humiliate and silence women in public life, these tools are being used to exploit well-known psychological mechanisms—intuitive thinking, belief confirmation, emotional provocation, and social trust.
And yet, both reports resist technological determinism. Disinformation is not inevitable.
The same mechanisms that make people vulnerable can also be leveraged for resilience—through media literacy, institutional transparency, responsible platform governance, and psychological inoculation.
Understanding the cognitive architecture of disinformation is not just an academic exercise. It is a prerequisite for building systems—technological, legal, and civic—that protect democratic integrity in the face of rapid change.
The mind remains the central battleground. The tools may evolve, but the playbook remains familiar. And so must our response.
Further Reading and Sources
Primary Sources Cited in This Article
- CNA (2021). The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms. Katerina Tsetsura, Lauren Dickey, and Alexi Drew. Arlington, VA: Center for Naval Analyses, September 2021.
- CNA (2021). The Psychology of (Dis)information: Case Studies and Implications. Katerina Tsetsura, Lauren Dickey, and Alexi Drew. Arlington, VA: Center for Naval Analyses, October 2021.
- McKay, S., Tenove, C., Gupta, N., Ibañez, J., Mathews, N., & Tworek, H. (2024). Harmful Hallucinations: Generative AI and Elections. Vancouver: Centre for the Study of Democratic Institutions, University of British Columbia.