The Digital Battlefield

The 2025 study by Ana Romero-Vicente, conducted under the Vera AI project, exposes the hidden architecture of disinformation campaigns that seek to control public opinion.

By analyzing three major influence operations, the study reveals how these campaigns are structured, amplified, and designed to erode trust in institutions, distort reality, and divide societies.

The findings are alarming: these campaigns don’t just spread lies. They incite violence, undermine democracy, and reshape history in real time.


This article breaks down the study’s findings, exposing the mechanics of Coordinated Inauthentic Behavior (CIB) and its real-world consequences.


Most importantly, it arms you with the tools to recognize and resist these manipulations.


Understanding Coordinated Inauthentic Behavior (CIB)

Disinformation is not just about individual lies.

It’s an orchestrated effort to manufacture reality through deceptive means.

The EU DisinfoLab’s CIB Detection Tree provides a framework of 50 indicators for assessing whether an information operation is organic or artificially coordinated.


These indicators span:

  • Content Manipulation: Identical or nearly identical posts across multiple accounts, AI-generated images, and emotional triggers designed to incite fear or anger (a detection sketch for near-identical posts follows this list).
  • Fake Identities: Accounts using stolen profile pictures, fabricated endorsements, or AI-generated voices.
  • Coordinated Behavior: Large numbers of accounts engaging in highly synchronized activity.
  • Cross-Platform Amplification: Disinformation originating on one platform but spreading rapidly across Facebook, Twitter, TikTok, and fringe media.
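
The first of these indicators, near-identical posts repeated across many accounts, is also one of the easiest to check programmatically. The snippet below is a minimal sketch of that idea, not a tool from the EU DisinfoLab framework: the example posts and the 0.9 similarity threshold are invented for illustration.

```python
# Minimal sketch: flagging near-identical posts across accounts.
# The post texts and the 0.9 threshold are illustrative assumptions,
# not values taken from the EU DisinfoLab CIB Detection Tree.
from difflib import SequenceMatcher
from itertools import combinations

posts = {
    "account_a": "Breaking: officials admit the report was faked!",
    "account_b": "BREAKING: officials admit the report was faked!!",
    "account_c": "Lovely weather in Lisbon this afternoon.",
}

SIMILARITY_THRESHOLD = 0.9  # 1.0 would mean the texts are identical

def similarity(a: str, b: str) -> float:
    """Return a 0-to-1 similarity ratio between two post texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare every pair of accounts and flag suspiciously similar wording.
for (acct_1, text_1), (acct_2, text_2) in combinations(posts.items(), 2):
    score = similarity(text_1, text_2)
    if score >= SIMILARITY_THRESHOLD:
        print(f"Possible coordination: {acct_1} / {acct_2} ({score:.0%} similar)")
```

Real investigations combine this kind of content overlap with timing, metadata, and network signals, but the basic intuition is the same.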

To understand these tactics, let’s examine three case studies where they were deployed with devastating effectiveness.


Operation Overload – The Art of Flooding Fact-Checkers

In early 2024, European journalists found themselves drowning in a flood of false fact-checking requests.

Every day, emails and social media messages urged them to verify claims that ranged from suspicious to outright absurd.

The problem? Most of these claims weren’t real.

They were part of Operation Overload, a pro-Russian disinformation campaign designed to overwhelm fact-checkers and distort media narratives.


Exposed by CheckFirst, the campaign worked by:

  • Flooding journalists with fabricated claims to divert their attention from actual misinformation.
  • Using networks of fake accounts to create a false sense of virality.
  • Leveraging AI-generated visuals to give credibility to outright fabrications.


"The goal was never to get fact-checkers to confirm these stories—it was to exhaust them," explained Guillaume Kuster, an investigator at CheckFirst.


The more time spent debunking false leads, the less time journalists had to expose real threats.

🔗 Further reading: CheckFirst Report


The TikTok War – Russian AI Propaganda Against Ukraine

TikTok is often seen as a platform for entertainment and memes, but in 2024, it became a battlefield of state-sponsored disinformation.

A joint investigation by DFRLab and BBC Verify uncovered a Russian influence operation targeting former Ukrainian Defense Minister Oleksii Reznikov.

More than 12,800 coordinated accounts spread false corruption allegations about Reznikov, using AI-generated voices and multilingual videos to lend credibility.

These videos were viewed millions of times, damaging Ukraine’s international reputation and undermining Western support.


"By the time fact-checkers debunked these videos, the damage was already done," said an analyst from DFRLab.


The campaign manipulated TikTok’s algorithm to ensure viral reach, proving that even fake voices can shape real-world policies.

🔗 Further reading: DFRLab Investigation


QAnon’s ‘Save the Children’ – Weaponizing Compassion

In 2020, QAnon hijacked the #SaveTheChildren hashtag, twisting a legitimate movement into a gateway for conspiracy theories.

The movement, originally aimed at raising awareness about child trafficking, was co-opted to spread outlandish claims—most notably, the “adrenochrome” myth, which falsely suggested that elites harvest chemicals from children for longevity.


"I thought I was helping, but I was being recruited into a cult," admitted a former QAnon follower in an interview with The New York Times.


The study found that while the campaign lacked some hallmarks of CIB, it relied heavily on:

  • Emotional hijacking – Exploiting fears about child safety.
  • Networked amplification – Coordinated sharing of misleading posts across platforms.
  • Real-world mobilization – Encouraging followers to protest, harass individuals, and spread more conspiracies.

🔗 Further reading: NYT Analysis


How to Protect Yourself and Your Family

In a world where disinformation is weaponized, vigilance is essential.


Here’s how to protect yourself:

🔹 Verify before you share – Cross-check information from multiple sources.
🔹 Identify emotional triggers – If a post is designed to make you feel intense anger or fear, be skeptical.
🔹 Be cautious of anonymous sources – Many disinformation campaigns use fake experts.
🔹 Use fact-checking resources – Websites like Snopes and EU DisinfoLab are invaluable.


The Fight for Reality

Disinformation isn’t just an abstract issue—it’s a direct attack on democracy, public safety, and social cohesion.

As the Vera AI study shows, these campaigns are increasingly sophisticated, AI-driven, and emotionally manipulative.

By staying vigilant, questioning sources, and pushing for accountability from tech platforms, we can resist manipulation.

The truth is under siege—but we still have the power to defend it.


Sources & Further Reading

  1. CheckFirst: Operation Overload
  2. DFRLab: TikTok Disinformation
  3. NYT: QAnon & #SaveTheChildren

Did you learn something new today? Do you enjoy my work?

Keep it going for just $2! 🎉

Grab a membership, buy me a coffee, or support via PayPal or GoFundMe. Every bit helps! 🙌🔥

BMAC: https://buymeacoffee.com/nafoforum/membership

PP: https://www.paypal.com/donate/?hosted_button_id=STDDZAF88ZRNL

GoFundMe: https://www.gofundme.com/f/support-disinformation-education-public-education-forum


Study overview


 Detailed Summary of the Study: Visual Assessment of CIB in Disinformation Campaigns

Introduction

The study, conducted by Ana Romero-Vicente under the Vera AI project, analyzes three major disinformation campaigns using visual indicators to detect Coordinated Inauthentic Behavior (CIB). It aims to make CIB more identifiable by examining coordination, authenticity, source, impact, and distribution across different case studies.

Methodology

  • The study utilizes a 50-indicator framework to determine if a campaign exhibits CIB.
  • Each indicator is marked Yes (Y) or No (N) based on its presence.
  • Scores range from 0 to 100%, expressing the probability of CIB, with color-coded results (a minimal scoring sketch follows this list):
    • Red (low likelihood, below 25%)
    • Yellow (medium likelihood, 25–75%)
    • Green (high likelihood, above 75%)
  • The framework assesses content manipulation, metadata, identity indicators, visual elements, behavioral patterns, and network coordination.
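
To make the tallying concrete, here is a minimal sketch of how such a Yes/No checklist could be scored. The indicator names and answers are invented for illustration; only the 25% and 75% color bands come from the description above.

```python
# Minimal sketch of the Yes/No indicator tally described above.
# The indicators and answers are illustrative, not the study's data;
# only the 25% / 75% color bands follow the summary above.

indicators = {
    "identical content across accounts": True,   # Y
    "AI-generated visuals": True,                # Y
    "fabricated endorsements": False,            # N
    "synchronized posting times": True,          # Y
    # ...the full framework covers 50 indicators
}

def cib_score(answers: dict[str, bool]) -> float:
    """Percentage of indicators answered Yes."""
    return 100 * sum(answers.values()) / len(answers)

def color_band(score: float) -> str:
    """Map a score to the color bands used in the study summary."""
    if score < 25:
        return "red (low likelihood of CIB)"
    if score <= 75:
        return "yellow (medium likelihood of CIB)"
    return "green (high likelihood of CIB)"

score = cib_score(indicators)
print(f"CIB score: {score:.0f}% -> {color_band(score)}")
```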

Case Study 1: Operation Overload

Overview

  • A pro-Russian disinformation campaign targeting European fact-checkers.
  • Identified by CheckFirst, it aimed to overwhelm journalists with false claims to divert their efforts.

Tactics Used

  • Fake fact-checking requests flooded newsrooms.
  • AI-generated visuals and false viral traction created a misleading perception of mass belief.
  • Synchronized content sharing across multiple platforms.

Key CIB Indicators

  • Identical or similar content repeated across accounts.
  • Manipulated visuals and fabricated endorsements.
  • Cross-platform coordination of false claims.

Impact

  • Diverted resources from real investigative journalism.
  • Amplified disinformation within legitimate media ecosystems.

🔗 Further reading: CheckFirst Report

Case Study 2: Massive Russian Influence Operation on TikTok

Overview

  • A Russian AI-powered disinformation campaign targeting former Ukrainian Defense Minister Oleksii Reznikov.
  • Exposed by DFRLab and BBC Verify.

Tactics Used

  • 12,800 fake accounts pushed false corruption allegations.
  • AI-generated voices and multilingual videos increased reach.
  • Leveraged TikTok’s algorithm for rapid viral spread.

Key CIB Indicators

  • Bot-like posting behavior and coordinated amplification (a posting-time sketch follows this list).
  • Fake voices mimicking public figures.
  • High engagement driven by artificial traction.
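
Bot-like posting behavior of this kind often shows up as bursts of posts landing within seconds of each other. Below is a minimal, hypothetical sketch of that idea; the account names, timestamps, and 60-second window are invented for illustration and are not drawn from the DFRLab or BBC Verify data.

```python
# Minimal sketch: spotting bursts of posts published within a short window.
# Account names, timestamps, and the 60-second window are illustrative
# assumptions, not data from the DFRLab / BBC Verify investigation.
from datetime import datetime, timedelta

posts = [
    ("acct_001", datetime(2024, 3, 1, 12, 0, 5)),
    ("acct_002", datetime(2024, 3, 1, 12, 0, 9)),
    ("acct_003", datetime(2024, 3, 1, 12, 0, 31)),
    ("acct_004", datetime(2024, 3, 1, 18, 45, 0)),
]

WINDOW = timedelta(seconds=60)
MIN_ACCOUNTS = 3  # how many accounts posting together looks suspicious

# Sort by time, then look for clusters of near-simultaneous posts.
posts.sort(key=lambda p: p[1])
for i, (_, start) in enumerate(posts):
    cluster = [acct for acct, ts in posts[i:] if ts - start <= WINDOW]
    if len(cluster) >= MIN_ACCOUNTS:
        print(f"Burst at {start}: {len(cluster)} accounts posted within a minute")
        break
```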

Impact

  • Drove millions of views of fabricated corruption allegations.
  • Eroded trust in Ukrainian leadership and Western support.

🔗 Further reading: DFRLab Investigation

Case Study 3: QAnon’s ‘Save the Children’ – Weaponizing Compassion

Overview

  • QAnon hijacked the #SaveTheChildren movement, inserting conspiracy theories.
  • The campaign falsely claimed that elites harvest adrenochrome from children.

Tactics Used

  • Exploited public concern about child trafficking.
  • Used emotional manipulation to gain support.
  • Coordinated sharing of misleading content across platforms.

Key CIB Indicators

  • Emotional triggers to manipulate audiences.
  • Synchronized messaging to maximize spread.
  • Fringe media amplification and real-world mobilization.

Impact

  • Encouraged offline protests and harassment.
  • Increased radicalization and distrust in institutions.

🔗 Further reading: NYT Analysis

Key Takeaways from the Study

  • Disinformation campaigns exploit crises and legitimate movements to insert false narratives.
  • AI-generated content is a growing threat, making deepfakes and synthetic voices difficult to detect.
  • Cross-platform coordination is key to increasing the reach of false narratives.
  • Emotional manipulation (fear, outrage, and moral panic) is a powerful driver of virality.
  • State-backed disinformation campaigns target elections, institutions, and individuals to undermine trust.

How to Protect Yourself Against CIB

  • Verify information before sharing.
  • Check sources and cross-reference with fact-checkers.
  • Be wary of content designed to provoke strong emotional reactions.
  • Watch for identical messaging appearing across different platforms.
  • Use resources like Snopes and EU DisinfoLab to fact-check claims.

Conclusion

This study provides a comprehensive analysis of how CIB campaigns operate and how they exploit human psychology, AI tools, and social media algorithms to manipulate public perception. Understanding these tactics is essential for resisting the growing wave of disinformation that threatens democracy, social stability, and public trust.