A Crisis of Trust in an Age of Speed
False information spreads faster, travels further, and triggers stronger reactions than verified facts. The reason? Fear, outrage, and surprise are powerful catalysts for virality.
Studies like those by Vosoughi et al. (2018) confirm that lies aren’t random mistakes; they’re often crafted with psychological precision, targeting our emotions, biases, and craving for certainty in chaos.
In The Nature and Circulation of False Information, Vian Bakir and Andrew McStay dissect the anatomy of deception in the digital age. While propaganda and political lies are age-old, they argue, the scale, speed, and technological sophistication of modern disinformation are unprecedented.
Today, deception isn’t confined to state actors or intelligence agencies; politicians, media personalities, bots, and, sometimes unwittingly, all of us are part of the machinery.
In today’s environment, skepticism isn’t cynicism — it’s survival.
The History of Political Deception: From Philosopher Kings to Twitter Trolls
Political deception is not a modern invention. Over two millennia ago, Plato proposed the concept of “noble lies” — fabrications used by rulers to maintain social harmony and stability, assuming the public could not handle complex truths.
Centuries later, Niccolò Machiavelli, writing in Renaissance Italy, took a more unflinching stance: deception, he argued, was a tool every ruler needed to wield skillfully if they hoped to govern effectively.
The 20th century saw deception become institutionalized. American communication theorists like Harold Lasswell and Walter Lippmann viewed propaganda not merely as acceptable but as necessary in mass democracies. In their view, large populations required narrative guidance, and unfiltered truth could lead to panic or destabilization.
During the Cold War, state-sponsored disinformation — what the Soviet Union called active measures — became part of geopolitical strategy. Soviet operatives blended truth and falsehood to erode trust in Western institutions and fuel social discord. The United States and its allies responded in kind, employing their own forms of selective leaks and covert narrative manipulation.
What has changed today is the speed, reach, and accessibility of deception. No longer the sole domain of intelligence agencies, disinformation is now crowdsourced and industrialized. Politicians craft narratives designed to mislead.
Media outlets — both fringe and mainstream — sometimes amplify unverified or distorted stories. Online influencers, driven by ideology or profit, participate knowingly. Even ordinary users share manipulated content, sometimes out of outrage, sometimes out of sheer amusement.
The line from philosopher kings to digital troll armies is not as long as we might imagine — and the tools of deception have only become sharper.
The Players: Who Spreads False Information?
The spread of disinformation isn’t orchestrated solely by state actors working in secret. The reality is more crowded, complex, and concerning.
Russia remains one of the most studied purveyors of state-sponsored disinformation. The Kremlin’s strategy — detailed by bodies such as NATO and the EU’s East StratCom Task Force — involves troll factories, bot armies, and disinformation portals like SouthFront and NewsFront.
These so-called investigative outlets masquerade as independent journalism while disseminating Kremlin-approved narratives. Hacked materials are selectively leaked and amplified by state-controlled media such as Sputnik and RT, with the goal of flooding the information space until truth and falsehood are indistinguishable (Bakir & McStay, 2022).
China, too, is an active player in the disinformation landscape. According to the Australian Strategic Policy Institute, Beijing’s campaigns have focused on identity-based narratives designed to deepen political division and mistrust, particularly in Taiwan.
During the COVID-19 pandemic, Chinese state media and affiliated accounts also worked to cast doubt on the virus’s origins and paint democratic governments as incompetent in their responses.
Yet it would be naive to think disinformation is the exclusive domain of autocracies. Liberal democracies have also engaged in narrative manipulation when political stakes are high. The run-up to the 2003 Iraq War is a case in point. Intelligence findings were cherry-picked; inconvenient evidence was buried.
Public messaging around weapons of mass destruction became less about fact and more about persuasion. And as Edward Snowden’s revelations later exposed, British intelligence agencies developed sophisticated online manipulation capabilities of their own.
Then there are non-state actors, who often prove just as influential. Bots — silent but relentless — play an early amplification role, targeting low-credibility content and giving it artificial momentum. According to Bakir and McStay, bots were critical in the early spread of disinformation during the 2016 U.S. election, seeding stories before they were picked up by real users.
Human amplifiers are even more powerful. Politicians, celebrities, and media personalities may represent only 20 percent of the origin points for false information, but they account for nearly 70 percent of total engagement (Brennen et al., 2020). Their reach can transform fringe conspiracies into mainstream controversies almost overnight.
Even journalists are not immune. Under pressure to be first rather than thorough, 80 percent of journalists surveyed in the U.S. and U.K. admitted to having fallen for false information at least once (Bakir & McStay, 2022).
Disinformation today is not a monologue. It’s a chorus — some voices deliberate, some unwitting, all contributing to the same cacophony.
The Tools: Fake News, Deepfakes, and Cheapfakes
Disinformation adapts to the times, and its most potent forms are designed not just to mislead, but to provoke, enrage, and exploit our cognitive shortcuts. Among its sharpest tools are fake news, deepfakes, and cheapfakes — each more dangerous in the context of our fast-scrolling digital habits.
Fake news existed long before the term became mainstream. From sensational 16th-century broadsheets to 20th-century yellow journalism, emotionally charged fabrications have always had a market. But digital platforms have turned this trickle into a flood.
Research during the 2016 U.S. presidential election revealed that just 1% of Twitter users were responsible for 80% of exposures to fake news sources (Grinberg et al., 2019). Fake news stories are often short, sensationalist, and engineered to evoke fear, anger, or disgust — emotions that bypass rational scrutiny and accelerate sharing.
Deepfakes are a more recent — and more alarming — addition to the disinformation arsenal. These AI-generated synthetic media files can convincingly superimpose faces and voices onto real footage, challenging our trust in audiovisual evidence. First gaining notoriety through non-consensual pornographic videos of celebrities, deepfakes have since entered the political arena.
In March 2022, during Russia’s invasion of Ukraine, a deepfake video surfaced depicting President Volodymyr Zelensky urging Ukrainian troops to surrender — a digital deception designed to erode morale and sow chaos (Simonite, WIRED, 2022).
Even more troubling is the weaponization of doubt: as deepfakes become more widely known, politicians and public figures now have plausible deniability. The suggestion that “this might be a deepfake” allows them to dismiss authentic evidence as fabrication — a phenomenon researchers call the liar’s dividend.
Yet disinformation doesn’t always rely on cutting-edge technology. Cheapfakes, or shallowfakes, manipulate perception through basic editing techniques: slowing down or speeding up footage, clipping audio, or splicing statements out of context.
A notorious example is the viral video of U.S. House Speaker Nancy Pelosi, digitally slowed to make her appear incoherent and intoxicated (The Washington Post, 2019). No sophisticated AI was needed — only basic editing and a social media echo chamber eager for scandal.
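To appreciate just how low the bar is, consider a minimal sketch of the kind of slowdown edit used in the Pelosi video. It assumes the OpenCV library, and the file names are hypothetical; the point is that rewriting the same frames at a reduced frame rate is enough to distort someone’s speech and demeanor.

```python
# A minimal sketch of how trivially a slowdown "cheapfake" can be produced.
# Assumes OpenCV (pip install opencv-python); file names are hypothetical.
import cv2

SLOWDOWN = 0.75  # play back at 75% speed

reader = cv2.VideoCapture("original_clip.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing the unchanged frames at a reduced frame rate slows speech and
# movement without any AI involvement: the whole "edit" is one parameter.
writer = cv2.VideoWriter(
    "slowed_clip.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,
    (width, height),
)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)

reader.release()
writer.release()
```

The same simplicity cuts both ways: comparing a suspect clip’s frame rate and duration against the original broadcast footage is often enough to expose this kind of edit.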
From sophisticated deepfakes built with generative adversarial networks (GANs) to clumsy cheapfakes, the common thread is manipulation that appeals to emotion first and reason later. And in a world where speed trumps verification, that’s often all it takes.
Why Does False Information Spread So Fast?
The speed at which false information travels is not an accident; it’s by design. Disinformation exploits the architecture of human psychology and the algorithms of digital platforms, making lies not only more appealing but more contagious than the truth.
Research led by Vosoughi et al. (2018) found that falsehoods spread significantly faster and farther than verified facts across all categories of information. Why? The answer lies in emotion. False stories are carefully engineered to provoke surprise, disgust, and fear — the very emotions that prompt us to share without thinking.
Truthful content, by contrast, tends to evoke more subdued emotions like sadness or anticipation, which don’t inspire the same viral impulse.
Bots add yet another accelerant. According to Bakir and McStay, bots play a critical role in the earliest stages of disinformation spread, especially on platforms like Twitter. They latch onto low-credibility content and amplify it through retweets and mentions, targeting influential accounts to give fringe stories early visibility.
Human users, seeing this artificially inflated popularity, are more likely to engage — creating a self-reinforcing cycle that catapults lies into the mainstream.
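The dynamic is easy to see in a toy simulation. The sketch below is not a model from Bakir and McStay; it is an illustration with invented parameters, showing how a few hundred bot “seed” shares raise the perceived popularity that humans respond to, compounding into far more organic sharing.

```python
# A toy model (invented parameters, not from Bakir & McStay) of the
# amplification cycle: bots inflate a story's early visibility, and humans
# treat that visibility as a popularity cue when deciding whether to share.
import random

random.seed(42)

HUMANS = 10_000
BASE_SHARE_PROB = 0.005   # organic chance a human shares an obscure story
CUE_WEIGHT = 0.00001      # how strongly visible share counts sway humans
ROUNDS = 10

def simulate(bot_seed_shares: int) -> int:
    """Return total human shares after ROUNDS of cascading exposure."""
    shares = bot_seed_shares
    human_shares = 0
    for _ in range(ROUNDS):
        new = 0
        for _ in range(HUMANS):
            # Perceived popularity raises each person's chance of sharing.
            p = BASE_SHARE_PROB + CUE_WEIGHT * shares
            if random.random() < min(p, 1.0):
                new += 1
        shares += new
        human_shares += new
    return human_shares

print("no bots:  ", simulate(0))
print("500 bots: ", simulate(500))
```

Under these deliberately modest numbers, the bot-seeded story ends up with roughly twice the organic shares of the unseeded one, even though no human ever interacts with a bot directly; the bots only manipulate the popularity signal.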
But this is not just about bad actors or mechanical amplification. Our own cognitive biases make us complicit. Confirmation bias — the tendency to accept information that aligns with our existing beliefs — ensures that disinformation tailored to political or ideological preferences travels fastest in echo chambers.
Disinformation also thrives where legitimate information is lacking. During the initial months of the COVID-19 pandemic, research from Italy revealed that false claims surged precisely when public demand for information — as measured by Google search activity — outpaced mainstream media supply (Bakir & McStay, 2022).
Where credible answers are slow, speculation and fabrication fill the void.
Ultimately, the viral spread of false information is a product of psychological design, algorithmic engineering, and structural media gaps. It’s not just that lies travel faster; they’re built to.
Who’s Most Likely to Share False Information?
It’s comforting to believe that only the gullible fall for falsehoods — but the data tells a more uncomfortable story. Anyone can be a vector for disinformation, and certain groups are statistically more likely to amplify it, often without malicious intent.
In the United States, analysis of the 2016 presidential election revealed that just 1% of Twitter users were responsible for a staggering 80% of exposures to fake news sources (Grinberg et al., 2019). These “super-spreaders” were predominantly older adults, ideologically conservative or far-right, and highly politically engaged.
Cognitive aging, combined with lower digital literacy and stronger motivated reasoning, may help explain this vulnerability.
The United Kingdom mirrors this trend. Studies show that individuals who frequently share political news on social media are more likely to spread misinformation — sometimes knowingly. Nearly 30% admitted to having shared falsehoods by mistake, and 17% confessed to doing so deliberately (Bakir & McStay, 2022). Both younger adults (under 45) and seniors (over 65) were overrepresented in these findings, likely due to high political investment and, in some cases, weaker media literacy.
In China, disinformation spread skews rural: citizens in rural areas share false information more frequently than their urban counterparts, often due to limited media literacy and less access to diverse, credible information sources.
Public figures, however, remain the most potent amplifiers of disinformation. While politicians, celebrities, and influencers represent only about 20% of disinformation origin points, their posts drive nearly 70% of total engagement (Brennen et al., 2020).
Their platforms transform niche conspiracy theories into cultural talking points.
Even journalists, trained to question and verify, are not immune. A significant majority — 80% — of journalists surveyed in the U.S. and U.K. admitted to having been deceived by false information at least once (Bakir & McStay, 2022).
In the race to publish first, even the most diligent reporters can fall prey to well-crafted manipulation.
Disinformation spreads not because people are foolish, but because it exploits trust, emotion, and the social instinct to share — vulnerabilities that cut across education levels and professions.
Why This Matters: The Consequences of Unchecked Disinformation
Disinformation is not a harmless annoyance; its impacts ripple outward, undermining institutions, fracturing societies, and endangering lives.
Elections are among its most visible victims. The 2016 U.S. presidential race and the Brexit referendum both highlighted how coordinated disinformation campaigns can shape public opinion — not necessarily by convincing people of a single falsehood, but by fostering doubt, division, and fatigue.
In Kenya and Zimbabwe, disinformation around electoral violence — often recycling old images and presenting them as breaking news — undermined public confidence in democratic processes. In Nigeria’s 2019 elections, candidates were found paying influencers for as little as $14 per month to flood platforms with fabricated smears against opponents (Bakir & McStay, 2022).
Public health has suffered profoundly. Early in the COVID-19 pandemic, false information about cures, vaccine dangers, and the virus’s origins proliferated across platforms.
A study of YouTube’s top COVID-related videos found that 25% contained misleading or false information, collectively garnering over 62 million views (Brennen et al., 2020).
Politicians and public figures magnified the problem: though they represented only 20% of misinformation sources, their posts drove 69% of total engagement.
Perhaps most insidious is the emergence of the liar’s dividend — the idea that the mere existence of sophisticated fabrications like deepfakes allows real evidence to be dismissed as fake. In 2017, then-President Donald Trump suggested that the notorious Access Hollywood recording might be fabricated.
While no evidence supported this claim, the growing public awareness of deepfakes provided convenient cover for doubt and denial.
Unchecked disinformation erodes more than facts; it corrodes public trust in media, science, governance, and even shared reality. In a world where anything can be questioned, nothing can be discussed — and that void is quickly filled by manipulation and extremism.
What Can You Do? Practical Steps to Become More Media Literate
The scale and sophistication of modern disinformation may seem overwhelming, but individuals are not powerless. Small, deliberate habits can help build personal resilience and contribute to a healthier information ecosystem.
The first — and often hardest — step is to slow down. Disinformation preys on urgency. If a headline spikes your heart rate or stirs immediate outrage, that’s precisely when you should pause. Before sharing, ask yourself: Who benefits if I believe this? Who benefits if I amplify it?
Next, interrogate the source. Familiarity breeds trust, but trust by association is one of the strongest predictors of misinformation spread. Just because a post comes from a friend, a celebrity, or even a journalist doesn’t guarantee its accuracy. Verify information through independent, nonpartisan fact-checking organizations like First Draft News, EU DisinfoLab, or Poynter’s International Fact-Checking Network.
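For those comfortable with a little code, claim lookups can even be automated. The sketch below is a minimal example, assuming Google’s Fact Check Tools API, a free service that aggregates ClaimReview data published by many IFCN-affiliated fact-checkers; the API key and query string are placeholders.

```python
# A minimal sketch of programmatic claim lookup via Google's Fact Check
# Tools API (endpoint and parameters per Google's public documentation).
# Requires the requests library; the API key below is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder -- obtain one from Google Cloud
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_claims(query: str, language: str = "en") -> list[dict]:
    """Return published fact-checks matching a textual claim."""
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "languageCode": language, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

# Print each matching fact-check's publisher and verdict.
for claim in search_claims("video shows Zelensky telling troops to surrender"):
    for review in claim.get("claimReview", []):
        print(review.get("publisher", {}).get("name"),
              "->", review.get("textualRating"))
```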
Learn to spot manipulation tactics. Cheapfakes — slowed-down videos, clipped audio, misleading captions — are far more common than AI-generated deepfakes. If a video or story feels too perfect in its outrage or embarrassment factor, dig deeper. A quick reverse image search or a check of the upload date can often expose deception.
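The principle behind reverse image search can also be illustrated in a few lines. This sketch assumes the Pillow and imagehash libraries, and the file names are hypothetical; it compares perceptual hashes, which match near-duplicate images even after resizing or recompression. That is exactly how old photos passed off as breaking news get caught.

```python
# A minimal sketch of the idea behind reverse image search: perceptual
# hashes match near-duplicates despite resizing or recompression.
# Assumes Pillow and imagehash (pip install Pillow imagehash);
# file names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("archive_photo.jpg"))
suspect = imagehash.phash(Image.open("viral_post.jpg"))

# Hamming distance between the two hashes: a small value suggests the
# "new" image is a recycled or lightly edited copy of the old one.
distance = original - suspect
print(f"hash distance: {distance}")
if distance <= 8:  # common rule-of-thumb threshold
    print("Likely the same image -- check the older file's original context.")
```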
Be aware of your own cognitive biases. Confirmation bias makes us more likely to believe — and spread — information that aligns with our beliefs. Recognizing this doesn’t mean abandoning convictions; it means practicing intellectual humility and staying curious rather than reactive.
Finally, understand that algorithms are not neutral. Social media platforms reward emotional, sensational content with higher visibility. The loudest stories are not necessarily the truest. In an environment engineered for emotional speed, conscious slowness is not just self-defense — it’s a form of digital citizenship.
We’re All Targets — But We Can Be Better Defenders
The war against disinformation is not fought in secret backrooms or by anonymous hackers alone. It’s fought in the everyday decisions we make about what to read, what to believe, and what to share.
From printed pamphlets in centuries past to Cold War propaganda, deception has always been part of the human story. But what distinguishes today’s information environment is its unprecedented scale, speed, and the ease with which falsehoods can be weaponized. Lies no longer spread through whispers; they ricochet around the globe in seconds, powered by algorithms tuned to reward outrage over nuance.
As Bakir and McStay argue in The Nature and Circulation of False Information, disinformation is more than just the distortion of facts. It is the corrosion of trust, the blurring of reality, and the gradual dismantling of the shared frameworks on which democracy depends. But understanding these forces is itself a powerful act of resistance.
The question is no longer if disinformation will reach us — it’s when. The choice we face is whether to become passive amplifiers or active defenders of truth. That doesn’t mean becoming cynical or paranoid; it means becoming more curious, more skeptical, and more deliberate.
The truth may not always travel as fast or feel as exhilarating as a viral lie. But in the long run, it is far more durable. In a world that floods us with manipulation, choosing to pause, question, and verify isn’t just a personal habit — it’s a civic responsibility.
And if this article has only scratched the surface, that’s intentional. To fully grasp the forces reshaping our information landscape, I urge you to dive deeper into The Nature and Circulation of False Information. Because the most powerful defense we have is an informed, questioning mind.
Further Reading and Resources
If navigating this complex information landscape feels daunting, you’re not alone — but you’re not powerless either. There are excellent resources designed to help individuals, educators, and professionals sharpen their media literacy skills and understand the mechanics of disinformation.
Start with the foundation of this article: The Nature and Circulation of False Information by Vian Bakir and Andrew McStay. It’s an essential, research-rich exploration of how false information is created, circulated, and consumed — and why it matters for every democracy.
If you’re looking for practical tools, First Draft News offers training, guides, and toolkits for spotting misleading content and understanding disinformation tactics. They focus on empowering both journalists and everyday users to verify before sharing.
The EU DisinfoLab regularly publishes investigations and exposes influence operations targeting European — and increasingly global — audiences. Their reports are invaluable for anyone interested in understanding the geopolitical dimension of modern disinformation.
For those who want to understand the broader cultural and technological dynamics, the Center for Information Technology and Society at UC Santa Barbara provides academic research and analysis on how digital misinformation shapes society.
Verification work doesn’t have to be reserved for professionals. The Poynter Institute’s International Fact-Checking Network offers a global registry of trusted fact-checking organizations, while Poynter’s MediaWise project is especially helpful for younger audiences, offering accessible tools to build critical digital literacy.
Finally, for case studies on recent manipulation efforts:
- To read more about the 2022 Zelensky deepfake incident, see WIRED’s coverage.
- For insight into the viral cheapfake video of Nancy Pelosi, see the BBC's fact-check and analysis, an instructive example of how simple edits can mislead millions.
Learning to spot manipulation isn’t just a media skill — it’s an essential civic practice. And these resources are an excellent starting point for building that defense.