The Myth of "Just Digital"
British tabloids spent decades fueling xenophobic myths, as Greenslade (2005) documented in his analysis of how newspapers fabricated stories about asylum seekers.
That same fear-mongering model has migrated online, supercharged by algorithms that favor rage over truth.
Sindoni (2017) exposed how dehumanizing rhetoric—once confined to the editorial pages of right-wing tabloids—now thrives on social media, amplified by coordinated bot networks.
And DiResta (2021) laid bare how social media platforms profit from disinformation, allowing state-sponsored propaganda to flourish unchecked.
Ignoring this crisis means accepting the slow decay of truth itself.
Treating digital disinformation as "just online noise" has already produced catastrophic consequences, from Brexit to the Capitol riot to vaccine denialism.
The question is no longer whether digital propaganda is dangerous—it’s whether those in power will do anything to stop it.
Disinformation isn't just an unfortunate side effect of the digital era—it’s a weapon, wielded deliberately by politicians, media outlets, and tech platforms.
And make no mistake, the damage is real.
Let's dive into how right-wing politicians, tabloid media, and tech platforms exploit disinformation for power and profit, using findings from:
- Greenslade (2005) on how British tabloids fuel xenophobia.
- Sindoni (2017) on hate speech and dehumanization in media.
- DiResta (2021) on how social media platforms fuel political warfare.
If we continue to shrug off digital disinformation as just online noise, we risk letting democracy die in the algorithmic dark.
The Political Weaponization of Disinformation
Let’s be blunt—right-wing politicians are not just complicit in the spread of digital disinformation; they are its chief architects.
They don’t merely benefit from misinformation; they manufacture and deploy it as a strategic weapon to manipulate public perception, inflame social divisions, and secure their grip on power.
The tactic is simple: flood social media with carefully crafted falsehoods, exploit algorithms that favor outrage, and watch as misinformation metastasizes into political capital.
The playbook is well-documented.
Greenslade (2005) exposed how British tabloids fabricated sensationalist stories to stoke fear about immigration, portraying asylum seekers as criminals, freeloaders, and even cannibals.
One infamous example was the claim that immigrants were “roasting the Queen’s swans”—a complete fabrication that nonetheless fueled public hostility toward migrants.
Today, those same tactics have gone digital.
Disinformation farms and automated bot networks amplify racial fear-mongering at an industrial scale, making old tabloid-style propaganda even more potent.
Disinformation in Action: From Brexit to the Capitol Riot
The same fear-based manipulation that fueled anti-immigrant hysteria in Britain helped drive two of the most destabilizing political events of the last decade: Brexit and the January 6th U.S. Capitol riot.
- Brexit: British voters were bombarded with false claims, from Turkey’s supposedly imminent EU membership to the infamous “£350 million a week to the NHS” lie. The disinformation worked, fueling a 57% spike in reported hate crimes and steering the UK into an irreversible political fracture.
- The Capitol riot: In the U.S., Donald Trump’s “Stop the Steal” campaign weaponized social media, spreading disproven election-fraud claims at a scale never seen before. Studies from Stanford confirmed that a majority of those arrested for storming the Capitol had consumed election-fraud disinformation on social media, proving that digital lies don’t just mislead—they mobilize.
The Role of Coordinated Disinformation Networks
These political operations are not random.
They are highly coordinated efforts involving bot networks, troll farms, and state-sponsored disinformation campaigns.
In her research, DiResta (2021) exposed how foreign and domestic political actors exploit social media to spread propaganda.
Russian disinformation campaigns didn’t end after the 2016 U.S. election—they evolved, adapting to new crises and fanning the flames of division wherever possible.
When politicians can incite mass panic, racial hatred, and even violent insurrections with a few keystrokes—and face no consequences—it’s not just a failure of accountability.
It’s a crisis for democracy itself.
Platforms as Enablers: The Business of Viral Lies
Tech companies love to talk about "connecting the world," "empowering voices," and "bringing people together."
But when it comes to disinformation, the numbers tell a different story.
Misinformation on Twitter is 70% more likely to be retweeted than factual content (MIT, 2018).
Facebook’s own internal research revealed that 64% of people who joined extremist groups on the platform were led there by its recommendation system.
In other words, the very platforms claiming to combat fake news are also its biggest amplifiers.
Why? Because outrage sells.
Social media companies operate on an attention economy, where more engagement means more ad revenue.
And nothing drives engagement quite like rage, fear, and conspiracy theories.
How the Algorithm Rewards Lies
The system is designed to keep people scrolling, clicking, and sharing.
Algorithms prioritize content that generates strong emotional reactions—whether it’s true or not.
Posts that trigger anger, fear, or outrage spread faster, creating an ecosystem where falsehoods outperform facts.
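To make the mechanism concrete, here is a deliberately simplified sketch of engagement ranking in Python. The weights, field names, and outrage score are illustrative assumptions, not any platform’s actual code; the structural point is that truth never enters the objective.

```python
# Toy model of an engagement-ranked feed. All weights and fields are
# illustrative assumptions; no platform publishes its real scoring code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    predicted_outrage: float  # hypothetical 0-1 emotional-reaction score
    is_true: bool             # note: never consulted by the ranker

def engagement_score(post: Post) -> float:
    """Rank purely on predicted engagement; accuracy is not an input."""
    return (post.likes + 3 * post.comments + 5 * post.shares) * (1 + post.predicted_outrage)

feed = [
    Post("Measured, factual report", 120, 10, 15, predicted_outrage=0.1, is_true=True),
    Post("Outrage-bait falsehood", 90, 40, 60, predicted_outrage=0.9, is_true=False),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.text}")
# The falsehood tops the feed: nothing in the objective penalizes it.
```

When the scoring function contains no term for accuracy, accuracy can only win by accident.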
- DiResta (2021) exposed how Facebook knowingly allowed state-sponsored disinformation campaigns to operate unchecked. Russian-backed fake accounts weren’t just ignored; they were amplified because their content was highly engaging.
- Sindoni (2017) demonstrated how tabloid dehumanization tactics, such as calling migrants “cockroaches”, have been absorbed into digital hate speech and spread by social media algorithms.
- A 2021 report by the Center for Countering Digital Hate found that just 12 individuals (the so-called “Disinformation Dozen”) produced up to 65% of the anti-vaccine content shared on Facebook and Twitter. Despite repeated warnings, the platforms were slow to act.
This isn’t a glitch. This is the business model.
Who Profits from Viral Disinformation?
Social media giants have every incentive to let misinformation thrive.
Fake news generates more clicks, and more clicks mean more advertising revenue.
The problem isn’t that tech CEOs don’t know how to stop it—the problem is they don’t want to.
Doing so would mean sacrificing profits.
- Facebook earned $84 billion in ad revenue in 2020, even as its platform was flooded with election misinformation and COVID-19 conspiracies.
- Twitter took in $3.7 billion in revenue in 2020, while studies showed its recommendation algorithm favored low-credibility sources over factual ones (Corsi et al., 2023).
- YouTube’s algorithm has been repeatedly caught recommending extremist content, pushing users deeper into radicalization spirals because longer watch times mean higher ad revenue.
The Cost of Inaction
When platforms allow lies to spread unchecked, the consequences aren’t just digital—they’re deadly.
- Anti-vaccine conspiracy theories led to thousands of preventable deaths during the COVID-19 pandemic.
- Disinformation about election fraud incited violent riots in multiple countries.
- Hate speech has fueled genocide, most notoriously through Facebook’s role in spreading anti-Rohingya propaganda in Myanmar.
Social media platforms are not passive players in this crisis—they are complicit.
They profit from lies, amplify extremism, and refuse to take meaningful action unless forced.
And until that changes, disinformation will remain a feature, not a bug, of the digital ecosystem.
We can't repeat this often enough: This isn’t a glitch. This is the business model.
Why This is Not "Just Digital": Real-World Consequences
Dismissing disinformation as “just online” is a dangerous mistake.
The lies spread in digital spaces do not stay confined to screens—they shape beliefs, influence policies, and have deadly consequences.
The Bottom Line: Digital Lies Have Real Consequences
From a pandemic made worse by misinformation to an insurrection fueled by online conspiracy theories, the pattern is clear:
- Digital disinformation doesn’t just mislead—it mobilizes.
- Online propaganda doesn’t just shape opinions—it shapes history.
- Ignoring the dangers of unchecked falsehoods isn’t an option.
Politicians and tech platforms know this. And yet, they continue to let it happen.
Greenslade (2005) demonstrated how tabloid-driven fear campaigns fueled xenophobia; a decade later, that same climate of hostility culminated in a 57% spike in reported hate crimes after the Brexit referendum.
Today, that same fear-mongering is amplified by social media algorithms at an exponential scale.
From the Capitol riot to the COVID-19 death toll, the evidence is clear: online disinformation kills.
And politicians and platforms are complicit.
Holding Politicians and Platforms Accountable
Enough is enough.
Disinformation is not free speech—it’s fraud. And fraud should have consequences.
For too long, politicians and social media platforms have hidden behind the idea that online falsehoods are an unfortunate but unavoidable aspect of digital life.
But when lies incite violence, undermine democracy, and cost lives, the argument for inaction collapses.
Regulate Social Media Like Traditional Publishers
Tech companies like Facebook, Twitter, and YouTube have long maintained that they are neutral platforms, not publishers.
This distinction allows them to evade responsibility for the content they promote, even when it causes demonstrable harm.
Yet, as DiResta (2021) and others have shown, these platforms do not simply host content—they actively amplify it through algorithms designed to maximize engagement.
It’s time for social media giants to be held to the same standards as traditional media.
- Require transparency about how recommendation algorithms work and impose penalties for knowingly promoting falsehoods.
- Enforce liability laws that treat social media companies as publishers when they repeatedly allow disinformation to spread unchecked.
- Mandate fact-checking interventions for high-reach content to prevent harmful falsehoods from achieving virality.
Prosecute Politicians Who Spread Disinformation
Politicians are not above the law, and yet, time and time again, they escape accountability for deliberately spreading falsehoods that cause harm.
When elected officials knowingly push disinformation that leads to real-world violence or public health crises, they should be treated as perpetrators, not public servants.
- Election disinformation: Trump’s “Stop the Steal” campaign directly led to the January 6th insurrection. Holding leaders accountable for inciting violence must be a legal priority.
- Public health lies: Leaders who promoted anti-vaccine misinformation contributed to preventable deaths. Criminal negligence laws should apply when public figures knowingly spread harmful medical falsehoods.
- Hate speech and xenophobia: As Sindoni (2017) documented, media narratives labeling migrants as “cockroaches” directly correlate with increases in hate crimes. Politicians who weaponize such rhetoric must face consequences.
Treat Bot Farms and Troll Networks as Cybercriminal Organizations
Coordinated disinformation campaigns—often run by state actors, political operatives, and paid influencers—are a growing threat to global stability.
Bot farms don’t just spread fake news; they manufacture public sentiment, creating the illusion of consensus where none exists.
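A minimal simulation shows how few accounts it takes. This is a sketch with invented numbers; real operations are larger and far better disguised.

```python
# Toy simulation of manufactured consensus: a small coordinated network
# out-shouts a much larger organic crowd. All numbers are invented.
import random
from collections import Counter

random.seed(42)

ORGANIC_USERS = 10_000   # real people, one post each, spread across many topics
BOT_ACCOUNTS = 200       # coordinated accounts pushing a single narrative
POSTS_PER_BOT = 50       # each bot repeats the narrative relentlessly

topics = [f"topic_{i}" for i in range(100)]
volume = Counter()

for _ in range(ORGANIC_USERS):
    volume[random.choice(topics)] += 1            # organic chatter spreads thinly

volume["the_narrative"] += BOT_ACCOUNTS * POSTS_PER_BOT   # bots speak in unison

top, count = volume.most_common(1)[0]
print(f"'Trending': {top} ({count} posts), organic voices behind it: 0")
# 200 coordinated accounts, 2% the size of the organic crowd, dominate the
# volume ranking and create the illusion of majority sentiment.
```

The asymmetry is the point: volume is cheap to fake, and systems that rank by volume mistake coordination for consensus.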
- Pass international cybercrime laws to classify organized disinformation campaigns as criminal activity.
- Impose financial and legal penalties on companies and individuals who operate troll networks.
- Increase cooperation between intelligence agencies to track and dismantle state-sponsored disinformation efforts.
The Need for Immediate Action
The spread of disinformation is not an unsolvable problem—but it is a problem that requires immediate action.
The path forward is clear:
- Regulate social media platforms the way we regulate traditional publishers. If they profit from falsehoods, they should be held accountable for them.
- Prosecute politicians who knowingly spread falsehoods that endanger public safety. If inciting riots and obstructing democracy aren’t criminal acts, what is?
- Treat bot farms and troll networks as cybercriminal operations. These are not random internet users; they are coordinated disinformation actors engaging in criminal deception.
If action isn’t taken now, democracy, public trust, and even lives will continue to be collateral damage in the digital war on truth. And if platforms and politicians refuse to act, then they must be forced to act.
The Urgency of Action
Digital disinformation is not an accident—it’s an engineered system of manipulation.
Politicians weaponize it. Platforms profit from it. And we are the collateral damage.
The consequences are already here.
False election claims have incited riots. Vaccine misinformation has cost lives. Hate speech has fueled real-world violence.
These are not isolated incidents; they are symptoms of a digital ecosystem built to reward division and outrage over truth and accountability.
The Choice We Face
The era of digital deception will not end on its own.
There are only two options: intervention or escalation.
If no action is taken, disinformation will continue to shape our political landscape, erode trust in institutions, and place vulnerable communities at greater risk.
We need a fundamental shift in how society views digital lies:
- Stop treating disinformation as a side effect of free speech. Fraudulent claims that endanger public safety should be met with real consequences.
- Recognize the power of social media platforms as modern propaganda machines. They must be held to higher standards, just as traditional media outlets are.
- Understand that digital warfare is already happening. State-backed troll farms and disinformation campaigns are influencing elections and public discourse worldwide.
A Call to Action
Policymakers, tech companies, and the public must act now:
- Governments must regulate social media platforms and hold politicians accountable for weaponizing falsehoods.
- Tech companies must prioritize accuracy over profit-driven engagement metrics. If they won’t, they must be forced to do so.
- Individuals must develop media literacy and actively challenge misinformation in their own networks.
This is not just about stopping fake news. It’s about protecting democracy, truth, and the very fabric of informed society.
The real question isn’t whether digital disinformation is a problem—it’s whether we are willing to fight back before it’s too late.
It’s time to stop pretending that digital lies don’t have real-world consequences.
The Capitol riot happened because of online disinformation. Brexit was fueled by propaganda. Vaccine conspiracies have cost lives.
We have two choices: regulate, prosecute, and dismantle these systems—or watch democracy be eroded by lies, one viral hoax at a time.
Because if we don’t stop this now, we may not get another chance.
Sources and Further Reading
- Greenslade, R. (2005). Seeking Scapegoats: The Coverage of Asylum in the UK Press.
- Sindoni, M. G. (2017). Migrants are Cockroaches: Hate Speech in British Tabloids.
- DiResta, R. (2021). Social Media and Political Warfare.
- Vosoughi, S., Roy, D., & Aral, S. (2018). The Spread of True and False News Online. Science, 359(6380), 1146–1151. (The MIT study.)
- Corsi et al. (2023). Evaluating Twitter’s Algorithmic Amplification of Low-Credibility Content.
- World Health Organization (2020). Managing the COVID-19 Infodemic.
Take a Stand Against Disinformation
Be a legend: support our efforts to keep our website running and debunk disinformation every day.
We need your help, and not just for a few bucks; every donation brings the moral support and validation that keeps this work going.
Make a commitment to fighting the propaganda machine. By becoming a member, you’re not just contributing financially—you’re joining a grassroots effort to push back against the corrosive lies infecting our world.
If nothing else, use our content. Share it. Teach others.
Help shine a light on the growing threat of disinformation and stand against those who weaponize falsehoods.
Words have power. Let’s use them to fight back.