People are instinctively drawn to emotionally provocative content.
At the same time, our brains are wired to react before we reflect, making us more likely to believe and share misinformation when it triggers a strong response.
Recent research provides clear evidence of how these two forces collide to create the perfect storm for fake news.
A 2023 study by Corsi et al. found that Twitter’s algorithm systematically amplifies low-credibility content, especially when it's divisive.
Another study, by Martel, Pennycook, & Rand (2020), demonstrated that emotion-driven thinking makes people more susceptible to believing fake news.
Together, these findings reveal why fake news isn’t just a problem of falsehoods—it’s a deeply ingrained feature of the digital world we live in.
The Role of Algorithms: How Twitter Pushes Misinformation
Imagine a news ecosystem where the most outrageous, emotionally charged stories rise to the top.
That’s not an accident—that’s an algorithm at work.
According to Corsi et al. (2023), Twitter’s recommendation system amplifies content from low-credibility sources roughly 20% more than content from factual sources.
The study found that:
- Misinformation from low-credibility domains received significantly higher visibility.
- Tweets containing high toxicity or coming from verified accounts were prioritized by the algorithm.
- Right-leaning misinformation was 1.5 times more likely to be amplified than left-leaning misinformation.
This happens because engagement is king.
Twitter—and other platforms—are built to keep users scrolling, clicking, and reacting.
Posts that spark outrage, fear, or strong opinions drive more interaction, and algorithms reward that engagement without considering accuracy.
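To make that incentive concrete, here is a deliberately simplified ranking function in Python. It is a toy sketch, not any platform’s actual code, and the weights and field names are invented. The structural point is that nothing in the scoring objective ever asks whether a post is true:

```python
# Toy engagement-only ranking. All signals reward reaction; none measure accuracy.

def engagement_score(post):
    """Toy ranking score: reactions in, visibility out, truth ignored."""
    return (
        1.0 * post["likes"]
        + 3.0 * post["replies"]    # arguments and pile-ons count extra
        + 5.0 * post["reshares"]   # virality is weighted most heavily
    )

posts = [
    {"text": "Measured, factual report", "likes": 120, "replies": 4, "reshares": 10},
    {"text": "Outrage-bait falsehood", "likes": 90, "replies": 60, "reshares": 80},
]

# The falsehood tops the feed despite fewer likes, because it provokes more reaction.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['text']}")
```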
Another problem? Verification systems are easily exploited.
Twitter’s old blue-check system was meant to legitimize credible voices, but bad actors used it to spread falsehoods with a veneer of credibility.
When verified accounts shared misinformation, the algorithm amplified it even more.
In short: Fake news doesn’t just spread on its own—it gets a powerful boost from the very platforms designed to keep us informed.
The Role of Emotion: Why People Fall for Fake News
But technology alone isn’t enough.
The other half of the equation is us—our emotions, instincts, and how we process information.
The 2020 study by Martel, Pennycook, & Rand found that people who rely on emotion-driven thinking are 35% more likely to believe fake news.
Why? Because misinformation is designed to trigger emotion before logic.
When participants in the study were encouraged to think analytically, their likelihood of believing fake news dropped by 25%.
But when they relied on gut reactions, they were far more susceptible.
Misinformation thrives on fear, outrage, and excitement because emotional content spreads faster and wider.
Other research backs this up—an MIT study from 2018 found that false news spreads six times faster than the truth.
The takeaway? People share before they think.
And when an algorithm prioritizes engagement over accuracy, the effect compounds: the same MIT study found that false news was roughly 70% more likely to be retweeted than the truth.
The Consequences: When Tech and Psychology Work Together
Fake news isn’t just misleading; it’s destructive.
When platform design and human nature reinforce each other, the effects are not just digital: they spill into the real world, shaping politics, public health, and trust in society.
Political Instability and Social Unrest
False claims of election fraud, amplified by social media algorithms and emotionally charged narratives, have led to real-world violence.
A striking example is the U.S. Capitol riot on January 6, 2021, where misinformation about a “stolen election” fueled an attempted insurrection.
Platforms like Twitter and Facebook allowed these narratives to spread unchecked, reinforcing beliefs within echo chambers and escalating tensions.
This pattern isn’t unique to the U.S.
In countries like Brazil, Myanmar, and India, politically charged misinformation has incited riots, fueled ethnic violence, and undermined democratic processes.
When algorithms reward outrage, misinformation doesn’t just spread—it incites action.
Public Health Risks and Medical Misinformation
During the COVID-19 pandemic, misinformation about vaccines, treatments, and the virus itself spread faster than the science could counteract it.
A 2021 study found that exposure to vaccine misinformation on social media reduced vaccine willingness by 20-30% in some populations.
False claims about miracle cures, microchips in vaccines, and exaggerated risks weren’t just wrong—they had deadly consequences.
People delayed getting vaccinated, refused life-saving treatments, and ignored health warnings, leading to higher infection rates and preventable deaths.
In some countries, government responses were hampered by misinformation, with leaders themselves repeating viral falsehoods.
The Erosion of Trust in Journalism and Institutions
Perhaps the most insidious effect of misinformation is how it corrodes trust—in journalism, in science, and in democratic institutions.
When people are constantly exposed to conflicting, misleading, or outright false information, they begin to question everything, including legitimate sources of information.
This phenomenon, known as the “truth decay” effect, creates a society where no one knows what to believe.
The result? People either retreat into their ideological bubbles or disengage entirely, leaving a vacuum that bad actors can exploit with even more disinformation.
The Bigger Picture
We are not just dealing with individual cases of fake news—we are witnessing a systematic weakening of truth itself.
When falsehoods spread unchecked, they don’t just mislead individuals—they reshape society, influencing laws, public behavior, and global stability.
If we fail to address the consequences of misinformation, we risk a world where reality itself becomes optional, where belief is dictated by virality rather than fact.
What Can Be Done?
Misinformation thrives because of two key issues: social media platforms reward engagement over truth, and human psychology makes us vulnerable to emotional manipulation.
Tackling fake news means addressing both the technology and human behavior that fuel it.
So, what can actually be done?
Reform Social Media Algorithms
Right now, the systems designed to inform us are actually designed to enrage us.
Social media platforms have built algorithms that prioritize controversy, not credibility.
If platforms continue to amplify high-engagement, low-accuracy content, misinformation will remain a systemic problem.
What needs to change?
- Prioritizing accuracy over engagement: platforms should boost factual content while deprioritizing falsehoods, especially from repeat offenders (a toy sketch of this idea follows the list).
- Labeling misleading information: Facebook and Twitter have experimented with fact-checking labels, but they need to be more prominent and more aggressive in stopping misinformation from spreading.
- Transparency in recommendations: users should know why they are being shown certain content. Are they seeing it because of their interests, or because the algorithm knows it will spark outrage?
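As a rough illustration of the first point, a feed could scale raw engagement by a per-source credibility weight, so that repeat offenders lose most of their algorithmic reach. This is a sketch under invented assumptions, not a description of any platform’s real system; the domains and weights are hypothetical, and a production version would draw on fact-checker ratings or domain-quality datasets:

```python
# Minimal sketch of credibility reweighting; domains and weights are invented.

CREDIBILITY = {
    "reliable-news.example": 1.0,   # established, fact-checked outlet
    "known-misinfo.example": 0.2,   # repeat offender, heavily deprioritized
}

def reweighted_score(post):
    """Engagement still counts, but low-credibility sources keep little of it."""
    engagement = post["likes"] + 3 * post["replies"] + 5 * post["reshares"]
    return engagement * CREDIBILITY.get(post["domain"], 0.5)  # unknown domains: neutral weight

# The outrage-bait post from the earlier sketch drops from 670 to 134.0.
post = {"domain": "known-misinfo.example", "likes": 90, "replies": 60, "reshares": 80}
print(reweighted_score(post))
```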
Some governments have begun regulating tech companies to demand algorithmic transparency.
For instance, the European Union’s Digital Services Act (2022) requires platforms like Twitter and Facebook to explain how their recommendation systems work and take action against harmful content.
If these regulations become the norm, social media companies may be forced to rethink their approach.
Improve Media Literacy & Critical Thinking
Fixing the tech problem is one thing. Fixing how people process information is another.
Studies show that people who rely on emotional thinking are 35% more likely to believe fake news.
The solution? Teaching people how to think critically about what they consume.
What can be done?
- Teach digital literacy in schools. Finland, for example, has one of the best responses to fake news: it trains students to spot disinformation from an early age. That model has been so effective that Finland ranks #1 in Europe for resilience against fake news.
- Encourage “accuracy priming.” A 2019 study found that reminding people to think about accuracy before consuming news makes them significantly less likely to share misinformation. Simply asking “Is this actually true?” before sharing can disrupt the cycle of falsehoods.
- Slow down social media consumption. Misinformation spreads because people share before they think. One possible solution, sketched after this list, is a built-in delay before users can repost a link, prompting them to read it first. Twitter briefly tested a similar feature in 2020, and it led to a 33% increase in people actually reading articles before sharing.
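A minimal sketch of that repost delay might look like the following. The function and field names are hypothetical, not any real platform’s API; the point is the shape of the friction, requiring a plausible read and then an accuracy nudge before a share goes through:

```python
# Toy read-before-reposting gate; names are hypothetical, not a real API.

import time

MIN_READ_SECONDS = 15  # assumed dwell time; a real threshold would be tuned empirically

def repost_prompt(link_opened_at):
    """Return a prompt to show before reposting, or None to allow the repost."""
    if link_opened_at is None:
        return "Want to read the article before sharing it?"
    if time.time() - link_opened_at < MIN_READ_SECONDS:
        return "You opened this just now. Is it actually true?"
    return None  # user plausibly read the article: no prompt needed

# A user who never opened the link gets the read-first nudge.
print(repost_prompt(link_opened_at=None))
```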
Strengthen Misinformation Detection
As fake news tactics evolve, AI must evolve too.
Detection systems need to be faster, smarter, and harder to evade.
What’s being developed?
- AI-driven fact-checking: researchers are developing machine learning models that can detect patterns in misinformation and flag false claims in real time (a minimal sketch follows this list).
- Crowdsourced fact-checking: platforms like Wikipedia and initiatives like Community Notes on Twitter have shown that community-driven corrections can be effective. These programs should be expanded and better integrated into social media feeds.
- Legal consequences for disinformation campaigns: some governments are moving toward criminalizing organized fake news operations, especially those targeting elections or public health.
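To show the basic pattern behind AI-driven flagging, here is a minimal text classifier built with scikit-learn. The labeled claims are a tiny invented dataset, so the fitted model itself is meaningless; real systems train on large fact-checked corpora with far stronger models. The sketch only illustrates the flag-by-probability workflow:

```python
# Minimal misinformation-flagging sketch: TF-IDF features + logistic regression.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled claims: 1 = misinformation, 0 = credible.
claims = [
    "miracle cure doctors don't want you to know about",
    "shocking secret they are hiding from you",
    "vaccines contain microchips that track you",
    "health agency updates vaccination guidance after trial results",
    "study published in peer-reviewed journal finds modest effect",
    "officials report quarterly infection statistics",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels)

# Flag new claims whose predicted misinformation probability is high.
for text in ["they don't want you to know this shocking cure",
             "peer-reviewed study reports new findings"]:
    prob = model.predict_proba([text])[0][1]
    print(f"{prob:.2f}  {text}")
```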
A Multi-Layered Approach
No single solution will stop misinformation.
Algorithms must be reformed, people must be trained to think critically, and AI must detect disinformation faster than it spreads.
This isn’t just about fixing fake news—it’s about making society more resilient against manipulation.
If we fail to act, misinformation will keep evolving—faster than our ability to counter it.
The time to act isn’t tomorrow—it’s now.
Breaking the Cycle
Fake news isn’t just a byproduct of the digital age—it’s a feature of the system.
Platforms are designed to amplify what grabs attention, not necessarily what’s true.
At the same time, human psychology is wired to respond emotionally before thinking critically.
These two forces work together, creating a perfect storm where falsehoods spread faster than facts.
The effects are clear: political destabilization, public health crises, and the erosion of trust in institutions.
The question is not whether misinformation will continue to exist—it always will—but whether we can build resilience against it before it reshapes reality entirely.
So what can be done? The answer isn’t simple, but it is possible.
We need a multi-layered approach:
- Social media platforms must rethink their algorithms, prioritizing accuracy over outrage-fueled engagement.
- Education systems must incorporate critical thinking and media literacy, giving people the tools to recognize misinformation before it spreads.
- AI and detection tools must be improved to identify and mitigate fake news at scale.
But beyond structural fixes, there’s something even more important: a cultural shift.
Society needs to place a higher value on truth over virality, nuance over clickbait, and accountability over engagement metrics.
In the end, fighting misinformation isn’t just about technology or policy—it’s about shaping the way we interact with information itself.
If we fail to act, falsehoods will keep evolving, adapting, and spreading faster than our ability to counter them.
The real battle isn’t just stopping fake news—it’s rebuilding trust in truth itself.
The real question is: Can we act fast enough to prevent falsehoods from becoming the new reality?
Sources and Further Reading
- Corsi et al. (2023). “Evaluating Twitter’s Algorithmic Amplification of Low-Credibility Content: An Observational Study.”
- Martel, Pennycook, & Rand (2020). “Reliance on Emotion Promotes Belief in Fake News.”
- Vosoughi, Roy, & Aral (2018). “The Spread of True and False News Online.” Science. (The MIT study cited above.)
- Pennycook & Rand (2019). “Fighting Misinformation on Social Media Using ‘Accuracy Primes.’”
- European Digital Media Observatory (EDMO). “How Disinformation Spreads in Social Networks.”