A recent study, "An Explainable XGBoost-based Approach on Assessing Detection of Deception and Disinformation" by Alex V. Mbaziira and Maha F. Sabir, explores how artificial intelligence (AI) can analyze patterns in language to detect deception.


Why Lies Are Winning on the Internet


Their research shows that lies follow a predictable structure, making them detectable with the right tools.

However, until major tech platforms implement these methods, disinformation will continue to thrive.


“Our model demonstrates that deceptive writing often lacks personal involvement and relies heavily on manipulative language rather than factual content.”


Understanding how lies are crafted—and how AI can catch them—can help us become more informed consumers of information.


How Lies Are Designed to Trick You


Disinformation doesn’t spread by accident—it’s engineered to bypass critical thinking and appeal to emotions.

The study found that deceptive content often follows a specific formula (a rough code sketch of these cues follows the list):

Shorter sentences – Easier to absorb, making content feel more "real."
Avoiding first-person pronouns – Scammers and propagandists distance themselves from their own deception.
Vague, emotional language – Designed to make people react rather than think critically.
Exaggerated claims – Makes content seem urgent or exclusive, leading to impulsive sharing.
Increased use of modal verbs (could, might, should) – Suggests possibility rather than concrete evidence.
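To make these cues concrete, here is a minimal sketch of how such linguistic features might be computed from raw text. The paper does not publish its feature-extraction code, so the word lists and feature names below are illustrative assumptions, not the authors' implementation.

```python
import re

# Illustrative word lists; assumptions, not taken from the paper.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
MODAL_VERBS = {"could", "might", "should", "would", "may", "must"}

def linguistic_features(text: str) -> dict:
    """Compute simple deception-related cues from a piece of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    n_words = max(len(words), 1)  # guard against empty input
    return {
        # Shorter sentences: average words per sentence
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Deceptive text tends to avoid first-person pronouns
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n_words,
        # Modal verbs suggest possibility rather than evidence
        "modal_rate": sum(w in MODAL_VERBS for w in words) / n_words,
        # Exclamation marks as a crude proxy for emotional urgency
        "exclamations": text.count("!"),
    }

print(linguistic_features(
    "Act now! This one trick could save your life! Experts might be wrong."
))
```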


Example – Spot the Fake Headline

Consider these two headlines:

  1. "Scientists Confirm Earth’s Temperature Rising Faster Than Expected"
  2. "Alarming New Report Reveals the Truth About Climate Change Big Tech Doesn’t Want You to See!"

The second headline uses exaggeration, emotional appeal, and vague language to sound more compelling—even if it’s misleading.

This is how disinformation spreads.


“Deceptive narratives often rely on fear and urgency, compelling readers to act without verifying facts.”


The Science of Spotting Lies – How AI Detects Fake Content

The researchers used machine learning algorithms to analyze deception in:

Fake news from troll farms
Scam emails from cybercriminals
Fraudulent product reviews
Political propaganda

Using XGBoost, a gradient-boosted decision-tree algorithm, they trained a model to detect hidden patterns in deceptive writing.

The AI then identified deception with up to 85% accuracy, outperforming human detection.
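To give a sense of what that training step looks like in practice, here is a minimal sketch of a binary XGBoost classifier over feature vectors like the ones above. The data, labels, and hyperparameters are synthetic placeholders; the paper's actual dataset and settings are not reproduced here.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: each row is a feature vector (avg sentence length,
# first-person rate, modal rate, exclamations); 1 = deceptive, 0 = truthful.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 2] + 0.5 * X[:, 3] > 0.8).astype(int)  # synthetic label rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```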


What AI Found:

✔ Fraud emails used structured, professional writing to seem legitimate.
✔ Scam messages relied on urgency and fear (e.g., "Act Now! Limited Time Offer!").
✔ Fake product reviews often had unnatural repetition (e.g., "Best ever! Great quality! Best ever!"); a short detection sketch follows this list.
✔ Disinformation articles were filled with unsupported claims and emotional triggers.
✔ Deceptive text contained fewer complex words and more passive voice, making it harder to trace responsibility.
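One of these cues, unnatural repetition, is simple to check mechanically. The toy snippet below (my illustration, not the study's method) flags reviews that reuse the same short phrases:

```python
from collections import Counter

def repeated_phrases(text: str, n: int = 2, min_count: int = 2) -> dict:
    """Count word n-grams that appear at least min_count times."""
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

print(repeated_phrases("Best ever! Great quality! Best ever!"))
# -> {'best ever!': 2}
```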


“Our results indicate that machine learning can significantly improve the detection of deceptive content by recognizing linguistic anomalies.”


The conclusion?

Deception has patterns, and AI can learn them.


What This Means for You – How Disinformation Affects Your Life

Disinformation isn’t just about politics—it can affect your wallet, your health, and your choices.

Financial Scams: AI found patterns in fraud emails—can you spot them in your inbox?
Fake Reviews: Ever bought a bad product because of overly positive reviews? AI can detect these deceptive tactics.
Political Manipulation: Lies are designed to shape your worldview and voting choices.


Quick Test – Which One is Fake?

  1. "New study shows coffee may reduce risk of heart disease."
  2. "Harvard Scientists SHOCKED! This One Coffee Trick Could Save Your Life!"

The second headline is crafted to manipulate emotions and engagement—just like political propaganda or scam ads.


“With deepfake technology and AI-generated disinformation on the rise, individuals must cultivate digital literacy to protect themselves.”


The Future – AI Can Help, But We Need to Demand Better Tech

Despite AI’s ability to detect lies, social media platforms aren’t using these tools effectively.

Many tech companies profit from engagement—whether the content is true or not.

AI fact-checking should be mandatory on major platforms.
Users should demand transparency on how content is moderated.
Governments must push for regulations that force platforms to limit disinformation.


“AI-based detection systems must be implemented with ethical oversight to prevent misuse while improving public trust in online content.”


If we don’t push for better tools and policies, disinformation will continue to dominate online spaces.

By understanding how AI detects lies, we can make better choices about what we read, share, and believe.

The question is: Will we demand a better product, or will we keep consuming lies?


Sources and Further Reading




Study overview


Summary of "An Explainable XGBoost-based Approach on Assessing Detection of Deception and Disinformation"

1. Introduction

The study by Alex V. Mbaziira and Maha F. Sabir explores how artificial intelligence (AI) can detect deception and disinformation in digital content. It focuses on linguistic patterns that distinguish deceptive from truthful content and proposes an explainable AI model to improve detection.

2. Methodology

The researchers used machine learning algorithms, specifically XGBoost, to analyze deception in various sources, including:

  • Fake news articles
  • Fraudulent product reviews
  • Scam emails
  • Political disinformation

They extracted linguistic features such as sentence structure, pronoun use, emotional language, and modal verbs to train the AI model.

3. Key Findings

  • The AI detected deception with up to 85% accuracy, outperforming human detection.
  • Deceptive texts tend to have shorter sentences, vague language, and exaggerated claims.
  • Fraud emails use structured, polite language but lack specific details.
  • Fake product reviews often repeat keywords unnaturally.
  • Disinformation articles rely on emotionally charged language and unsupported claims.

4. Explainability of AI

To improve trust in AI decision-making, the researchers used Shapley Additive Explanations (SHAP) to show which words and phrases influenced the AI’s conclusions. This transparency allows users to understand why a piece of content is flagged as deceptive.
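As a concrete illustration: with a trained XGBoost model, a typical SHAP workflow looks like the sketch below. It assumes the model and test set from the earlier training sketch; the paper's exact SHAP configuration is not shown here.

```python
import shap  # pip install shap

# `model` and `X_test` are assumed to come from the earlier XGBoost sketch.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

feature_names = ["avg_sentence_len", "first_person_rate",
                 "modal_rate", "exclamations"]

# Global view: which features push predictions toward "deceptive"
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```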

5. Applications

The study suggests that AI can be used for:

  • Fact-checking platforms to detect misinformation
  • Social media monitoring to limit disinformation spread
  • Email security to block phishing scams
  • Consumer protection to filter out fake product reviews

6. Conclusion

AI can detect deception effectively, but widespread implementation requires:

  • Increased public awareness of disinformation
  • Stronger regulatory oversight on tech platforms
  • Greater transparency in AI moderation systems

The study emphasizes the need for ethical oversight to prevent AI misuse while improving public trust in online content.