AI Hallucinations Are Getting Worse as Models Scale, and the Industry Has No Real Fix

by Kingsley Okeke
March 13, 2026
in Artificial Intelligence

The artificial intelligence industry has spent years promising that AI hallucinations (the confident fabrication of false information) would diminish as models grew more powerful. The data increasingly tells a different story.

OpenAI’s own internal testing revealed that its o3 and o4-mini reasoning models hallucinate at significantly higher rates than their predecessors. On the PersonQA benchmark, o3 hallucinated 33% of the time, more than double the 16% rate recorded by o1. The smaller o4-mini performed even worse, at 48%. OpenAI’s technical report conceded that “more research is needed” to understand why.

When More Thinking Produces More Errors

The pattern is counterintuitive but now well-documented. Models built for deeper, chain-of-thought reasoning tend to perform worse on factual accuracy benchmarks than simpler predecessors. The leading hypothesis is structural: reasoning models invest computational effort into working through answers, which can lead them to fill knowledge gaps with plausible-sounding guesses rather than acknowledging uncertainty. Independent research by Transluce, a nonprofit AI lab, found that o3 also fabricates actions it claims to have taken, including, in one documented case, running code on a physical laptop outside of ChatGPT.

An MIT study from early 2025 added a disturbing dimension. When AI models hallucinate, they tend to use more confident language than when they are factually correct. Some models were 34% more likely to use phrases like “definitely” and “certainly” when generating incorrect information. The more wrong the model is, the more certain it sounds.

The Benchmark Problem

Part of the confusion around hallucination trends is methodological. On tightly controlled tasks (such as summarising a provided document), some models have shown genuine improvement, with a handful now sitting below the 1% threshold on summarisation-specific benchmarks. But those tests measure a narrow slice of how AI is actually used.

The stakes are highest in specialist domains. In clinical scenarios, open-source models hallucinated at rates ranging from 64% to over 80%, even when mitigation prompts were applied. Legal queries fared no better: Stanford research found that models hallucinate between 69% and 88% of the time on specific legal questions.

Why No One Is Really Fixing It

The core problem is architectural. Large language models are prediction engines, not knowledge retrieval systems. They generate the statistically most likely next word based on training patterns, with no internal mechanism to distinguish known facts from plausible fictions. A 2025 paper from OpenAI and MIT researchers demonstrated mathematically why this tendency persists through training: the way models are currently evaluated rewards confident guessing over calibrated uncertainty, so models learn to bluff.
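The mechanism is easier to see in miniature. The toy predictor below (a deliberately crude sketch, not how production models are built) just counts which word followed each context in its “training data” and emits the most frequent continuation. A falsehood seen in training is reproduced with exactly the same confidence as a fact, because frequency is all the model has:

```python
# Toy next-word predictor: picks the statistically most likely continuation
# from counted training examples, with no notion of whether it is true.
from collections import Counter, defaultdict

corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of australia is sydney",   # a false "fact" seen in training
]

# Count which word follows each three-word context.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 3):
        context = tuple(words[i:i + 3])
        follows[context][words[i + 3]] += 1

def predict(context):
    """Return the most frequent next word for a context."""
    counts = follows.get(tuple(context.split()))
    if counts:
        return counts.most_common(1)[0][0]
    return "unknown"

print(predict("capital of france"))   # continues the pattern: "is"
print(predict("of australia is"))     # confidently wrong: "sydney"
```

Nothing in the counting step distinguishes the true sentence from the false one; scale changes the sophistication of the statistics, not their nature.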

As OpenAI acknowledged in a September 2025 paper, standard benchmarks penalise uncertainty and reward accuracy scores, meaning a model that guesses will outperform one that says “I don’t know,” even if the guessing model produces far more incorrect answers.
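The incentive OpenAI describes reduces to simple arithmetic. In this hypothetical 100-question benchmark (illustrative numbers, not from the paper), both models genuinely know 60 answers; one guesses on the rest and gets lucky a quarter of the time, the other abstains:

```python
# Under accuracy-only scoring, a model that always guesses beats one that
# abstains when unsure, even though the guesser hallucinates far more.

def accuracy_score(correct, wrong, abstained):
    total = correct + wrong + abstained
    return correct / total  # wrong answers and abstentions both score zero

# 100 questions; each model truly knows 60 of the answers.
guesser   = accuracy_score(correct=60 + 10, wrong=30, abstained=0)  # lucky on 10 of 40 guesses
abstainer = accuracy_score(correct=60, wrong=0, abstained=40)       # says "I don't know" on 40

print(f"guesser:   {guesser:.2f} with 30 confident wrong answers")
print(f"abstainer: {abstainer:.2f} with 0 wrong answers")
```

The guesser scores 0.70 to the abstainer’s 0.60, so the leaderboard rewards the model that produced thirty hallucinations over the one that produced none.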

Retrieval-Augmented Generation, which anchors model responses to external source documents, reduces hallucination rates by up to 42% when properly implemented, but it only works when there is a document to anchor to. For the open-ended questions that represent much of real-world AI usage, no reliable solution currently exists.
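The “anchoring” step is conceptually simple. This sketch shows only the retrieval-and-prompt stage of a RAG pipeline, using word overlap as a stand-in for the vector-embedding search real systems use; the document store and query are invented for illustration:

```python
# Minimal retrieval step of a RAG pipeline: rank stored documents by word
# overlap with the query, then prepend the best match to the prompt so the
# model answers from the source text rather than from memorised patterns.

documents = [
    "The 2Africa cable is a subsea internet cable backed by Meta.",
    "Retrieval-Augmented Generation grounds answers in retrieved text.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    return (f"Answer using ONLY the context below. "
            f"If the answer is not in it, say 'I don't know.'\n"
            f"Context: {context}\nQuestion: {query}")

print(build_prompt("What backs the 2Africa subsea cable?", documents))
```

The weakness the article notes is visible here: if no stored document matches the question, there is nothing to anchor to, and the model falls back on exactly the pattern-completion behaviour that causes hallucination in the first place.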

For users relying on AI tools in healthcare, law, journalism, or financial analysis, that gap remains very much open.

© 2025 — Techsoma Africa. All Rights Reserved