AI Hallucinations Are Getting Worse as Models Scale, and the Industry Has No Real Fix

by Kingsley Okeke
March 13, 2026
in Artificial Intelligence

The artificial intelligence industry has spent years promising that AI hallucinations (the confident fabrication of false information) would diminish as models grew more powerful. The data increasingly tells a different story.

OpenAI’s own internal testing revealed that its o3 and o4-mini reasoning models hallucinate at significantly higher rates than their predecessors. On the PersonQA benchmark, o3 hallucinated 33% of the time, more than double the 16% rate recorded by o1. The smaller o4-mini performed even worse, at 48%. OpenAI’s technical report admitted that “more research is needed” to understand why.

When More Thinking Produces More Errors

The pattern is counterintuitive but now well-documented. Models built for deeper, chain-of-thought reasoning tend to perform worse on factual accuracy benchmarks than simpler predecessors. The leading hypothesis is structural: reasoning models invest computational effort into working through answers, which can lead them to fill knowledge gaps with plausible-sounding guesses rather than acknowledging uncertainty. Independent research by Transluce, a nonprofit AI lab, found that o3 also fabricates actions it claims to have taken, including, in one documented case, running code on a physical laptop outside of ChatGPT.

An MIT study from early 2025 added a disturbing dimension. When AI models hallucinate, they tend to use more confident language than when they are factually correct. Some models were 34% more likely to use phrases like “definitely” and “certainly” when generating incorrect information. The more wrong the model is, the more certain it sounds.

The Benchmark Problem

Part of the confusion around hallucination trends is methodological. On tightly controlled tasks (such as summarising a provided document), some models have shown genuine improvement, with a handful now sitting below the 1% threshold on summarisation-specific benchmarks. But those tests measure a narrow slice of how AI is actually used.

In medical settings, hallucination rates in clinical scenarios ranged from 64% to over 80% for open-source models, even when mitigation prompts were applied. Legal queries fared no better: Stanford research found that models hallucinate between 69% and 88% of the time on specific legal questions.

Why No One Is Really Fixing It

The core problem is architectural. Large language models are prediction engines, not knowledge retrieval systems. They generate the statistically most likely next word based on training patterns, with no internal mechanism to distinguish known facts from plausible fictions. A 2025 paper from OpenAI and MIT researchers demonstrated mathematically why this tendency persists through training: the way models are currently evaluated rewards confident guessing over calibrated uncertainty, so models learn to bluff.

As OpenAI acknowledged in a September 2025 paper, standard benchmarks penalise uncertainty: because they score accuracy alone, a model that guesses will outperform one that says “I don’t know,” even if the guessing model produces far more incorrect answers.
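The incentive is easy to see with a toy calculation. The numbers and function names below are illustrative, not drawn from OpenAI’s paper: assume both models genuinely know 70% of the answers, and a blind guess on the rest lands 25% of the time.

```python
# Toy illustration: why accuracy-only scoring favours confident
# guessing over abstention. All figures here are hypothetical.

def expected_accuracy(known: float, guess_hit_rate: float, abstain: bool) -> float:
    """Expected benchmark score for a model that knows a fraction
    `known` of the answers. On unknowns it either abstains (scoring
    zero) or guesses, being right with probability `guess_hit_rate`."""
    unknown = 1.0 - known
    return known + (0.0 if abstain else unknown * guess_hit_rate)

def wrong_answer_rate(known: float, guess_hit_rate: float, abstain: bool) -> float:
    """Fraction of questions answered incorrectly (i.e. hallucinated)."""
    unknown = 1.0 - known
    return 0.0 if abstain else unknown * (1.0 - guess_hit_rate)

# Guessing model: 0.7 + 0.3 * 0.25 = 0.775 accuracy, 0.225 wrong answers.
# Honest model:   0.700 accuracy, zero wrong answers.
guesser_score = expected_accuracy(0.7, 0.25, abstain=False)  # 0.775
honest_score = expected_accuracy(0.7, 0.25, abstain=True)    # 0.700
```

Under accuracy-only scoring, the guesser ranks higher despite hallucinating on more than a fifth of the questions, which is exactly the bluffing incentive the paper describes.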

Retrieval-Augmented Generation, which anchors model responses to external source documents, reduces hallucination rates by up to 42% when properly implemented, but it only works when there is a document to anchor to. For the open-ended questions that represent much of real-world AI usage, no reliable solution currently exists.
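The anchoring idea behind RAG can be sketched in a few lines. This is a minimal illustration, not any production system’s API: the documents, function names, and crude word-overlap scorer are all hypothetical stand-ins for the dense vector search real deployments use.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented
# Generation (RAG). Purely illustrative; real systems use embedding
# similarity rather than bag-of-words overlap.
from collections import Counter

DOCUMENTS = [
    "o3 hallucinated 33% of the time on the PersonQA benchmark",
    "Retrieval grounds model answers in external source documents",
    "Standard benchmarks penalise uncertainty and reward guessing",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared-word count with the query."""
    q = Counter(query.lower().split())
    def overlap(doc: str) -> int:
        return sum((q & Counter(doc.lower().split())).values())
    return sorted(docs, key=overlap, reverse=True)[:k]

def grounded_prompt(query: str) -> str:
    """Anchor the model to retrieved text and instruct it to abstain
    when the context does not contain the answer."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer ONLY from the context below; say 'I don't know' "
        f"if the answer is not there.\nContext: {context}\nQuestion: {query}"
    )
```

The limitation the article notes falls out of the sketch directly: `retrieve` only helps when a relevant document exists in the corpus; for open-ended questions with nothing to anchor to, the model is back to unaided generation.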

For users relying on AI tools in healthcare, law, journalism, or financial analysis, that gap remains very much open.

Kingsley Okeke

I'm a skilled content writer, anatomist, and researcher with a strong academic background in human anatomy. I hold a degree...
