
AI Hallucinations Are Getting Worse as Models Scale, and the Industry Has No Real Fix

by Kingsley Okeke
March 13, 2026
in Artificial Intelligence

The artificial intelligence industry has spent years promising that AI hallucinations (the confident fabrication of false information) would diminish as models grew more powerful. The data increasingly tells a different story.

OpenAI’s internal testing revealed that its o3 and o4-mini reasoning models hallucinate at significantly higher rates than their predecessors. On the PersonQA benchmark, o3 hallucinated 33% of the time, more than double the 16% rate recorded by o1. The smaller o4-mini performed even worse, at 48%. OpenAI’s technical report admitted that “more research is needed” to understand why.

When More Thinking Produces More Errors

The pattern is counterintuitive but now well-documented. Models built for deeper, chain-of-thought reasoning tend to perform worse on factual accuracy benchmarks than simpler predecessors. The leading hypothesis is structural: reasoning models invest computational effort into working through answers, which can lead them to fill knowledge gaps with plausible-sounding guesses rather than acknowledging uncertainty. Independent research by Transluce, a nonprofit AI lab, found that o3 also fabricates actions it claims to have taken, including, in one documented case, running code on a physical laptop outside of ChatGPT.

An MIT study from early 2025 added a disturbing dimension. When AI models hallucinate, they tend to use more confident language than when they are factually correct. Some models were 34% more likely to use phrases like “definitely” and “certainly” when generating incorrect information. The more wrong the model is, the more certain it sounds.

The Benchmark Problem

Part of the confusion around hallucination trends is methodological. On tightly controlled tasks (such as summarising a provided document), some models have shown genuine improvement, with a handful now sitting below the 1% threshold on summarisation-specific benchmarks. But those tests measure a narrow slice of how AI is actually used.

In clinical scenarios, hallucination rates for open-source models ranged from 64% to over 80%, even when mitigation prompts were applied. Legal queries fared no better: Stanford research found that models hallucinate between 69% and 88% of the time on specific legal questions.

Why No One Is Really Fixing It

The core problem is architectural. Large language models are prediction engines, not knowledge retrieval systems. They generate the statistically most likely next word based on training patterns, with no internal mechanism to distinguish known facts from plausible fictions. A 2025 paper from OpenAI and MIT researchers demonstrated mathematically why this tendency persists through training: the way models are currently evaluated rewards confident guessing over calibrated uncertainty, so models learn to bluff.
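The mechanism described above can be made concrete with a toy next-token step. The logit values below are invented for illustration; the point is that nothing in this computation consults a fact store, so a fluent falsehood can outrank the truth:

```python
import math

def softmax(logits: dict) -> dict:
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Hypothetical logits for completing "The capital of Australia is ..."
# A model trained on text where "Sydney" co-occurs heavily with
# "Australia" can rank the wrong answer highest.
logits = {"Sydney": 2.1, "Canberra": 1.8, "Melbourne": 0.4}
probs = softmax(logits)
print(max(probs, key=probs.get))  # "Sydney": plausible, and wrong
```

The model's selection rule is purely statistical; there is no separate check that the highest-probability continuation is true.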

As OpenAI acknowledged in a September 2025 paper, standard benchmarks penalise uncertainty and reward accuracy scores, meaning a model that guesses will outperform one that says “I don’t know,” even if the guessing model produces far more incorrect answers.
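The incentive problem is simple expected-value arithmetic. This sketch (the scoring rules and numbers are illustrative, not taken from the paper) shows why accuracy-only benchmarks reward guessing, while a rule that penalises wrong answers makes abstaining rational below a confidence threshold:

```python
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected benchmark score for answering a question: +1 if right
    (probability p_correct), -wrong_penalty if wrong."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0  # saying "I don't know" earns nothing

# Accuracy-only scoring: wrong answers cost nothing, so even a
# 10%-reliable guess strictly beats abstaining.
print(expected_score(0.10, wrong_penalty=0.0))  # 0.10 > 0.0

# A penalised rule: wrong answers cost 1 point, so the same guess
# now scores -0.80 and abstaining is the better strategy.
print(expected_score(0.10, wrong_penalty=1.0))  # -0.80 < 0.0

# Under a penalty of k, guessing only pays when p > k / (1 + k);
# with k = 1 the break-even confidence is 50%.
print(1.0 / (1.0 + 1.0))
```

Under the first rule, a model trained to maximise benchmark score learns to bluff on every question; under the second, calibrated uncertainty becomes the winning policy.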

Retrieval-Augmented Generation, which anchors model responses to external source documents, reduces hallucination rates by up to 42% when properly implemented, but it only works when there is a document to anchor to. For the open-ended questions that represent much of real-world AI usage, no reliable solution currently exists.
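The structure of that approach can be sketched in a few lines. Production RAG systems use dense vector embeddings and an LLM call; here, simple word-overlap scoring stands in for the retriever, purely to show the shape of the technique — fetch a grounding document first, then instruct the model to answer only from it:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words; stands in for a real embedding model."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    return max(documents, key=lambda d: len(tokenize(d) & tokenize(query)))

def build_prompt(query: str, documents: list[str]) -> str:
    """Anchor the model to a retrieved source instead of letting it
    answer from parametric memory alone."""
    source = retrieve(query, documents)
    return (
        "Answer using ONLY this source. If the source does not "
        "contain the answer, say you don't know.\n"
        f"Source: {source}\nQuestion: {query}"
    )

docs = [
    "The o3 model hallucinated 33% of the time on PersonQA.",
    "RAG anchors model responses to external documents.",
]
print(build_prompt("What was o3's hallucination rate on PersonQA?", docs))
```

The failure mode the article describes falls directly out of this structure: when no relevant document exists for an open-ended question, the retrieval step has nothing to anchor to and the model is back to unaided generation.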

For users relying on AI tools in healthcare, law, journalism, or financial analysis, that gap remains very much open.

Copyright 2026 Techsoma Africa. All rights reserved.
