The artificial intelligence industry has spent years promising that AI hallucinations (the confident fabrication of false information) would diminish as models grew more powerful. The data increasingly tells a different story.
OpenAI’s own internal testing revealed that its o3 and o4-mini reasoning models hallucinate at significantly higher rates than their predecessors. On the PersonQA benchmark, o3 hallucinated 33% of the time, more than double the 16% rate recorded by o1. The smaller o4-mini performed even worse, at 48%. OpenAI’s technical report admitted that “more research is needed” to understand why.
When More Thinking Produces More Errors
The pattern is counterintuitive but now well-documented. Models built for deeper, chain-of-thought reasoning tend to perform worse on factual accuracy benchmarks than simpler predecessors. The leading hypothesis is structural: reasoning models invest computational effort into working through answers, which can lead them to fill knowledge gaps with plausible-sounding guesses rather than acknowledging uncertainty. Independent research by Transluce, a nonprofit AI lab, found that o3 also fabricates actions it claims to have taken, including, in one documented case, running code on a physical laptop outside of ChatGPT.
An MIT study from early 2025 added a disturbing dimension. When AI models hallucinate, they tend to use more confident language than when they are factually correct. Some models were 34% more likely to use phrases like “definitely” and “certainly” when generating incorrect information. The more wrong the model is, the more certain it sounds.
The Benchmark Problem
Part of the confusion around hallucination trends is methodological. On tightly controlled tasks (such as summarising a provided document), some models have shown genuine improvement, with a handful now sitting below the 1% threshold on summarisation-specific benchmarks. But those tests measure a narrow slice of how AI is actually used.
In clinical scenarios, open-source models hallucinated at rates from 64% to over 80%, even when mitigation prompts were applied. Legal queries fared no better: Stanford research found that models hallucinate between 69% and 88% of the time on specific legal questions.
Why No One Is Really Fixing It
The core problem is architectural. Large language models are prediction engines, not knowledge retrieval systems. They generate the statistically most likely next word based on training patterns, with no internal mechanism to distinguish known facts from plausible fictions. A 2025 paper from OpenAI and MIT researchers demonstrated mathematically why this tendency persists through training: the way models are currently evaluated rewards confident guessing over calibrated uncertainty, so models learn to bluff.
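The point can be made concrete with a toy sketch. The bigram counter below is not a real language model, and the corpus is invented for illustration, but the failure mode is the same in miniature: the sampling step consults only frequency statistics, so a common falsehood beats a rare truth.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): a bigram model that always emits the
# statistically most likely next word. Nothing in the prediction step
# consults a fact store, so a frequent-but-false continuation wins over a
# rare-but-true one. The corpus is invented for this example.
corpus = (
    "the capital of australia is sydney "    # frequent but false
    "the capital of australia is sydney "
    "the capital of australia is canberra "  # rare but true
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the most frequent next token; truth never enters the picture."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("is"))  # the common wrong answer, "sydney", wins
```

Scaled up by many orders of magnitude, this is the structural issue the paper describes: the objective is plausibility, not accuracy, and nothing in the architecture distinguishes the two.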
As OpenAI acknowledged in a September 2025 paper, standard benchmarks penalise expressions of uncertainty and reward raw accuracy, meaning a model that guesses will outperform one that says “I don’t know,” even if the guessing model produces far more incorrect answers.
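The arithmetic behind that incentive is easy to sketch. The numbers below (a 60/40 split of known to unknown questions, a 25% chance that a blind guess lands) are hypothetical, chosen only to show the shape of the problem:

```python
# Hedged sketch of the scoring incentive: under an accuracy-only benchmark,
# "I don't know" earns zero credit, so guessing strictly dominates honesty.
# All figures here are assumed for illustration.
N_KNOWN, N_UNKNOWN = 60, 40   # hypothetical question mix
GUESS_HIT_RATE = 0.25         # assumed chance a blind guess is correct

# Honest model: answers what it knows, abstains on the rest.
honest_correct, honest_wrong = N_KNOWN, 0

# Bluffing model: guesses on every unknown question (expected values).
bluff_correct = N_KNOWN + N_UNKNOWN * GUESS_HIT_RATE        # 60 + 10 = 70
bluff_wrong = N_UNKNOWN * (1 - GUESS_HIT_RATE)              # 30

print(f"honest:  {honest_correct} correct, {honest_wrong} wrong answers")
print(f"bluffer: {bluff_correct:.0f} correct, {bluff_wrong:.0f} wrong answers")
```

On an accuracy-only leaderboard the bluffer scores 70 to the honest model’s 60, despite emitting 30 confident errors to the honest model’s zero. A model trained against that metric learns exactly the behaviour the paper predicts.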
Retrieval-Augmented Generation, which anchors model responses to external source documents, reduces hallucination rates by up to 42% when properly implemented, but it only works when there is a document to anchor to. For the open-ended questions that represent much of real-world AI usage, no reliable solution currently exists.
For users relying on AI tools in healthcare, law, journalism, or financial analysis, that gap remains very much open.