Definition

Hallucination

When a language model produces confident-sounding output that is factually incorrect or unsupported by its inputs.

The Full Definition

Hallucination is the phenomenon where a language model generates content that sounds plausible but isn't grounded in reality or in the provided context. It happens because LLMs are trained to produce probable text, not true text — when the model lacks the information needed, it falls back on what statistically seems to fit. Hallucinations range from invented citations and made-up product features to confidently wrong technical details.
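To make the "probable, not true" point concrete, here is a deliberately toy sketch in Python. No real model is involved and the probabilities are invented; it only illustrates that greedy decoding returns the most likely continuation with no check on whether that continuation is grounded in anything.

```python
# Toy illustration (not a real model): a next-token distribution encodes only
# what is statistically plausible, with no notion of what is true.
next_token_probs = {
    "The capital of Atlantis is": {
        "Paris": 0.31,          # plausible-sounding, completely unsupported
        "Atlantis City": 0.27,
        "unknown": 0.22,        # the honest answer is available but less probable
        "Poseidonia": 0.20,
    }
}

def complete(prompt: str) -> str:
    """Pick the most probable continuation, exactly as greedy decoding would."""
    distribution = next_token_probs[prompt]
    return max(distribution, key=distribution.get)

print(complete("The capital of Atlantis is"))
# -> "Paris": the most probable continuation wins, even though nothing
#    grounds it in fact. That gap is what we call hallucination.
```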

Why It Matters

Hallucination is the single most important risk to design around in production AI. Systems that ignore it embarrass their operators (as when a lawyer cites court cases the model invented) or harm users. Systems that design for it, through RAG, source citation, confidence scoring, and human-in-the-loop checkpoints, can earn users' trust.

How This Shows Up in Practice

A support agent built on a raw LLM started inventing return policies that didn't exist. The fix was to ground the agent in the actual policy documents via RAG, require source citations, and refuse to answer when no relevant policy was retrieved. Hallucinations on policy questions dropped from common to virtually zero.
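A minimal sketch of that pattern in Python is below. The helper names, the similarity threshold, and the prompt wording are illustrative assumptions, not the actual system: retrieve policy passages, refuse when nothing relevant comes back, and otherwise answer only from the retrieved text with citations.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str      # e.g. "returns-policy-2024.md"
    text: str
    score: float     # retriever similarity, assumed to be in [0, 1]

RELEVANCE_THRESHOLD = 0.75  # illustrative cutoff, tuned per retriever

def answer_policy_question(question: str, retrieve_policies, call_llm) -> str:
    """Grounded answering: retrieve first, refuse if nothing relevant is found."""
    passages = [p for p in retrieve_policies(question) if p.score >= RELEVANCE_THRESHOLD]

    # Refusal path: no relevant policy retrieved, so do not let the model guess.
    if not passages:
        return "I can't find a policy that covers this. Escalating to a human agent."

    context = "\n\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    prompt = (
        "Answer ONLY from the policy excerpts below. "
        "Cite the [doc_id] of every excerpt you rely on. "
        "If the excerpts don't answer the question, say so.\n\n"
        f"Policies:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

Passing the retriever and the model call in as parameters keeps the sketch neutral about which vector store or LLM API sits behind them; the grounding logic is the same either way.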

Common Questions

Can hallucinations be eliminated?

Not entirely — but they can be made rare and easy to spot through architecture choices: grounding in retrieved context, citing sources, refusing to answer outside scope, and human review of high-stakes outputs.
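One way to make two of those choices (source citation and human review) concrete is a post-generation gate. In this sketch the bracketed-citation format, function names, and routing labels are assumptions for illustration, not a standard: any citation that does not match a retrieved document, or any high-stakes answer, is routed to a reviewer instead of the user.

```python
import re

def citation_check(answer: str, retrieved_doc_ids: set[str]) -> list[str]:
    """Return cited doc ids that were never retrieved (likely fabricated)."""
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    return sorted(cited - retrieved_doc_ids)

def review_gate(answer: str, retrieved_doc_ids: set[str], high_stakes: bool) -> str:
    # A citation that doesn't match a retrieved document is a red flag.
    fabricated = citation_check(answer, retrieved_doc_ids)
    if fabricated or high_stakes:
        return "NEEDS_HUMAN_REVIEW"   # route to a reviewer instead of the user
    return "AUTO_SEND"

# Example: the model cites a document that was never retrieved.
print(review_gate("Refunds take 14 days [returns-policy-2019.md].",
                  {"returns-policy-2024.md"}, high_stakes=False))
# -> NEEDS_HUMAN_REVIEW
```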

Are newer models hallucinating less?

Yes: frontier models hallucinate substantially less than they did even 18 months ago, especially on factual tasks. But hallucination is an architectural problem; better models help, but they don't solve it.

Want to put this to work?

A free process audit maps where managing hallucination, along with the rest of the modern AI stack, actually moves the needle in your business.

Survey My Business