Hallucination
Hallucination occurs when an LLM generates plausible but incorrect content, such as fabricated sources, wrong dates, or logic errors in code. It is not a traditional software bug but a side effect of optimizing for likely phrasing rather than factual accuracy.
Reduce risk by requiring citations, providing retrieval-augmented generation (RAG) context, asking the model to signal uncertainty, and running human review for public-facing output. See prompt engineering and AI SEO.
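The mitigations above can be combined in the prompt itself. The sketch below is a minimal, hypothetical example of assembling a grounded prompt: the instruction wording, snippet format, and function name are illustrative assumptions, not any provider's API.

```python
# Sketch: build a prompt that grounds the model in retrieved context,
# requires numbered citations, and invites an explicit "I don't know".
# All names and wording here are illustrative, not a standard API.

def build_grounded_prompt(question, snippets):
    """Assemble a prompt from retrieved snippets (e.g. from a RAG
    pipeline) that asks the model to cite sources and admit uncertainty."""
    context = "\n".join(
        f"[{i + 1}] {text}" for i, text in enumerate(snippets)
    )
    return (
        "Answer using ONLY the numbered sources below. "
        "Cite each claim as [n]. If the sources do not contain the "
        "answer, reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "When was the library first released?",
    ["The library's first stable release shipped in 2019."],
)
```

Constraining the model to the supplied sources does not eliminate hallucination, but it makes unsupported claims easier to spot, since every statement should trace back to a numbered snippet.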
Key characteristics
- Means the model can sound convincing even when facts, sources, or reasoning are wrong.
- Increases when relevant context is missing, the model is pushed to guess, or instructions are vague.
- Requires source checks, verification, and often retrieval or human review in production use cases.
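The source-check requirement in the last point can be partly automated. Below is a minimal sketch, assuming the answer uses the numbered `[n]` citation format: it only confirms that each citation points at a real source, while a production pipeline would also verify that the cited text actually supports the claim.

```python
import re

def citations_are_valid(answer, num_sources):
    """Return True if the answer contains at least one [n] citation
    and every cited index refers to an actual source.
    Hypothetical post-generation check; it does NOT verify that the
    cited source supports the claim, only that the reference exists."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    return bool(cited) and all(1 <= n <= num_sources for n in cited)

citations_are_valid("The release shipped in 2019 [1].", num_sources=1)  # True
citations_are_valid("The release shipped in 2019 [3].", num_sources=1)  # False
```

A failed check can route the answer to human review rather than publishing it, matching the review step recommended above for public-facing output.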