Hallucination
Hallucination is when an AI model generates information that sounds plausible but is factually wrong, and presents it with full confidence. A language model might invent research papers that don't exist, cite legal cases that never happened, or fabricate statistics that seem reasonable but are entirely made up.

This isn't a bug that can be patched; it's a fundamental consequence of how these models work. They generate text by predicting which plausible-sounding words come next, not by looking up verified facts. When a model doesn't have reliable information about something, it doesn't say "I don't know"; it generates the most plausible-seeming text it can.

The impact on business applications is significant: any use case requiring factual accuracy, such as legal research, medical advice, financial reporting, or journalism, must account for hallucination risk. Mitigation strategies include retrieval-augmented generation (grounding responses in verified documents), asking the model to cite its sources (and then checking them), using multiple models and comparing their outputs, and implementing human review for high-stakes content. A simple sketch of the output-comparison approach appears below.

Hallucination rates have improved with newer models, but the problem has not been solved. For now, the safest approach for any factual application is to treat AI outputs as drafts that require verification rather than as authoritative answers.
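To make the "use multiple models and compare outputs" strategy concrete, here is a minimal sketch of a cross-check that flags a response for human review when independently generated answers diverge. The function names (`similarity`, `flag_for_review`), the sample answers, and the 0.6 threshold are illustrative assumptions, not part of any specific product, and lexical similarity is only a crude proxy for factual agreement.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two answers (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_for_review(answers: list[str], threshold: float = 0.6) -> bool:
    """Flag a prompt for human review if any pair of answers diverges.

    When independent models disagree on a factual question, at least one
    of them is likely hallucinating, so the output should be verified
    before it is used. The threshold is an illustrative assumption.
    """
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            if similarity(answers[i], answers[j]) < threshold:
                return True
    return False


# Hypothetical answers from three different models to the same factual prompt.
answers = [
    "The case was decided in 1984 by the Ninth Circuit.",
    "The case was decided in 1984 by the Ninth Circuit.",
    "The case was decided in 1991 by the Supreme Court.",
]

if flag_for_review(answers):
    print("Models disagree: route to human review before relying on this answer.")
else:
    print("Models agree: lower (but not zero) hallucination risk.")
```

In practice, agreement between models reduces risk but does not eliminate it, so this kind of check complements, rather than replaces, grounding in verified documents and human review for high-stakes content.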