What is Hallucination?
A phenomenon where an AI model generates incorrect or nonsensical information but presents it confidently as fact.
Hallucinations occur because LLMs are probabilistic engines designed to predict the next token, not truth engines. They can fabricate citations, facts, or code that look plausible but are entirely wrong. Mitigation strategies include grounding and retrieval-augmented generation (RAG).
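One common grounding pattern is to constrain the model to answer only from supplied source text and to abstain when that text does not cover the question. The sketch below is a minimal illustration of that idea; `call_llm` is a hypothetical stand-in for whatever model API you actually use, not a real library call.

```python
# Minimal grounding sketch (illustrative). The prompt restricts the model to
# the supplied context and gives it an explicit way to abstain, which reduces
# the chance of confident fabrication.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns the abstention
    # string here so the example runs without any external service.
    return "I don't know based on the provided context."

def grounded_answer(question: str, context: str) -> str:
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: "
        '"I don\'t know based on the provided context."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    context = "The Eiffel Tower is about 330 metres tall."
    print(grounded_answer("How tall is the Eiffel Tower?", context))
```

Retrieval-augmented generation builds on the same idea by fetching that context automatically from an external knowledge source before the model answers.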
Related Terms
AI Agent
An autonomous software program capable of perceiving its environment, reasoning, and taking actions to achieve specific goals with minimal human intervention.
Large Language Model (LLM)
A deep learning algorithm that can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive datasets.
Retrieval-Augmented Generation (RAG)
A technique that enhances the accuracy and reliability of generative AI models with facts fetched from external sources.
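As a rough illustration, the sketch below retrieves the most relevant document by naive keyword overlap and prepends it to the prompt, so the model answers from fetched facts rather than from memory alone. The tiny in-memory corpus and the overlap scoring are assumptions made for this example; production systems typically use embeddings and a vector store.

```python
# Minimal retrieve-then-generate sketch (illustrative only).

DOCS = [
    "The Eiffel Tower is located in Paris and stands about 330 metres tall.",
    "The Great Wall of China is more than 13,000 miles long.",
]

def retrieve(query: str, docs: list[str]) -> str:
    # Score each document by the number of query words it shares; real RAG
    # systems use semantic embeddings instead of word overlap.
    query_words = set(query.lower().split())
    return max(docs, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_rag_prompt(question: str) -> str:
    context = retrieve(question, DOCS)
    return (
        f"Context:\n{context}\n\n"
        "Answer using only the context above.\n"
        f"Question: {question}"
    )

print(build_rag_prompt("Where is the Eiffel Tower located?"))
```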