Hallucination

When the AI confidently generates something that's completely wrong - and it looks perfectly reasonable.

The simple explanation

AI hallucination is when a model generates information that sounds correct, looks plausible, but is factually wrong. In coding, this means inventing function names that don't exist, using API parameters that were never part of the interface, or producing logic that seems right on the surface but contains subtle bugs.

Think of it like a student who doesn't know the answer to an exam question but writes a confident, well-structured response anyway. The answer reads well, uses the right terminology, and follows the right format - it's just wrong.

The tricky part is that hallucinated code often compiles. It looks like real code. It follows the patterns of real code. You have to actually run it, test it, or carefully review it to discover the problems.
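A minimal sketch of what this looks like in practice (the function names here are hypothetical, invented for illustration): the hallucinated version parses cleanly and reads like real code, but uses a keyword argument that `sorted` has never had. Nothing flags it until the code actually runs.

```python
# Hypothetical example of a hallucinated keyword argument.
# Python's sorted() has no 'descending' parameter - the real one is 'reverse' -
# but the call parses fine and only fails at runtime with a TypeError.
def newest_first_hallucinated(timestamps):
    return sorted(timestamps, descending=True)  # TypeError when executed

def newest_first(timestamps):
    return sorted(timestamps, reverse=True)  # the actual parameter name
```

This is the sense in which hallucinated code "compiles": in a dynamic language it often survives parsing and import, and the error only surfaces when the specific code path executes.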

Why it matters for agentic engineering

Hallucination is the core reason why agentic engineering emphasizes review, testing, and guardrails so heavily. You cannot trust AI output at face value - no matter how confident the model sounds or how correct the output looks.

In an agentic workflow, hallucination has an interesting property: it can be partially self-correcting. When an agent writes code, runs it, and sees a test failure or compile error, it can use that feedback to fix the hallucination. This is the plan-act-observe loop working as intended. But some hallucinations survive testing - logic bugs that don't trigger any existing tests, or incorrect assumptions that happen to produce correct output for the test cases but fail in production.
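The observe step of that loop can be sketched as a small harness (all names here are hypothetical, not a real agent framework): run the candidate code against tests and hand back the first failure as feedback the agent can use to repair its own hallucination.

```python
# Minimal sketch of the "observe" half of a plan-act-observe loop.
# A real agent would feed the returned message back to the model
# and ask it to generate a fix.
def observe(candidate, tests):
    """Run candidate against (args, expected) pairs; return the first
    failure as a feedback string, or None if nothing fails."""
    for args, expected in tests:
        try:
            got = candidate(*args)
        except Exception as exc:
            return f"raised {type(exc).__name__}: {exc}"
        if got != expected:
            return f"{args!r} -> {got!r}, expected {expected!r}"
    return None  # no observable failure - bugs may still survive

tests = [((3, 2), 6), ((0, 5), 0)]

def multiply_hallucinated(a, b):
    return a + b  # plausible-looking logic bug

feedback = observe(multiply_hallucinated, tests)  # a failure message
```

Note the caveat encoded in the final comment: `observe` returning `None` only means no *test* failed, which is exactly how logic-bug hallucinations slip through to production.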

In practice

Common forms of hallucination in coding:

- Invented function or method names that don't exist in the library being used
- API parameters that were never part of the interface
- Logic that looks right on the surface but contains subtle bugs
- Incorrect assumptions that happen to produce correct output for the cases at hand but fail elsewhere

Mitigation strategies:

- Review AI output carefully instead of accepting it at face value
- Run and test generated code so failures surface as feedback the agent can act on
- Add guardrails - compilers, linters, type checkers, test suites - that catch hallucinations automatically
- Remember that passing tests doesn't prove correctness: logic bugs can survive testing
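One guardrail that can run automatically is a cheap plausibility check before trusting a generated call: use Python's `inspect` module to verify that every keyword argument the code uses actually exists in the target function's signature. This is a sketch, not a complete defense (the helper name is made up for illustration):

```python
import inspect

def call_is_plausible(fn, kwargs):
    """Return False if a generated call uses keyword arguments
    that don't exist in fn's signature - a cheap hallucination check."""
    try:
        sig = inspect.signature(fn)
    except (TypeError, ValueError):
        return True  # some builtins can't be introspected; skip the check
    params = sig.parameters
    # A **kwargs parameter accepts any keyword, so nothing to verify.
    if any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return True
    return all(name in params for name in kwargs)

def save_report(path, overwrite=False):  # hypothetical target function
    pass
```

Here `call_is_plausible(save_report, {"overwrite": True})` passes, while a hallucinated keyword like `{"force": True}` is flagged before the code ever runs. Checks like this catch invented parameters, but not the harder case - logic bugs - which still require tests and review.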