Automation Bias

The tendency to trust AI output just because a computer generated it - even when it's wrong.

The simple explanation

Automation bias is a well-documented psychological tendency: people trust automated systems more than they should. It's the same instinct that makes drivers follow GPS into a lake, or pilots ignore their own instruments because the autopilot looks confident.

In AI coding, it shows up as rubber-stamping. An agent generates a diff, you glance at it, think "the AI probably got it right," and approve it without truly understanding what changed. The code looks clean, the structure seems reasonable, and you're busy - so you merge it.

The problem is that AI output looks more authoritative than it is. A well-formatted, well-structured piece of code feels trustworthy. But formatting and structure don't guarantee correctness. The AI might have made a subtle logic error, introduced a security vulnerability, or solved the wrong problem entirely.
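To make that concrete, here is a contrived Python sketch (invented for this example, not from any real codebase) of the kind of diff that sails through a glance. The function is short, typed, documented, and plausible - and still wrong.

```python
def total_pages(item_count: int, page_size: int) -> int:
    """Number of pages needed to display item_count items."""
    # Reads naturally, but integer division silently drops any
    # partial last page: 101 items at 20 per page reports 5 pages.
    return item_count // page_size


def total_pages_fixed(item_count: int, page_size: int) -> int:
    """Correct version: round up so a partial page still counts."""
    return (item_count + page_size - 1) // page_size


print(total_pages(101, 20))        # 5 - looks fine at a glance, hides a page
print(total_pages_fixed(101, 20))  # 6
```

Nothing about the formatting distinguishes the buggy version from the fixed one; only tracing the arithmetic (or a test on a boundary case) catches it.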

Why it matters for agentic engineering

Automation bias is the biggest cultural risk in agentic engineering. If the point of having agents is to produce code faster, there's a natural pressure to also review code faster. But fast review + automation bias = shipping bugs.

The discipline of agentic engineering specifically requires fighting this tendency. Review AI-generated code with the same rigor you'd apply to a colleague's pull request - or arguably more, because AI makes mistakes humans typically don't, like hallucinating APIs that don't exist or ignoring project-specific conventions it was never told about.
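The hallucinated-API failure mode is easy to demonstrate with a contrived snippet (the method name here is invented for the example). The call reads perfectly plausibly - Python strings have a .title() method, not .title_case() - so nothing looks wrong until it fails at runtime:

```python
# A hallucinated method that reads plausibly but doesn't exist:
# str has .title(), not .title_case(). The error only appears at runtime.
try:
    "automation bias".title_case()
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'title_case'

# The real API the model was presumably reaching for:
print("automation bias".title())  # Automation Bias
```

A human reviewer skimming the diff can easily miss this, because the name follows the conventions a real method would; only running the code, or checking the API, catches it.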

In practice