Human-in-the-Loop (HITL)

The AI does the heavy lifting, but a human reviews and approves key decisions.

The simple explanation

Think of a self-driving car that handles highway cruising but asks the human to take over for tricky intersections. Human-in-the-loop is the same idea applied to AI coding - the agent does most of the work, but a human stays involved at critical checkpoints.

In practice, this means an AI coding agent might write the implementation, but you review the diff before it gets merged. Or the agent might propose an architectural approach and wait for your approval before proceeding. The human provides the judgment, domain knowledge, and accountability that AI can't reliably deliver on its own.

The "loop" part is important - it's not a one-time check. The human reviews, provides feedback, and the agent adjusts. Then the human reviews again. It's an iterative cycle, not a rubber stamp.
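That review-feedback-revise cycle can be sketched as a small control loop. Everything here is illustrative: `hitl_loop`, the stand-in agent, and the reviewer convention (returning `None` to signal approval) are assumptions for the sketch, not a real agent framework's API.

```python
from typing import Callable, Optional

def hitl_loop(generate: Callable[[str], str],
              review: Callable[[str], Optional[str]],
              task: str, max_rounds: int = 3) -> Optional[str]:
    """The agent drafts, the human reviews, feedback feeds the next draft.
    Returns the approved draft, or None if no round was approved."""
    feedback = None
    for _ in range(max_rounds):
        prompt = task if feedback is None else f"{task}\nReviewer feedback: {feedback}"
        draft = generate(prompt)
        feedback = review(draft)  # None signals approval; a string is feedback
        if feedback is None:
            return draft
    return None

# Stand-in agent and reviewer, purely for demonstration:
def fake_agent(prompt: str) -> str:
    return "v2 (edge cases handled)" if "feedback" in prompt else "v1"

def fake_reviewer(draft: str) -> Optional[str]:
    return None if draft.startswith("v2") else "Please handle the edge cases."

print(hitl_loop(fake_agent, fake_reviewer, "Implement the parser"))
# → v2 (edge cases handled)
```

The `max_rounds` cap matters: an unbounded loop with a reviewer who never approves is just as broken as no loop at all.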

Why it matters for agentic engineering

Full autonomy sounds appealing, but in practice, removing the human from the loop is where things go wrong. Agents can hallucinate APIs, make subtly incorrect architectural choices, or introduce security vulnerabilities that look perfectly reasonable at first glance.

Human-in-the-loop is what separates agentic engineering from vibe coding. In agentic engineering, you're deliberately choosing where to insert human checkpoints - spec review, code review, test verification, deployment approval. In vibe coding, the human just accepts whatever the AI produces.
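One way to make those checkpoints deliberate rather than implicit is a simple gate table. The stage names and the table itself are hypothetical, a sketch of the idea rather than any particular tool's configuration:

```python
# Hypothetical checkpoint table: which pipeline stages pause for a human.
CHECKPOINTS = {
    "spec_review": True,
    "code_review": True,
    "test_verification": True,
    "deployment_approval": True,
    "formatting": False,  # routine work the agent handles alone
}

def requires_human(stage: str) -> bool:
    # Unknown stages fail closed: when in doubt, ask the human.
    return CHECKPOINTS.get(stage, True)

print(requires_human("code_review"))  # → True
print(requires_human("formatting"))   # → False
```

Failing closed on unrecognized stages is the conservative default: a checkpoint you forgot to configure becomes a human review, not a silent merge.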

In practice

The sweet spot is minimizing unnecessary human intervention (let the agent handle routine tasks) while maximizing high-value human review (architectural decisions, security-sensitive code, business logic).
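That trade-off can be expressed as a routing policy: routine changes flow through automatically, while the categories named above trigger review. The task fields and the two-way split below are illustrative assumptions, not a standard schema.

```python
def review_policy(task: dict) -> str:
    """Route a task: 'auto' for routine work, 'human_review' for high-value changes.
    The field names and the criteria are illustrative assumptions."""
    high_value = (
        task.get("touches_security", False)
        or task.get("changes_architecture", False)
        or task.get("edits_business_logic", False)
    )
    return "human_review" if high_value else "auto"

print(review_policy({"touches_security": True}))           # → human_review
print(review_policy({"description": "rename a variable"}))  # → auto
```

In a real system the classification itself is the hard part; a policy like this only pays off if the flags feeding it are trustworthy.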