The 70% Problem

AI gets you 70% of the way fast. The remaining 30% is where experienced engineers earn their keep.

The simple explanation

AI coding tools are spectacular at generating a first draft. Give them a clear description and they'll produce a working implementation in seconds. It looks great. It might even pass a few basic tests. You're 70% done, and it took almost no time.

Then you hit the other 30%. The edge case where the function receives null instead of an empty array. The race condition that only shows up under load. The accessibility requirement that the generated component ignores. The security vulnerability hidden behind plausible-looking code. The integration point where this module meets the rest of your system.
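The null-versus-empty-array case above is easy to sketch. Below is a hypothetical illustration (function names are mine, not from any real codebase): the kind of happy-path helper AI tends to generate first, next to the defensive version that the "last 30%" actually requires.

```typescript
// What a first draft often looks like: assumes a non-empty array.
function averageNaive(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
// averageNaive([]) → NaN (0 / 0), and passing null throws at runtime.

// The production version makes the edge cases explicit.
function average(values: number[] | null | undefined): number | null {
  if (values == null || values.length === 0) {
    return null; // let the caller decide how to represent "no data"
  }
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```

The interesting part isn't the arithmetic; it's the decision in the guard clause (return null? throw? default to 0?), which depends on how the rest of the system consumes the result. That decision is exactly the judgment call the draft skips.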

That last 30% routinely takes longer than the first 70% - and it requires the kind of engineering judgment that AI can't yet reliably provide on its own. The 70% problem isn't a criticism of AI tools. It's a reminder that the hard part of software engineering was never typing the code.

Why it matters for agentic engineering

The 70% problem is why agentic engineering exists as a discipline. If AI could reliably deliver 100%, we wouldn't need engineering practices around it - you'd just describe what you want and ship the result. But because the gap between "compiles and looks right" and "production-ready" is significant, you need human judgment in the loop.

Understanding the 70% problem also changes how you allocate your time. Instead of writing code from scratch, you spend most of your effort on review, testing, edge case handling, and integration. The craft shifts from writing code to evaluating code - which is a fundamentally different (and arguably harder) skill.

In practice