The Efficiency Paradox: Why Making Software Easier to Write Means We'll Write Exponentially More

December 29, 2025

Every time we’ve made it easier to write software, we’ve ended up writing exponentially more of it.

When high-level languages replaced assembly, programmers didn’t write less code - they wrote orders of magnitude more, tackling problems that would have been economically impossible before. When frameworks abstracted away the plumbing, we didn’t reduce our output - we built more ambitious applications. When cloud platforms eliminated infrastructure management, we didn’t scale back - we spun up services for use cases that never would have justified a server room.

@levie recently articulated why this pattern is about to repeat itself at a scale we haven’t seen before, using Jevons Paradox as the frame. The argument resonates because it’s playing out in real time in our developer tools. The first question everyone asks is “will this replace developers?” But watch what actually happens. Teams that adopt these tools don’t always shrink their engineering headcount - they expand their product surface area. The three-person startup that could only maintain one product now maintains four. The enterprise team that could only experiment with two approaches now tries seven.

The constraint being removed isn’t competence; it’s the activation energy required to start something new. Remember that internal tool you’ve been putting off because “it would take someone two weeks and we can’t spare anyone”? Now it takes three hours. That refactoring you’ve been deferring because the risk/reward math didn’t work? The math just changed.

This matters because software engineers are uniquely positioned to understand what’s coming. We’ve seen this movie before, just in smaller domains. Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we’d need fewer developers. Each one instead enabled us to build more software.

Here’s the part that deserves more attention imo: the barrier being lowered isn’t just about writing code faster. It’s about the types of problems that become economically viable to solve with software. Think about all the internal tools that don’t exist at your company. Not because no one thought of them, but because the ROI calculation never cleared the bar. The custom dashboard that would make one team 10% more efficient but would take a week to build. The data pipeline that would unlock insights but requires specialized knowledge. The integration that would smooth a workflow but touches three different systems.

These aren’t failing the cost-benefit analysis because the benefit is low - they’re failing because the cost is high. Lower that cost by 10x, and suddenly you have an explosion of viable projects. This is exactly what’s happening with AI-assisted development, and it’s going to be more dramatic than previous transitions because we’re making previously “impossible” work possible.
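To make that arithmetic concrete, here’s a toy sketch of the payback math. Every project name and number below is made up for illustration - the point is only how the viable set changes when the cost side moves by 10x:

```python
# Toy illustration: how a 10x drop in build cost changes which projects
# clear a simple payback bar. All names and numbers are hypothetical.

# (project name, annual_benefit_usd, build_cost_usd)
projects = [
    ("team dashboard",        8_000, 20_000),
    ("data pipeline",        15_000, 60_000),
    ("workflow integration",  5_000, 30_000),
    ("ops refactor",         25_000, 40_000),
    ("reporting tool",        3_000, 10_000),
]

def viable(projects, cost_multiplier=1.0, payback_years=1.0):
    """Return the projects whose (scaled) build cost pays back within the window."""
    return [
        name
        for name, benefit, cost in projects
        if cost * cost_multiplier <= benefit * payback_years
    ]

print("viable at full cost:  ", viable(projects))                      # none clear the bar here
print("viable at 1/10th cost:", viable(projects, cost_multiplier=0.1)) # all five suddenly do
```

The benefits don’t change and the bar doesn’t change; only the cost side moves, and the viable set goes from empty to the entire list.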

The second-order effects get really interesting when you consider that every new tool creates demand for more tools. When we made it easier to build web applications, we didn’t just get more web applications - we got an entire ecosystem of monitoring tools, deployment platforms, debugging tools, and testing frameworks. Each of those spawned its own ecosystem. The compounding effect is nonlinear.

Now apply this logic to every domain where we’re lowering the barrier to entry. Every new capability unlocked creates demand for supporting capabilities. Every workflow that becomes tractable creates demand for adjacent workflows. The surface area of what’s economically viable expands in all directions.

For engineers specifically, this changes the calculus of what we choose to work on. Right now, we’re trained to be incredibly selective about what we build because our time is the scarce resource. But when the cost of building drops dramatically, the limiting factor becomes imagination, “taste”, and judgment, not implementation capacity. The skill shifts from “what can I build given my constraints?” to “what should we build given that the constraints have, in some ways, evaporated?”

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don’t reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.

The pattern is so consistent that the burden of proof should shift. Instead of asking “will AI agents reduce the need for human knowledge workers?” we should be asking “how many orders of magnitude more knowledge work output are we about to see?”

For software engineers, it’s the same transition we’ve navigated successfully several times already. The developers who thrived weren’t the ones who resisted higher-level abstractions; they were the ones who used those abstractions to build more ambitious systems. The same logic applies now, just at a larger scale.

The real question is whether we’re prepared for a world where the bottleneck shifts from “can we build this?” to “should we build this?” That’s a fundamentally different problem space, and it requires fundamentally different skills.

We’re about to find out what happens when the cost of knowledge work drops by an order of magnitude. History suggests we won’t do less work - we’ll discover we’ve been massively under-investing in knowledge work because so much of what was actually worth doing was too expensive to attempt.

The paradox isn’t that efficiency creates abundance. The paradox is that we keep being surprised by it.


This post was originally a response to Aaron Levie’s thread on Jevons Paradox and AI. My original response was posted on X/Twitter.