Key terms and concepts explained simply. A living reference for the age of AI-assisted software development.
Agentic engineering
A discipline where developers direct AI coding agents to build software through specs, reviews, and iteration rather than writing every line by hand. Think of it like being an architect who describes what to build and checks the work, instead of laying every brick yourself.
Coding agent
An AI that can autonomously write, edit, and run code to complete a task. Unlike a chatbot that just answers questions, an agent takes actions: it reads your codebase, makes changes, runs tests, and iterates until the job is done. Think of the difference between asking someone for directions vs. hiring a driver.
Vibe coding
Generating code with AI by describing what you want, without carefully reviewing or understanding the output. It's fast for prototypes but risky for production. Like dictating an essay without proofreading it - sometimes it's fine, sometimes it's not.
How it differs from agentic engineering →
Human-in-the-loop
A workflow where an AI does the heavy lifting but a human reviews and approves key decisions. Like an AI assistant that drafts code but waits for you to hit "merge." The human provides judgment, context, and accountability that the AI can't reliably handle alone.
Context window
The amount of text an AI model can "see" at once, measured in tokens. It's like the AI's working memory. A bigger context window means the agent can read more of your codebase at once, but there are still practical limits. Feeding it your entire monorepo won't work - you need to be selective.
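One practical consequence is that files handed to an agent have to be budgeted against the window. A toy sketch in Python - the 4-characters-per-token ratio and the greedy smallest-first strategy are illustrative assumptions, not how any particular model counts:

```python
# Toy sketch: choose which files fit an agent's context window.
# The 4-chars-per-token ratio and greedy selection are assumptions
# for illustration, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fit_into_budget(files: dict[str, str], budget: int) -> list[str]:
    """Greedily pick files, smallest first, until the budget is spent."""
    chosen, used = [], 0
    for name, content in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = estimate_tokens(content)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

files = {"utils.py": "x" * 400, "models.py": "y" * 4000, "main.py": "z" * 800}
print(fit_into_budget(files, budget=400))  # only the small files make the cut
```

Real setups replace the heuristic with the model's actual tokenizer, but the budgeting step stays the same.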
Token
The units AI models use to process text. Roughly, 1 token is about 3/4 of a word. When people talk about "token costs" or "token budgets," they mean how much text the AI is processing - more tokens means more compute and higher cost.
Tool calling
The ability for an AI to call external tools - run shell commands, search codebases, read files, call APIs. This is what makes an AI agent an *agent* rather than just a text generator. It can take real actions in the world instead of just talking about them.
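In practice, a tool call is a structured message from the model that a harness parses and executes. A minimal sketch - the JSON shape and the tool names here are invented for illustration; real agent APIs differ:

```python
import json
import subprocess

# Hypothetical sketch of tool calling: the model emits a structured call,
# the harness looks up the tool and executes it. The JSON shape and tool
# names are made up for illustration.

def read_file(args: dict) -> str:
    with open(args["path"]) as f:
        return f.read()

def run_shell(args: dict) -> str:
    result = subprocess.run(args["command"], shell=True,
                            capture_output=True, text=True)
    return result.stdout

TOOLS = {"read_file": read_file, "run_shell": run_shell}

def execute_tool_call(raw_call: str) -> str:
    """Parse a model-emitted tool call and dispatch to the matching tool."""
    call = json.loads(raw_call)
    return TOOLS[call["name"]](call["arguments"])

print(execute_tool_call('{"name": "run_shell", "arguments": {"command": "echo hi"}}'))
```

The loop that feeds the tool's result back to the model is what turns this dispatch into agentic behavior.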
Orchestrator
A way of structuring AI work where one "lead" agent breaks a task into subtasks and delegates them to specialized worker agents. Like a tech lead who splits a feature across the team and coordinates the pieces. The orchestrator plans, the workers execute.
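The delegation shape can be sketched with plain functions. In a real system, plan() and work() would each call an LLM; here they are stubs so the structure is visible:

```python
# Stubbed sketch of the orchestrator-worker pattern. plan() and work()
# stand in for LLM calls; only the shape of the delegation is real.

def plan(task: str) -> list[str]:
    """The lead agent decomposes a task into subtasks."""
    return [f"{task}: {part}" for part in ("write API", "write frontend", "write tests")]

def work(subtask: str) -> str:
    """A worker agent completes one subtask and reports back."""
    return f"done({subtask})"

def orchestrate(task: str) -> list[str]:
    """Plan, delegate each piece, and collect the results."""
    return [work(subtask) for subtask in plan(task)]

print(orchestrate("add login"))
```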
Conductors to orchestrators →
Parallel agents
Multiple AI agents working on different parts of a codebase simultaneously. One agent might be writing the API, another the frontend, and another the tests. Like a dev team, but made of AI. Coordination is the hard part - without it, agents step on each other's toes.
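The fan-out itself is ordinary concurrency. Each "agent" below is a stub function; the coordination problems (shared files, merge conflicts) are exactly what this toy version leaves out:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel agents as concurrent workers. agent() is a stub;
# a real one would be an LLM-driven process working on that area.

def agent(area: str) -> str:
    """Stub for an AI agent assigned to one part of the codebase."""
    return f"{area}: done"

def run_in_parallel(areas: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=len(areas)) as pool:
        return list(pool.map(agent, areas))  # results come back in order

print(run_in_parallel(["api", "frontend", "tests"]))
```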
Claude Code Swarms →
The factory model
A metaphor for how coding agents have changed software engineering. Instead of artisan developers hand-crafting every line, engineering becomes more like running a factory - you design the processes, set quality controls, and let the machines do the production work.
The Factory Model →
The agent loop
The core cycle most coding agents follow. They plan what to do, act on the codebase (write/edit code), then observe the result (read compiler output, test results). They repeat this loop until the task is done or they get stuck. It's essentially the scientific method applied to coding.
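The loop above can be sketched in a few lines. A real agent would call an LLM in plan() and run real builds and tests in observe(); here the "codebase" is just a counter of remaining bugs:

```python
# Stubbed sketch of the plan-act-observe loop. plan() stands in for an
# LLM call and observe() for running the test suite.

def plan(observation: str) -> str:
    return "fix" if observation == "test failed" else "done"

def act(action: str, state: dict) -> dict:
    if action == "fix":
        state["bugs"] -= 1
    return state

def observe(state: dict) -> str:
    return "test failed" if state["bugs"] > 0 else "tests passed"

def run_agent(state: dict, max_steps: int = 10) -> str:
    """Loop until the plan is 'done' or the step budget runs out."""
    observation = observe(state)
    for _ in range(max_steps):
        action = plan(observation)
        if action == "done":
            break
        state = act(action, state)
        observation = observe(state)
    return observation

print(run_agent({"bugs": 2}))  # prints "tests passed"
```

The max_steps cap is the "or they get stuck" part: without it, an agent that never converges loops forever.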
Scaffolding
The structure you give an agent before it starts working - project templates, linting rules, type definitions, test frameworks. Like building a trellis for a climbing plant. The more scaffolding you provide, the better the agent's output, because it has clear constraints to work within.
Retrieval-augmented generation (RAG)
A technique where the AI retrieves relevant information from a knowledge base before generating a response. Instead of relying on what it memorized during training, it looks things up first. Like the difference between an exam with notes vs. from memory - RAG gives the AI notes.
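A toy version makes the "look things up first" step concrete. Real systems use embedding search over a vector store and a real LLM; here retrieval is plain word overlap and "generation" is just prompt assembly:

```python
# Toy RAG sketch: retrieve the most relevant snippet by word overlap,
# then prepend it to the prompt. Real systems use embeddings and an LLM.

DOCS = [
    "Payments are processed by the billing service.",
    "Authentication uses OAuth tokens.",
    "Deploys run through the CI pipeline.",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(DOCS, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    """'Generation' step, reduced to assembling the augmented prompt."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(build_prompt("How does authentication work?"))
```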
Spec-driven development
Writing a detailed specification before letting an AI agent build a feature. The spec defines what to build, acceptance criteria, edge cases, and constraints. It's the single highest-leverage thing you can do to get good output from coding agents - garbage spec in, garbage code out.
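What a spec contains varies by team, but it usually covers a goal, acceptance criteria, edge cases, and constraints. A minimal illustrative skeleton - the feature and wording are invented, not a standard format:

```markdown
# Feature: Password reset

## Goal
Users can reset a forgotten password via an emailed link.

## Acceptance criteria
- Reset link expires after 30 minutes.
- Old sessions are invalidated after a successful reset.

## Edge cases
- Unknown email: respond identically, to avoid account enumeration.
- Link reused after expiry: show a clear error and offer to resend.

## Constraints
- No new dependencies; use the existing mailer service.
```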
How to write a good spec →
AGENTS.md
A file you put in your repo to tell AI coding agents how your project works - coding conventions, architecture decisions, what tools to use, what to avoid. Think of it as onboarding documentation, but for your AI teammates instead of human ones.
Stop Using /init for AGENTS.md →
Prompt engineering
The craft of writing instructions that get the best results from AI. It's not just about saying what you want - it's about providing the right context, constraints, and examples. In agentic engineering, good prompts are less about clever tricks and more about being specific and structured.
Chain-of-thought
When an AI reasons step-by-step before giving an answer, rather than jumping straight to a conclusion. Like showing your work in math class. Coding agents that "think out loud" before writing code tend to produce better, more correct implementations.
Self-improving agents
Agents that learn from their own successes and failures during a session. If an approach fails, they adjust their strategy. If a test passes, they remember what worked. This isn't permanent learning - it resets between sessions - but it makes agents more effective within a single task.
Self-Improving Coding Agents →
Guardrails
Constraints you put on an AI agent to keep it from going off the rails. Type checking, linting, test suites, restricted file access, and mandatory human review are all guardrails. They're the bumpers on a bowling lane - they don't tell the agent what to do, but they keep it from doing something catastrophic.
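Mechanically, guardrails often amount to a gate: run every check, and reject the agent's change if any fail. A hypothetical sketch - the commands below are harmless stand-ins; a real project would invoke its actual type checker, linter, and test suite:

```python
import subprocess
import sys

# Hypothetical sketch of guardrails as a gate. The commands are harmless
# stand-ins for real tools like mypy, ruff, or pytest.

CHECKS = [
    [sys.executable, "-c", "print('typecheck ok')"],  # stand-in for a type checker
    [sys.executable, "-c", "print('lint ok')"],       # stand-in for a linter
    [sys.executable, "-c", "print('tests ok')"],      # stand-in for the test suite
]

def change_passes_guardrails() -> bool:
    """Accept the agent's change only if every check exits with status 0."""
    return all(
        subprocess.run(cmd, capture_output=True).returncode == 0 for cmd in CHECKS
    )

print("accept" if change_passes_guardrails() else "reject")
```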
AI code review
Using AI to help review pull requests and code changes. The AI can catch bugs, flag style issues, and suggest improvements. But it's a supplement to human review, not a replacement. Humans still need to verify the logic makes sense and the change is actually what was intended.
AI writes code faster. Your job is still to prove it works. →
Model Context Protocol (MCP)
An open protocol that lets AI agents connect to external tools and data sources in a standardized way. Think of it like USB for AI - instead of building a custom integration for every tool, MCP provides one universal plug. An agent can use MCP to read your database, search your docs, or call your APIs.
An LSP for AI
The idea that AI coding tools need a standard protocol like the one LSP (Language Server Protocol) gave code editors. LSP let any editor talk to any language's tooling. Similarly, emerging standards aim to let any AI agent talk to any dev tool - making the ecosystem more interoperable.
Sandboxing
Running an AI agent in an isolated environment so it can't accidentally damage your real system. Like giving a new intern a test account instead of prod access. Sandboxes let agents experiment freely - if they break something, it only breaks in the sandbox.
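A crude sketch of the idea: run an agent-generated command inside a throwaway directory with a timeout. Real sandboxes add containers, user isolation, and network restrictions on top of this:

```python
import subprocess
import sys
import tempfile

# Crude sandbox sketch: a throwaway working directory plus a timeout.
# Real sandboxes layer containers and network restrictions on top.

def run_sandboxed(command: list[str], timeout: int = 5) -> str:
    """Run a command in a temp directory that is deleted afterwards."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            command, cwd=scratch, capture_output=True, text=True, timeout=timeout
        )
        return result.stdout

# Any files the command creates vanish with the scratch directory.
print(run_sandboxed([sys.executable, "-c", "print('ran in sandbox')"]))
```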
Comprehension debt
The cost of having AI-generated code in your codebase that no human fully understands. Like tech debt, but for knowledge instead of code quality. If the AI wrote it and nobody reviewed it carefully, you've taken on a loan - one that comes due when something breaks and nobody knows why.
Comprehension Debt →
The 70% problem
The observation that AI coding tools get you roughly 70% of the way to a solution very quickly, but the remaining 30% - edge cases, integration, production readiness - is where all the hard work lives. That last 30% is where experienced engineers earn their keep.
Skill atrophy
The risk that developers who over-rely on AI lose the fundamental skills they need to write, debug, and understand code. If you always let the AI drive, you might forget how. The mitigation is intentional practice - sometimes write code by hand, deeply review AI output, and stay curious about how things work under the hood.
Hallucination
When an AI confidently generates something that's wrong - a function that doesn't exist, an API that has different parameters, or logic that looks right but isn't. In coding, this often shows up as plausible-looking code that fails at runtime. This is why testing and review are non-negotiable in agentic workflows.
Automation bias
The tendency to over-trust AI output just because a computer generated it. Developers might rubber-stamp an AI's code changes because "the AI probably got it right." It's the same instinct that makes people follow GPS into a lake. Fight it by reviewing AI code with the same rigor you'd apply to a junior developer's PR.
Context rot
When an agent loses track of important context because its context window fills up or it switches between too many files. The agent starts making changes that conflict with earlier decisions or forgets constraints you set. Like someone who read the first half of the requirements doc but not the second.
This glossary is a living document. Terms are updated as the field of agentic engineering evolves.
Read the blog for deeper dives on these topics.