Most engineers use AI code assistants the same way they used Stack Overflow: describe the problem, get a snippet, adapt it. This works for isolated functions. It breaks down badly for features that have to fit into a real system — auth flows, state management, API boundaries.
The shift that made AI genuinely useful for me was treating it less like a search engine and more like a pairing partner who's extraordinarily fast at implementation but knows nothing about my codebase. Like any good pairing session, the value is in the preparation.
## Why prompting without a spec fails
When you prompt without a spec, the model optimizes for plausibility. It writes code that looks correct, uses reasonable patterns, and compiles. What it can't know is which auth library you've standardized on, which error-handling pattern the rest of the codebase uses, or that the endpoint it's calling is rate-limited and needs exponential backoff.
The result is code that needs significant rework — not because the model is bad at code, but because it's bad at reading your mind.
## What a spec looks like in practice
A spec isn't a formal document. It's a structured prompt that answers the questions a good engineer would ask before starting:
- What is this? One sentence on what this module/component/function does.
- What are the inputs and outputs? Types, constraints, edge cases.
- What does it depend on? Which existing utilities, stores, or API clients.
- What patterns should it follow? Point to 1-2 existing files as the style reference.
- What should it NOT do? Explicit exclusions prevent scope creep.
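Answering those five questions rarely takes more than a dozen lines. Here's a hypothetical spec for the AuthProvider from the example later in this post — the file names other than lib/query.ts are illustrative, not from a real codebase:

```md
## Spec: AuthProvider

**What:** React context provider exposing the current session plus login/logout actions.

**Inputs/outputs:** Wraps `children`; exposes `useAuth(): { session: Session | null; login(creds: Credentials): Promise<void>; logout(): Promise<void> }`.
`session` may be null on first render — consumers must handle that.

**Depends on:** the existing API client in `lib/api.ts`; token storage helpers in `lib/storage.ts`.

**Patterns:** follow the provider structure in `lib/query.ts`.

**Do NOT:** add a new HTTP client, implement token refresh (separate ticket), or touch routing.
```

The "Do NOT" section does the most work in practice: it's the only part that actively removes options from the model instead of adding them.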
## The workflow
### Step 1: Write the spec in plain text
Before opening Claude Code, I write a short spec in a comment block at the top of the target file or in a separate .md file. This forces me to think through the design before implementation.
### Step 2: Share the spec + relevant context
```shell
claude "implement AuthProvider per the spec in this file, following the pattern in lib/query.ts"
```

The key is giving the model the right context, not all the context. Over-indexing on related files muddies the output.
### Step 3: Review structurally before reviewing line-by-line
When the output comes back, the first review pass is architectural: does it follow the right pattern? Does it use the right dependencies? Does it handle the edge cases the spec called out?
Line-by-line review comes second. This ordering matters — catching a wrong abstraction early is cheap; catching it after you've validated the logic is expensive.
### Step 4: Iterate in conversation, not in commits
If the first pass is wrong, correct it in the same session. The model has context. A new session loses it.
## What this changes
The spec-first workflow makes AI useful for real features, not just utility functions. It also has a side effect: writing the spec often reveals design problems I would have hit halfway through implementation. The spec is worth writing even when I'm going to write the code myself.
## When to skip the spec
Skip it for short, well-isolated utility functions with clear types and no system dependencies. `formatDate(date: Date): string` doesn't need a spec. A custom React Query mutation hook that touches auth state, error boundaries, and a specific API contract does.
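To make the contrast concrete, here's a sketch of that no-spec case. The body is my assumption — the signature is the only part from the text — but the point is that everything a reviewer needs is already in that signature:

```typescript
// Everything a caller needs is in the signature: no auth, no shared
// state, no API contract. There's nothing left for a spec to pin down.
function formatDate(date: Date): string {
  // Assumption: "formatted" here means an ISO-8601 calendar date.
  return date.toISOString().slice(0, 10);
}

formatDate(new Date(Date.UTC(2024, 4, 1))); // → "2024-05-01"
```

The mutation hook fails this test immediately: its signature says nothing about which auth store it reads, which error boundary catches its failures, or what the API contract allows — exactly the questions a spec answers.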
The decision rule: if you'd pause before starting to write it yourself, write a spec first.