| profile | format | topic | red_thread | goal | generated | voice_match | word_count | status |
|---|---|---|---|---|---|---|---|---|
| lr | newsletter | When planning pays off vs when iteration wins in Claude Code workflows | The question isn't "should I plan?"—it's "what kind of work is this?" | Reader understands the distinction between recurring systems (where rigor pays off) and one-off work (where iteration wins), and knows how to recognize which they're facing. | 2026-01-15 | 91 | 1750 | complete |
This is Part 2 of Claude Code for Practitioners—a series about the mental models that matter, not the syntax. There's plenty of how-to content out there. This is the thinking underneath. (Part 1 covered the orchestration skills you already have if you're just joining.)
"Should I invest time upfront in comprehensive research and architecture documents before touching code? Or should I just dive in?"
That's the question every practitioner asks when moving to Claude Code workflows. I asked it too. And when Ivan—a BI engineer I've been working with who's making that exact transition—asked me, my immediate answer was "it depends."
So I spent some time figuring out what "it depends" actually depends on.
I like planning. I'll say that upfront—I'm not in the "vibes and iteration" camp. My workflows look more like specification → clarification → analysis → plan → tasks → implement. It works. I have better outcomes with this process than without it.
But here's the thing: the answer isn't preference. It's not personality. It's what kind of work this is.
The planning question resolves when you stop treating it as binary (plan or iterate) and start asking: "Am I building infrastructure that will run repeatedly, or am I solving a one-time problem?"
Recurring systems reward rigor. Agents, reusable skills, infrastructure that runs repeatedly—upfront planning pays compound returns. You invest now, it amortizes across every future execution.
One-off work rewards iteration. A focused feature, a quick experiment, a targeted fix—planning overhead exceeds the value. Start, learn, adjust, ship.
The question isn't "should I plan?" It's: "Will this run more than once?"
There's a second dimension: project complexity. Anthropic's guidance breaks this down by token count:
Small projects (<10K tokens): Let Claude handle it autonomously. Minimal planning, fast iteration, 40% productivity gains possible when you just let it run.
Medium projects (10-100K tokens): Structured planning becomes necessary. Without it, you hit context pollution—the agent starts thrashing, losing track of what it's already done, rewriting working code.
Large projects (>200K tokens): Heavy upfront planning is mandatory. Token limits compress what fits in working memory. Quality degrades without human-controlled design. The agent can execute the plan, but the human needs to own the architecture.
Here's the interesting part: these aren't arbitrary thresholds. They map to cognitive load—how much the agent can hold in working memory versus how much needs to be externalized into explicit plans.
Which means the planning question is actually a context management question. When the whole system fits in the agent's context window, iteration works. When it doesn't, you need structure.
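If you want a rough way to bucket a project before you start, a quick sketch like this works. The thresholds are the ones above; the four-characters-per-token estimate and the file selection are my own assumptions, so treat the output as a signal, not a measurement.

```python
from pathlib import Path

# Rough heuristic: ~4 characters per token. Good enough for bucketing,
# nowhere near precise enough for anything else.
CHARS_PER_TOKEN = 4

def estimate_tokens(root: str, exts: tuple = (".py", ".ts", ".md")) -> int:
    """Estimate how many tokens the relevant source files would occupy."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in exts
    )
    return total_chars // CHARS_PER_TOKEN

def planning_posture(tokens: int) -> str:
    """Map an estimated token count to the postures described above."""
    if tokens < 10_000:
        return "small: minimal planning, let Claude run"
    if tokens < 100_000:
        return "medium: structured planning, watch for context pollution"
    return "large: heavy upfront planning, human-owned architecture"

if __name__ == "__main__":
    tokens = estimate_tokens(".")
    print(f"~{tokens:,} tokens -> {planning_posture(tokens)}")
```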
This is where newcomers get stuck—and I did too, initially.
Greenfield projects feel like one-off work. You're building something new, there's no existing infrastructure to navigate, so it seems like the perfect place to iterate quickly.
But greenfield projects are tricky—you're building recurring infrastructure in disguise. The auth system you're sketching? Runs thousands of times. The API endpoints you're prototyping? Foundation for everything downstream.
Treat greenfield like one-off work, and you end up refactoring constantly. I've watched this pattern: someone builds a feature iteratively, it works, they ship it—and then spend the next month untangling technical debt because they optimized for speed over structure. Plan accordingly.
When I'm building recurring systems or greenfield infrastructure, I use a structured flow:
- Constitution → Define project principles and constraints upfront
- Specify → Write the feature specification
- Clarify → Surface unknowns and assumptions before planning
- Analyze → Cross-check artifacts for consistency
- Plan → Create implementation plan with research embedded
- Tasks → Break plan into dependency-ordered work packages
- Implement → Execute with dedicated implementation agents
The key move: research happens at the PLAN phase, not before. Most people research first, then plan—but that creates loops where you're never sure if you're done. You keep pulling threads because "what if this matters?"
Research during planning solves this: you know what you're planning for, so you know when you have enough information. The plan gives you the boundary conditions. Research fills in the gaps within those boundaries.
I should say upfront—lots of people think structured spec workflows are overdoing it. They're not wrong. Sometimes it IS overdoing it. That's the whole point of this article: knowing when rigor pays off and when it's just ceremony.
The framework itself—constitution → specify → clarify → analyze → plan → tasks → implement—is useful as a mental model even if you never run a single command. The phases describe what actually needs to happen when you're building recurring systems. Whether you formalize it with tooling or just internalize the sequence as "the way I think about this," the structure matters.
This is what works for me. I use speckit because the phases match how I think about big work. But you don't need the tool to benefit from the thinking.
One thing to clarify—speckit scales beyond single features. I use it for big chunks: MVPs, architecture foundations, entire system designs. When scoped correctly, it gets implementation really far.
Here's the leverage: if you have experience building and planning, you already know where some of the footguns are. You can build those concerns into the spec itself. Then the planning phase researches those concerns, and that research becomes reference material for implementation agents. They have the context they need to avoid the problems you've already seen. Your experience compounds—it's encoded in the process, not just your memory.
If you want to try it, speckit is on GitHub. You don't need to run all phases for every piece of work—small changes can skip straight to implementation, greenfield systems benefit from the full sequence. Use the phases that match the work.
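If it helps to see the idea rather than the tool, here's the same phase sequence as a plain checklist. The phase names come from the list above; which phases I'd run for which kind of work is my own rough mapping, not anything speckit prescribes.

```python
# The full sequence, as a mental model rather than a command set.
PHASES = ["constitution", "specify", "clarify", "analyze", "plan", "tasks", "implement"]

# My rough mapping of work type to phases. Adjust to taste; the point is
# that small changes can skip straight to implementation while recurring
# and greenfield work earns the whole sequence.
PHASES_BY_WORK = {
    "quick fix": ["implement"],
    "small feature": ["specify", "plan", "implement"],
    "recurring system": PHASES,
    "greenfield": PHASES,
}

def phases_for(kind: str) -> list[str]:
    """Return the phases worth running for this kind of work."""
    return PHASES_BY_WORK.get(kind, PHASES)

print(" -> ".join(phases_for("small feature")))  # specify -> plan -> implement
```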
The other pattern I've converged on: front-load the human dependencies.
Before implementation agents start executing, I make sure they have:
- Clear acceptance criteria (not "make it work"—specific, testable conditions)
- API keys, credentials, and access sorted (not "we'll figure it out later")
- Architectural decisions made (not "let's see what feels right")
- Known constraints documented (not "we'll adapt as we go")
This isn't about control—it's about removing blockers before they become blockers. When agents hit a missing credential or an undefined edge case, the context shift back to planning is expensive. You lose momentum. The agent loses the thread. You end up reorienting more than executing.
Front-loading dependencies means agents can run parallel workstreams without waiting for human input. That's where you get real velocity.
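Here's roughly what that looks like in practice, as a sketch. The environment variables and document paths are placeholders for whatever your project actually depends on; the shape is the point: verify everything agents will need before any of them start.

```python
import os
import sys
from pathlib import Path

# Placeholder names: swap in whatever your agents actually depend on.
REQUIRED_ENV = ["DATABASE_URL", "THIRD_PARTY_API_KEY"]
REQUIRED_DOCS = [
    "docs/acceptance-criteria.md",      # specific, testable conditions
    "docs/architecture-decisions.md",   # decisions already made
    "docs/constraints.md",              # known constraints, written down
]

def preflight() -> list[str]:
    """Return outstanding human dependencies; empty means agents can run."""
    missing = [f"env var {name}" for name in REQUIRED_ENV if not os.environ.get(name)]
    missing += [f"file {path}" for path in REQUIRED_DOCS if not Path(path).exists()]
    return missing

if __name__ == "__main__":
    blockers = preflight()
    if blockers:
        print("Resolve before starting agents:")
        for item in blockers:
            print(f"  - {item}")
        sys.exit(1)
    print("No human dependencies outstanding. Safe to run parallel workstreams.")
```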
Not everything deserves this rigor.
When iteration is the right move:
- Small features that touch 1-2 files
- Experiments where you're learning what "right" looks like
- Prototypes that might get thrown away
- Quick fixes with clear scope
- Exploratory work where the goal is discovery, not production
The test I use: "If this fails, what did I lose?"
If the answer is "an hour and some iteration cycles," just iterate. The planning overhead isn't worth it.
If the answer is "a week of rework and downstream breakage," plan.
Pure iteration: you start coding immediately. Claude reads files, makes changes, you run tests, it adjusts. Fast, fluid, minimal friction. For single-file edits, this is dramatically faster than planning.
The problem isn't that it doesn't work. The problem is when it stops working.
Pure iteration works until:
- You hit a breaking change that cascades across multiple files
- You realize the approach you've been iterating on won't scale
- You need to coordinate changes across systems
- The context window fills up and Claude starts losing track
At that point, you're not iterating forward—you're refactoring backwards. And the time you saved by skipping planning? You're spending it now, with interest.
This is why I keep saying: it's not preference, it's context.
For small, focused work—iteration wins. For recurring systems and complex projects—planning wins. For greenfield infrastructure—plan, even though it feels like you shouldn't need to.
If you're standing at the beginning of a new Claude Code task and asking yourself "should I plan or iterate?", here's the decision tree I'd use:
Plan when:
- This will run more than once (recurring system)
- Multiple files need coordination
- You're in unfamiliar codebase territory
- Estimated token count > 10K
- Failure cost is high (migrations, refactors, production changes)
- You feel "uneasy" about diving in (trust that instinct)
Iterate when:
- This is one-off work
- Scope is clear and contained
- You're in familiar territory
- Estimated token count < 10K
- Failure cost is low (prototype, experiment, exploration)
- Speed matters more than structure
Use the hybrid pattern when:
- You're building greenfield infrastructure
- You're starting with iteration to explore, then pausing to plan once you understand the shape
- You're executing planned work in small iterative chunks (plan the system, iterate the implementation)
The last one is worth emphasizing: planning and iteration aren't competing strategies. The most effective pattern is both—plan the system architecture, then iterate within that structure.
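And if the decision tree is easier to hold as code than as prose, here's one way to encode it. The inputs and the ordering reflect how I weigh things; it's a sketch of the heuristic, not a rule engine.

```python
def choose_approach(
    recurring: bool,          # will this run more than once?
    est_tokens: int,          # rough project size (see the estimate above)
    multi_file: bool,         # do changes need coordination across files?
    familiar: bool,           # do you know this territory?
    failure_cost_high: bool,  # migrations, refactors, production changes
    greenfield: bool = False, # new infrastructure in disguise
    uneasy: bool = False,     # trust that instinct
) -> str:
    """Encode the plan / iterate / hybrid heuristic from the lists above."""
    if greenfield:
        return "hybrid: plan the system, iterate the implementation"
    if (recurring or failure_cost_high or multi_file
            or est_tokens > 10_000 or uneasy or not familiar):
        return "plan"
    return "iterate"

# A small one-off change in a familiar codebase:
print(choose_approach(recurring=False, est_tokens=4_000, multi_file=False,
                      familiar=True, failure_cost_high=False))  # iterate
```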
The Claude Code community has converged on this hybrid approach—not because someone prescribed it, but because it's what works.
Anthropic's official recommendation is a four-phase workflow: Explore (don't code yet) → Plan (get human approval) → Code (small diffs) → Commit (document and PR). The key phrase: "Skipping the planning step leads to suboptimal solutions."
But the same practitioners who swear by planning also report that over-planning becomes its own trap. One developer programmed their CLAUDE.md to "push back on perfectionism and analysis paralysis" and "nudge toward shipping, not endlessly polishing."
Which tracks with my experience: the goal isn't perfect upfront design. The goal is specification clarity that reduces rework.
Planning that enables action is useful. Planning that substitutes for action is procrastination wearing a process costume.
Here's the thing, though: if you're a builder, you probably have planning PTSD. We want to build. We've sat through enough waterfall spec reviews and architecture committee meetings to last a lifetime. Planning felt like the tax you paid before you could do the actual work.
AI changes this. Some models are genuinely good at helping you plan—fast. Planning isn't the barrier it used to be. And honestly, the more I tune my planning workflows in Claude Code, the faster they get. The more I plan before I build, the better I get at knowing what agents are going to need to work well. It's meta-iteration: you're not just iterating on code, you're iterating on your ability to set up work that succeeds.
Here's where this connects back to Part 1 of this series: the differentiator isn't planning or iteration—it's what I call context intelligence.
Think emotional intelligence, business intelligence, artificial intelligence. Context intelligence is in that family: knowing what should be in context to do the work, recognizing when it's not right, and finding what to adjust.
Could you call that engineering? Sure. But here's the thing—you're often dealing with black box context. There's no stack trace when a skill has drifted or when the agent's working from stale assumptions. You can't grep your way to the problem. It requires something less prescriptive than engineering: pattern recognition, intuition about what's off, judgment about what to surface.
Context engineering is the foundation—the mechanics of structuring what fits in context windows. A concise CLAUDE.md file (100-200 lines, focused on architecture and constraints) enables Claude to make better decisions upfront. That reduces the need for iteration. It also makes planning faster, because the context is already structured.
But context intelligence is the sensing layer on top. It's that uneasy feeling when you're about to iterate on something that needs a plan. The friction when you're over-engineering a one-off task. The instinct that something in the context is wrong before you can articulate what.
When people say "I don't have time to plan," what they often mean is "I don't have good context management, so planning feels expensive." Fix the context, and planning becomes cheaper. When people say "iteration is faster," what they often mean is "my planning overhead is too high, so skipping it feels like acceleration." But if planning is expensive, that's usually a context problem, not a planning problem.
Planning and iteration are both context management strategies—just optimized for different scenarios. Context intelligence is how you know which one you're in.
The heuristics earlier in this article? That's context intelligence in practice. Recognizing recurring versus one-off work. Sensing when token complexity is climbing. Knowing when "just start coding" is about to cost you. The skill isn't just managing context—it's reading it.
If you're asking Ivan's question—"should I plan or dive in?"—the answer depends on what you're building.
Recurring systems? Plan. The rigor compounds. One-off work? Iterate. The speed matters more than structure. Greenfield infrastructure? Plan—even though it feels like one-off work, it's not.
And if you're finding that planning feels slow or iteration keeps breaking, the problem might not be the approach. It might be your context management. Fix the CLAUDE.md. Structure what the agent sees. Front-load the dependencies.
The point isn't to copy my workflow. It's to match the approach to the work.
Ask: "Is this recurring or one-off?" Ask: "Is this small, medium, or large in token complexity?" Ask: "What's the failure cost if I get this wrong?"
Then choose accordingly.
I'm still working out where the boundaries are—when planning overhead tips into analysis paralysis, when iteration tips into thrashing. If you've found patterns that work (or antipatterns that don't), I want to hear about it. This is something we figure out collectively, not something I hand you as doctrine.
The Claude Code for Practitioners Series:
- You Already Know How to Do This — The orchestration skills you've built in other domains transfer directly.
- The Planning Trap (And When Planning Pays Off) ← you are here
- The Anti-Patterns I See Newcomers Building — The fastest path to productivity is recognizing what NOT to build.
- The Minimal Viable Stack — Not a copy of my extensive config, but where I'd start if I were starting over.
- Wiggum Needs a Spec — Planning and implementation aren't competing approaches. Here's how they fit together.