Article 2: The Planning Trap (v2 - Context Intelligence as thread)
---
profile: lr
format: newsletter
topic: Context Intelligence—the skill of reading what kind of work you're facing and knowing how to approach it
red_thread: The question isn't "should I plan?"—it's "what kind of work is this?" Context intelligence is the skill that answers that.
goal: Reader understands that context intelligence (recognizing work types, sensing complexity, knowing when rigor pays off) is the meta-skill underlying effective Claude Code workflows, and knows how to apply it in practice.
generated: 2026-01-15
voice_match: TBD
word_count: TBD
status: draft-v2
---

The Planning Trap (And When Planning Pays Off)

This is Part 2 of Claude Code for Practitioners—a series about the mental models that matter, not the syntax. There's plenty of how-to content out there. This is the thinking underneath. (If you're just joining: Part 1 covered the orchestration skills you already have.)


"Should I invest time upfront in comprehensive research and architecture documents before touching code? Or should I just dive in?"

That's the question Ivan asked me. He's a BI engineer making the transition to Claude Code workflows, and the planning question is where everyone gets stuck.

My immediate answer was "it depends."

Then I spent some time figuring out what "it depends" actually depends on.

The answer requires a skill I call context intelligence—and it's the difference between practitioners who thrive with Claude Code and those who thrash.

What Is Context Intelligence?

Think emotional intelligence, business intelligence, artificial intelligence. Context intelligence is in that family: knowing what should be in context to do the work, recognizing when it's not right, and finding what to adjust.

Could you call that engineering? Sure. But here's the thing—you're often dealing with black-box context. There's no stack trace when a skill has drifted or when the agent's working from stale assumptions. You can't grep your way to the problem. It requires something less prescriptive than engineering: pattern recognition, intuition about what's off, judgment about what to surface.

Context intelligence is the sensing layer that tells you what kind of work you're facing—and how to approach it.

(You might hear "context engineering"—that's the mechanical foundation, the structuring of what fits in context windows. Context intelligence is the judgment layer on top. Engineering is the craft; intelligence is the intuition.)

The rest of this article is what context intelligence looks like in practice.

The First Question: Will This Run More Than Once?

Context intelligence starts here: "Am I building infrastructure that will run repeatedly, or am I solving a one-time problem?"

Recurring systems reward rigor. Agents, reusable skills, infrastructure that runs repeatedly—upfront planning pays compound returns. You invest now, it amortizes across every future execution.

One-off work rewards iteration. A focused feature, a quick experiment, a targeted fix—planning overhead exceeds the value. Start, learn, adjust, ship.

This is context intelligence reading the fundamental nature of the work. The question isn't binary (plan or iterate). It's categorical: what kind of work is this?

And once you know that, the approach becomes obvious.

The Complexity Dimension: What Token Economics Tell You

Context intelligence doesn't stop at recurring vs one-off. There's a second dimension: project complexity.

Anthropic's guidance breaks this down by token count:

Small projects (<10K tokens): Let Claude handle them autonomously. Minimal planning, fast iteration; 40% productivity gains are possible when you just let the agent run.

Medium projects (10-100K tokens): Structured planning becomes necessary. Without it, you hit context pollution—the agent starts thrashing, losing track of what it's already done, rewriting working code.

Large projects (>200K tokens): Heavy upfront planning is mandatory. Token limits compress what fits in working memory. Quality degrades without human-controlled design. The agent can execute the plan, but the human needs to own the architecture.

Here's the interesting part: these aren't arbitrary thresholds. They map to cognitive load—how much the agent can hold in working memory versus how much needs to be externalized into explicit plans.

Which means the planning question is actually a context management question. When the whole system fits in the agent's context window, iteration works. When it doesn't, you need structure.

Context intelligence is reading that boundary—and sensing when you're about to cross it.
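
If it helps to see those thresholds as something runnable, here's a minimal sketch. The helper names and the characters-divided-by-four estimate are illustrative assumptions rather than any official API, and since the guidance above leaves the 100-200K band unspecified, the sketch simply treats everything past the medium band as large.

```python
# Rough planning-mode lookup based on the token thresholds above.
# estimate_tokens uses the common ~4-characters-per-token rule of
# thumb for English text and code; it is not an exact tokenizer.

from pathlib import Path

def estimate_tokens(paths: list[str]) -> int:
    """Very rough token estimate: ~4 characters per token."""
    chars = sum(len(Path(p).read_text(errors="ignore")) for p in paths)
    return chars // 4

def planning_mode(estimated_tokens: int) -> str:
    if estimated_tokens < 10_000:
        return "autonomous"        # small: let Claude run, iterate fast
    if estimated_tokens < 100_000:
        return "structured-plan"   # medium: plan or risk context pollution
    return "human-owned-design"    # large: heavy planning, human architecture

# e.g. planning_mode(estimate_tokens(["src/auth.py", "src/api.py"]))
```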

The Greenfield Trap: When Context Intelligence Catches Disguised Work

This is where context intelligence earns its keep.

Greenfield projects feel like one-off work. You're building something new, there's no existing infrastructure to navigate, so it seems like the perfect place to iterate quickly.

But context intelligence knows better. Greenfield projects are recurring infrastructure in disguise. The auth system you're sketching? Runs thousands of times. The API endpoints you're prototyping? Foundation for everything downstream.

Treat greenfield like one-off work, and you end up refactoring constantly. I've watched this pattern: someone builds a feature iteratively, it works, they ship it—and then spend the next month untangling technical debt because they optimized for speed over structure.

Context intelligence senses that something feels like one-off but isn't. It knows when to override the surface impression and plan anyway.

What I Do When Context Intelligence Says "Plan"

When I'm building recurring systems or greenfield infrastructure—when context intelligence tells me rigor will pay off—I use a structured flow:

  1. Constitution → Define project principles and constraints upfront
  2. Specify → Write the feature specification
  3. Clarify → Surface unknowns and assumptions before planning
  4. Analyze → Cross-check artifacts for consistency
  5. Plan → Create implementation plan with research embedded
  6. Tasks → Break plan into dependency-ordered work packages
  7. Implement → Execute with dedicated implementation agents

The key move: research happens at the PLAN phase, not before. Most people research first, then plan—but that creates loops where you're never sure if you're done. You keep pulling threads because "what if this matters?"

Research during planning solves this: you know what you're planning for, so you know when you have enough information. The plan gives you the boundary conditions. Research fills in the gaps within those boundaries.
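
To make that order concrete, here's the sequence as plain data, with research deliberately nested inside the plan phase rather than listed as a phase of its own. This models the mental model only, not speckit's actual commands or file formats.

```python
# The seven phases as an ordered pipeline. Research lives inside
# "plan": the plan's boundary conditions tell you when you have
# enough information, so research can't loop forever.

PHASES = [
    ("constitution", "define project principles and constraints"),
    ("specify",      "write the feature specification"),
    ("clarify",      "surface unknowns and assumptions before planning"),
    ("analyze",      "cross-check artifacts for consistency"),
    ("plan",         "create the implementation plan; research happens here, "
                     "scoped to the plan's boundary conditions"),
    ("tasks",        "break the plan into dependency-ordered work packages"),
    ("implement",    "execute with dedicated implementation agents"),
]

for name, purpose in PHASES:
    print(f"{name:>12}: {purpose}")
```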

I should say upfront—lots of people think structured spec workflows are overdoing it. They're not wrong. Sometimes it IS overdoing it. That's the whole point: knowing when rigor pays off and when it's just ceremony.

The framework itself—constitution → specify → clarify → analyze → plan → tasks → implement—is useful as a mental model even if you never run a single command. The phases describe what actually needs to happen when you're building recurring systems. Whether you formalize it with tooling or just internalize the sequence as "the way I think about this," the structure matters.

This is what works for me. I use speckit because the phases match how I think about big work. But you don't need the tool to benefit from the thinking.

One thing to clarify—speckit scales beyond single features. I use it for big chunks: MVPs, architecture foundations, entire system designs. When scoped correctly, it gets implementation really far.

Here's the leverage: if you have experience building and planning, you already know where some of the footguns are. You can build those concerns into the spec itself. Then the planning phase researches those concerns, and that research becomes reference material for implementation agents. They have the context they need to avoid the problems you've already seen. Your experience compounds—it's encoded in the process, not just your memory.

Context Intelligence Means Front-Loading Dependencies

The other pattern I've converged on: front-load the human dependencies.

Before implementation agents start executing, I make sure they have:

  • Clear acceptance criteria (not "make it work"—specific, testable conditions)
  • API keys, credentials, and access sorted (not "we'll figure it out later")
  • Architectural decisions made (not "let's see what feels right")
  • Known constraints documented (not "we'll adapt as we go")

This isn't about control—it's context intelligence removing blockers before they become blockers. When agents hit a missing credential or an undefined edge case, the context shift back to planning is expensive. You lose momentum. The agent loses the thread. You end up reorienting more than executing.

Front-loading dependencies means agents can run parallel workstreams without waiting for human input. That's where you get real velocity.

This is context intelligence in action: sensing what will block the agent before it starts, and loading that context upfront.
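
One way to operationalize it is a preflight gate: refuse to launch agents until every human-owned dependency is resolved. The checklist below is a sketch with field names of my own invention, not part of any tool.

```python
# Preflight gate: block agent launch while human-owned dependencies
# are unresolved, so agents never stall mid-run waiting on a human.

from dataclasses import dataclass, field

@dataclass
class Preflight:
    acceptance_criteria: list[str] = field(default_factory=list)  # specific, testable
    credentials_ready: bool = False                               # keys and access sorted
    decisions_made: list[str] = field(default_factory=list)       # architectural calls
    constraints_documented: bool = False                          # known limits written down

    def blockers(self) -> list[str]:
        out = []
        if not self.acceptance_criteria:
            out.append("no testable acceptance criteria")
        if not self.credentials_ready:
            out.append("credentials/access unresolved")
        if not self.decisions_made:
            out.append("architectural decisions still open")
        if not self.constraints_documented:
            out.append("constraints undocumented")
        return out

check = Preflight(acceptance_criteria=["login survives token refresh"],
                  credentials_ready=True,
                  decisions_made=["JWT over server sessions"],
                  constraints_documented=True)
assert not check.blockers(), f"resolve before launching agents: {check.blockers()}"
```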

When Context Intelligence Says "Skip the Ceremony"

Not everything deserves rigor.

When iteration is the right move:

  • Small features that touch 1-2 files
  • Experiments where you're learning what "right" looks like
  • Prototypes that might get thrown away
  • Quick fixes with clear scope
  • Exploratory work where the goal is discovery, not production

The test I use: "If this fails, what did I lose?"

If the answer is "an hour and some iteration cycles," just iterate. The planning overhead isn't worth it.

If the answer is "a week of rework and downstream breakage," plan.

That's context intelligence knowing when NOT to plan.

The Pure Iteration Pattern (And When It Breaks)

Pure iteration: you start coding immediately. Claude reads files, makes changes, you run tests, it adjusts. Fast, fluid, minimal friction. For single-file edits, this is dramatically faster than planning.

The problem isn't that it doesn't work. The problem is when it stops working.

Pure iteration works until:

  • You hit a breaking change that cascades across multiple files
  • You realize the approach you've been iterating on won't scale
  • You need to coordinate changes across systems
  • The context window fills up and Claude starts losing track

At that point, you're not iterating forward—you're refactoring backwards. And the time you saved by skipping planning? You're spending it now, with interest.

Context intelligence senses when you're about to cross that threshold—when iteration is about to tip into thrashing. That uneasy feeling when you're about to dive in? That's context intelligence telling you something's off.

Trust that instinct.

Context Intelligence in Practice: The Heuristics

If you're standing at the beginning of a new Claude Code task and asking yourself "should I plan or iterate?", here's the decision tree I'd use:

Plan when:

  • This will run more than once (recurring system)
  • Multiple files need coordination
  • You're in unfamiliar codebase territory
  • Estimated token count > 10K
  • Failure cost is high (migrations, refactors, production changes)
  • You feel "uneasy" about diving in (trust that instinct)

Iterate when:

  • This is one-off work
  • Scope is clear and contained
  • You're in familiar territory
  • Estimated token count < 10K
  • Failure cost is low (prototype, experiment, exploration)
  • Speed matters more than structure

Use the hybrid pattern when:

  • You're building greenfield infrastructure
  • You're starting with iteration to explore, then pausing to plan once you understand the shape
  • You're executing planned work in small iterative chunks (plan the system, iterate the implementation)

The last one is worth emphasizing: planning and iteration aren't competing strategies. The most effective pattern is both—plan the system architecture, then iterate within that structure.

These heuristics? They're context intelligence codified into rules you can follow before the intuition fully develops.
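
And if it helps to see them as literal code, here's one possible encoding. The function and its thresholds are my framing of the rules above, not a published formula.

```python
# The plan-vs-iterate heuristics above, encoded as a function.
# Greenfield always goes hybrid (plan the system, iterate the
# implementation); any other strong signal tips toward planning.

def choose_approach(*, recurring: bool, greenfield: bool, files_touched: int,
                    familiar_territory: bool, est_tokens: int,
                    failure_cost_high: bool, feels_uneasy: bool) -> str:
    if greenfield:
        return "hybrid"
    plan_signals = [
        recurring,                  # will run more than once
        files_touched > 2,          # multiple files need coordination
        not familiar_territory,     # unfamiliar codebase territory
        est_tokens > 10_000,        # past the small-project threshold
        failure_cost_high,          # migrations, refactors, production
        feels_uneasy,               # trust that instinct
    ]
    return "plan" if any(plan_signals) else "iterate"

# A quick fix in familiar code: iterate.
print(choose_approach(recurring=False, greenfield=False, files_touched=1,
                      familiar_territory=True, est_tokens=3_000,
                      failure_cost_high=False, feels_uneasy=False))
```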

What the Data Actually Shows

The Claude Code community has converged on this hybrid approach—not because someone prescribed it, but because it's what works.

Anthropic's official recommendation is a four-phase workflow: Explore (don't code yet) → Plan (get human approval) → Code (small diffs) → Commit (document and PR). The key phrase: "Skipping the planning step leads to suboptimal solutions."

But the same practitioners who swear by planning also report that over-planning becomes its own trap. One developer programmed their CLAUDE.md to "push back on perfectionism and analysis paralysis" and "nudge toward shipping, not endlessly polishing."

Which tracks with my experience: the goal isn't perfect upfront design. The goal is specification clarity that reduces rework.

Planning that enables action is useful. Planning that substitutes for action is procrastination wearing a process costume.

Here's the thing, though: if you're a builder, you probably have planning PTSD. We want to build. We've sat through enough waterfall spec reviews and architecture committee meetings to last a lifetime. Planning felt like the tax you paid before you could do the actual work.

AI changes this. Some models are genuinely good at helping you plan—fast. Planning isn't the barrier it used to be. And honestly, the more I tune my planning workflows in Claude Code, the faster they get. The more I plan before I build, the better I get at knowing what agents are going to need to work well. It's meta-iteration: you're not just iterating on code, you're iterating on your ability to set up work that succeeds.

That's context intelligence improving through practice—and meta-iteration is how you train it.

Why Planning Feels Expensive (And How to Fix It)

When people say "I don't have time to plan," what they often mean is "I don't have good context management, so planning feels expensive." Fix the context, and planning becomes cheaper.

A concise CLAUDE.md file (100-200 lines, focused on architecture and constraints) enables Claude to make better decisions upfront. That reduces the need for iteration. It also makes planning faster, because the context is already structured.
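
As an illustration, a concise CLAUDE.md might be shaped like this. The project, headings, and rules are hypothetical, one plausible layout rather than a template Anthropic prescribes.

```markdown
# Project: billing-service (hypothetical example)

## Architecture
- Single FastAPI service with Postgres; no microservices.
- All money amounts are integer cents, never floats.

## Constraints
- Do not modify files under migrations/ without asking first.
- `make test` must pass before any commit.

## Conventions
- Raise domain exceptions; never return error strings.
- Keep diffs small: one concern per change.
```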

When people say "iteration is faster," what they often mean is "my planning overhead is too high, so skipping it feels like acceleration." But if planning is expensive, that's usually a context problem, not a planning problem.

The tooling for this already exists. Planning and iteration are both context management strategies—just optimized for different scenarios. Context intelligence is how you know which one you're in.

What This Means for You

If you're asking Ivan's question—"should I plan or dive in?"—the answer depends on what you're building.

Recurring systems? Plan. The rigor compounds. One-off work? Iterate. Speed matters more than structure. Greenfield infrastructure? Plan—even though it feels like one-off work, it's not.

And if you're finding that planning feels slow or iteration keeps breaking, the problem might not be the approach. It might be your context management. Fix the CLAUDE.md. Structure what the agent sees. Front-load the dependencies.

The point isn't to copy my workflow. It's to develop your context intelligence.

Ask: "Is this recurring or one-off?" Ask: "Is this small, medium, or large in token complexity?" Ask: "What's the failure cost if I get this wrong?" Ask: "Do I feel uneasy about diving in?"

Then choose accordingly.

The heuristics are trainable. Context intelligence is a skill you build through practice—and meta-iteration applies to context intelligence itself. The more you pay attention to what works and what doesn't, the better you get at sensing what kind of work you're facing.


I'm still working out where the boundaries are—when planning overhead tips into analysis paralysis, when iteration tips into thrashing. If you've found patterns that work (or antipatterns that don't), I want to hear about it. This is something we figure out collectively, not something I hand you as doctrine.


The Claude Code for Practitioners Series:

  1. You Already Know How to Do This — The orchestration skills you've built in other domains transfer directly.
  2. The Planning Trap (And When Planning Pays Off) ← you are here
  3. The Anti-Patterns I See Newcomers Building — The fastest path to productivity is recognizing what NOT to build.
  4. The Minimal Viable Stack — Not a copy of my extensive config, but where I'd start if I was starting over.
  5. Wiggum Needs a Spec — Planning and implementation aren't competing approaches. Here's how they fit together.