Article 4: The Minimal Viable Stack - Claude Code for Practitioners series
---
profile: lr
format: newsletter
topic: The Minimal Viable Stack—where to start if starting over
red_thread: Start minimal and learn how things work together. Iterate as problems repeat.
goal: Reader has a concrete starting architecture and philosophy for building up—not a copy of someone else's setup.
generated: 2026-01-18 15:30:00 UTC
voice_match: 92
word_count: 1750
status: complete
citations:
  - Convex Plugin Evolution - Day 1-4 iteration pattern from community discussion
  - Ivan's Design System - Component mapping approach for shadcn/blocks, Aceternity, cult-ui
  - Claude Code Plugin Architecture - llms.txt strategy and context window isolation
---

Building Your Claude Code Stack (From Zero)

This is Part 4 of Claude Code for Practitioners—a series about the mental models that matter, not the syntax. Part 1 covered the orchestration skills you already have. Part 2 introduced the planning trap—when rigor pays off and when it doesn't. Part 3 identified the anti-patterns that signal you haven't mapped the terrain yet.


"Can you share your base setup?"

I get this question a lot. And the answer is always the same: don't copy mine.

My current setup has agents for different stacks, tool-specific skills for fast-moving frameworks, commands for frequent workflows, and hooks that capture troubleshooting insights. A whole infrastructure that took months to build—since Claude Code launched, basically. I added pieces as problems repeated. Iterated as I learned how things compose. Pruned regularly when complexity outweighed utility.

If you just took it, you'd inherit my mental model—built for my stack, my workflows, my repeated needs. You'd end up with Convex auth instead of Clerk even though you explicitly told the agent otherwise. You'd debug my assumptions instead of building from yours.

So the better question isn't "what's your setup?"—it's "where would you start if you were starting over?"

That's a philosophy, not a configuration.

What a Minimal Claude Code Stack Actually Is

Start minimal. Learn how things work together. Add as problems repeat—not before. Build from need, not from browsing awesome-claude-skills repos. Keep it clean. Prune regularly.

That's the whole strategy. But specifics matter—here's the concrete version: three types of infrastructure, applied strategically.

The Claude Code Architecture (What Goes Where)

Think of your setup like a small company's staff. Not Google's org chart—the team you'd actually run with if you were three people trying to ship.

Simple agents are your specialists. One job each. A Convex agent who knows current Convex patterns and checks llms.txt before implementing. A design implementation agent who understands your component library and can spin up layouts fast. A git workflow agent who handles the commit/push/PR ceremony without cluttering your main conversation.

Skills are "the way you do things"—your conventions, your gotchas, your repeated patterns. Not downloaded capabilities. Context you build from experience. The troubleshooting insight you captured after three hours debugging why your agent kept using deprecated APIs. The component mapping that connects heroes and CTAs to actual assets you own.

MCP servers are your research layer. Exa for finding current information. Ref for pulling documentation. These aren't things you build—they're infrastructure you install once and use everywhere. The agents call them when they need knowledge outside the codebase.
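Wiring them up is one config file. Here's a sketch of a project-scoped .mcp.json (the server package names are assumptions; check each vendor's docs for the current install command):

```json
{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "<your-key>" }
    },
    "ref": {
      "command": "npx",
      "args": ["-y", "ref-tools-mcp"],
      "env": { "REF_API_KEY": "<your-key>" }
    }
  }
}
```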

That's the core. Agents for specialized work. Skills for repeated patterns. MCPs for external knowledge.

The piece most people miss? Tool-specific skills. This is where the real leverage lives.

Tool-Specific Skills: Why They Matter More Than You'd Think

Here's the problem nobody talks about enough: models don't know current tool patterns.

Convex launched less than two years ago. The best practices are still emerging. The agent falls back to training data—which means it tries to apply "old school data models" instead of Convex's reactive patterns. And it drifts more as the context window fills up, because the confident-but-wrong training data outweighs the small corrections you're making.

Same issue with Cloudflare Workers, shadcn/ui blocks added last month, the Claude Agent SDK, any tool that moves faster than training cycles. The agent confidently suggests patterns that were superseded, uses APIs that were deprecated, misses features that would solve your problem elegantly.

This isn't the model being bad. It's the model doing exactly what it was trained to do—answer from what it knows. The problem is that what it knows is six months stale.

Tool-specific skills fix this. Not generic "be a Convex expert" instructions—actual current documentation, key patterns, common gotchas, what's changed recently. You're giving the agent up-to-date context it can't infer from training data.

The pattern that works (see the sketch after this list):

  1. Reference the tool's llms.txt file in your CLAUDE.md or agent config
  2. Give the agent MCP access (exa or ref) to pull current docs
  3. Spawn tool-specific agents in their own context windows
  4. Have them check llms.txt before implementing
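Steps 1 and 4 can live in a single subagent file, following Claude Code's agent conventions. A minimal sketch (I'm assuming Convex's llms.txt sits at its docs URL; substitute your tool's):

```markdown
---
name: convex-expert
description: Handles Convex schema, query, and mutation work. Spawn for any Convex task.
tools: Read, Edit, Write, Bash, WebFetch
---

You are a Convex specialist running in your own context window.

Before implementing anything:
1. Fetch https://docs.convex.dev/llms.txt and skim for patterns relevant to the task.
2. Prefer current reactive patterns over the generic data-model habits in training data.
3. When the docs and your instinct disagree, the docs win.
```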

Why context window isolation matters: if you keep your main agent window clean and spawn a convex-expert when you hit Convex work, the specialized context doesn't pollute the orchestration layer. The agent checks current patterns, doesn't drift toward training data, and you stay further from the keyboard.

This applies to any tool in your stack that moves fast. Which, if you're building with AI infrastructure, is probably most of them.

Pro Tip: When you hit tool-specific work—Convex, Cloudflare, shadcn—spawn a dedicated agent in its own context window. The specialized context doesn't pollute your orchestration layer, and the agent checks current patterns instead of drifting toward stale training data.

The Day 1-4 Evolution (How This Actually Builds)

I sent Ivan my Convex plugin architecture recently—but told him not to just take it. What he'd learn from building his own would compound faster than inheriting mine.

Then I shared the evolution pattern:

Day 1: Create the convex agent. Simple. One job. Knows to check llms.txt. That's it.

Day 2: Use the agent. Create skills as you hit snags—but only if it doesn't work out of the box. Don't pre-build skills for problems you haven't had.

Day 3: Keep building. Iterate the skills based on what actually broke. Tune the agent based on what it's still getting wrong.

Day 4: Add commands for tasks you're doing all the time. The ceremony you're tired of explaining.

Repeat. Eventually you get a solid "Convex employee." Same as any other worker, without the meetings.

The key move: only create infrastructure when it doesn't work out of the box. Don't anticipate. Don't pre-engineer. Build from the friction you're actually experiencing.

Pro Tip: Use the agent out of the box first—create skills only when it doesn't work. The friction you experience is the signal for what infrastructure matters, and building from that signal compounds faster than inheriting someone else's assumptions.

Philosophy: Build from Experience, Not Anticipation

This part changes everything.

Skills aren't things you download and hope work. They're documentation of "the way you do things"—built from experience, not speculation.

Ivan's design implementation setup is the perfect example. He didn't start by downloading a "website builder agent" from awesome-claude-skills. He mapped his actual component library (shadcn/blocks, cult-ui, Aceternity), organized by function and style. Created skills for heroes, CTAs, features—each connected to assets he owns. Built it iteratively as he learned what he needed.
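I haven't seen his exact files, but the shape is easy to picture: one skill per section type, living at something like .claude/skills/hero-sections/SKILL.md, mapping briefs to assets he owns. A hypothetical sketch (every component name below is invented):

```markdown
---
name: hero-sections
description: Pick and adapt hero sections from our owned component libraries.
---

## Hero options, by style

- Minimal marketing: shadcn/blocks `hero-01`, `hero-05`
- Animated: Aceternity `spotlight-hero`, `lamp-hero`
- Editorial: cult-ui `split-hero`

Match the brief to a style, then adapt the closest owned asset.
Never generate a hero from scratch when a mapped asset exists.
```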

That's the pattern. Not "what might I need?" but "what keeps coming up?"

The troubleshooting insight you captured after debugging for three hours—that becomes a skill. The architecture decision you had to explain three times—that goes in CLAUDE.md. The workflow you're tired of manually orchestrating—that becomes a command.
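A command, for what it's worth, is just a prompt on disk. A sketch using Claude Code's custom slash command convention, a markdown file under .claude/commands/ where $ARGUMENTS stands in for whatever you type after the command (the workflow itself is made up):

```markdown
<!-- .claude/commands/ship.md, invoked as /ship <branch-name> -->
Run the ship ceremony for $ARGUMENTS:

1. Run the test suite. Stop and report if anything fails.
2. Commit staged work with a meaningful message.
3. Push the branch and open a PR summarizing the changes.
```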

Build abstractions from experience, not from anticipation.

This is different from normal engineering, where you're supposed to anticipate scale and design for flexibility. Here, premature abstraction is expensive—you're guessing what patterns will emerge before you've seen them. Better to stay concrete, iterate fast, and abstract only when repetition proves the pattern.

The troubleshooting → skill update loop looks like this (the resulting skill is sketched after the list):

  1. Hit a snag. Agent does the wrong thing.
  2. Debug it. Figure out what pattern it missed.
  3. Update the relevant skill or agent config with the insight.
  4. Next time, the agent gets it right.
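Step 3 in file form, at something like .claude/skills/convex-gotchas/SKILL.md. The scenario here is invented for illustration; the pagination helpers named are Convex's current ones, to the best of my knowledge:

```markdown
---
name: convex-gotchas
description: Hard-won Convex fixes. Load before debugging Convex issues.
---

## Pagination drift (captured after three hours of debugging)

Symptom: the agent reaches for cursor-handling patterns from training data.
Fix: use `usePaginatedQuery` on the client and `.paginate()` inside the
query function. Point it back at llms.txt if it drifts again.
```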

I'm building toward something more automated—hooks that capture insights to a database, maybe even self-healing context that learns from errors. But I'm not there yet, and I'm curious if anyone is. If you're solving this problem, I want to hear about it. For now, the foundational move is simpler: when you hit friction, document the fix so it doesn't repeat.

Your Claude Code Starter List (If You're Starting Today)

Alright, enough philosophy. Here's what I'd actually build if I were starting fresh:

2-4 simple agents, one job each:

  • Tool-specific agent for your fastest-moving stack piece (Convex, Cloudflare, whatever evolves faster than training data)
  • Research agent using MCPs if you need current information frequently
  • Whatever else YOU keep repeating—that's the signal for what to build next

(Claude Code already ships with solid git workflow and frontend agents. Don't rebuild what exists—extend what's missing.)

Tool-specific skills for YOUR stack:

  • Current patterns for tools that move fast
  • Gotchas you've hit and debugged
  • Architecture decisions you keep explaining
  • Component mappings for things you build repeatedly

MCP servers for research:

  • Exa (web search with current information)
  • Ref (documentation lookup)
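If you'd rather not hand-edit .mcp.json, the CLI can register them; I believe the current syntax is claude mcp add with a project scope flag (package names are assumptions, as above):

```bash
# Register project-scoped MCP servers (-s project writes them to .mcp.json)
claude mcp add exa -s project -- npx -y exa-mcp-server
claude mcp add ref -s project -- npx -y ref-tools-mcp
```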

A CLAUDE.md that captures "how we do things":

```markdown
# CLAUDE.md

## Orchestration Pattern

ALWAYS create a task list using TodoWrite that delegates dedicated
(and parallel if possible) subagents to complete each task. Have them
commit meaningfully and often. This keeps the main agent context window
clean for orchestration.

## Conventions

- Package manager: [bun/npm/pnpm/yarn]
- Test command: [your test command]
- Build command: [your build command]

## Gotchas

[Add fixes after you debug them—not before]
```

Start here. Add to it only when you hit friction.
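On disk, the whole MVS is a handful of files (paths follow Claude Code's project conventions):

```
.claude/
  agents/
    convex-expert.md      # tool-specific agent (sketched earlier)
  skills/
    convex-gotchas/
      SKILL.md            # fixes captured after you debug them
  commands/
    ship.md               # the ceremony you're tired of explaining
.mcp.json                 # exa + ref, installed once
CLAUDE.md                 # how we do things
```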

That's it. That's the MVS.

Just the infrastructure that matters from day one, built in a way that lets you learn how things compose.

The Claude Code Orchestration Pattern That Scales

One more piece—the shortcut that goes everywhere:

ALWAYS create a task list using TodoWrite.

Then delegate to dedicated subagents—parallel if the tasks are independent, sequential if they depend on each other.

This is the pattern that prevents context pollution. Main agent orchestrates. Subagents execute in their own windows. Each subagent commits meaningfully and often. Main agent stays clean for orchestration.

It's not magic. It's separation of concerns—applied to context windows instead of code modules.

This scales because the orchestration layer doesn't accumulate implementation details. The subagents do the work, the main agent tracks progress.

Simple. Predictable. Effective.

What This Isn't (And Why That Matters)

This isn't the setup I use today. Mine's more complex—because I've hit more problems, iterated more times, built more infrastructure for patterns that repeated.

But complexity isn't the goal. Solving your actual problems is the goal. And you don't know what those problems are yet.

So start minimal. Use Plan Mode before building custom planning agents. Use tool-specific skills for fast-moving infrastructure. Spawn subagents for specialized work. Build from need, not from browsing repos. Prune regularly.

The MVS isn't a configuration to copy. It's the discipline to start small and iterate from reality.

The Full Lifecycle (When You're Ready)

There's another layer to this—for when you're facing greenfield projects that need more structure upfront. Speckit for systematic specification. Constitutional documents that encode project governance. The full lifecycle from intent to implementation.

But that's Article 5. And you don't need it yet.

Build the basics. Use them. Learn how things compose. Let complexity emerge from need.

Then—when you're ready—we'll talk about the structured planning workflows for when iteration isn't enough.


What's your stack? What tools are you building with that move faster than training data? I'm especially curious about tool-specific skills people are building—the implementations are what matter, and I want to learn from what you're seeing.


Claude Code for Practitioners is a newsletter series about the mental models that matter when working with AI agents. If you found this useful, share it with someone who's just starting their setup—they'll thank you for helping them build from their own needs instead of copying someone else's complexity.
