| name | description |
|---|---|
| spec-interview | Deep-dive requirements gathering through iterative interviewing. Use when user says "spec", "interview me about", "help me think through", "requirements for", or provides a rough idea/file and wants a comprehensive specification. Reads input (file, idea, or brief), then conducts exhaustive AskUserQuestion interviews covering technical implementation, UI/UX, edge cases, tradeoffs, architecture, security, performance, and failure modes. Updates the spec file incrementally after each answer to maintain a living document. |
Conduct exhaustive requirements interviews to transform vague ideas into comprehensive specifications.
1. Read input - Parse $ARGUMENTS for spec file path (existing or new) and optional context
2. Initialize spec - If file exists, read it; otherwise create skeleton spec file immediately (see the skeleton sketch after this list)
3. Interview & Update - After EACH user answer, update the spec file with new information
4. Finalize - When complete, do a final pass to polish and fill any remaining gaps
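When creating a new spec, the skeleton might look something like this (a minimal sketch; the section names mirror the coverage areas below and are illustrative, not mandatory):

```markdown
# <Feature Name> Specification

## Overview [NEEDS DETAIL]
## Technical Implementation [NEEDS DETAIL]
## UI/UX [NEEDS DETAIL]
## Architecture [NEEDS DETAIL]
## Security & Privacy [NEEDS DETAIL]
## Operational [NEEDS DETAIL]
## Edge Cases & Failure Modes [NEEDS DETAIL]
## Open Questions
```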
Questions must be non-obvious and probing. Challenge assumptions. Find gaps.
BAD questions (too obvious, surface-level):
- "What does this feature do?"
- "Who are the users?"
- "What's the main goal?"
GOOD questions (deep, reveal hidden complexity):
- "When a user is mid-operation and loses network, what state should persist locally vs require re-fetch?"
- "If two users edit the same resource simultaneously, who wins? Last-write? Merge? Conflict UI?"
- "What's the failure UX when a third-party API times out at 30s but your users expect <2s response?"
- "You mentioned 'admin users' - are there permission gradients, or is it binary? Can admins grant partial access?"
- "What happens to in-flight requests when a user's session expires?"
- "If this scales 100x, which component breaks first? What's your mitigation?"
Cycle through ALL of the areas below. Don't stop at surface answers.
### Technical Implementation
- Data model edge cases (nulls, cycles, orphans, max sizes)
- State management (local vs server, sync strategy, conflict resolution)
- API contracts (versioning, backwards compat, deprecation policy)
- Error handling (retry logic, fallbacks, graceful degradation)
- Performance constraints (latency budgets, throughput requirements)
### UI/UX
- Loading states, skeleton screens, optimistic updates
- Error presentation (inline, toast, modal, retry affordances)
- Empty states, zero-data states, first-run experiences
- Accessibility (keyboard nav, screen readers, color contrast)
- Mobile/responsive breakpoints, touch targets
- Undo/redo capabilities, destructive action confirmation
### Architecture
- Service boundaries (what's a separate service vs module?)
- Data flow (unidirectional? event-driven? pub/sub?)
- Caching layers (what, where, invalidation strategy)
- Authentication/authorization boundaries
- Dependency injection, testability, mockability
### Security & Privacy
- PII handling, data retention, right-to-delete
- Input sanitization, injection vectors
- Rate limiting, abuse prevention
- Audit logging requirements
### Operational
- Monitoring, alerting thresholds
- Rollback strategy, feature flags
- Migration path from current state
- SLO/SLA requirements
### Edge Cases & Failure Modes
- Partial failures (3 of 5 writes succeed)
- Timeout handling, idempotency
- Data corruption recovery
- Concurrent modification
- Clock skew, timezone handling
- Use AskUserQuestion with 2-4 options per question - Force choices to reveal preferences
- Follow the thread - If answer reveals ambiguity, probe deeper before moving on
- Challenge vague answers - "It should be fast" → "What's your p99 latency budget in ms?"
- Propose extremes - "What if you had 10 users? 10 million? Where does this design break?"
- Ask about anti-requirements - "What should this explicitly NOT do?"
Continue interviewing until user indicates:
- "That's enough" / "I think we're done"
- "Let's wrap up"
- User gives minimal/repetitive answers (they're tapped out)
When wrapping up, ask: "Any final concerns or things we haven't covered?" then do a final polish pass on the spec.
- If user provides a path to an existing spec file: read it, understand current state, continue from there
- If user provides a path that doesn't exist: create skeleton spec immediately at that path
- If user provides no path: create `SPEC.md` in the current directory
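For example (hypothetical paths; comments are explanatory, not part of the command):

```
/spec-interview specs/auth.md      # exists: read it and continue the interview
/spec-interview specs/new-idea.md  # missing: create a skeleton spec at that path
/spec-interview                    # no path: create SPEC.md in the current directory
```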
After EACH user answer:
- Immediately update the relevant section(s) of the spec file
- Keep the document well-organized as it grows
- Mark sections as `[DRAFT]` or `[NEEDS DETAIL]` where gaps remain
- User can see progress in real-time by viewing the file
This approach:
- Prevents information loss if session is interrupted
- Lets user course-correct as spec takes shape
- Creates natural checkpoints for review
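For instance, a section midway through the interview might read (illustrative content only):

```markdown
## Technical Implementation [DRAFT]
- State: server-authoritative; drafts persist locally across network loss
- Conflict resolution: last-write-wins [NEEDS DETAIL - confirm whether a conflict UI is required]
```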
User: /spec-interview specs/realtime-feature.md
Claude reads file (or creates skeleton if new), then:
I've initialized the spec at specs/realtime-feature.md. Let me interview you to build it out.
[Uses AskUserQuestion]
Q: "Your doc mentions 'real-time updates'. What's the acceptable latency?"
Options:
- <100ms (true real-time, requires WebSocket)
- <1s (near real-time, polling acceptable)
- <5s (eventually consistent, simplest)
- Depends on the data type (let me explain)
User: "<100ms"
Claude immediately updates specs/realtime-feature.md:
```markdown
## Performance Requirements
- Real-time updates: <100ms latency (requires WebSocket implementation)
```

Then asks next question...
[20-40 questions later, spec file has been updated throughout]
Claude: "I think we've covered the major areas. Any final concerns?"
User: "Nope, looks good"
Claude does a final polish pass on `specs/realtime-feature.md`, removing `[DRAFT]` markers and ensuring consistency.