Cursor commands

---
description: Create git commits with user approval and no Claude attribution
---

Commit Changes

You are tasked with creating git commits for the changes made during this session.

Process:

  1. Check for ticket number:

    • Ask the user: "Is there a ticket number for this work? (e.g., ENG-1234, or 'none' if not applicable)"
    • If provided, you'll use this as a prefix for all commit messages (e.g., ENG-1234 - Add new feature)
    • If none, proceed without a prefix
  2. Think about what changed:

    • Review the conversation history and understand what was accomplished
    • Run git status to see current changes
    • Run git diff to understand the modifications
    • Consider whether changes should be one commit or multiple logical commits
  3. Plan your commit(s):

    • Identify which files belong together
    • Draft clear, descriptive commit messages
    • Use imperative mood in commit messages
    • If a ticket was provided, prefix each commit message with the ticket number followed by a hyphen (e.g., ENG-1234 - Add validation for user input)
    • Focus on why the changes were made, not just what
    • Frame commits from a user/product perspective - emphasize user-facing features, bug fixes, and behavior changes rather than implementation details or local development concerns
  4. Present your plan to the user:

    • List the files you plan to add for each commit
    • Show the commit message(s) you'll use (with ticket prefix if applicable)
    • Ask: "I plan to create [N] commit(s) with these changes. Shall I proceed?"
  5. Execute upon confirmation:

    • Use git add with specific files (never use -A or .)
    • Create commits with your planned messages
    • Show the result with git log --oneline -n [number]
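
For example, a typical execution sequence for step 5 might look like this (file paths, message, and ticket number are illustrative):

```sh
# Stage only the files named in the approved plan - never -A or .
git add src/auth/login.ts src/auth/login.test.ts

# Commit with the planned message, including the ticket prefix if one was given
git commit -m "ENG-1234 - Add rate limiting to login endpoint"

# Show the result
git log --oneline -n 1
```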

Important:

  • NEVER add co-author information or Claude attribution
  • NEVER wrap commit messages - do not insert line breaks to limit line length
  • Commits should be authored solely by the user
  • Do not include any "Generated with Claude" messages
  • Do not add "Co-Authored-By" lines
  • Write commit messages as if the user wrote them

Remember:

  • You have the full context of what was done in this session
  • Group related changes together
  • Keep commits focused and atomic when possible
  • The user trusts your judgment - they asked you to commit

---
description: Create handoff document for transferring work to another session
---

Create Handoff

You are tasked with writing a handoff document to transfer your work to another agent in a new session. The handoff should be thorough yet concise: compact and summarize your context without losing any of the key details of what you're working on.

Process

1. Filepath & Metadata

Use the following information to understand how to create your document:

  • Create your file under thoughts/shared/handoffs/ENG-XXXX/YYYY-MM-DD_HH-MM-SS_ENG-ZZZZ_description.md, where:
    • YYYY-MM-DD is today's date
    • HH-MM-SS is the current time in 24-hour format (i.e. use 13-00-00 for 1:00 pm)
    • ENG-XXXX is the ticket number (replace with general if no ticket)
    • ENG-ZZZZ is the ticket number (omit it from the filename if no ticket)
    • description is a brief kebab-case description
  • Examples:
    • With ticket: 2025-01-08_13-55-22_ENG-2166_create-context-compaction.md
    • Without ticket: 2025-01-08_13-55-22_create-context-compaction.md
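
The metadata used in the frontmatter template below (date, commit, branch, repository) can be gathered with standard commands - a minimal sketch:

```sh
# Gather metadata for the handoff frontmatter
date +"%Y-%m-%dT%H:%M:%S%z"                  # current date/time with timezone in ISO format
git rev-parse HEAD                           # current commit hash
git branch --show-current                    # current branch name
basename "$(git rev-parse --show-toplevel)"  # repository name
```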

2. Write the handoff

Using the conventions above, write your document at the filepath defined in step 1. Structure it as YAML frontmatter followed by content, filling in the metadata gathered in step 1.

Use the following template structure:

---
date: [Current date and time with timezone in ISO format]
researcher: [Researcher name from thoughts status]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[Feature/Task Name] Implementation Strategy"
tags: [implementation, strategy, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
last_updated_by: [Researcher name]
type: implementation_strategy
---

# Handoff: ENG-XXXX {very concise description}

## Task(s)

{description of the task(s) that you were working on, along with the status of each (completed, work in progress, planned/discussed). If you are working on an implementation plan, make sure to call out which phase you are on. Make sure to reference the plan document and/or research document(s) you are working from that were provided to you at the beginning of the session, if applicable.}

## Critical References

{List any critical specification documents, architectural decisions, or design docs that must be followed. Include only 2-3 most important file paths. Leave blank if none.}

## Recent changes

{describe recent changes you made to the codebase, using file:line syntax}

## Learnings

{describe important things that you learned - e.g. patterns, root causes of bugs, or other important pieces of information that someone picking up your work should know. Consider listing explicit file paths.}

## Artifacts

{ an exhaustive list of artifacts you produced or updated as filepaths and/or file:line references - e.g. paths to feature documents, implementation plans, etc that should be read in order to resume your work.}

## Action Items & Next Steps

{ a list of action items and next steps for the next agent to accomplish based on your tasks and their statuses}

## Other Notes

{ other notes, references, or useful information - e.g. where relevant sections of the codebase are, where relevant documents are, or other important things you learned that you want to pass on but that don't fall into the above categories}

3. Present to user

Once the handoff document is created, respond to the user with the template between <template_response></template_response> XML tags. Do NOT include the tags in your response.

<template_response> Handoff created and synced! You can resume from this handoff in a new session with the following command:

/resume_handoff path/to/handoff.md

</template_response>

For example (between <example_response></example_response> XML tags - again, do NOT include these tags in your actual response to the user):

<example_response> Handoff created and synced! You can resume from this handoff in a new session with the following command:

/resume_handoff thoughts/shared/handoffs/ENG-2166/2025-01-08_13-44-55_ENG-2166_create-context-compaction.md

</example_response>


Additional Notes & Instructions

  • More information, not less. This is a guideline that defines the minimum of what a handoff should be. Always feel free to include more information if necessary.
  • Be thorough and precise. Include both top-level objectives and lower-level details as necessary.
  • Avoid excessive code snippets. While a brief snippet describing a key change is important, avoid large code blocks or diffs; do not include one unless it's necessary (e.g. it pertains to an error you're debugging). Prefer /path/to/file.ext:line references that an agent can follow later when it's ready, e.g. packages/dashboard/src/app/dashboard/page.tsx:12-24

---
description: Create detailed implementation plans through interactive research and iteration
model: opus
---

Implementation Plan

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.

Initial Response

When this command is invoked:

  1. Check if parameters were provided:

    • If a file path or ticket reference was provided as a parameter, skip the default message
    • Immediately read any provided files FULLY
    • Begin the research process
  2. If no parameters provided, respond with:

I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations

I'll analyze this information and work with you to create a comprehensive plan.

Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/allison/tickets/eng_1234.md`
For deeper analysis, try: `/create_plan think deeply about thoughts/allison/tickets/eng_1234.md`

Then wait for the user's input.

Process Steps

Step 1: Context Gathering & Initial Analysis

  1. Read all mentioned files immediately and FULLY:

    • Ticket files (e.g., thoughts/allison/tickets/eng_1234.md)
    • Research documents
    • Related implementation plans
    • Any JSON/data files mentioned
    • IMPORTANT: Use the Read tool WITHOUT limit/offset parameters to read entire files
    • CRITICAL: DO NOT spawn sub-tasks before reading these files yourself in the main context
    • NEVER read files partially - if a file is mentioned, read it completely
  2. Spawn initial research tasks to gather context: Before asking the user any questions, use specialized agents to research in parallel:

    • Use the codebase-locator agent to find all files related to the ticket/task
    • Use the codebase-analyzer agent to understand how the current implementation works
    • If relevant, use the thoughts-locator agent to find any existing thoughts documents about this feature
    • If a ticket is mentioned, use the ticket-reader agent to get full details

    These agents will:

    • Find relevant source files, configs, and tests
    • Identify the specific directories to focus on
    • Trace data flow and key functions
    • Return detailed explanations with file:line references
  3. Read all files identified by research tasks:

    • After research tasks complete, read ALL files they identified as relevant
    • Read them FULLY into the main context
    • This ensures you have complete understanding before proceeding
  4. Analyze and verify understanding:

    • Cross-reference the ticket requirements with actual code
    • Identify any discrepancies or misunderstandings
    • Note assumptions that need verification
    • Determine true scope based on codebase reality
  5. Present informed understanding and focused questions:

    Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
    
    I've found that:
    - [Current implementation detail with file:line reference]
    - [Relevant pattern or constraint discovered]
    - [Potential complexity or edge case identified]
    
    Questions that my research couldn't answer:
    - [Specific technical question that requires human judgment]
    - [Business logic clarification]
    - [Design preference that affects implementation]
    

    Only ask questions that you genuinely cannot answer through code investigation.

Step 2: Research & Discovery

After getting initial clarifications:

  1. If the user corrects any misunderstanding:

    • DO NOT just accept the correction
    • Spawn new research tasks to verify the correct information
    • Read the specific files/directories they mention
    • Only proceed once you've verified the facts yourself
  2. Create a research todo list using TodoWrite to track exploration tasks

  3. Spawn parallel sub-tasks for comprehensive research:

    • Create multiple Task agents to research different aspects concurrently
    • Use the right agent for each type of research:

    For deeper investigation:

    • codebase-locator - To find more specific files (e.g., "find all files that handle [specific component]")
    • codebase-analyzer - To understand implementation details (e.g., "analyze how [system] works")
    • codebase-pattern-finder - To find similar features we can model after

    For historical context:

    • thoughts-locator - To find any research, plans, or decisions about this area
    • thoughts-analyzer - To extract key insights from the most relevant documents

    For related tickets:

    • ticket-searcher - To find similar issues or past implementations

    Each agent knows how to:

    • Find the right files and code patterns
    • Identify conventions and patterns to follow
    • Look for integration points and dependencies
    • Return specific file:line references
    • Find tests and examples
  4. Wait for ALL sub-tasks to complete before proceeding

  5. Present findings and design options:

    Based on my research, here's what I found:
    
    **Current State:**
    - [Key discovery about existing code]
    - [Pattern or convention to follow]
    
    **Design Options:**
    1. [Option A] - [pros/cons]
    2. [Option B] - [pros/cons]
    
    **Open Questions:**
    - [Technical uncertainty]
    - [Design decision needed]
    
    Which approach aligns best with your vision?
    

Step 3: Plan Structure Development

Once aligned on approach:

  1. Create initial plan outline:

    Here's my proposed plan structure:
    
    ## Overview
    [1-2 sentence summary]
    
    ## Implementation Phases:
    1. [Phase name] - [what it accomplishes]
    2. [Phase name] - [what it accomplishes]
    3. [Phase name] - [what it accomplishes]
    
    Does this phasing make sense? Should I adjust the order or granularity?
    
  2. Get feedback on structure before writing details

Step 4: Detailed Plan Writing

After structure approval:

  1. Write the plan to thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md
    • Format: YYYY-MM-DD-ENG-XXXX-description.md where:
      • YYYY-MM-DD is today's date
      • ENG-XXXX is the ticket number (omit if no ticket)
      • description is a brief kebab-case description
    • Examples:
      • With ticket: 2025-01-08-ENG-1478-parent-child-tracking.md
      • Without ticket: 2025-01-08-improve-error-handling.md
  2. Use this template structure:
# [Feature/Task Name] Implementation Plan

## Overview

[Brief description of what we're implementing and why]

## Current State Analysis

[What exists now, what's missing, key constraints discovered]

## Desired End State

[A Specification of the desired end state after this plan is complete, and how to verify it]

### Key Discoveries:

- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]

## What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

## Implementation Approach

[High-level strategy and reasoning]

## Phase 1: [Descriptive Name]

### Overview

[What this phase accomplishes]

### Changes Required:

#### 1. [Component/File Group]

**File**: `path/to/file.ext`
**Changes**: [Summary of changes]

```[language]
// Specific code to add/modify
```

### Success Criteria:

#### Automated Verification:

- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`

#### Manual Verification:

- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.

---

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

## Testing Strategy

### Unit Tests:

- [What to test]
- [Key edge cases]

### Integration Tests:

- [End-to-end scenarios]

### Manual Testing Steps:

1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]

## Performance Considerations

[Any performance implications or optimizations needed]

## Migration Notes

[If applicable, how to handle existing data/systems]

## References

- Original ticket: `thoughts/allison/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`

Step 5: Review

  1. Present the draft plan location:

    I've created the initial implementation plan at:
    `thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md`
    
    Please review it and let me know:
    - Are the phases properly scoped?
    - Are the success criteria specific enough?
    - Any technical details that need adjustment?
    - Missing edge cases or considerations?
    
  2. Iterate based on feedback - be ready to:

    • Add missing phases
    • Adjust technical approach
    • Clarify success criteria (both automated and manual)
    • Add/remove scope items
  3. Continue refining until the user is satisfied

Important Guidelines

  1. Be Skeptical:

    • Question vague requirements
    • Identify potential issues early
    • Ask "why" and "what about"
    • Don't assume - verify with code
  2. Be Interactive:

    • Don't write the full plan in one shot
    • Get buy-in at each major step
    • Allow course corrections
    • Work collaboratively
  3. Be Thorough:

    • Read all context files COMPLETELY before planning
    • Research actual code patterns using parallel sub-tasks
    • Include specific file paths and line numbers
    • Write measurable success criteria with clear automated vs manual distinction
    • Automated steps should use make whenever possible
  4. Be Practical:

    • Focus on incremental, testable changes
    • Consider migration and rollback
    • Think about edge cases
    • Include "what we're NOT doing"
  5. Track Progress:

    • Use TodoWrite to track planning tasks
    • Update todos as you complete research
    • Mark planning tasks complete when done
  6. No Open Questions in Final Plan:

    • If you encounter open questions during planning, STOP
    • Research or ask for clarification immediately
    • Do NOT write the plan with unresolved questions
    • The implementation plan must be complete and actionable
    • Every decision must be made before finalizing the plan

Success Criteria Guidelines

Always separate success criteria into two categories:

  1. Automated Verification (can be run by execution agents):

    • Commands that can be run: make test, npm run lint, etc.
    • Specific files that should exist
    • Code compilation/type checking
    • Automated test suites
  2. Manual Verification (requires human testing):

    • UI/UX functionality
    • Performance under real conditions
    • Edge cases that are hard to automate
    • User acceptance criteria

Format example:

### Success Criteria:

#### Automated Verification:

- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`

#### Manual Verification:

- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices

Common Patterns

For Database Changes:

  • Start with schema/migration
  • Add store methods
  • Update business logic
  • Expose via API
  • Update clients

For New Features:

  • Research existing patterns first
  • Start with data model
  • Build backend logic
  • Add API endpoints
  • Implement UI last

For Refactoring:

  • Document current behavior
  • Plan incremental changes
  • Maintain backwards compatibility
  • Include migration strategy

Sub-task Spawning Best Practices

When spawning research sub-tasks:

  1. Spawn multiple tasks in parallel for efficiency
  2. Each task should be focused on a specific area
  3. Provide detailed instructions including:
    • Exactly what to search for
    • Which directories to focus on
    • What information to extract
    • Expected output format
  4. Be EXTREMELY specific about directories:
    • Include the full path context in your prompts
    • Be specific about which directories to focus on
  5. Specify read-only tools to use
  6. Request specific file:line references in responses
  7. Wait for all tasks to complete before synthesizing
  8. Verify sub-task results:
    • If a sub-task returns unexpected results, spawn follow-up tasks
    • Cross-check findings against the actual codebase
    • Don't accept results that seem incorrect

Example of spawning multiple tasks:

# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]

Example Interaction Flow

User: /create_plan
Assistant: I'll help you create a detailed implementation plan...

User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...

[Reads file fully]

Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...

[Interactive process continues...]

---
description: Create worktree and launch implementation session for a plan
---
  1. Set up worktree for implementation:

    • Create a new worktree with the issue tracker branch name (see the sketch after this list)
  2. Determine required data:

    • branch name
    • path to plan file (use relative path only)
    • launch prompt
    • command to run

IMPORTANT PATH USAGE:

  • Always use ONLY the relative path starting with thoughts/shared/... without any directory prefix
  • Example: thoughts/shared/plans/fix-mcp-keepalive-proper.md (not the full absolute path)
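
A minimal sketch of the worktree setup in step 1, assuming the placeholder values from the confirmation message below:

```sh
# Create a new worktree at the agreed path, on the issue tracker branch
git worktree add ~/wt/PROJECT/ENG-XXXX -b BRANCH_NAME
```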

3. Confirm with the user by sending a message to the human:

Based on the input, I plan to create a worktree with the following details:

worktree path: ~/wt/PROJECT/ENG-XXXX
branch name: BRANCH_NAME
path to plan file: $FILEPATH
launch prompt:

    /implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./commands/commit.md and create a commit, then read ./commands/describe_pr.md and create a PR, then add a comment to the issue tracker ticket with the PR link

command to run:
    # TODO: Add your session launch command here

Incorporate any user feedback, then:

  4. Launch implementation session

    TODO: Add your session launch command here


---
description: Generate comprehensive PR descriptions following repository templates
---

Generate PR Description

You are tasked with generating a comprehensive pull request description following the repository's standard template.

Steps to follow:

  1. Check for ticket number:

    • Ask the user: "Is there a ticket number for this PR? (e.g., ENG-1234, or 'none' if not applicable)"
    • If provided, you'll use this as a prefix for the PR title (e.g., ENG-1234 - Add new feature)
    • If none, proceed without a prefix
  2. Read the PR description template:

    • First, check if thoughts/shared/pr_description.md exists
    • If it doesn't exist, inform the user they need to create a PR description template at thoughts/shared/pr_description.md
    • Read the template carefully to understand all sections and requirements
  3. Identify the PR to describe:

    • Check if the current branch has an associated PR: gh pr view --json url,number,title,state 2>/dev/null
    • If no PR exists for the current branch, or if on main/master, list open PRs: gh pr list --limit 10 --json number,title,headRefName,author
    • Ask the user which PR they want to describe
  4. Check for existing description:

    • Check if thoughts/shared/prs/{number}_description.md already exists
    • If it exists, read it and inform the user you'll be updating it
    • Consider what has changed since the last description was written
  5. Gather comprehensive PR information:

    • Get the full PR diff: gh pr diff {number}
    • If you get an error about no default remote repository, instruct the user to run gh repo set-default and select the appropriate repository
    • Get commit history: gh pr view {number} --json commits
    • Review the base branch: gh pr view {number} --json baseRefName
    • Get PR metadata: gh pr view {number} --json url,title,number,state
  6. Analyze the changes thoroughly: (ultrathink about the code changes, their architectural implications, and potential impacts)

    • Read through the entire diff carefully
    • For context, read any files that are referenced but not shown in the diff
    • Understand the purpose and impact of each change
    • Identify user-facing changes vs internal implementation details
    • Look for breaking changes or migration requirements
  7. Handle verification requirements:

    • Look for any checklist items in the "How to verify it" section of the template
    • For each verification step:
      • If it's a command you can run (like make check test, npm test, etc.), run it
      • If it passes, mark the checkbox as checked: - [x]
      • If it fails, keep it unchecked and note what failed: - [ ] with explanation
      • If it requires manual testing (UI interactions, external services), leave unchecked and note for user
    • Document any verification steps you couldn't complete
    • Use staging URLs in test steps, not localhost - reviewers will verify on staging environments, not local dev servers
  8. Generate the description:

    • Fill out each section from the template thoroughly:
      • Answer each question/section based on your analysis
      • Be specific about problems solved and changes made
      • Lead with user-facing impact - what does this change mean for users? How does it improve their experience?
      • Include technical details in appropriate sections, but don't make them the focus
      • Write a concise changelog entry from the user's perspective
    • Ensure all checklist items are addressed (checked or explained)
  9. Save the description:

    • Write the completed description to thoughts/shared/prs/{number}_description.md
    • Show the user the generated description
  10. Update the PR:

    • Update the PR description directly: gh pr edit {number} --body-file thoughts/shared/prs/{number}_description.md
    • If a ticket was provided, update the PR title with the prefix: gh pr edit {number} --title "ENG-XXXX - Original Title"
    • Confirm the update was successful
    • If any verification steps remain unchecked, remind the user to complete them before merging

Important notes:

  • This command works across different repositories - always read the local template
  • Be thorough but concise - descriptions should be scannable
  • Focus on the "why" as much as the "what"
  • Include any breaking changes or migration notes prominently
  • If the PR touches multiple components, organize the description accordingly
  • Always attempt to run verification commands when possible
  • Clearly communicate which verification steps need manual testing
  • Write for reviewers, not developers - assume readers will test on staging, not locally
  • Prioritize user-facing changes - internal refactors and implementation details are secondary to what users will experience

---
description: Implement technical plans from thoughts/shared/plans with verification
---

Implement Plan

You are tasked with implementing an approved technical plan from thoughts/shared/plans/. These plans contain phases with specific changes and success criteria.

Getting Started

When given a plan path:

  • Read the plan completely and check for any existing checkmarks (- [x])
  • Read the original ticket and all files mentioned in the plan
  • Read files fully - never use limit/offset parameters, you need complete context
  • Think deeply about how the pieces fit together
  • Create a todo list to track your progress
  • Start implementing if you understand what needs to be done

If no plan path provided, ask for one.

Implementation Philosophy

Plans are carefully designed, but reality can be messy. Your job is to:

  • Follow the plan's intent while adapting to what you find
  • Implement each phase fully before moving to the next
  • Verify your work makes sense in the broader codebase context
  • Update checkboxes in the plan as you complete sections

When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.

If you encounter a mismatch:

  • STOP and think deeply about why the plan can't be followed
  • Present the issue clearly:
    Issue in Phase [N]:
    Expected: [what the plan says]
    Found: [actual situation]
    Why this matters: [explanation]
    
    How should I proceed?
    

Verification Approach

After implementing a phase:

  • Run the success criteria checks (usually make check test covers everything)
  • Fix any issues before proceeding
  • Update your progress in both the plan and your todos
  • Check off completed items in the plan file itself using Edit
  • Pause for human verification: After completing all automated verification for a phase, pause and inform the human that the phase is ready for manual testing. Use this format:
    Phase [N] Complete - Ready for Manual Verification
    
    Automated verification passed:
    - [List automated checks that passed]
    
    Please perform the manual verification steps listed in the plan:
    - [List manual verification items from the plan]
    
    Let me know when manual testing is complete so I can proceed to Phase [N+1].
    

If instructed to execute multiple phases consecutively, skip the pause until the last phase. Otherwise, assume you are just doing one phase.

Do not check off items in the manual testing steps until confirmed by the user.

If You Get Stuck

When something isn't working as expected:

  • First, make sure you've read and understood all the relevant code
  • Consider if the codebase has evolved since the plan was written
  • Present the mismatch clearly and ask for guidance

Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.

Resuming Work

If the plan has existing checkmarks:

  • Trust that completed work is done
  • Pick up from the first unchecked item
  • Verify previous work only if something seems off

Remember: You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.


---
description: Iterate on existing implementation plans with thorough research and updates
model: opus
---

Iterate Implementation Plan

You are tasked with updating existing implementation plans based on user feedback. You should be skeptical, thorough, and ensure changes are grounded in actual codebase reality.

Initial Response

When this command is invoked:

  1. Parse the input to identify:

    • Plan file path (e.g., thoughts/shared/plans/2025-10-16-feature.md)
    • Requested changes/feedback
  2. Handle different input scenarios:

    If NO plan file provided:

    I'll help you iterate on an existing implementation plan.
    
    Which plan would you like to update? Please provide the path to the plan file (e.g., `thoughts/shared/plans/2025-10-16-feature.md`).
    
    Tip: You can list recent plans with `ls -lt thoughts/shared/plans/ | head`
    

    Wait for user input, then re-check for feedback.

    If plan file provided but NO feedback:

    I've found the plan at [path]. What changes would you like to make?
    
    For example:
    - "Add a phase for migration handling"
    - "Update the success criteria to include performance tests"
    - "Adjust the scope to exclude feature X"
    - "Split Phase 2 into two separate phases"
    

    Wait for user input.

    If BOTH plan file AND feedback provided:

    • Proceed immediately to Step 1
    • No preliminary questions needed

Process Steps

Step 1: Read and Understand Current Plan

  1. Read the existing plan file COMPLETELY:

    • Use the Read tool WITHOUT limit/offset parameters
    • Understand the current structure, phases, and scope
    • Note the success criteria and implementation approach
  2. Understand the requested changes:

    • Parse what the user wants to add/modify/remove
    • Identify if changes require codebase research
    • Determine scope of the update

Step 2: Research If Needed

Only spawn research tasks if the changes require new technical understanding.

If the user's feedback requires understanding new code patterns or validating assumptions:

  1. Create a research todo list

  2. Spawn parallel sub-tasks for research: Use the right agent for each type of research:

    For code investigation:

    • codebase-locator - To find relevant files
    • codebase-analyzer - To understand implementation details
    • codebase-pattern-finder - To find similar patterns

    For historical context:

    • thoughts-locator - To find related research or decisions
    • thoughts-analyzer - To extract insights from documents

    Be EXTREMELY specific about directories:

    • Include full path context in prompts
    • Be specific about which directories to focus on
  3. Read any new files identified by research:

    • Read them FULLY into the main context
    • Cross-reference with the plan requirements
  4. Wait for ALL sub-tasks to complete before proceeding

Step 3: Present Understanding and Approach

Before making changes, confirm your understanding:

Based on your feedback, I understand you want to:
- [Change 1 with specific detail]
- [Change 2 with specific detail]

My research found:
- [Relevant code pattern or constraint]
- [Important discovery that affects the change]

I plan to update the plan by:
1. [Specific modification to make]
2. [Another modification]

Does this align with your intent?

Get user confirmation before proceeding.

Step 4: Update the Plan

  1. Make focused, precise edits to the existing plan:

    • Use the Edit tool for surgical changes
    • Maintain the existing structure unless explicitly changing it
    • Keep all file:line references accurate
    • Update success criteria if needed
  2. Ensure consistency:

    • If adding a new phase, ensure it follows the existing pattern
    • If modifying scope, update "What We're NOT Doing" section
    • If changing approach, update "Implementation Approach" section
    • Maintain the distinction between automated vs manual success criteria
  3. Preserve quality standards:

    • Include specific file paths and line numbers for new content
    • Write measurable success criteria
    • Use make commands for automated verification
    • Keep language clear and actionable

Step 5: Review

  1. Present the changes made:

    I've updated the plan at `thoughts/shared/plans/[filename].md`
    
    Changes made:
    - [Specific change 1]
    - [Specific change 2]
    
    The updated plan now:
    - [Key improvement]
    - [Another improvement]
    
    Would you like any further adjustments?
    
  2. Be ready to iterate further based on feedback

Important Guidelines

  1. Be Skeptical:

    • Don't blindly accept change requests that seem problematic
    • Question vague feedback - ask for clarification
    • Verify technical feasibility with code research
    • Point out potential conflicts with existing plan phases
  2. Be Surgical:

    • Make precise edits, not wholesale rewrites
    • Preserve good content that doesn't need changing
    • Only research what's necessary for the specific changes
    • Don't over-engineer the updates
  3. Be Thorough:

    • Read the entire existing plan before making changes
    • Research code patterns if changes require new technical understanding
    • Ensure updated sections maintain quality standards
    • Verify success criteria are still measurable
  4. Be Interactive:

    • Confirm understanding before making changes
    • Show what you plan to change before doing it
    • Allow course corrections
    • Don't disappear into research without communicating
  5. Track Progress:

    • Use TodoWrite to track update tasks if complex
    • Update todos as you complete research
    • Mark tasks complete when done
  6. No Open Questions:

    • If the requested change raises questions, ASK
    • Research or get clarification immediately
    • Do NOT update the plan with unresolved questions
    • Every change must be complete and actionable

Success Criteria Guidelines

When updating success criteria, always maintain the two-category structure:

  1. Automated Verification (can be run by execution agents):

    • Commands that can be run: make test, npm run lint, etc.
    • Prefer make commands when available
    • Specific files that should exist
    • Code compilation/type checking
  2. Manual Verification (requires human testing):

    • UI/UX functionality
    • Performance under real conditions
    • Edge cases that are hard to automate
    • User acceptance criteria

Sub-task Spawning Best Practices

When spawning research sub-tasks:

  1. Only spawn if truly needed - don't research for simple changes
  2. Spawn multiple tasks in parallel for efficiency
  3. Each task should be focused on a specific area
  4. Provide detailed instructions including:
    • Exactly what to search for
    • Which directories to focus on
    • What information to extract
    • Expected output format
  5. Request specific file:line references in responses
  6. Wait for all tasks to complete before synthesizing
  7. Verify sub-task results - if something seems off, spawn follow-up tasks

Example Interaction Flows

Scenario 1: User provides everything upfront

User: /iterate_plan thoughts/shared/plans/2025-10-16-feature.md - add phase for error handling
Assistant: [Reads plan, researches error handling patterns, updates plan]

Scenario 2: User provides just plan file

User: /iterate_plan thoughts/shared/plans/2025-10-16-feature.md
Assistant: I've found the plan. What changes would you like to make?
User: Split Phase 2 into two phases - one for backend, one for frontend
Assistant: [Proceeds with update]

Scenario 3: User provides no arguments

User: /iterate_plan
Assistant: Which plan would you like to update? Please provide the path...
User: thoughts/shared/plans/2025-10-16-feature.md
Assistant: I've found the plan. What changes would you like to make?
User: Add more specific success criteria
Assistant: [Proceeds with update]

---
description: Document codebase as-is with thoughts directory for historical context
model: opus
---

Research Codebase

You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.

CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY

  • DO NOT suggest improvements or changes unless the user explicitly asks for them
  • DO NOT perform root cause analysis unless the user explicitly asks for it
  • DO NOT propose future enhancements unless the user explicitly asks for them
  • DO NOT critique the implementation or identify problems
  • DO NOT recommend refactoring, optimization, or architectural changes
  • ONLY describe what exists, where it exists, how it works, and how components interact
  • You are creating a technical map/documentation of the existing system

Initial Setup:

When this command is invoked, respond with:

I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.

Then wait for the user's research query.

Steps to follow after receiving the research query:

  1. Read any directly mentioned files first:

    • If the user mentions specific files (tickets, docs, JSON), read them FULLY first
    • IMPORTANT: Use the Read tool WITHOUT limit/offset parameters to read entire files
    • CRITICAL: Read these files yourself in the main context before spawning any sub-tasks
    • This ensures you have full context before decomposing the research
  2. Analyze and decompose the research question:

    • Break down the user's query into composable research areas
    • Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
    • Identify specific components, patterns, or concepts to investigate
    • Create a research plan using TodoWrite to track all subtasks
    • Consider which directories, files, or architectural patterns are relevant
  3. Spawn parallel sub-agent tasks for comprehensive research:

    • Create multiple Task agents to research different aspects concurrently
    • We now have specialized agents that know how to do specific research tasks:

    For codebase research:

    • Use the codebase-locator agent to find WHERE files and components live
    • Use the codebase-analyzer agent to understand HOW specific code works (without critiquing it)
    • Use the codebase-pattern-finder agent to find examples of existing patterns (without evaluating them)

    IMPORTANT: All agents are documentarians, not critics. They will describe what exists without suggesting improvements or identifying issues.

    For thoughts directory:

    • Use the thoughts-locator agent to discover what documents exist about the topic
    • Use the thoughts-analyzer agent to extract key insights from specific documents (only the most relevant ones)

    For web research (only if user explicitly asks):

    • Use the web-search-researcher agent for external documentation and resources
    • IF you use web-research agents, instruct them to return LINKS with their findings, and please INCLUDE those links in your final report

    For issue tracker tickets (if relevant):

    • Use the ticket-reader agent to get full details of a specific ticket
    • Use the ticket-searcher agent to find related tickets or historical context

    The key is to use these agents intelligently:

    • Start with locator agents to find what exists
    • Then use analyzer agents on the most promising findings to document how they work
    • Run multiple agents in parallel when they're searching for different things
    • Each agent knows its job - just tell it what you're looking for
    • Don't write detailed prompts about HOW to search - the agents already know
    • Remind agents they are documenting, not evaluating or improving
  4. Wait for all sub-agents to complete and synthesize findings:

    • IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
    • Compile all sub-agent results (both codebase and thoughts findings)
    • Prioritize live codebase findings as primary source of truth
    • Use thoughts/ findings as supplementary historical context
    • Connect findings across different components
    • Include specific file paths and line numbers for reference
    • Verify all thoughts/ paths are correct (e.g., thoughts/allison/ not thoughts/shared/ for personal files)
    • Highlight patterns, connections, and architectural decisions
    • Answer the user's specific questions with concrete evidence
  5. Gather metadata for the research document:

    • Filename: thoughts/shared/research/YYYY-MM-DD-ENG-XXXX-description.md
      • Format: YYYY-MM-DD-ENG-XXXX-description.md where:
        • YYYY-MM-DD is today's date
        • ENG-XXXX is the ticket number (omit if no ticket)
        • description is a brief kebab-case description of the research topic
      • Examples:
        • With ticket: 2025-01-08-ENG-1478-parent-child-tracking.md
        • Without ticket: 2025-01-08-authentication-flow.md
  6. Generate research document:

    • Use the metadata gathered in step 5

    • Structure the document with YAML frontmatter followed by content:

      ---
      date: [Current date and time with timezone in ISO format]
      researcher: [Researcher name from thoughts status]
      git_commit: [Current commit hash]
      branch: [Current branch name]
      repository: [Repository name]
      topic: "[User's Question/Topic]"
      tags: [research, codebase, relevant-component-names]
      status: complete
      last_updated: [Current date in YYYY-MM-DD format]
      last_updated_by: [Researcher name]
      ---
      
      # Research: [User's Question/Topic]
      
      **Date**: [Current date and time with timezone from step 5]
      **Researcher**: [Researcher name from thoughts status]
      **Git Commit**: [Current commit hash from step 5]
      **Branch**: [Current branch name from step 5]
      **Repository**: [Repository name]
      
      ## Research Question
      
      [Original user query]
      
      ## Summary
      
      [High-level documentation of what was found, answering the user's question by describing what exists]
      
      ## Detailed Findings
      
      ### [Component/Area 1]
      
      - Description of what exists ([file.ext:line](link))
      - How it connects to other components
      - Current implementation details (without evaluation)
      
      ### [Component/Area 2]
      
      ...
      
      ## Code References
      
      - `path/to/file.py:123` - Description of what's there
      - `another/file.ts:45-67` - Description of the code block
      
      ## Architecture Documentation
      
      [Current patterns, conventions, and design implementations found in the codebase]
      
      ## Historical Context (from thoughts/)
      
      [Relevant insights from thoughts/ directory with references]
      
      - `thoughts/shared/something.md` - Historical decision about X
      - `thoughts/local/notes.md` - Past exploration of Y
        Note: Paths exclude "searchable/" even if found there
      
      ## Related Research
      
      [Links to other research documents in thoughts/shared/research/]
      
      ## Open Questions
      
      [Any areas that need further investigation]
  7. Add GitHub permalinks (if applicable):

    • Check if on main branch or if commit is pushed: git branch --show-current and git status
    • If on main/master or pushed, generate GitHub permalinks:
      • Get repo info: gh repo view --json owner,name
      • Create permalinks: https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}
    • Replace local file references with permalinks in the document
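
A sketch of the permalink construction above, assuming gh is authenticated (the file and line values are illustrative):

```sh
# Build a GitHub permalink for a file:line reference at the current commit
repo=$(gh repo view --json owner,name -q '.owner.login + "/" + .name')
commit=$(git rev-parse HEAD)
echo "https://github.com/${repo}/blob/${commit}/src/api/handler.go#L42"
```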
  8. Present findings:

    • Present a concise summary of findings to the user
    • Include key file references for easy navigation
    • Ask if they have follow-up questions or need clarification
  9. Handle follow-up questions:

    • If the user has follow-up questions, append to the same research document
    • Update the frontmatter fields last_updated and last_updated_by to reflect the update
    • Add last_updated_note: "Added follow-up research for [brief description]" to frontmatter
    • Add a new section: ## Follow-up Research [timestamp]
    • Spawn new sub-agents as needed for additional investigation
    • Continue updating the document and syncing

Important notes:

  • Always use parallel Task agents to maximize efficiency and minimize context usage
  • Always run fresh codebase research - never rely solely on existing research documents
  • The thoughts/ directory provides historical context to supplement live findings
  • Focus on finding concrete file paths and line numbers for developer reference
  • Research documents should be self-contained with all necessary context
  • Each sub-agent prompt should be specific and focused on read-only documentation operations
  • Document cross-component connections and how systems interact
  • Include temporal context (when the research was conducted)
  • Link to GitHub when possible for permanent references
  • Keep the main agent focused on synthesis, not deep file reading
  • Have sub-agents document examples and usage patterns as they exist
  • Explore all of thoughts/ directory, not just research subdirectory
  • CRITICAL: You and all sub-agents are documentarians, not evaluators
  • REMEMBER: Document what IS, not what SHOULD BE
  • NO RECOMMENDATIONS: Only describe the current state of the codebase
  • File reading: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
  • Critical ordering: Follow the numbered steps exactly
    • ALWAYS read mentioned files first before spawning sub-tasks (step 1)
    • ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
    • ALWAYS gather metadata before writing the document (step 5 before step 6)
    • NEVER write the research document with placeholder values
  • Path handling: The thoughts/searchable/ directory contains hard links for searching
    • Always document paths by removing ONLY "searchable/" - preserve all other subdirectories
    • Examples of correct transformations:
      • thoughts/searchable/allison/old_stuff/notes.md → thoughts/allison/old_stuff/notes.md
      • thoughts/searchable/shared/prs/123.md → thoughts/shared/prs/123.md
      • thoughts/searchable/global/shared/templates.md → thoughts/global/shared/templates.md
    • NEVER change allison/ to shared/ or vice versa - preserve the exact directory structure
    • This ensures paths are correct for editing and navigation
  • Frontmatter consistency:
    • Always include frontmatter at the beginning of research documents
    • Keep frontmatter fields consistent across all research documents
    • Update frontmatter when adding follow-up research
    • Use snake_case for multi-word field names (e.g., last_updated, git_commit)
    • Tags should be relevant to the research topic and components studied

---
description: Resume work from handoff document with context analysis and validation
---

Resume work from a handoff document

You are tasked with resuming work from a handoff document through an interactive process. These handoffs contain critical context, learnings, and next steps from previous work sessions that need to be understood and continued.

Initial Response

When this command is invoked:

  1. If the path to a handoff document was provided:

    • If a handoff document path was provided as a parameter, skip the default message
    • Immediately read the handoff document FULLY
    • Immediately read any research or plan documents that it links to under thoughts/shared/plans or thoughts/shared/research. Do NOT use a sub-agent to read these critical files.
    • Begin the analysis process by ingesting relevant context from the handoff document, reading additional files it mentions
    • Then propose a course of action to the user and confirm, or ask for clarification on direction.
  2. If a ticket number (like ENG-XXXX) was provided:

    • Locate the most recent handoff document for the ticket. Handoffs live in thoughts/shared/handoffs/ENG-XXXX, where ENG-XXXX is the ticket number; e.g. for ENG-2124 the handoffs would be in thoughts/shared/handoffs/ENG-2124/. List this directory's contents.
    • There may be zero, one or multiple files in the directory.
    • If there are zero files in the directory, or the directory does not exist: tell the user: "I'm sorry, I can't seem to find that handoff document. Can you please provide me with a path to it?"
    • If there is only one file in the directory: proceed with that handoff
    • If there are multiple files in the directory: using the date and time specified in the file name (it will be in the format YYYY-MM-DD_HH-MM-SS in 24-hour time format), proceed with the most recent handoff document (see the sketch after this list).
    • Immediately read the handoff document FULLY
    • Immediately read any research or plan documents that it links to under thoughts/shared/plans or thoughts/shared/research; do NOT use a sub-agent to read these critical files.
    • Begin the analysis process by ingesting relevant context from the handoff document, reading additional files it mentions
    • Then propose a course of action to the user and confirm, or ask for clarification on direction.
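
Because handoff filenames begin with a YYYY-MM-DD_HH-MM-SS timestamp, lexicographic order matches chronological order. A minimal sketch for finding the most recent handoff (ticket number illustrative):

```sh
# Newest handoff first: the timestamp prefix makes a reverse sort chronological
ls -1 thoughts/shared/handoffs/ENG-2124/ | sort -r | head -n 1
```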
  3. If no parameters provided, respond with:

I'll help you resume work from a handoff document. Let me find the available handoffs.

Which handoff would you like to resume from?

Tip: You can invoke this command directly with a handoff path: `/resume_handoff thoughts/shared/handoffs/ENG-XXXX/YYYY-MM-DD_HH-MM-SS_ENG-XXXX_description.md`

or using a ticket number to resume from the most recent handoff for that ticket: `/resume_handoff ENG-XXXX`

Then wait for the user's input.

Process Steps

Step 1: Read and Analyze Handoff

  1. Read handoff document completely:

    • Use the Read tool WITHOUT limit/offset parameters
    • Extract all sections:
      • Task(s) and their statuses
      • Recent changes
      • Learnings
      • Artifacts
      • Action items and next steps
      • Other notes
  2. Spawn focused research tasks: Based on the handoff content, spawn parallel research tasks to verify current state:

    Task 1 - Gather artifact context:
    Read all artifacts mentioned in the handoff.
    1. Read feature documents listed in "Artifacts"
    2. Read implementation plans referenced
    3. Read any research documents mentioned
    4. Extract key requirements and decisions
    Use tools: Read
    Return: Summary of artifact contents and key decisions
    
  3. Wait for ALL sub-tasks to complete before proceeding

  4. Read critical files identified:

    • Read files from "Learnings" section completely
    • Read files from "Recent changes" to understand modifications
    • Read any new related files discovered during research

Step 2: Synthesize and Present Analysis

  1. Present comprehensive analysis:

    I've analyzed the handoff from [date] by [researcher]. Here's the current situation:
    
    **Original Tasks:**
    - [Task 1]: [Status from handoff] → [Current verification]
    - [Task 2]: [Status from handoff] → [Current verification]
    
    **Key Learnings Validated:**
    - [Learning with file:line reference] - [Still valid/Changed]
    - [Pattern discovered] - [Still applicable/Modified]
    
    **Recent Changes Status:**
    - [Change 1] - [Verified present/Missing/Modified]
    - [Change 2] - [Verified present/Missing/Modified]
    
    **Artifacts Reviewed:**
    - [Document 1]: [Key takeaway]
    - [Document 2]: [Key takeaway]
    
    **Recommended Next Actions:**
    Based on the handoff's action items and current state:
    1. [Most logical next step based on handoff]
    2. [Second priority action]
    3. [Additional tasks discovered]
    
    **Potential Issues Identified:**
    - [Any conflicts or regressions found]
    - [Missing dependencies or broken code]
    
    Shall I proceed with [recommended action 1], or would you like to adjust the approach?
    
  2. Get confirmation before proceeding

Step 3: Create Action Plan

  1. Use TodoWrite to create task list:

    • Convert action items from handoff into todos
    • Add any new tasks discovered during analysis
    • Prioritize based on dependencies and handoff guidance
  2. Present the plan:

    I've created a task list based on the handoff and current analysis:
    
    [Show todo list]
    
    Ready to begin with the first task: [task description]?
    

Step 4: Begin Implementation

  1. Start with the first approved task
  2. Reference learnings from handoff throughout implementation
  3. Apply patterns and approaches documented in the handoff
  4. Update progress as tasks are completed

Guidelines

  1. Be Thorough in Analysis:

    • Read the entire handoff document first
    • Verify ALL mentioned changes still exist
    • Check for any regressions or conflicts
    • Read all referenced artifacts
  2. Be Interactive:

    • Present findings before starting work
    • Get buy-in on the approach
    • Allow for course corrections
    • Adapt based on current state vs handoff state
  3. Leverage Handoff Wisdom:

    • Pay special attention to "Learnings" section
    • Apply documented patterns and approaches
    • Avoid repeating mistakes mentioned
    • Build on discovered solutions
  4. Track Continuity:

    • Use TodoWrite to maintain task continuity
    • Reference the handoff document in commits
    • Document any deviations from original plan
    • Consider creating a new handoff when done
  5. Validate Before Acting:

    • Never assume handoff state matches current state
    • Verify all file references still exist
    • Check for breaking changes since handoff
    • Confirm patterns are still valid

Common Scenarios

Scenario 1: Clean Continuation

  • All changes from handoff are present
  • No conflicts or regressions
  • Clear next steps in action items
  • Proceed with recommended actions

Scenario 2: Diverged Codebase

  • Some changes missing or modified
  • New related code added since handoff
  • Need to reconcile differences
  • Adapt plan based on current state

Scenario 3: Incomplete Handoff Work

  • Tasks marked as "in_progress" in handoff
  • Need to complete unfinished work first
  • May need to re-understand partial implementations
  • Focus on completing before new work

Scenario 4: Stale Handoff

  • Significant time has passed
  • Major refactoring has occurred
  • Original approach may no longer apply
  • Need to re-evaluate strategy

Example Interaction Flow

User: /resume_handoff specification/feature/handoffs/handoff-0.md
Assistant: Let me read and analyze that handoff document...

[Reads handoff completely]
[Spawns research tasks]
[Waits for completion]
[Reads identified files]

I've analyzed the handoff from [date]. Here's the current situation...

[Presents analysis]

Shall I proceed with implementing the webhook validation fix, or would you like to adjust the approach?

User: Yes, proceed with the webhook validation
Assistant: [Creates todo list and begins implementation]

---
description: Validate implementation against plan, verify success criteria, identify issues
---

Validate Plan

You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.

Initial Setup

When invoked:

  1. Determine context - Are you in an existing conversation or starting fresh?

    • If existing: Review what was implemented in this session
    • If fresh: Need to discover what was done through git and codebase analysis
  2. Locate the plan:

    • If plan path provided, use it
    • Otherwise, search recent commits for plan references or ask user
  3. Gather implementation evidence:

    # Check recent commits
    git log --oneline -n 20
    git diff HEAD~N..HEAD  # Where N covers implementation commits
    
    # Run comprehensive checks
    cd $(git rev-parse --show-toplevel) && make check test

Validation Process

Step 1: Context Discovery

If starting fresh or need more context:

  1. Read the implementation plan completely

  2. Identify what should have changed:

    • List all files that should be modified
    • Note all success criteria (automated and manual)
    • Identify key functionality to verify
  3. Spawn parallel research tasks to discover implementation:

    Task 1 - Verify database changes:
    Research if migration [N] was added and schema changes match plan.
    Check: migration files, schema version, table structure
    Return: What was implemented vs what plan specified
    
    Task 2 - Verify code changes:
    Find all modified files related to [feature].
    Compare actual changes to plan specifications.
    Return: File-by-file comparison of planned vs actual
    
    Task 3 - Verify test coverage:
    Check if tests were added/modified as specified.
    Run test commands and capture results.
    Return: Test status and any missing coverage
    

Step 2: Systematic Validation

For each phase in the plan:

  1. Check completion status:

    • Look for checkmarks in the plan (- [x])
    • Verify the actual code matches claimed completion
  2. Run automated verification:

    • Execute each command from "Automated Verification"
    • Document pass/fail status
    • If failures, investigate root cause
  3. Assess manual criteria:

    • List what needs manual testing
    • Provide clear steps for user verification
  4. Think deeply about edge cases:

    • Were error conditions handled?
    • Are there missing validations?
    • Could the implementation break existing functionality?

Step 3: Generate Validation Report

Create comprehensive validation summary:

## Validation Report: [Plan Name]

### Implementation Status
✓ Phase 1: [Name] - Fully implemented
✓ Phase 2: [Name] - Fully implemented
⚠️ Phase 3: [Name] - Partially implemented (see issues)

### Automated Verification Results
✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)

### Code Review Findings

#### Matches Plan:
- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows plan

#### Deviations from Plan:
- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)

#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration

### Manual Testing Required:
1. UI functionality:
   - [ ] Verify [feature] appears correctly
   - [ ] Test error states with invalid input

2. Integration:
   - [ ] Confirm works with existing [component]
   - [ ] Check performance with large datasets

### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for [scenario]
- Document new API endpoints

Working with Existing Context

If you were part of the implementation:

  • Review the conversation history
  • Check your todo list for what was completed
  • Focus validation on work done in this session
  • Be honest about any shortcuts or incomplete items

Important Guidelines

  1. Be thorough but practical - Focus on what matters
  2. Run all automated checks - Don't skip verification commands
  3. Document everything - Both successes and issues
  4. Think critically - Question if the implementation truly solves the problem
  5. Consider maintenance - Will this be maintainable long-term?

Validation Checklist

Always verify:

  • All phases marked complete are actually done
  • Automated tests pass
  • Code follows existing patterns
  • No regressions introduced
  • Error handling is robust
  • Documentation updated if needed
  • Manual test steps are clear

Relationship to Other Commands

Recommended workflow:

  1. /implement_plan - Execute the implementation
  2. /commit - Create atomic commits for changes
  3. /validate_plan - Verify implementation correctness
  4. /describe_pr - Generate PR description

The validation works best after commits are made, as it can analyze the git history to understand what was implemented.

Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.
