Cursor AI Rules & Self-Improvement System

A comprehensive, portable configuration for Cursor AI that includes rules, commands, and a self-improving learning system.

🚀 Quick Start

  1. Copy all files to your project's .cursor/ directory
  2. Create the folder structure:
    .cursor/
    ├── rules/          # Rule files (.mdc)
    ├── commands/       # Command files (.md)
    ├── plans/          # Auto-generated plan files
    ├── context/        # Session context cache
    ├── patterns.md     # Learning log
    ├── credentials.md  # Your secrets (gitignored)
    └── .gitignore
    
  3. Rename cursor-gitignore to .gitignore
  4. Rename patterns-template.md to patterns.md
  5. Create work-specific files (see below)

⚙️ Configuration Checklist

After copying these files, configure them for YOUR project:

1. Create architecture.mdc

This file describes YOUR system. Template:

---
description: "Your system architecture"
globs: ["**/*"]
---

# System Architecture

## Services
| Service | Purpose | Tech Stack |
|---------|---------|------------|
| api | Main backend | Node.js, PostgreSQL |

## Authentication  
- Provider: <!-- Firebase Auth / Supabase / Clerk / none -->

## Infrastructure
- Hosting: <!-- Vercel / Railway / AWS / local -->
- Database: <!-- PostgreSQL / SQLite / MongoDB / none -->

2. Create credentials.md (gitignored)

Store your dev secrets locally.

3. Search for <!-- CONFIGURE: placeholders

Replace these comments with your specifics:

  • UI framework
  • Inter-service communication
  • Issue tracker (or remove if not using)

4. Remove unused sections

Delete what does not apply:

  • Issue tracker integration (if solo project)
  • CI/CD monitoring (if no pipelines)
  • Cloud CLI commands (if local-only)

📁 What's Included

Rules (.cursor/rules/)

| File | Purpose |
|------|---------|
| global.mdc | Core behavior, git restrictions, self-improvement triggers |
| workflow.mdc | Full development lifecycle (Research → Plan → Implement → Validate) |
| planning.mdc | Plan-first approach with assumptions |
| subagents.mdc | Parallel task execution with background agents |
| thinking.mdc | Extended thinking for complex problems |
| agentic.mdc | Autonomous coding philosophy |
| testing.mdc | Test patterns and automation |
| typescript.mdc | TypeScript/React rules |
| python.mdc | Python rules |
| ruby.mdc | Ruby/Rails rules |
| react-advanced.mdc | React hooks, performance, composition |

Commands (.cursor/commands/)

| Category | Commands |
|----------|----------|
| Git/PR | /ship, /commit-message, /pr-description, /ci-monitor |
| Code Quality | /code-review, /security-review, /refactor, /refactor-check |
| Testing | /write-tests, /coverage-gaps, /manual-test |
| Analysis | /impact-analysis, /breaking-changes, /dead-code, /duplicate-check |
| Documentation | /document, /api-docs, /changelog, /runbook |
| Self-Improvement | /retro, /pattern-log, /rules-review, /debug-loop |
| Utilities | /explain, /explain-error, /fix-lint, /add-types, /optimize |

Templates

| File | Purpose |
|------|---------|
| patterns-template.md | Learning log template |
| context-README.md | Context folder documentation |
| credentials.example.md | Credentials template |
| cli-config.json | Command restrictions |

🔧 Work-Specific Files to Create

These files are not included because they contain project/company-specific information. Create them yourself:

1. architecture.mdc - System Architecture

---
description: "System architecture, design patterns, and high-level context"
globs: ["**/*"]
---

# System Architecture

## Overview
[Describe your system's high-level architecture]

## Key Services
| Service | Purpose | Tech Stack |
|---------|---------|------------|
| service-name | What it does | Node.js, PostgreSQL |

## Team Ownership
| Team | Repositories |
|------|--------------|
| Team Name | repo1, repo2 |

## Authentication & Authorization
- Auth provider: [Your identity provider]
- Token types: [Access, ID, etc.]
- Key audiences: [API URLs]

## Environment URLs
| Environment | URL |
|-------------|-----|
| Production | https://... |
| Staging | https://... |
| Development | https://... |

## Critical Gotchas
- [Service-specific gotchas discovered during development]

## Key Documentation Links
- [Your documentation links]

2. ticket-integration.mdc - Issue Tracker Integration

---
description: "Ticket system integration"
globs: ["**/*"]
---

# Ticket Integration

## Issue Tracker Configuration
- Project keys: [PROJ, TEAM, etc.]
- Board URL: https://your-company.atlassian.net/...

## Ticket Patterns
- Feature: `TICKET-###`
- Bug: `BUG-###`

## Branch Naming
Format: `TICKET-123-short-description`

## Required Fields
- Acceptance Criteria
- Story Points
- Component

3. credentials.md - Your Secrets (ALWAYS GITIGNORED)

Copy from credentials.example.md and fill in:

# Credentials (DO NOT COMMIT)

## Auth Provider Domains
| Environment | Domain |
|-------------|--------|
| Production | auth.example.com |
| Development | auth.dev.yourcompany.com |

## M2M Clients
| Purpose | Client ID | Client Secret | Audience |
|---------|-----------|---------------|----------|
| Service A | xxx | xxx | https://api.yourcompany.com |

## Test Users
| Environment | Email | Password | Notes |
|-------------|-------|----------|-------|
| Dev | test@example.com | xxx | Admin user |

4. mcp.json - MCP Server Configuration

{
  "mcpServers": {
    "your-mcp-server": {
      "command": "npx",
      "args": ["-y", "your-mcp-package"],
      "env": {
        "API_KEY": "your-key"
      }
    }
  }
}

🔄 Self-Improvement Features

This system learns from your corrections:

  1. Auto-Retro: When you correct the AI, it immediately updates rules
  2. Pattern Logging: Discoveries are logged to patterns.md
  3. Background Learning: Complex corrections spawn background agents
  4. Generalization: Fixes are made broadly applicable, not project-specific

How It Works

You: "Why did I have to tell you to check the config file?"

AI: "Got it - I should self-investigate configuration errors. Updating now..."
    [Updates global.mdc with troubleshooting rule]
    [Logs to patterns.md]
    "Fixed. Continuing..."

📋 Folder Structure

.cursor/
├── rules/                    # AI behavior rules
│   ├── global.mdc           # Core rules (always applied)
│   ├── workflow.mdc         # Development lifecycle
│   ├── architecture.mdc     # YOUR system architecture
│   └── ...
├── commands/                 # Slash commands
│   ├── ship.md              # Git workflow
│   ├── manual-test.md       # Testing guide
│   └── ...
├── plans/                    # Auto-generated task plans
├── context/                  # Session context cache
│   └── README.md
├── patterns.md              # Learning log
├── credentials.md           # Secrets (gitignored)
├── credentials.example.md   # Template
├── cli-config.json          # Command restrictions
├── mcp.json                 # MCP servers
└── .gitignore               # Ignore secrets & temp files

🎯 Key Workflows

Starting a New Feature

  1. Mention the ticket: "Implement TICKET-123"
  2. AI fetches ticket details (if MCP configured)
  3. AI creates a plan with assumptions
  4. You confirm assumptions
  5. AI implements, validates, and documents

After Completing Work

  1. Run /ship to commit, push, and create PR
  2. /ci-monitor watches GitHub Actions
  3. If CI fails, AI auto-fixes and learns
  4. Run /retro for session learnings

When You Correct the AI

  1. AI acknowledges the correction
  2. Updates relevant rule/command immediately
  3. Logs to patterns.md
  4. Generalizes the fix for all projects

📝 License

MIT - Use freely, modify as needed, share improvements!

Accessibility Audit

Objective

Review UI code for accessibility issues and ensure WCAG compliance.

Instructions

  1. Check for WCAG compliance:

    • Perceivable: Alt text, color contrast, text alternatives
    • Operable: Keyboard navigation, focus management, timing
    • Understandable: Labels, error messages, consistent navigation
    • Robust: Valid HTML, ARIA usage, screen reader compatibility
  2. Review common issues:

    • Missing or poor alt text on images
    • Insufficient color contrast (4.5:1 for text)
    • Missing form labels and aria-labels
    • Non-semantic HTML (divs instead of buttons/links)
    • Missing skip links and landmark regions
    • Poor focus indicators
    • Missing keyboard support for interactive elements
    • Missing ARIA attributes for dynamic content
  3. For each issue:

    • Describe the accessibility barrier
    • Which WCAG criterion it violates
    • Who is affected (screen reader users, keyboard users, etc.)
    • Provide the fix with code example
  4. Test considerations:

    • Keyboard-only navigation
    • Screen reader compatibility
    • High contrast/zoom
    • Reduced motion preferences
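
For example, one of the most common fixes is swapping a non-semantic clickable element for a native control. A minimal React/TypeScript sketch (the SaveButton component and its props are hypothetical, not taken from this repo):

// Before (inaccessible): a clickable <div> has no role, no keyboard support,
// and no visible focus, so keyboard and screen reader users cannot activate it.
//   <div className="btn" onClick={onSave}>Save</div>

// After (accessible): a native <button> provides role, focus, and keyboard
// activation for free; aria-busy announces the in-progress state.
import React from "react";

type SaveButtonProps = { onSave: () => void; saving: boolean };

export function SaveButton({ onSave, saving }: SaveButtonProps) {
  return (
    <button type="button" onClick={onSave} disabled={saving} aria-busy={saving}>
      {saving ? "Saving…" : "Save"}
    </button>
  );
}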

Output

  • List of accessibility issues found
  • WCAG level (A, AA, AAA) for each issue
  • Fixed code examples
  • Testing recommendations

Add Logging & Observability

Objective

Audit logging coverage and add missing logging for debugging and observability.

Action Required

After identifying logging gaps, immediately add the missing log statements. Do not just report - implement the logging.

Instructions

  1. Identify logging points:

    • Function entry/exit for key operations
    • Error and exception handling
    • State changes and important decisions
    • External service calls (API, database)
    • User actions and events
    • Performance-sensitive operations
  2. Add appropriate log levels:

    • ERROR: Exceptions, failures that need attention
    • WARN: Unexpected but handled situations
    • INFO: Key business events and operations
    • DEBUG: Detailed diagnostic information
    • TRACE: Very detailed, verbose debugging
  3. Include useful context:

    • Request/correlation IDs for tracing
    • Relevant identifiers (user_id, order_id, etc.)
    • Input values (sanitized—no secrets!)
    • Duration/timing for performance tracking
    • Outcome of operations
  4. Best practices:

    • Use structured logging (JSON) where appropriate
    • Don't log sensitive data (passwords, tokens, PII)
    • Make logs searchable and filterable
    • Include enough context to debug without the code
    • Use consistent message formats
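
A minimal TypeScript sketch of what this can look like, assuming the pino logger (the PaymentGateway interface and chargeCard operation are illustrative):

import pino from "pino";

const logger = pino(); // structured JSON output by default

interface PaymentGateway {
  charge(orderId: string): Promise<{ status: string }>;
}

// Log entry, outcome, duration, and errors, all tied to a correlation ID.
export async function chargeCard(gateway: PaymentGateway, orderId: string, requestId: string) {
  const log = logger.child({ requestId, orderId }); // context repeated on every log line
  const start = Date.now();
  log.info("charge started");
  try {
    const result = await gateway.charge(orderId); // external service call
    log.info({ outcome: result.status, durationMs: Date.now() - start }, "charge finished");
    return result;
  } catch (err) {
    log.error({ err, durationMs: Date.now() - start }, "charge failed"); // no card data, no secrets
    throw err;
  }
}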

Output

  • Code with logging added
  • Any logger setup/configuration needed
  • Recommendations for log aggregation/analysis

Action: Add Logging

After analysis, immediately implement:

  1. Add missing log statements at identified points
  2. Ensure proper log levels are used
  3. Include relevant context (IDs, outcomes) in each log
  4. Verify no sensitive data is logged
  5. Report what logging was added

Add Type Annotations

Objective

Add comprehensive type annotations to improve code safety and developer experience.

Instructions

  1. Analyze the code to determine:

    • Function parameter types
    • Return types
    • Variable types where not inferrable
    • Object/interface shapes
  2. Add types for:

    • All function parameters
    • All function return types
    • Complex data structures
    • Generic types where appropriate
    • Union types for values that can be multiple types
    • Optional types for nullable values
  3. Follow best practices:

    • Use specific types over any or unknown where possible
    • Create interfaces/types for reusable shapes
    • Use readonly where data shouldn't be mutated
    • Prefer interfaces for object shapes, types for unions/aliases
    • Use generic types for flexible, reusable code
  4. Language-specific:

    • TypeScript: Full type annotations, interfaces, generics
    • Python: Type hints, typing module, TypedDict, Protocol
    • Ruby: Sorbet/RBS annotations if using typed Ruby
    • Other: Follow language conventions
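
A small TypeScript sketch of the before/after (the User shape and helper names are illustrative):

// Before: everything is implicitly `any`.
//   function summarize(users) { return users.map(u => u.email); }

interface User {
  readonly id: string;        // readonly: callers should not mutate identifiers
  email: string;
  role: "admin" | "member";   // union type instead of a bare string
  deletedAt?: Date;           // optional: only set for deleted users
}

// After: explicit parameter and return types.
function summarize(users: readonly User[]): string[] {
  return users.filter((u) => !u.deletedAt).map((u) => u.email);
}

// Generic helper: reusable across element types.
function firstOrNull<T>(items: readonly T[]): T | null {
  return items.length > 0 ? items[0] : null;
}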

Output

  • Fully typed version of the code
  • Any new type definitions needed
  • Notes on types that were difficult to determine
---
description: "Agentic coding behavior: research first, minimal questions, autonomous execution"
alwaysApply: true
---
# Agentic Coding Philosophy
Act as an autonomous coding agent. Research thoroughly, minimize questions, execute end-to-end.
## Research Before Asking
**ALWAYS research before asking questions:**
1. Search the codebase for similar implementations
2. Read relevant files to understand patterns
3. Check READMEs, type definitions, inline comments
4. Infer answers from naming conventions and folder structure
**Only ask questions when:**
- Business decisions not findable in code
- Multiple valid approaches with no clear precedent
- Security/compliance implications requiring explicit approval
- Genuinely cannot find the answer after thorough research
## Before Coding
- Read and understand relevant files before making changes
- Search the codebase to find patterns and existing implementations
- Identify dependencies and potential impacts of changes
- For non-trivial tasks: create a plan with assumptions (see `planning.mdc`)
## During Coding
- Make all necessary changes in a single pass when possible
- Follow existing patterns found in the codebase
- Handle edge cases proactively
- Add appropriate error handling
## After Coding
- Run relevant tests to verify changes work
- Check for linting errors and fix them
- Verify the change accomplishes the stated goal
- Report what was done and any follow-up considerations
## Complex Tasks
For complex tasks requiring multiple steps:
1. Break down the task into discrete steps
2. **If steps are independent**: Use parallel subagents (see `subagents.mdc`)
3. **If steps are dependent**: Execute sequentially, verifying each one
4. If a step fails, diagnose and fix before continuing
5. Provide a summary of all changes made
## Error Recovery
If something goes wrong:
1. Read the error message carefully
2. Search for similar patterns in the codebase
3. Try an alternative approach
4. If stuck after 2-3 attempts, explain the issue and ask for guidance

Generate API Documentation

Objective

Create comprehensive API documentation for endpoints, services, or libraries.

Instructions

  1. Analyze the API to document:

    • Available endpoints/methods
    • HTTP methods (GET, POST, etc.)
    • URL patterns and parameters
    • Request body schema
    • Response body schema
    • Authentication requirements
    • Error responses
  2. For each endpoint/method, include:

    • Description: What this endpoint does
    • URL: Full path with path parameters
    • Method: HTTP method(s)
    • Authentication: Required auth headers/tokens
    • Parameters: Query params, path params, headers
    • Request body: Schema with field descriptions
    • Response: Success response schema
    • Errors: Possible error codes and responses
    • Example: Request and response examples
  3. Format options:

    • OpenAPI/Swagger specification (YAML/JSON)
    • Markdown documentation
    • JSDoc/docstrings for code
  4. Include:

    • Rate limiting information
    • Pagination details
    • Filtering/sorting options
    • Deprecation notices if applicable

Output

  • Complete API documentation in requested format
  • Example requests (curl, JavaScript fetch, etc.)
  • Notes on any undocumented behavior
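
As one possible shape for the "example request" portion, here is a TypeScript fetch call for a hypothetical POST /v1/users endpoint (field names and error codes are illustrative):

interface CreateUserRequest {
  email: string;
  role: "admin" | "member";
}

interface CreateUserResponse {
  id: string;
  email: string;
  createdAt: string; // ISO 8601
}

async function createUser(baseUrl: string, token: string, body: CreateUserRequest): Promise<CreateUserResponse> {
  const res = await fetch(`${baseUrl}/v1/users`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // documented auth requirement
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`createUser failed with status ${res.status}`); // e.g. 400, 401, 409
  return (await res.json()) as CreateUserResponse;
}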

Breaking Change Detection

Objective

Identify breaking changes in APIs, interfaces, or contracts that could affect consumers, and implement backward-compatible alternatives when possible.

Action Required

After identifying breaking changes, propose and implement backward-compatible solutions where feasible. If breaking changes are unavoidable, create the migration guide.

Instructions

  1. Analyze changes for breaking patterns:

    API/GraphQL Changes

    • Removed endpoints or fields
    • Renamed endpoints or fields
    • Changed required/optional status of fields
    • Changed field types (string → number, etc.)
    • Changed response structure
    • Changed authentication requirements

    Function/Method Changes

    • Removed or renamed public functions
    • Changed function signatures (parameters, return types)
    • Changed behavior of existing functions
    • Removed or renamed exported types/interfaces

    Database/Schema Changes

    • Removed columns or tables
    • Changed column types
    • Added NOT NULL constraints to existing columns
    • Changed primary/foreign key relationships

    Event/Message Changes

    • Changed event schema
    • Removed event fields
    • Changed event names
  2. For each breaking change, document:

    • What changed (before → after)
    • Who is affected (consumers, services, etc.)
    • Severity (Critical/High/Medium)
    • Migration path
  3. Suggest alternatives:

    • Deprecation strategy instead of removal
    • Backward-compatible additions
    • Feature flags for gradual rollout
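
For instance, a renamed or re-signatured function can often keep a deprecated wrapper so existing callers continue to work. A TypeScript sketch with illustrative names:

// New API: explicit options object instead of a positional flag.
export interface CreateInvoiceOptions {
  send: boolean;
  currency?: string;
}

export function createInvoice(customerId: string, options: CreateInvoiceOptions) {
  // ...new implementation...
  return { customerId, ...options };
}

/** @deprecated Use createInvoice(customerId, { send: true }) instead. Removal planned for the next major version. */
export function createAndSendInvoice(customerId: string) {
  return createInvoice(customerId, { send: true }); // thin wrapper keeps old callers working
}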

Output

Breaking Changes Found

| Change | Type | Severity | Affected Consumers |
|--------|------|----------|--------------------|

Details

[Change Name]

  • Before: [old behavior/signature]
  • After: [new behavior/signature]
  • Impact: [who is affected and how]
  • Migration: [steps to migrate]

Recommendations

  • Consider deprecation period before removal
  • Add backward compatibility layer
  • Notify affected teams
  • Update API documentation

Non-Breaking Alternatives

  • Suggestions for making changes backward-compatible

Action: Implement Fixes

After analysis, take action:

  1. If backward-compatible solution exists: Implement it immediately
  2. If breaking change is unavoidable: Run /migration-guide to create migration docs
  3. Add deprecation notices to old APIs if keeping temporarily
  4. Report what was changed to maintain compatibility

Generate Changelog Entry

Objective

Create a changelog entry following the Keep a Changelog format for the current changes.

Instructions

  1. Analyze the changes:

    • Review modified files and commits
    • Identify the type of changes made
    • Extract ticket/issue references
  2. Categorize changes using Keep a Changelog format:

    • Added: New features
    • Changed: Changes to existing functionality
    • Deprecated: Features that will be removed in future
    • Removed: Features removed in this release
    • Fixed: Bug fixes
    • Security: Security-related changes
  3. Write entries that:

    • Are written for end-users/consumers, not developers
    • Explain the impact, not the implementation
    • Include ticket/PR references
    • Are concise but informative
  4. Format:

    ## [Unreleased]
    
    ### Added
    - Description of new feature ([#123](link))
    
    ### Changed
    - Description of change ([#124](link))
    
    ### Fixed
    - Description of fix ([#125](link))

Output

Changelog Entry

## [Unreleased]

### [Category]
- [Entry with ticket reference]

Release Notes (User-Friendly Version)

  • Summary suitable for release announcements
  • Key highlights for stakeholders

CI Monitor (Watch GitHub Actions & Fix Failures)

Objective

Monitor GitHub Actions workflow runs for the current branch, wait for completion, and fix failures.

When to Use

  • After pushing changes to check if CI passes
  • To monitor an already-running workflow
  • When /ship calls this as its final step
  • Standalone when you just want to check CI status without shipping new changes

⚠️ IMPORTANT: Always Watch to Completion

NEVER ask the user if they want you to watch CI. Just do it.

  • When invoked, always poll until the workflow completes
  • Do not stop to ask "would you like me to watch?" or "should I continue?"
  • Report progress periodically but keep monitoring

Instructions

Step 0: Navigate to Repository & Identify Branch

cd /path/to/repository
git branch --show-current
git remote get-url origin

Parse owner/repo from remote URL:

  • git@github.com:OrgName/repo-name.git → owner: OrgName, repo: repo-name

Step 1: Find Workflow Runs for This Branch

mcp_github_list_workflow_runs({
  owner: "<owner>",
  repo: "<repo>"
})

Filter results for runs where head_branch matches the current branch. Get the most recent run(s).

If no runs found:

ℹ️ No workflow runs found for branch '<branch-name>'

This could mean:
- The push hasn't triggered workflows yet (wait a moment and try again)
- This repository doesn't have GitHub Actions configured
- The branch hasn't been pushed yet

Would you like me to check again in 10 seconds? (y/n)

Step 2: Display Current Status

📊 CI Status for branch: <branch-name>

Workflow: "<workflow-name>" (Run #<run_number>)
Status: <status>
Started: <created_at>
URL: <html_url>

Jobs:
  - build: <status>
  - test: <status>
  - lint: <status>

Step 3: Poll Until Complete

If status is queued or in_progress, poll every 30 seconds:

🔄 Workflow in progress... (checking every 30s)

   [===========         ] 55% complete
   
   Jobs:
   - build: ✅ completed
   - test: 🔄 in_progress (2m 15s)
   - lint: ⏳ queued
   
   Next check in 30s... (Ctrl+C to stop polling)

Use this to get updated status:

mcp_github_get_workflow_run({
  owner: "<owner>",
  repo: "<repo>",
  run_id: <run_id>
})

Step 4: Handle Completion

If conclusion is success:

✅ CI Passed!

Workflow: "<workflow-name>" (Run #<run_number>)
Duration: <duration>
All jobs completed successfully:
  - build ✅ (1m 23s)
  - test ✅ (2m 45s)
  - lint ✅ (0m 32s)

Branch '<branch-name>' is green!

If conclusion is failure:

❌ CI Failed

Workflow: "<workflow-name>" (Run #<run_number>)
Failed jobs:
  - test ❌ (exit code 1)

View logs: <html_url>

Step 4a: Check for Flaky Test Indicators

Before attempting to fix, check if the failure appears to be flaky:

Flaky test indicators:

  • Failure is in a test that wasn't touched by current changes
  • Error message mentions timeouts, network issues, or race conditions
  • Error mentions external services (Pact contracts, AWS) that we don't control
  • Same test passed on a previous run of the same commit
  • Error is "Type mismatch" in contract tests unrelated to our changes

If failure appears flaky:

🔄 Failure appears to be flaky (unrelated to changes). Auto-retrying...

Automatically rerun the failed jobs:

gh run rerun <run_id> --failed

Then return to Step 3 to monitor the rerun. Track retries:

  • Maximum 2 automatic retries for flaky failures
  • If still failing after 2 retries, proceed to Step 5 (analyze and fix)

Step 4b: Analyze for Real Failures

If failure is NOT flaky (i.e., related to our changes):

🔧 Automatically analyzing failure and attempting to fix...

Default behavior: Always attempt to fix failures automatically. Only ask for user input if:

  • Fix attempt fails 3 times (recursion safeguard)
  • The error is ambiguous and multiple fix approaches are possible
  • The fix requires a decision (e.g., "delete this file or update it?")

If conclusion is cancelled:

⚠️ CI Cancelled

Workflow: "<workflow-name>" (Run #<run_number>)
The workflow was cancelled before completion.

Would you like to:
1. Re-trigger the workflow (push an empty commit)
2. Check for a newer run
3. Skip

Step 5: Analyze & Fix Failures (Automatic)

When CI fails, automatically:

  1. Get failure details from the workflow run

  2. Identify the failing job and step

  3. Analyze the error message - common patterns:

    • Test failures → Find failing test, check assertion
    • Lint errors → Run linter locally, fix issues
    • Type errors → Run type checker, fix types
    • Build errors → Check compilation output
    • Dependency issues → Check package.json/lockfile
  4. Make the fix in the codebase

  5. Verify locally before pushing:

    # Run the same checks that failed
    yarn test    # if test failed
    yarn lint    # if lint failed
    yarn build   # if build failed
  6. Commit and push the fix:

    git add -A
    git commit -m "fix: address CI failure from run #<run_number>
    
    - <description of what was fixed>
    
    Fixes workflow run: <html_url>"
    
    git push origin <branch-name>
  7. Return to Step 1 to monitor the new run

Step 6: Learn from Failures (Prevent Future Issues)

This step is REQUIRED after every CI fix. Always log the pattern and update rules.

After successfully fixing a CI failure, analyze whether this type of issue could have been prevented and update rules/commands accordingly.

Analyze the root cause:

  • Was this a pattern that should have been caught earlier?
  • Could a rule have prevented this?
  • Should a command include additional checks?

Update rules if applicable:

| Failure Type | Rule/Command to Update |
|--------------|------------------------|
| Missing tests | testing.mdc - add pattern to catch |
| Type errors | typescript.mdc - add guidance |
| Lint errors | workflow.mdc - ensure lint runs in Step 2 |
| Import errors | Language-specific rule - add import patterns |
| Missing env vars | architecture.mdc - document required vars |
| Build failures | Add to /run-checks command |

Example: Updating a rule

🔍 Analyzing root cause of failure...

Issue: Test failed because async function wasn't awaited
Root cause: Missing `await` keyword

📝 Updating .cursor/rules/typescript.mdc to add:
   
   ## Common Pitfalls
   + - **Forgetting await**: Always await async functions in tests.
   +   Use `await expect(...).resolves` or `await expect(...).rejects`
   
✅ Rule updated to prevent similar issues.

Example: Updating a command

🔍 Analyzing root cause of failure...

Issue: Lint failed due to unused import
Root cause: Import was added but never used

📝 Updating .cursor/commands/run-checks.md to add:
   
   - Check for unused imports before committing
   - Run: eslint --rule 'no-unused-vars: error'
   
✅ Command updated to catch this earlier.

What to update:

  1. Rules (.cursor/rules/*.mdc):

    • Add to "Common Pitfalls" sections
    • Add new patterns to avoid
    • Document gotchas discovered
  2. Commands (.cursor/commands/*.md):

    • Add new checks to /run-checks
    • Update /ship pre-flight checks
    • Enhance /write-tests patterns
  3. Architecture (.cursor/rules/architecture.mdc):

    • Document new service dependencies discovered
    • Add environment requirements
    • Note integration gotchas

Always log to pattern file:

# Add entry to .cursor/patterns.md
📝 Logging to Pattern Log (.cursor/patterns.md):

| ID | Issue | Occurrences | Solution | Last Seen |
|----|-------|-------------|----------|-----------|
| P### | <issue description> | 1 | <solution> | <today> |

Output after learning:

📚 Learning from failure...

Root cause: Missing type annotation caused runtime error
Prevention: Add explicit return types to async functions

Updated files:
  - .cursor/patterns.md (added P### entry)
  - .cursor/rules/typescript.mdc (added pitfall)
  - .cursor/commands/run-checks.md (added type check)

Future occurrences of this pattern will be caught earlier.

Recursion Safeguards

  • Maximum 3 fix attempts before requiring manual intervention
  • Track fix attempts in conversation context
  • After 3 attempts:
    ⚠️ CI has failed 3 times after fix attempts.
    
    Fix history:
    1. Run #123: Fixed missing import
    2. Run #124: Fixed test assertion  
    3. Run #125: Fixed lint error (new issue)
    
    This may require manual debugging.
    View the latest failure: <html_url>
    

Example: Successful Run

$ /ci-monitor

📊 CI Status for branch: TICKET-123-my-feature

Workflow: "CI" (Run #456)
Status: in_progress
Started: 2 minutes ago

🔄 Polling... (30s intervals)

   [================    ] 80% complete
   - build: ✅ 
   - test: 🔄 running
   - lint: ✅

   ... 30s later ...

✅ CI Passed!

Duration: 3m 42s
All jobs successful!
Branch 'TICKET-123-my-feature' is green!

Example: Failed Run with Auto-Fix

$ /ci-monitor

📊 CI Status for branch: TICKET-123-my-feature

Workflow: "CI" (Run #456)  
Status: completed
Conclusion: failure ❌

Failed jobs:
  - test ❌

🔧 Automatically analyzing and fixing...

🔍 Analyzing failure...

Error found in test job:
  FAIL src/services/userService.test.ts
  ● updateUser › should validate email format
    Expected: true
    Received: false

The test expects email validation to return true for valid emails,
but the validation function is returning false.

📝 Fixing...
  - Updated emailValidator.ts to handle edge case

🧪 Verifying locally...
  yarn test → ✅ All tests pass

📤 Pushing fix...
  git commit -m "fix: correct email validation edge case"
  git push origin TICKET-123-my-feature

🔄 Monitoring new run...

✅ CI Passed! (Run #457)
Branch is now green!

Notes

  • This command can be run anytime, not just after /ship
  • Useful for checking CI status on branches you didn't just push
  • The fix functionality respects the same git safety rules (no pushing to main)

cli-config.json (Command Restrictions)

{
  "version": 1,
  "editor": {
    "vimMode": false
  },
  "hasChangedDefaultModel": false,
  "permissions": {
    "allow": [
      "Shell(ls*)",
      "Shell(cat*)",
      "Shell(head*)",
      "Shell(tail*)",
      "Shell(grep*)",
      "Shell(find*)",
      "Shell(wc*)",
      "Shell(pwd)",
      "Shell(echo*)",
      "Shell(which*)",
      "Shell(cd*)",
      "Shell(mkdir*)",
      "Shell(touch*)",
      "Shell(yarn test*)",
      "Shell(yarn lint*)",
      "Shell(yarn build*)",
      "Shell(yarn install*)",
      "Shell(yarn dev*)",
      "Shell(npm test*)",
      "Shell(npm run test*)",
      "Shell(npm run lint*)",
      "Shell(npx*)",
      "Shell(pytest*)",
      "Shell(python -m pytest*)",
      "Shell(make test*)",
      "Shell(make lint*)",
      "Shell(bundle exec rspec*)",
      "Shell(rspec*)",
      "Shell(rubocop*)",
      "Shell(rails console*)",
      "Shell(rails db:migrate*)",
      "Shell(curl*)",
      "Shell(jq*)",
      "Shell(sort*)",
      "Shell(uniq*)",
      "Shell(awk*)",
      "Shell(sed*)",
      "Shell(diff*)",
      "Shell(tree*)",
      "Shell(git status*)",
      "Shell(git diff*)",
      "Shell(git log*)",
      "Shell(git show*)",
      "Shell(git blame*)",
      "Shell(git branch*)",
      "Shell(git checkout*)",
      "Shell(git switch*)",
      "Shell(git pull*)",
      "Shell(git stash*)",
      "Shell(git cherry-pick*)",
      "Shell(git fetch*)",
      "Shell(node*)",
      "Shell(python*)",
      "Shell(ruby*)",
      "Shell(rails*)",
      "Shell(bundle*)",
      "Shell(pip*)",
      "Shell(uv*)",
      "Shell(yarn*)",
      "Shell(npm*)",
      "Shell(pnpm*)",
      "Shell(tsc*)",
      "Shell(eslint*)",
      "Shell(prettier*)",
      "Shell(black*)",
      "Shell(ruff*)",
      "Shell(mypy*)",
      "Shell(sam*)",
      "Shell(aws*)",
      "Shell(docker*)",
      "Shell(psql*)",
      "Shell(mysql*)",
      "Shell(redis-cli*)",
      "Shell(ag*)",
      "Shell(rg*)",
      "Shell(fd*)",
      "Shell(xargs*)",
      "Shell(env*)",
      "Shell(export*)",
      "Shell(source*)",
      "Shell(type*)",
      "Shell(file*)",
      "Shell(stat*)",
      "Shell(date*)",
      "Shell(whoami*)",
      "Shell(hostname*)",
      "Shell(uname*)",
      "Shell(cursor-agent*)"
    ],
    "deny": [
      "Shell(git add*)",
      "Shell(git commit*)",
      "Shell(git push*)",
      "Shell(git push origin main*)",
      "Shell(git push origin master*)",
      "Shell(git push -f*)",
      "Shell(git push --force*)",
      "Shell(rm -rf *)",
      "Shell(sudo *)"
    ]
  },
  "network": {
    "useHttp1ForAgent": false
  }
}

Code Review

Objective

Perform a thorough code review as if reviewing a pull request from a colleague, and fix any blocking issues found.

Action Required

After identifying issues, immediately fix blocking issues. Do not just report - improve the code.

Review Checklist

Functionality

  • Code does what it's supposed to do
  • Edge cases are handled appropriately
  • Error handling is comprehensive and appropriate
  • No obvious bugs or logic errors
  • Behavior matches requirements/specifications

Code Quality

  • Code is readable and well-structured
  • Functions/methods are focused and reasonably sized
  • Variable and function names are clear and descriptive
  • No unnecessary code duplication (DRY)
  • Follows project conventions and style guide
  • Appropriate use of comments (explains "why", not "what")

Architecture & Design

  • Changes fit well with existing architecture
  • No unnecessary complexity or over-engineering
  • Appropriate separation of concerns
  • No breaking changes to public APIs (or they're intentional)

Performance

  • No obvious performance issues (N+1 queries, unnecessary loops)
  • Appropriate use of caching if applicable
  • No memory leaks or resource management issues

Security

  • No security vulnerabilities introduced
  • Input validation where needed
  • No sensitive data exposure

Testing

  • Adequate test coverage for new code
  • Tests are clear and test the right things
  • Edge cases are tested

Output

  • Summary of review (approve/request changes)
  • Specific feedback organized by category
  • Suggestions marked as "blocking" vs "nice-to-have"

Action: Fix Issues

After review, immediately fix:

  1. Blocking issues: Fix before approving
  2. Nice-to-have: Fix if quick, otherwise note for follow-up

Report what was fixed and final review status.

Generate Commit Message

Objective

Create a clear, conventional commit message for the staged changes.

Instructions

  1. Analyze staged changes:

    • What files are modified?
    • What is the nature of the change? (feature, fix, refactor, docs, etc.)
    • What is the scope of the change?
  2. Follow Conventional Commits format:

    <type>(<scope>): <subject>
    
    [optional body]
    
    [optional footer(s)]
    
  3. Types:

    • feat: New feature
    • fix: Bug fix
    • refactor: Code change that neither fixes a bug nor adds a feature
    • docs: Documentation only changes
    • style: Formatting, missing semicolons, etc. (not CSS)
    • test: Adding or correcting tests
    • chore: Maintenance tasks, dependencies, build changes
    • perf: Performance improvements
  4. Guidelines:

    • Subject line: 50 chars or less, imperative mood ("Add" not "Added")
    • Body: Wrap at 72 chars, explain what and why (not how)
    • Reference issues/tickets in footer

Output

  • Commit message ready to use
  • If changes are complex, suggest breaking into multiple commits

Context Files

PRDs, API specs, ADRs, and persistent documentation belong in your team docs (Confluence, Notion, etc.). This local folder is a working cache for session context.

Documentation Integration

Use your team docs for permanent documentation:

  • PRDs and feature specifications
  • Architecture Decision Records (ADRs)
  • API documentation
  • Runbooks and operational docs

To use team documentation:

  1. Share the page ID when starting a task
  2. The AI will fetch and incorporate the content
  3. After significant work, the AI will offer to create/update documentation

What Goes Here (Working Context)

This folder caches external context brought into the session:

| Source | What to Store | Example File |
|--------|---------------|--------------|
| Team Docs | Fetched page content for quick reference | docs-<page-id>.md |
| Slack | Relevant conversations, decisions, context | slack-<topic>.md |
| Issue Tracker | Ticket details, comments, requirements | issue-<ticket-id>.md |
| Meetings | Notes, decisions, action items | meeting-<date>-<topic>.md |
| External docs | API docs, vendor info, research | external-<source>.md |
| User input | Context you share verbally | context-<topic>.md |

How the AI Uses This (Automatic)

  1. Auto-load on task start: The AI automatically checks this folder at the start of each task and reads any relevant files based on the topic
  2. Auto-cache docs: When fetching a documentation page, the AI saves a local copy here
  3. Auto-store shared context: When you paste Slack convos or other info, the AI saves it here
  4. No manual referencing needed: You don't need to use @context/file - the AI reads relevant files automatically
  5. Clean up: After a feature ships, archive or delete stale context

Naming Conventions

docs-page-123.md      # Documentation page cache
slack-auth-discussion.md     # Slack conversation
issue-TICKET-123.md              # Issue ticket context  
meeting-2024-01-15-kickoff.md # Meeting notes
external-auth-docs.md    # External documentation
context-user-requirements.md # Ad-hoc context you shared

Lifecycle

1. Start task → Fetch/receive context → Save to this folder
2. During work → Reference with @context/filename
3. Task complete → Archive to context/archive/ or delete

Rules

  1. Team docs are source of truth - This is just a cache/working copy
  2. Date-sensitive - Include dates in meeting/decision files
  3. Clean up regularly - Archive after task completion
  4. Don't duplicate needlessly - If you'll fetch it fresh, don't cache it

Convert/Translate Code

Objective

Convert code from one language, framework, or format to another.

Instructions

  1. Understand the source code:

    • What is the purpose and behavior?
    • What are the inputs and outputs?
    • What dependencies or libraries are used?
    • Are there any language-specific idioms?
  2. Convert while:

    • Preserving all functionality and behavior
    • Using idiomatic patterns for the target language
    • Mapping libraries to equivalent target libraries
    • Adapting language-specific features appropriately
    • Maintaining the same API/interface where possible
  3. Handle differences:

    • Different type systems (static vs dynamic)
    • Different concurrency models
    • Different error handling patterns
    • Different standard library functions
    • Framework-specific patterns
  4. Common conversions:

    • JavaScript ↔ TypeScript
    • Python 2 → Python 3
    • Class components → Functional components (React)
    • REST → GraphQL
    • Callbacks → Promises → async/await
    • SQL ↔ ORM queries
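
A small TypeScript sketch of one such conversion, callbacks to async/await (readConfig and its callback signature are illustrative):

import { readFile } from "node:fs";
import { readFile as readFileAsync } from "node:fs/promises";

// Before: callback style; errors and results flow through the callback.
function readConfig(path: string, done: (err: Error | null, config?: unknown) => void) {
  readFile(path, "utf8", (err, data) => {
    if (err) return done(err);
    try {
      done(null, JSON.parse(data));
    } catch (parseErr) {
      done(parseErr as Error);
    }
  });
}

// After: same behavior with async/await; errors propagate as rejections.
async function readConfigAsync(path: string): Promise<unknown> {
  const data = await readFileAsync(path, "utf8");
  return JSON.parse(data);
}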

Output

  • Converted code in the target language/framework
  • Notes on any changes in behavior or approach
  • Dependencies needed in the target environment
  • Any manual changes that might be needed

Test Coverage Gap Analysis

Objective

Identify untested code paths and implement high-value test cases to fill the gaps.

Action Required

After identifying gaps, immediately write the missing tests following the project's existing test patterns. Do not just report - fix the gaps.

Instructions

  1. Analyze the code for coverage gaps:

    Untested Code Paths

    • Functions without any test coverage
    • Branches (if/else) not covered
    • Error handling paths not tested
    • Edge cases not covered

    Risk-Based Prioritization

    Prioritize testing for:

    • Critical: Authentication, authorization, payment, data mutation
    • High: User-facing features, API endpoints, data validation
    • Medium: Business logic, calculations, transformations
    • Low: Logging, formatting, utilities
  2. Identify missing test types:

    • Unit tests for individual functions
    • Integration tests for service interactions
    • Contract tests for API boundaries
    • E2E tests for critical user journeys
  3. Suggest specific test cases:

    • Happy path scenarios
    • Error scenarios and edge cases
    • Boundary conditions
    • Null/undefined/empty inputs
    • Concurrent access scenarios
    • Performance edge cases
  4. Review existing tests for:

    • Tests that only test the happy path
    • Missing assertions (tests that don't assert enough)
    • Flaky tests that should be fixed
    • Tests that mock too much (testing mocks, not code)
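
As a sketch of what filling a gap can look like, here is a Jest-style TypeScript test for null/empty input handling (validateEmail is a hypothetical function under test):

// Hypothetical function under test.
export function validateEmail(input: string | null | undefined): boolean {
  if (!input) return false; // edge case: null/undefined/empty string
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input);
}

// Edge-case tests that a happy-path-only suite would miss.
describe("validateEmail", () => {
  it("rejects null, undefined, and empty input", () => {
    expect(validateEmail(null)).toBe(false);
    expect(validateEmail(undefined)).toBe(false);
    expect(validateEmail("")).toBe(false);
  });

  it("rejects strings without a domain", () => {
    expect(validateEmail("user@")).toBe(false);
  });
});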

Output

Coverage Summary

  • Estimated coverage: X%
  • Critical paths covered: Y/Z

Uncovered Code

| File | Function/Method | Risk Level | Suggested Tests |
|------|-----------------|------------|-----------------|

Suggested Test Cases

High Priority

  1. [Test Name]
    • Scenario: [description]
    • Expected: [outcome]
    • Why: [risk if untested]

Medium Priority

...

Edge Cases to Cover

  • Empty input handling
  • Null/undefined values
  • Maximum size limits
  • Invalid format inputs
  • Timeout scenarios
  • Concurrent access

Test Quality Issues

  • Tests that need improvement
  • Missing assertions
  • Over-mocked tests

Action: Implement Missing Tests

After analysis, immediately implement the highest priority missing tests:

  1. Follow existing test patterns in the codebase
  2. Create test files if they don't exist
  3. Add test cases for identified gaps
  4. Run tests to verify they pass
  5. Report what was added

Create Mock/Test Data

Objective

Generate realistic mock data or test fixtures for the specified types, APIs, or components.

Instructions

  1. Analyze what's needed:

    • What data types/models need mocking?
    • What's the shape/schema of the data?
    • Are there any constraints (required fields, formats, relationships)?
    • What scenarios need to be covered (success, error, edge cases)?
  2. Generate mock data that is:

    • Realistic: Uses plausible values, not just "test" or "foo"
    • Varied: Different values for different test cases
    • Valid: Respects constraints, formats, and relationships
    • Comprehensive: Covers normal cases and edge cases
  3. Create mocks for:

    • API responses (success, error states, empty results)
    • Database fixtures
    • Service/dependency mocks
    • User input scenarios
    • Configuration/environment variations
  4. Include:

    • Factory functions for generating variations
    • Edge case data (empty, null, max length, special characters)
    • Error response mocks
    • Helper functions for common mock patterns
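
A minimal TypeScript factory sketch (the User shape and defaults are illustrative):

interface User {
  id: string;
  email: string;
  role: "admin" | "member";
  createdAt: Date;
}

// Factory with realistic defaults; tests override only what they care about.
let userCounter = 0;
export function buildUser(overrides: Partial<User> = {}): User {
  userCounter += 1;
  return {
    id: `user-${userCounter}`,
    email: `casey.rivera${userCounter}@example.com`,
    role: "member",
    createdAt: new Date("2024-01-15T09:30:00Z"),
    ...overrides,
  };
}

// Edge-case variants built from the same factory.
export const adminUser = () => buildUser({ role: "admin" });
export const emptyEmailUser = () => buildUser({ email: "" });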

Output

  • Mock data/fixtures ready to use in tests
  • Factory functions if helpful for generating variations
  • Notes on any assumptions made about the data

Credentials Template

⚠️ Copy this file to credentials.md and fill in your actual values. credentials.md is gitignored and safe for storing secrets locally.


Cloud Provider Profiles (Optional)

| Profile Name | Environment | Account/Project ID | Notes |
|--------------|-------------|--------------------|-------|
| your-dev-profile | Development | 123456789012 | |
| your-staging-profile | Staging | 234567890123 | |
| your-prod-profile | Production | 345678901234 | Read-only access only! |

Auth Provider Domains

| Environment | Domain |
|-------------|--------|
| Development | auth.dev.your-company.com |
| Staging | auth.staging.your-company.com |
| Production | auth.your-company.com |

API Audiences

| Audience | Purpose |
|----------|---------|
| https://api.your-company.com | Primary API |
| https://frontend.your-company.com | Frontend SPA |
| https://your-auth-domain.com/api/v2/ | Management API |

M2M Clients

Development Environment

| Purpose / Service | Client ID | Client Secret | Notes |
|-------------------|-----------|---------------|-------|
| Service A | your-client-id | your-client-secret | Audience: https://api.your-company.com |
| Service B | your-client-id | your-client-secret | Scopes: read:users write:users |

Staging Environment

| Purpose / Service | Client ID | Client Secret | Notes |
|-------------------|-----------|---------------|-------|
| Service A | your-client-id | your-client-secret | |

Test Users

| Environment | Email | Password | Roles | Notes |
|-------------|-------|----------|-------|-------|
| Dev | test-user@example.com | your-password | admin | For admin testing |
| Dev | test-member@example.com | your-password | member | Standard user |
| Staging | qa-user@example.com | your-password | admin | QA testing |

Service URLs

| Service | Development | Staging | Production |
|---------|-------------|---------|------------|
| API Gateway | https://api.dev.your-company.com | https://api.staging.your-company.com | https://api.your-company.com |
| API Service | https://api.dev.your-company.com | https://api.staging.your-company.com | https://api.your-company.com |

How to Use

  1. Copy this file: cp credentials.example.md credentials.md
  2. Fill in your actual values
  3. Reference in AI conversations: "Check my credentials in .cursor/credentials.md"

The AI will look up values here during manual testing, token retrieval, etc.

.gitignore Template (cursor-gitignore)

# Credentials - NEVER commit these
credentials.md
credentials.local.md
# Context files - temporary/session data
context/*.md
!context/README.md
# Plans - may contain sensitive details
plans/*.md
# Terminal outputs
terminals/

Dead Code Detection

Objective

Identify unused code, functions, imports, and exports and remove them safely.

Action Required

After identifying dead code with high confidence, immediately remove it. Do not just report - clean up the codebase.

Instructions

  1. Scan for unused code:

    Unused Exports

    • Exported functions never imported elsewhere
    • Exported types/interfaces not used
    • Exported constants not referenced

    Unused Functions/Methods

    • Private functions never called
    • Methods that are defined but never invoked
    • Helper functions that lost their callers

    Unused Imports

    • Imported modules not used
    • Imported types not referenced
    • Side-effect imports that may no longer be needed

    Unused Variables

    • Declared but never read
    • Assigned but value never used
    • Function parameters not used
  2. Identify code smell patterns:

    • Commented-out code blocks
    • TODO/FIXME comments for features never implemented
    • Feature flags for features fully rolled out
    • Debug/development code left in
    • Deprecated functions still present
  3. Check for false positives:

    • Dynamically called functions
    • Reflection-based usage
    • Test utilities
    • Public API surface (intentionally exported)
    • Framework hooks (lifecycle methods, etc.)
  4. Verify safety before removal:

    • Check git blame for recent activity
    • Search for dynamic references
    • Check for external consumers

Output

Dead Code Found

| Location | Type | Code | Confidence | Notes |
|----------|------|------|------------|-------|

Safe to Remove

[List of files/functions safe to delete]

Needs Verification

[List of potentially dead code that needs manual verification]

Code Smells

  • Commented-out code blocks to review
  • Stale TODO comments
  • Unused feature flags

Cleanup Commands

# Commands to remove identified dead code

Estimated Impact

  • Lines of code removable: ~X
  • Files that could be deleted: Y
  • Bundle size reduction: ~Z KB (if applicable)

Action: Remove Dead Code

After analysis, immediately remove code marked as "Safe to Remove":

  1. Delete unused functions, imports, and variables
  2. Remove commented-out code blocks
  3. Delete unused files entirely
  4. Run linter and tests to verify no breakage
  5. Report what was removed

Debug Loop (Local Test → Fix → Repeat)

Objective

Automatically run tests, analyze failures, fix issues, and repeat until all tests pass.

When to Use

  • After making changes that might break tests
  • When debugging failing tests
  • When user says "make the tests pass" or "fix the tests"
  • As part of local validation before /ship

Instructions

Step 1: Identify Test Command

Determine the appropriate test command for the project:

# Check for test scripts
cat package.json | grep -A5 '"scripts"' | grep test
# or
ls Makefile 2>/dev/null && grep -E "^test:" Makefile
# or  
ls Gemfile 2>/dev/null && echo "bundle exec rspec"

Common test commands:

| Project Type | Command |
|--------------|---------|
| Node/TypeScript | yarn test or npm test |
| Python | pytest or make test |
| Ruby | bundle exec rspec |
| Go | go test ./... |

Step 2: Run Tests

Execute the test suite and capture output:

yarn test 2>&1 | tee /tmp/test-output.txt

Step 3: Analyze Results

If all tests pass:

✅ All tests passing!

Tests: X passed
Duration: Xs

Ready to proceed.

→ Exit the loop

If tests fail:

❌ Test Failures Detected

Failed: X tests
Passing: Y tests

Analyzing failures...

Step 4: Parse Failure Details

Extract failure information:

  • Test file and line number
  • Test name/description
  • Error message
  • Stack trace
  • Expected vs actual values
🔍 Failure Analysis:

1. src/services/userService.test.ts:45
   Test: "should create user with valid email"
   Error: Expected undefined to equal { id: 1, email: "test@example.com" }
   
2. src/utils/validation.test.ts:23
   Test: "should reject invalid phone numbers"
   Error: TypeError: Cannot read property 'match' of undefined

Step 5: Attempt Fix

For each failure:

  1. Read the failing test file
  2. Read the source file being tested
  3. Identify the root cause
  4. Apply the fix
🔧 Fixing: src/services/userService.test.ts:45

Root cause: Missing return statement in createUser function
Fix: Added return statement at line 23

Applying fix...

Step 6: Repeat (with Safeguards)

Loop back to Step 2 with these safeguards:

| Safeguard | Action |
|-----------|--------|
| Max iterations: 5 | Stop after 5 attempts |
| Same error twice | Different approach or ask for help |
| New failures introduced | Revert and try different fix |
| Timeout: 10 minutes | Stop and report status |
🔄 Iteration 2/5

Running tests again...

Step 7: Final Report

On success:

✅ Debug Loop Complete!

Iterations: 3
Tests fixed: 2
All tests now passing.

Changes made:
- src/services/userService.ts:23 - Added return statement
- src/utils/validation.ts:15 - Added null check

On max iterations reached:

⚠️ Debug Loop Stopped (max iterations)

After 5 attempts, these tests still fail:
- src/services/userService.test.ts:45

Tried approaches:
1. Added return statement - still failing
2. Fixed mock setup - still failing
3. Updated expected value - still failing

Manual investigation recommended.
Would you like me to explain my analysis?

Example Flow

$ /debug-loop

🔍 Detecting test command...
   Found: yarn test

🧪 Running tests (iteration 1/5)...
   ❌ 2 tests failed, 45 passed

🔍 Analyzing failures...
   1. userService.test.ts:45 - Missing return
   2. validation.test.ts:23 - Null reference

🔧 Fixing userService.ts:23...
   Added: return user;

🔧 Fixing validation.ts:15...
   Added: if (!input) return false;

🧪 Running tests (iteration 2/5)...
   ✅ All 47 tests passed!

✅ Debug Loop Complete!
   Iterations: 2
   Tests fixed: 2

Integration with Other Commands

  • Before /ship: Run /debug-loop to ensure tests pass locally
  • After /ci-monitor failure: Use for local reproduction and fixing
  • With /pattern-log: Log recurring test failure patterns

Debug Issue

Objective

Systematically diagnose and fix the reported issue or unexpected behavior.

Instructions

  1. Understand the problem:

    • What is the expected behavior?
    • What is the actual behavior?
    • When does this occur? (specific inputs, conditions, timing)
    • Any error messages or stack traces?
  2. Investigate:

    • Trace the code flow from input to the point of failure
    • Identify relevant variables and their states
    • Check for common issues:
      • Null/undefined values
      • Type mismatches
      • Off-by-one errors
      • Race conditions or timing issues
      • Missing error handling
      • Incorrect assumptions about data
  3. Diagnose:

    • Identify the root cause (not just symptoms)
    • Explain WHY the bug occurs
    • Consider if similar issues might exist elsewhere
  4. Fix:

    • Propose the minimal fix that addresses the root cause
    • Ensure the fix doesn't introduce new issues
    • Add appropriate error handling if missing
    • Suggest a test case that would catch this bug

Output

  • Clear explanation of what's causing the issue
  • The proposed fix with reasoning
  • Any related issues or areas of concern discovered

Dependency Audit

Objective

Scan project dependencies for security vulnerabilities, outdated packages, and license compliance issues.

Instructions

  1. Identify the package manager(s) in use:

    • npm/yarn/pnpm (package.json)
    • pip/poetry (requirements.txt, pyproject.toml)
    • bundler (Gemfile)
    • go modules (go.mod)
  2. Run security audit:

    • Check for known CVEs in dependencies
    • Identify packages with security advisories
    • Flag transitive dependencies with vulnerabilities
  3. Check for outdated packages:

    • List packages with available updates
    • Distinguish patch/minor/major updates
    • Note breaking changes in major updates
  4. Review for:

    • Abandoned/unmaintained packages (no updates in 2+ years)
    • Packages with known security issues
    • Duplicate dependencies at different versions
    • Unnecessarily large dependencies
  5. License compliance (if applicable):

    • Flag copyleft licenses (GPL) in proprietary projects
    • Identify license incompatibilities

Output

Security Vulnerabilities

| Package | Severity | CVE | Fixed In | Action |
|---------|----------|-----|----------|--------|

Outdated Packages

| Package | Current | Latest | Update Type | Breaking Changes? |
|---------|---------|--------|-------------|-------------------|

Recommendations

  • Critical: Must fix before merge
  • High: Should fix soon
  • Medium: Plan to address
  • Low: Nice to have

Commands to Run

# Provide specific commands for the detected package manager

Add Documentation

Objective

Add clear, useful documentation to the selected code.

Instructions

  1. Analyze what documentation is needed:

    • Function/method docstrings
    • Inline comments for complex logic
    • Module/file-level documentation
    • Type annotations if applicable
  2. Write documentation that includes:

    • Purpose: What does this code do and why?
    • Parameters: Name, type, description, and any constraints
    • Return value: Type and description of what's returned
    • Exceptions/Errors: What can be thrown and when
    • Examples: Usage examples for complex APIs
    • Side effects: Any mutations, I/O, or external calls
  3. Follow conventions:

    • Use the documentation style already established in the project
    • For Python: docstrings (Google, NumPy, or Sphinx style)
    • For JavaScript/TypeScript: JSDoc
    • For Ruby: YARD or RDoc
    • For other languages: appropriate conventions
  4. Keep it:

    • Concise but complete
    • Focused on the "why" not just the "what"
    • Up-to-date with the actual code behavior
    • Useful to someone unfamiliar with this code
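
A short TypeScript/JSDoc sketch of what this looks like in practice (the function and its billing rules are illustrative):

/**
 * Calculates the prorated refund for a cancelled subscription.
 *
 * Uses whole-day granularity because billing periods are anchored to calendar
 * days (the "why"); partial days are not refunded.
 *
 * @param amountCents - Full period price in cents; must be >= 0.
 * @param daysUsed - Whole days already consumed in the current period.
 * @param daysInPeriod - Length of the billing period in days; must be > 0.
 * @returns Refund amount in cents, rounded down.
 * @throws {RangeError} If daysInPeriod is zero or negative.
 * @example
 *   proratedRefund(3000, 10, 30); // => 2000
 */
export function proratedRefund(amountCents: number, daysUsed: number, daysInPeriod: number): number {
  if (daysInPeriod <= 0) throw new RangeError("daysInPeriod must be positive");
  const remainingDays = Math.max(daysInPeriod - daysUsed, 0);
  return Math.floor((amountCents * remainingDays) / daysInPeriod);
}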

Output

  • Add appropriate documentation without changing code logic
  • Prioritize public APIs and complex internal logic

Duplicate Functionality Check

Objective

Identify duplicate or near-duplicate functionality in the codebase and refactor to eliminate duplication (DRY principle).

Action Required

After identifying duplicates, immediately refactor to reuse existing code. Do not create new implementations of functionality that already exists.

Instructions

  1. Before implementing new code, search for existing implementations:

    • Search for similar function names
    • Search for similar logic patterns
    • Check utility files and shared modules
    • Look for existing services that do the same thing
  2. Identify duplication patterns:

    • Copy-pasted code blocks
    • Similar functions with minor variations
    • Reimplemented utilities that exist elsewhere
    • Multiple implementations of the same business logic
  3. For each duplicate found:

    • Compare the implementations
    • Identify which is more complete/correct
    • Determine the best location for shared code
    • Plan the refactor
  4. Refactoring options (in order of preference):

    • Reuse existing: Call the existing implementation directly
    • Extract to shared utility: Move to a shared module both can use
    • Extract to service: Create a dedicated service if complex
    • Document if intentional: If duplication is necessary, add comments explaining why
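
A TypeScript sketch of the "extract to shared utility" option (file and function names are illustrative):

// shared/currency.ts - the single implementation both call sites now import.
export function formatCents(cents: number, currency: string = "USD"): string {
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(cents / 100);
}

// invoice.ts - previously had its own copy of this formatting logic.
//   import { formatCents } from "./shared/currency";
//   const invoiceTotalLabel = (cents: number) => `Total: ${formatCents(cents)}`;

// receipt.ts - previously reimplemented the same logic with minor variations.
//   import { formatCents } from "./shared/currency";
//   const receiptLine = (cents: number) => `Paid ${formatCents(cents)}`;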

Output

Duplicates Found

| New Code | Existing Code | Similarity | Action |
|----------|---------------|------------|--------|

Analysis

For each duplicate:

  • Location 1: [path and function]
  • Location 2: [path and function]
  • Differences: [what's different between them]
  • Recommended action: [reuse/extract/document]

Action: Eliminate Duplication

After analysis, immediately refactor:

  1. If existing code is sufficient: Remove new code, call existing
  2. If new code is better: Replace existing with new, update all callers
  3. If both needed: Extract shared logic to utility, have both call it
  4. Run tests to verify refactor didn't break anything
  5. Report what was deduplicated

Environment Configuration Check

Objective

Verify all required environment variables, secrets, and configuration are properly set, and fix any issues found.

Action Required

After identifying configuration issues, immediately fix security issues (hardcoded secrets) and update documentation for missing env vars.

Instructions

  1. Scan code for environment dependencies:

    Environment Variables

    • Variables read via process.env, os.environ, ENV[], etc.
    • Required vs optional variables
    • Default values provided

    Secrets

    • API keys, tokens, credentials
    • Database connection strings
    • Third-party service credentials

    Configuration Files

    • Config files for different environments
    • Feature flags
    • Service endpoints
  2. Check for security issues:

    • Hardcoded secrets in code
    • Secrets committed to git
    • Secrets in logs or error messages
    • Overly permissive default values
  3. Verify environment parity:

    • Variables present in all environments (dev, staging, prod)
    • Appropriate values for each environment
    • Missing variables that could cause runtime errors
  4. Validate configuration:

    • Required variables are documented
    • Types and formats are correct
    • Validation exists for critical config
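
A TypeScript sketch of the typical fix, replacing a hardcoded secret with a validated environment variable (variable names are illustrative):

// Before: hardcoded secret, silently wrong outside one environment.
//   const paymentApiKey = "hard-coded-example-key";

// After: read from the environment and fail fast with a clear message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  paymentApiKey: requireEnv("PAYMENT_API_KEY"), // sensitive: never log this value
  apiBaseUrl: process.env.API_BASE_URL ?? "http://localhost:3000", // optional, safe default
};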

Output

Environment Variables Required

| Variable | Required | Default | Description | Sensitive |
|----------|----------|---------|-------------|-----------|

Security Issues

| Issue | Severity | Location | Remediation |
|-------|----------|----------|-------------|

Environment Parity Check

| Variable | Dev | Staging | Prod | Status |
|----------|-----|---------|------|--------|

Missing Documentation

  • Variables that need to be added to README/docs
  • Variables without clear descriptions

Configuration Validation

  • All required vars have validation
  • Secrets are not logged
  • Defaults are secure
  • Sensitive vars are marked appropriately

Recommended .env.example Updates

# Add these to .env.example
VARIABLE_NAME=example_value  # Description

Deployment Checklist

  • New variables added to all environments
  • Secrets stored in secret manager
  • Documentation updated
  • Team notified of new requirements

Action: Fix Issues

After analysis, immediately implement:

  1. Hardcoded secrets: Remove and replace with env var references
  2. Missing documentation: Update README, .env.example
  3. Validation gaps: Add validation for required config
  4. Report what was fixed and what requires manual action (adding to environments)
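
As a minimal TypeScript sketch of fixes 1 and 3 above (the variable names are hypothetical), a hardcoded key is replaced with a validated environment read:

```typescript
// Before (security issue): secret hardcoded in source
// const apiKey = "sk_live_abc123";

// After: read from the environment and validate at startup
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast with a clear message instead of an obscure runtime error later
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const apiKey = requireEnv('PAYMENT_API_KEY'); // documented in .env.example
```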

Explain Error

Objective

Explain the error message or exception in plain language and suggest solutions.

Instructions

  1. Parse the error:

    • Identify the error type/class
    • Extract the error message
    • Note the file, line number, and stack trace
    • Identify the immediate cause
  2. Explain in plain language:

    • What does this error actually mean?
    • Why is this happening in this context?
    • What conditions typically cause this error?
  3. Trace the root cause:

    • Follow the stack trace to find the origin
    • Identify the actual problem vs. where it manifested
    • Look for related code that might be involved
  4. Provide solutions:

    • Most likely fix based on the context
    • Alternative solutions if applicable
    • How to prevent this in the future
  5. Include:

    • Example fix code
    • Links to relevant documentation if helpful
    • Similar errors that might be related

Output

  • Clear explanation accessible to developers at any level
  • Step-by-step guidance to resolve the issue
  • Code examples showing the fix

Explain Code

Objective

Provide a clear, educational explanation of the selected code or file.

Instructions

  1. High-level summary: Start with a 1-2 sentence overview of what this code does and its purpose
  2. Step-by-step breakdown: Walk through the logic in order of execution
  3. Key concepts: Highlight any design patterns, algorithms, or language features being used
  4. Dependencies: Note any external libraries, APIs, or other parts of the codebase this relies on
  5. Edge cases: Point out any error handling or edge cases being addressed
  6. Potential gotchas: Mention anything non-obvious that could trip someone up

Output Format

  • Use clear section headers
  • Include inline code references when explaining specific lines
  • Keep explanations accessible to developers who may be less familiar with this area

Extract Function/Component

Objective

Extract selected code into a reusable function, method, or component.

Instructions

  1. Analyze the selection:

    • What is the core responsibility?
    • What inputs does it need?
    • What does it output/return?
    • Are there any side effects?
    • Could this be useful elsewhere?
  2. Design the extraction:

    • Choose a clear, descriptive name
    • Determine the parameters needed
    • Define the return type/value
    • Consider if it should be pure or can have side effects
  3. Extract and refactor:

    • Create the new function/component
    • Add appropriate type annotations
    • Add documentation (docstring/JSDoc)
    • Replace original code with a call to the new function
    • Ensure all edge cases are still handled
  4. Consider:

    • Where should this live? (same file, new file, shared utils)
    • Should it be exported/public?
    • Are there related extractions that would help?
    • Does this create opportunities for reuse elsewhere?

Output

  • The extracted function/component with proper typing and docs
  • Updated original code that uses the extraction
  • Suggestions for where to place it and any further refactoring
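
For example, a small extraction following the steps above might look like this (names are hypothetical):

```typescript
// Before: validation logic buried inside a handler
function handleSignup(email: string, password: string): void {
  if (!email.includes('@') || password.length < 8) {
    throw new Error('Invalid signup data');
  }
  // ...create the account
}

// After: the rule is extracted into a small, documented, reusable function
/** Returns true when the email looks valid and the password meets the minimum length. */
export function isValidSignup(email: string, password: string): boolean {
  return email.includes('@') && password.length >= 8;
}

function handleSignupRefactored(email: string, password: string): void {
  if (!isValidSignup(email, password)) {
    throw new Error('Invalid signup data');
  }
  // ...create the account
}
```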

Fix Linting & Type Errors

Objective

Resolve all linting errors, type errors, and code style violations.

Instructions

  1. Identify issues:

    • Linting errors (ESLint, RuboCop, Pylint, etc.)
    • Type errors (TypeScript, mypy, Sorbet)
    • Formatting issues (Prettier, Black, etc.)
    • Code style violations
  2. Fix each issue by:

    • Understanding what the rule enforces and why
    • Making the minimal change to satisfy the rule
    • Preserving the original intent of the code
    • Not just disabling rules without justification
  3. Prioritize:

    • Errors over warnings
    • Type safety issues
    • Security-related rules
    • Maintainability rules
  4. When fixing:

    • Group related fixes together
    • Explain non-obvious fixes
    • If a rule should be disabled, explain why
    • Consider if the fix might change behavior

Constraints

  • Do NOT disable rules without strong justification
  • Do NOT change logic—only fix violations
  • Maintain code readability
  • Follow the project's existing conventions

Output

  • Fixed code with all errors resolved
  • Summary of changes made
  • Any rules that couldn't be satisfied and why

Generate Regex

Objective

Create a regular expression pattern that matches the specified criteria.

Instructions

  1. Understand requirements:

    • What strings should match?
    • What strings should NOT match?
    • Are there edge cases to consider?
    • What capture groups are needed?
  2. Build the regex:

    • Start simple and add complexity as needed
    • Use named capture groups when helpful
    • Consider anchors (^, $) for full string matching
    • Handle optional parts appropriately
    • Escape special characters properly
  3. Provide:

    • The regex pattern
    • Explanation of each part
    • Test cases showing matches and non-matches
    • Common variations if applicable
  4. Consider:

    • Performance for long strings
    • Unicode support if needed
    • Greedy vs. lazy quantifiers
    • Language-specific regex flavors

Output

  • The regex pattern
  • Breakdown of what each part does
  • Example usage in relevant language
  • Test cases demonstrating the pattern
  • Any caveats or limitations
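
For example, a request like "match ISO dates (YYYY-MM-DD)" could produce output along these lines (a generic sketch, not tied to any project):

```typescript
// Pattern: anchored ISO date (YYYY-MM-DD) with named capture groups
const isoDate = /^(?<year>\d{4})-(?<month>0[1-9]|1[0-2])-(?<day>0[1-9]|[12]\d|3[01])$/;

// Matches
console.log(isoDate.test('2024-01-15')); // true
console.log(isoDate.exec('2024-01-15')?.groups); // { year: '2024', month: '01', day: '15' }

// Non-matches
console.log(isoDate.test('2024-13-01')); // false (month out of range)
console.log(isoDate.test('15-01-2024')); // false (wrong order)

// Caveat: this validates format only, not calendar correctness
// (e.g. 2024-02-31 passes the day-range check but is not a real date).
```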
---
description: "Global project rules that apply to all files and interactions"
alwaysApply: true
---
# Git Command Restrictions
NEVER run git commands (git add, git commit, git push, git pull, git checkout, git branch, git merge, git stash, etc.) without explicit user request.
Always ask "Should I run git commands?" before any git operations.
Focus on code changes, testing, and analysis - leave all git operations to the user.
## ⛔ NEVER Push to Main/Master
**Pushing directly to `main` or `master` is ABSOLUTELY FORBIDDEN.**
- NEVER run `git push origin main` or `git push origin master`
- NEVER commit directly to main/master
- If on main/master branch, STOP and tell the user to create a feature branch
- This applies to ALL workflows, including `/ship`
- There are NO exceptions to this rule
## The `/ship` Command
The `/ship` command (add, commit, push + PR description) requires **EXPLICIT user invocation**.
- **NEVER** run `/ship` automatically or suggest running it without the user typing `/ship`
- **NEVER** run git add, commit, or push as part of any workflow unless the user explicitly invokes `/ship`
- Even after `/ship` is invoked, **wait for user approval** on each git command before executing
- The user must type `/ship` themselves - do not run it on their behalf
## Absolute Rule: Git Add/Commit/Push
**`git add`, `git commit`, and `git push` are ONLY allowed via the `/ship` command.**
- Do NOT run these commands individually, even if the user asks for "just a commit"
- Do NOT suggest running these commands outside of `/ship`
- If user wants to add/commit/push, tell them to use `/ship`
- The ONLY exception: if user explicitly says "run git add" (etc.) as a standalone request outside of any workflow
# General AI Behavior
- Ask before making destructive changes (deleting files, dropping tables, etc.)
- Explain reasoning for suggested changes
- Prioritize code quality and testing
- Follow existing patterns and conventions in the codebase
- Don't over-engineer solutions; keep changes minimal and focused
# Self-Review Before Presenting (Prompt Chaining)
Before presenting ANY significant output (code, tests, analysis, plans, reviews), apply a self-review step:
1. **Generate**: Create the initial output
2. **Review**: Re-read as if you're the reviewer. Ask yourself:
- Did I miss anything obvious?
- Does this match existing patterns in the codebase?
- Would this pass my own code review?
- Are there edge cases I didn't handle?
3. **Refine**: Fix any issues found BEFORE presenting to the user
**This applies to:**
- Code implementations
- Test files
- PR descriptions and commit messages
- Security and code reviews
- Plans and analysis
**Why this matters:** Catching errors before the user sees them saves round-trips and builds trust. One self-review pass often catches 80% of issues.
# Self-Correction & Troubleshooting
When a tool call or command fails (especially 401/403/404 errors):
1. **STOP & THINK**: Do not just report the error to the user immediately.
2. **HYPOTHESIZE**: Identify at least 3 potential causes (e.g., wrong environment, missing scope, incorrect URL/ID).
3. **VERIFY**: Check configuration files, code, or documentation to validate your hypotheses.
- Search the codebase for similar patterns or IDs.
- Check `auth-config/` repo for scopes/grants if it's an auth error.
- **For authentication "insufficient permissions" errors**: Check `your auth provider config` to verify which audiences the client is allowed to use (e.g., `https://frontend.your-company.com` vs `api.your-company.com`).
- Verify URLs against `architecture.mdc` or config files.
4. **RETRY**: Try a corrected approach based on your findings.
5. **ESCALATE**: Only ask the user for help if you have exhausted all research options and proven it's a dead end.
# Session Management
- **After ~30 exchanges**: Suggest switching to a fresh conversation, but first:
1. Update the plan file in `.cursor/plans/` with current progress and next steps
2. Save any important context to `.cursor/context/` (decisions, discoveries, blockers)
3. Update `.cursor/patterns.md` if any patterns were discovered
4. Provide a handoff summary the user can paste into the new conversation
- **Handoff summary format**:
```
Continuing work on: [task name]
Plan file: .cursor/plans/[filename]
Context files: .cursor/context/[relevant files]
Current status: [where we left off]
Next step: [what to do next]
```
- **Fresh sessions are fine**: Starting a new session is always safe since rules, patterns, and context files all persist and auto-load
# Code Style
- Use descriptive variable and function names
- Add meaningful comments for complex logic (explain "why", not "what")
- Follow DRY principles but don't over-abstract
- Prefer readability over cleverness
# Testing
- When adding new functionality, suggest relevant tests
- When fixing bugs, suggest regression tests
- Follow existing test patterns in the repository
# Self-Improvement Commands
Use these commands to learn from mistakes and improve over time:
| Command | When to Use | Auto-Triggered? |
|---------|-------------|-----------------|
| `/retro` | After completing complex tasks with multiple iterations | Suggest after Phase 6 |
| `/pattern-log` | When discovering tricky issues or useful patterns | Auto by `/ci-monitor` |
| `/rules-review` | Quarterly, or after major codebase changes | Remind every 10+ sessions |
| `/ci-monitor` | After pushing code to check CI status | Called by `/ship` |
**Auto-trigger conditions:**
- After `/ci-monitor` fixes a failure → auto-log to `.cursor/patterns.md`
- After complex session (3+ iterations) → suggest `/retro`
- Pattern has 3+ occurrences → suggest promoting to rule
- 10+ sessions since last review → remind about `/rules-review`
# Auto-Retro on Correction (Background Learning)
**When the user corrects you**, automatically spawn a background subagent to learn from the mistake.
## Detection Patterns
Trigger when user says things like:
- "No, that's wrong..."
- "You should have..."
- "Why did I have to tell you..."
- "That's not correct..."
- "You need to look at..."
- "I already told you..."
- "Please investigate this yourself"
- Any explicit correction of behavior or output
## Action: Immediate Inline Retro
When correction detected, **immediately** do a mini-retro inline:
1. **Acknowledge**: "Got it, I'll fix that pattern now."
2. **Analyze**: Briefly identify what went wrong (1 sentence)
3. **Update**: Make the specific rule/command update right now
4. **Log**: Append to `.cursor/patterns.md`
5. **Continue**: Resume helping with the main task
## Key Points
- Do the fix **immediately** - don't defer it
- Keep it brief (under 30 seconds of work)
- Always log to `patterns.md` for visibility
- Don't ask permission - just fix it
## ⚠️ CRITICAL: Generalize the Fix
**Extract the general pattern from the specific correction.** Rules should apply to ALL projects, not just the current one.
| User's Specific Correction | ❌ Too Specific | ✅ Generalized |
|----------------------------|-----------------|----------------|
| "Check auth config for specific client audience" | "For specific client, check auth config" | "When receiving auth/permission errors, investigate the auth configuration files before asking user" |
| "You should have looked at the unit tests for that mutation" | "Check updateProfile tests" | "Before implementing, read existing unit tests to understand expected behavior and edge cases" |
| "The test cases should include requirements from the ticket" | "For TICKET-123, derive tests from your issue tracker" | "Derive test cases from multiple sources: requirements, code, affected clients, regression risks" |
**Ask yourself**: "Would this rule help in a completely different project with different services?"
## Example
**User**: "Why did I have to tell you to investigate the audience? You should do that yourself."
**Response**:
```
Got it - I should self-investigate permission errors. Updating now...
[Updates manual-test.md with troubleshooting steps for authentication errors]
[Logs to patterns.md: "Authentication permission errors: Always check your auth provider config before asking user"]
Fixed. Now continuing with <main task>...
```
## What to Update
| Correction Type | Update Target |
|-----------------|---------------|
| Research gaps | `global.mdc` (Self-Correction section) |
| Missing test cases | `manual-test.md`, `workflow.mdc` |
| Wrong assumptions | `architecture.mdc` |
| Tool usage errors | Relevant command file |
| Pattern discovered | `patterns.md` only |
## Alternative: Background Subagent (For Complex Corrections)
For corrections that require deeper analysis, spawn a background subagent:
```bash
cursor-agent -p "BACKGROUND RETRO: <description of what went wrong>.
1. Analyze root cause
2. GENERALIZE the fix - make it applicable to ALL projects, not just this one
3. Update relevant .cursor rule/command with the generalized fix
4. Log to .cursor/patterns.md" --output-format text --force &
```
**Note**: Requires `cursor-agent login` to be authenticated (check with `cursor-agent status`).
# Cross-References (Source of Truth)
For detailed guidance, see these specialized rules:
- **Planning**: See `planning.mdc` for plan-first workflow
- **Subagents**: See `subagents.mdc` for parallel task execution
- **Workflow**: See `workflow.mdc` for the full development lifecycle
- **Architecture**: See `architecture.mdc` for system context and patterns
- **Tickets**: See `ticket-integration.mdc` for issue tracker integration
- **Patterns**: See `.cursor/patterns.md` for recurring issues and learnings

Impact Analysis

Objective

Analyze the ripple effects of code changes to understand what other parts of the system may be affected, and take action on high-risk findings.

Action Required

After identifying impacted areas, immediately address high-risk impacts: update tests, add documentation, or notify stakeholders.

Instructions

  1. Identify changed entities:

    • Functions, classes, or methods modified
    • Types/interfaces changed
    • API endpoints affected
    • Database schema changes
    • Event schemas modified
  2. Trace upstream dependencies (who calls this code):

    • Direct callers within the codebase
    • Other services that depend on changed APIs
    • Frontend components using changed endpoints
    • Event consumers listening to modified events
  3. Trace downstream dependencies (what this code calls):

    • External services called
    • Database tables accessed
    • Events emitted
    • Third-party APIs invoked
  4. Assess impact areas:

    • Breaking changes: Will existing consumers break?
    • Behavioral changes: Will behavior change for existing callers?
    • Performance impact: Could this affect latency/throughput?
    • Data impact: Could this affect existing data?
  5. Identify testing gaps:

    • Are affected areas covered by tests?
    • Do integration tests exist for impacted flows?
    • Are there E2E tests covering the user journeys?
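
For example, a hypothetical signature change illustrates how a small edit ripples upstream:

```typescript
// Changed entity (hypothetical): getUser now returns null instead of throwing when the user is missing
interface User { id: string; email: string }
const usersById = new Map<string, User>();

function getUser(id: string): User | null {
  return usersById.get(id) ?? null;
}

// Upstream impact: every caller that assumed a non-null result must now handle null,
// so the analysis should list each call site and flag missing test coverage for the null path.
const user = getUser('123');
if (user === null) {
  // new branch that did not exist before the change
  throw new Error('User not found');
}
console.log(user.email);
```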

Output

Changed Entities

  • List of functions/types/endpoints modified

Upstream Impact (Consumers)

| Consumer | Location | Impact Level | Notes |
|----------|----------|--------------|-------|

Downstream Impact (Dependencies)

| Dependency | Type | Impact Level | Notes |
|------------|------|--------------|-------|

Risk Assessment

  • High Risk: [Areas requiring extra attention]
  • Medium Risk: [Areas to monitor]
  • Low Risk: [Minimal impact expected]

Recommended Actions

  • Notify team X about API changes
  • Update integration tests for Y
  • Monitor metrics for Z after deployment

Action: Address High-Risk Impacts

After analysis, immediately implement:

  1. Missing test coverage: Add tests for impacted areas (run /coverage-gaps if needed)
  2. Breaking changes to consumers: Run /breaking-changes and /migration-guide
  3. Documentation gaps: Update relevant docs
  4. Report what actions were taken

Manual Test (Feature Verification)

Objective

Verify that a shipped feature actually works as intended by testing against acceptance criteria, using real environments and integrations.

⚠️ SAFETY RULES

READ-ONLY OPERATIONS ONLY:

  • ✅ Cloud CLI read commands (describe, get, list)
  • ✅ Database SELECT queries
  • ✅ GraphQL queries
  • ✅ API GET requests
  • ❌ NO delete, update, put, create via cloud CLI
  • ❌ NO destructive database operations
  • ❌ NO mutations that modify production data

Always confirm with user before running any command that touches real environments.


Prerequisites

Credentials File

Check .cursor/credentials.md for stored credentials (Client IDs, secrets, test users).

If this file doesn't exist, copy from .cursor/credentials.example.md and fill in your values:

cp .cursor/credentials.example.md .cursor/credentials.md
# Then edit with your actual credentials

⚠️ .cursor/credentials.md is gitignored - safe to store secrets locally.

Cloud Provider Access

# Login to your cloud provider
# e.g., aws sso login, gcloud auth login, az login

# Verify you're authenticated
# e.g., aws sts get-caller-identity

Authentication Token (Requires Investigation)

Different features require different tokens. Before obtaining a token, investigate what the feature expects:

Step 1: Determine Token Requirements

Check the code for token expectations:

# Search for token validation in the service
grep -rn "accessToken\|idToken\|authorization\|Bearer" src/

# Check middleware or auth handlers
grep -rn "verify\|decode\|jwt" src/middleware/ src/auth/

# Look for audience/client ID requirements
grep -rn "audience\|client_id\|aud\|azp" src/

Find which clients have the required scopes: If you need specific scopes (e.g., create:users), check your auth provider config:

grep -rn "create:users" your auth config

Then scroll up to find the client_id for that grant.

⚠️ For "Client has not been granted scopes" errors: Always check your auth config to verify which audiences the client is allowed to use. Different clients have different allowed audiences:

  • Most frontend clients: https://frontend.your-company.com
  • Backend services: api.your-company.com
  • Check the audience field in the client grant configuration before asking the user.

Common patterns:

| Service/Feature | Usually Needs | Why |
|-----------------|---------------|-----|
| API Gateway / GraphQL resolvers | Access Token | Validates permissions/scopes |
| User profile endpoints | ID Token | Contains user identity claims |
| M2M (machine-to-machine) | Client Credentials | Service-to-service auth |
| Frontend SPA | Both | ID for identity, Access for API calls |

Step 2: Identify Required Parameters

Determine:

  1. Token type: Access token vs. ID token vs. both
  2. Audience: Which API/resource the token is for
  3. Client ID: Which auth provider application to use
  4. User context: Does it need a specific user? Test user? Real user?
  5. Scopes: What permissions are required?

Check auth provider applications:

# List relevant auth applications in your tenant
# Check which client ID the feature uses

Check the service configuration:

# Look for auth config in the service
cat .env | grep -i "auth\|oauth"
cat config/*.json | grep -i "auth\|oauth"

Step 3: Obtain the Correct Token

Option A: Test user login flow

1. Login to the application as a test user
2. Open DevTools → Network tab → find an API request
3. Check the Authorization header
4. Note: This gives you what a real user would have

Option B: Auth Provider Dashboard (for access tokens)

1. Your auth provider dashboard → Applications → APIs → [Your API]
2. "Test" tab → Get a test access token
3. Note: This may not have all the claims a real token would

Option C: OAuth2 Token Endpoint (for specific tokens)

# Password grant (if enabled) - for testing only
curl -X POST https://<your-auth-domain>/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "password",
    "client_id": "<client-id>",
    "client_secret": "<client-secret>",
    "username": "<test-user-email>",
    "password": "<test-user-password>",
    "audience": "<api-audience>",
    "scope": "openid profile email"
  }'

# Response includes both access_token and id_token

Option D: Client Credentials (M2M)

curl -X POST https://<your-auth-domain>/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "client_credentials",
    "client_id": "<m2m-client-id>",
    "client_secret": "<m2m-client-secret>",
    "audience": "<api-audience>"
  }'

Step 4: Verify Token Contents

Before using, verify the token has what the feature expects:

# Decode the token (without verifying signature) to inspect claims
# Use jwt.io or:
echo "<token>" | cut -d'.' -f2 | base64 -d 2>/dev/null | jq .

Check for:

  • aud (audience) - matches expected API?
  • azp (authorized party) - correct client?
  • sub (subject) - correct user?
  • scope or permissions - has required permissions?
  • Token type in header - is it at+jwt (access) or just JWT (ID)?
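
If you prefer scripting this check over jwt.io, a small Node/TypeScript sketch can decode the payload (the TOKEN environment variable name is just an example):

```typescript
// Inspection only - this does NOT verify the signature, so never use it for auth decisions
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split('.')[1];
  if (!payload) throw new Error('Not a JWT');
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

const claims = decodeJwtPayload(process.env.TOKEN ?? '');
console.log({
  aud: claims.aud,     // audience - does it match the expected API?
  azp: claims.azp,     // authorized party - the correct client?
  sub: claims.sub,     // subject - the correct user?
  scope: claims.scope, // required permissions present?
});
```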

Instructions

Step 1: Gather Context

Fetch issue ticket details (if using MCP):

# (Optional) If MCP configured: mcp_issue_get({ issue_key: "<TICKET-ID>" })
# Or just manually read the ticket

Extract:

  • Acceptance criteria
  • Expected behavior
  • Test scenarios mentioned

Review the code changes:

# See what files were changed
git diff main...<branch-name> --name-only

# See the actual changes
git diff main...<branch-name>

Step 2: Derive ALL Test Cases (MANDATORY)

⚠️ CRITICAL: Do NOT test just one happy path. You MUST derive test cases from ALL sources:

Source 1: Requirements (Issue Tracker/Docs)

  • Acceptance criteria from the ticket
  • User stories and expected behaviors
  • Edge cases mentioned in the description
  • Business rules from team documentation

Source 2: Code Analysis

  • Unit tests: What scenarios do they cover?
  • Error handling: Search for throw statements
  • Code branches: What if/else conditions exist?
  • Authorization: What auth cases are in the resolver?
# Find unit tests for the feature
grep -rn "describe\|it(" tests/ | grep -i "<feature-name>"

# Check the resolver/handler for auth conditions
cat resolvers/mutations/<feature>.ts | grep -i "isAuthorized\|hasRole\|hasScope"

# Look for error handling in the service
grep -rn "throw\|Error\|reject" src/services/<service>.ts

Source 3: System/Integration Context

  • Affected Clients: Who calls this endpoint? (e.g., mobile app, web frontend, partner APIs)
  • Cross-Service Impacts: Does it emit events? Update other systems?
  • Regression Risks: What existing functionality could break?

Source 4: Authorization Matrix

  • Who should be allowed? (M2M clients, users, admins)
  • Who should be denied? (Verify restrictions are enforced)
  • What scopes/audiences are required?

Step 3: Create Comprehensive Test Plan

Based on acceptance criteria AND code analysis, create a test plan covering ALL categories:

📋 Manual Test Plan for <TICKET-ID>

Branch: <branch-name>
Environment: <dev/staging>

## Prerequisites
- [ ] Cloud provider authenticated
- [ ] Access token obtained
- [ ] Environment running

## Test Categories (ALL REQUIRED)

### Happy Paths
| # | Test Case | Input | Expected | Status |
|---|-----------|-------|----------|--------|
| 1 | Basic success | valid input | success response | ❓ Pending |
| 2 | Create vs Update | user without existing data | creates new record | ❓ Pending |
| 3 | Input sanitization | " spaces ", "UPPERCASE" | trimmed, formatted | ❓ Pending |

### Error Cases
| # | Test Case | Input | Expected Error | Status |
|---|-----------|-------|----------------|--------|
| 4 | Resource not found | invalid ID | "not found" error | ❓ Pending |
| 5 | Invalid state | inactive user | "not allowed" error | ❓ Pending |
| 6 | Constraint violation | duplicate unique field | constraint error | ❓ Pending |

### Authorization Cases
| # | Test Case | Token Type | Expected | Status |
|---|-----------|------------|----------|--------|
| 7 | Valid M2M token | M2M w/ scope | Allowed | ❓ Pending |
| 8 | User token - own record | user updating self | Allowed | ❓ Pending |
| 9 | User token - with role | admin/coach | Allowed | ❓ Pending |
| 10 | Invalid token | no scope/audience | Unauthorized | ❓ Pending |

### Edge Cases (from unit tests)
| # | Test Case | Scenario | Expected | Status |
|---|-----------|----------|----------|--------|
| 11 | <from unit test> | | | ❓ Pending |

## Completion Criteria
- [ ] ALL happy paths tested
- [ ] ALL error cases tested  
- [ ] ALL authorization cases tested
- [ ] ALL edge cases from unit tests verified

⛔ DO NOT mark testing as "complete" until ALL categories have been tested.

Step 4: Environment Setup

For local testing:

# Navigate to the service
cd <service-directory>

# Start the service (if not running)
yarn dev
# or
docker-compose up

For staging/dev testing:

# Get the endpoint URL
# Usually: https://<service>.<env>.your-company.com

# Verify the deployment includes your changes
# Check the version or commit hash if available

Step 5: Execute Tests

For GraphQL APIs (e.g., your GraphQL API)

Construct the query/mutation:

# Example: Test a query
curl -X POST https://<endpoint>/graphql \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "query": "query { user(id: \"<test-user-id>\") { id email } }"
  }'

Using GraphQL Playground (if available):

1. Navigate to https://<endpoint>/graphql
2. Set Authorization header
3. Run your query/mutation
4. Verify response matches expected

For REST APIs

# GET request
curl -X GET https://<endpoint>/api/resource \
  -H "Authorization: Bearer $TOKEN"

# Check response

For Cloud Resources (Optional)

Database (read-only verification):

# Scan a table (limit results) - example shown with the AWS DynamoDB CLI; substitute your database CLI or admin tool
aws dynamodb scan \
  --table-name <table-name> \
  --limit 5 \
  --profile your-dev-profile

# Query a specific item to verify state
aws dynamodb get-item \
  --table-name <table-name> \
  --key '{"pk": {"S": "<value>"}}' \
  --profile your-dev-profile

S3 (read-only):

# List objects - example shown with the AWS CLI; substitute your provider's storage CLI
aws s3 ls s3://<bucket-name>/ --profile your-dev-profile

# Get object metadata
aws s3api head-object \
  --bucket <bucket-name> \
  --key <object-key> \
  --profile your-dev-profile

Serverless function logs (if applicable):

# Get recent logs - example shown with the AWS CLI; substitute your provider's logs CLI
aws logs tail <log-group-name> \
  --since 1h \
  --profile your-dev-profile

Cloud API (if applicable):

# List APIs
# List your cloud APIs using your provider CLI

For Auth Provider Verification

Check user state:

# (Optional) mcp_auth_get_user({ user_id: "<auth-user-id>" })

Check recent auth logs:

# (Optional) mcp_auth_search_logs({
#   q: "user_id:<user-id>",
#   per_page: 10
# })

Step 6: Document Results

After testing, document:

## Test Results for <TICKET-ID>

**Environment:** <dev/staging>
**Date:** <date>
**Tester:** <name>

### Results

| # | Criterion | Result | Notes |
|---|-----------|--------|-------|
| 1 | <AC> | ✅ Pass / ❌ Fail | <notes> |
| 2 | <AC> | ✅ Pass / ❌ Fail | <notes> |

### Issues Found
- <issue 1>
- <issue 2>

### Evidence
- Screenshot/response attached
- Logs reviewed at <timestamp>

Step 7: Handle Failures

If tests fail:

  1. Document the failure with details
  2. Return to development - don't merge
  3. Fix the issue and re-run CI
  4. Re-test the specific failure

Example: Testing Your API - TICKET-123

📋 Manual Test Plan for TICKET-123

Branch: TICKET-123-add-user-endpoint
Environment: your-dev-profile

## Setup
# your-cloud-cli auth login

## Test Cases

### 1. Query new field returns data
curl -X POST https://your-api.dev.example.com/graphql \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"query": "{ user(id: \"test-123\") { newField } }"}'

Expected: { "data": { "user": { "newField": "value" } } }

### 2. Null handling works
Test with user that doesn't have the field set.
Expected: { "data": { "user": { "newField": null } } }

### 3. Verify Database State (Optional)
# Example shown with the AWS DynamoDB CLI; substitute your database CLI
aws dynamodb get-item \
  --table-name users-dev \
  --key '{"pk": {"S": "USER#test-123"}}' \
  --profile your-dev-profile

Expected: Item contains new attribute

Integration with Workflow

This command is typically run after:

  1. /ship - Code is pushed and PR created
  2. /ci-monitor - CI passes
  3. /manual-test - Verify it actually works

If manual tests pass → ready for code review and merge.
If manual tests fail → return to development.

Migration Guide

Objective

Create step-by-step migration instructions for consumers affected by breaking changes or major updates.

Instructions

  1. Identify what needs migration:

    API Changes

    • Endpoint URL changes
    • Request/response format changes
    • Authentication changes
    • Deprecated endpoints being removed

    Code Changes

    • Function signature changes
    • Type/interface changes
    • Import path changes
    • Removed functionality

    Database Changes

    • Schema changes affecting queries
    • Data format changes
    • New required fields

    Configuration Changes

    • New required configuration
    • Changed configuration format
    • Removed configuration options
  2. Create migration steps:

    • Logical order of changes
    • Before and after code examples
    • Automated migration scripts if possible
    • Verification steps
  3. Define migration timeline:

    • Deprecation notice date
    • Migration deadline
    • Support end date for old version
  4. Provide resources:

    • Links to updated documentation
    • Contact for migration support
    • FAQ for common issues
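
For example, a migration step for a changed function signature might be presented like this (the function and parameter names below are purely illustrative):

```typescript
// Before (v1): positional arguments
// createUser('ada@example.com', 'Ada', 'Lovelace');

// After (v2): a single options object, so later optional fields are not breaking changes
interface CreateUserOptions {
  email: string;
  firstName: string;
  lastName: string;
  locale?: string; // optional field added in v2
}

async function createUser(options: CreateUserOptions): Promise<void> {
  // ...calls the v2 API (stubbed here for illustration)
}

void createUser({ email: 'ada@example.com', firstName: 'Ada', lastName: 'Lovelace' });
```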

Output

Migration Guide: [Change Name]


Overview

  • What's Changing: [Brief description]
  • Why: [Reason for the change]
  • Who's Affected: [List of affected consumers]
  • Timeline: [Key dates]

Timeline

| Date | Milestone |
|------|-----------|
| [Date] | Deprecation notice sent |
| [Date] | New version available |
| [Date] | Old version deprecated |
| [Date] | Old version removed |

Migration Steps

Step 1: [First Change]

Before:

// Old way

After:

// New way

Notes: [Any gotchas or considerations]

Step 2: [Second Change]

...


Breaking Changes Summary

| Change | Old | New | Action Required |
|--------|-----|-----|-----------------|

Automated Migration (if available)

# Run this codemod/script to automate migration
npx migration-tool --from v1 --to v2

Verification Checklist

After migration, verify:

  • [Check 1]
  • [Check 2]
  • All tests pass
  • No deprecation warnings

FAQ

Q: [Common question]
A: [Answer]


Support

  • Questions: [Slack channel or email]
  • Issues: [Issue tracker link]
  • Documentation: [Updated docs link]

Optimize Performance

Objective

Identify and fix performance bottlenecks in the selected code.

Instructions

  1. Analyze for common issues:

    • Time complexity: O(n²) or worse algorithms that could be improved
    • Database: N+1 queries, missing indexes, inefficient queries
    • Memory: Unnecessary object creation, memory leaks, large allocations
    • I/O: Blocking operations, missing caching, redundant API calls
    • Loops: Unnecessary iterations, work done inside loops that could be outside
  2. Identify bottlenecks:

    • Which operations are likely slowest?
    • What's the data size/scale this needs to handle?
    • Where are the hot paths?
  3. Propose optimizations:

    • Use more efficient data structures (Set vs Array for lookups)
    • Add caching where appropriate
    • Batch operations instead of one-by-one
    • Use lazy evaluation/pagination
    • Parallelize independent operations
    • Optimize database queries (indexes, eager loading)
  4. Consider tradeoffs:

    • Readability vs. performance
    • Memory vs. CPU
    • Complexity added vs. performance gained
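
For instance, one of the most common quick wins from the list above, replacing a repeated Array lookup with a Set:

```typescript
// Inefficient: O(n * m) - Array.prototype.includes scans the whole array for every item
function findActiveSlow(userIds: string[], activeIds: string[]): string[] {
  return userIds.filter((id) => activeIds.includes(id));
}

// Better: O(n + m) - build a Set once, then do constant-time lookups
function findActive(userIds: string[], activeIds: string[]): string[] {
  const active = new Set(activeIds);
  return userIds.filter((id) => active.has(id));
}
```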

Constraints

  • Only optimize where it matters (measure, don't guess)
  • Don't sacrifice correctness for speed
  • Document any non-obvious optimizations
  • Consider if the optimization is worth the added complexity

Output

  • Specific bottlenecks identified
  • Optimized code with explanations
  • Expected performance improvement

Pattern Log

This file tracks recurring issues, useful patterns, and learnings discovered during development sessions.


Recurring Issues

| ID | Issue | Occurrences | Solution | Last Seen | Promoted to Rule? |
|----|-------|-------------|----------|-----------|-------------------|
| P001 | Example: Test pattern mismatch | 1 | ALWAYS find and copy existing similar tests | YYYY-MM-DD | No |

Useful Patterns

| ID | Pattern | Context | Example File |
|----|---------|---------|--------------|
| U001 | Self-improving loop: Fix → Learn → Update Rules | When fixing issues, ask "how do we prevent this?" | ci-monitor.md |
| U002 | Mock external services in tests | Unit testing | authService.test.ts |

Gotchas by Service

your-service

  • Add gotchas as you discover them

your-other-service

  • Add gotchas as you discover them

Session Retrospectives

| Date | Session Topic | Patterns Logged | Key Learnings |
|------|---------------|-----------------|---------------|
| YYYY-MM-DD | Example session | U001 | What was learned |

How to Use This File

  1. Adding entries: Use /pattern-log command or entries are auto-added by /ci-monitor and /retro
  2. Searching: Ask "any known issues with X?" and this file will be searched
  3. Promoting: When occurrences ≥ 3, promote to a rule in .cursor/rules/

Pattern Log

This file tracks recurring issues, useful patterns, and learnings discovered during development sessions.


Recurring Issues

| ID | Issue | Occurrences | Solution | Last Seen | Promoted to Rule? |
|----|-------|-------------|----------|-----------|-------------------|
| P001 | Example issue | 1 | Example solution | YYYY-MM-DD | No |

Useful Patterns

| ID | Pattern | Context | Example File |
|----|---------|---------|--------------|
| U001 | Example pattern | When to use it | example.ts |

Gotchas by Service

your-service

  • Add gotchas as you discover them

Session Retrospectives

| Date | Session Topic | Patterns Logged | Key Learnings |
|------|---------------|-----------------|---------------|
| YYYY-MM-DD | Example session | U001 | What was learned |

How to Use This File

  1. Adding entries: Use /pattern-log command or entries are auto-added by /ci-monitor and /retro
  2. Searching: Ask "any known issues with X?" and this file will be searched
  3. Promoting: When occurrences ≥ 3, promote to a rule in .cursor/rules/
---
description: "Research-first planning with assumptions validation"
alwaysApply: true
---
# Planning First Approach
For ANY non-trivial task, you MUST research first, then create a plan with assumptions for human validation.
## When to Create a Plan
- New features or endpoints
- Bug fixes requiring investigation
- Refactoring across multiple files
- Any task with 3+ steps
- Any task that will take more than a few minutes
## When NOT to Create a Plan
- Simple questions/explanations
- Single-line fixes
- Trivial file edits (typos, formatting)
- Reading/analyzing code without changes
---
## Phase 1: Research First (MANDATORY)
Before proposing ANY plan, you MUST:
1. **Read existing code**: Find relevant files and understand current implementation
2. **Check similar implementations**: Search for patterns in the codebase
3. **Infer from context**: Use naming conventions, folder structure, existing patterns
4. **Read documentation**: Check READMEs, inline comments, type definitions
**DO NOT ask questions until you've exhausted research options.**
---
## Phase 2: Create Plan with Assumptions
After research, create a plan in `.cursor/plans/` with this format:
```markdown
---
name: Short Task Name
overview: One paragraph describing the goal and approach
todos:
  - id: step-1
    content: Description of first step
    status: pending
  - id: step-2
    content: Description of second step
    status: pending
---
# Task Title
## Research Summary
What I found in the codebase:
- [Pattern/file discovered]
- [Existing implementation referenced]
- [Documentation reviewed]
## Assumptions
**Please confirm each assumption by marking [x] or clarify if incorrect:**
- [ ] Assumption 1: [e.g., "This should follow the same pattern as UserService"]
- [ ] Assumption 2: [e.g., "We should emit an event to event-bus after the update"]
- [ ] Assumption 3: [e.g., "No database migration is needed for this change"]
## Implementation Steps (with Model Routing)
| Step | Task Type | Model | Tier | Description |
|------|-----------|-------|------|-------------|
| 1 | Research | `sonnet-4.5-thinking` | 2 | [What to research] |
| 2 | Architecture | `opus-4.5-thinking` | 1 | [Complex design decisions] |
| 3 | Implementation | `sonnet-4.5` | 3 | [Code to write] |
| 4 | Testing | `sonnet-4.5` | 3 | [Tests to create] |
| 5 | Security Review | `opus-4.5-thinking` | 1 | [Critical validation] |
**Model Tiers:**
- **Tier 1** (opus-4.5-thinking): Architecture, Security, Complex Debugging
- **Tier 2** (sonnet-4.5-thinking): Research, Planning, Code Review
- **Tier 3** (sonnet-4.5): Implementation, Testing, Documentation
### Step 1: [Name]
**Model**: `sonnet-4.5-thinking` | **Type**: Research | **Tier**: 2
Details...
### Step 2: [Name]
**Model**: `sonnet-4.5` | **Type**: Implementation | **Tier**: 3
Details...
## Parallel Execution
Steps that can run in parallel (no dependencies):
- [ ] Steps 3 and 4 can run simultaneously (both Tier 3)
## Testing Strategy
How will we verify this works?
```
---
## Phase 3: Wait for Approval
After presenting the plan:
1. Ask: **"Please review the assumptions above and mark [x] to confirm, or clarify any that are incorrect."**
2. **WAIT** for the user to respond before implementing
3. Once assumptions are confirmed, proceed with implementation
---
## When to Ask Questions
**ONLY ask questions when:**
- Business decisions that cannot be inferred from code
- Multiple valid approaches exist with no clear precedent in the codebase
- Security or compliance implications that require explicit approval
- You genuinely cannot find the answer after thorough research
**DO NOT ask questions about:**
- Technical implementation (infer from existing patterns)
- Naming conventions (follow existing code)
- File locations (match similar features)
- Error handling patterns (copy from similar code)
---
## File Naming
Use format: `{short-description}_{random-8-chars}.plan.md`
Example: `add_user_endpoint_a1b2c3d4.plan.md`
## Status Values
- `pending` - Not started
- `in_progress` - Currently working on
- `completed` - Done
- `blocked` - Waiting on clarification
- `cancelled` - No longer needed

Generate PR Description

Objective

Create a clear, comprehensive pull request description based on the changes in the current branch.

Instructions

  1. Analyze the changes:

    • What files were modified/added/deleted?
    • What is the overall purpose of these changes?
    • Are there any breaking changes?
    • What was the motivation?
  2. Write the PR description with:

    Title

    • Clear, concise summary (imperative mood)
    • Include ticket/issue number if applicable

    Summary

    • 2-3 sentences explaining what and why
    • Business context if relevant

    Changes

    • Bullet list of significant changes
    • Grouped by category if many changes

    Testing

    • How was this tested?
    • Any manual testing steps for reviewers

    Screenshots (if UI changes)

    • Before/after if applicable

    Notes for Reviewers

    • Areas to pay extra attention to
    • Any known issues or follow-ups
  3. Include:

    • Related issue/ticket links
    • Any dependencies or related PRs
    • Migration or deployment notes if needed

Output

  • Complete PR description in markdown format
  • Ready to copy/paste into GitHub/GitLab
---
description: "Python specific rules and best practices"
globs:
- "**/*.py"
---
# Python Best Practices Expert
You are a specialized expert in Python development, focusing on clean, performant, and idiomatic code with advanced features and comprehensive testing.
## Core Concepts (2025 Best Practices)
### Async/Await Patterns
- Prefer `asyncio.gather` for concurrent operations.
- Use `async/await` for I/O bound tasks.
- **Do not** use blocking calls (like `requests`) in async functions; use `aiohttp` or `httpx` instead.
```python
async def fetch_multiple_users(user_ids: list[int]) -> list[dict]:
    """Fetch multiple users concurrently."""
    tasks = [fetch_user(user_id) for user_id in user_ids]
    return await asyncio.gather(*tasks)
```
### Type Hints and Static Analysis
- **Avoid `Any`**: Always use specific types.
- Use `TypedDict` for structured dictionaries or `dataclasses`/`pydantic` for complex objects.
- Use `|` (pipe) for Union types (Python 3.10+): `int | None` instead of `Optional[int]`.
```python
# CORRECT
def process_data(data: dict[str, int]) -> list[int]:
    return list(data.values())

# WRONG
def process_data(data: Any) -> Any:
    return data
```
### Error Handling
- **Catch specific exceptions**: Never use bare `except:`.
- Use custom exception classes for domain-specific errors.
- Log errors appropriately with meaningful messages.
```python
# CORRECT
try:
    risky_operation()
except (ValueError, KeyError) as e:
    logger.error(f"Operation failed: {e}")
    raise
```
## Common Pitfalls to Avoid
1. **Mutable Default Arguments**: Never use `[]` or `{}` as default args. Use `None` and initialize inside the function.
2. **Bare Exceptions**: `except:` catches `SystemExit` and `KeyboardInterrupt`. Don't do it.
3. **Blocking Async**: Do not block the event loop with synchronous sleeps or heavy computation without `run_in_executor`.
## Testing Patterns (pytest)
- Use fixtures for setup/teardown.
- Use `unittest.mock` or `pytest-mock` for external dependencies.
- Use `@pytest.mark.asyncio` for async tests.
- Use parametrization for data-driven tests.
```python
@pytest.mark.parametrize("input,expected", [
    (1, 1),
    (2, 4),
])
def test_square(input, expected):
    assert square(input) == expected
```
## General Guidelines
- Follow PEP 8 style guide.
- Prefer f-strings over .format() or % formatting.
- Use context managers (`with` statements) for resource handling.
- Keep functions focused; aim for single responsibility.
## Imports
- Group imports: standard library, third-party, local.
- Use absolute imports over relative imports.
- Avoid wildcard imports (`from module import *`).
## Serverless Functions (Common Pattern)
- Keep handlers thin; delegate to service functions.
- Handle cold starts appropriately.
- Use environment variables for configuration.
- Follow infrastructure-as-code patterns in the repo.
---
description: "React Hooks patterns, performance, and composition"
globs:
- "**/*.tsx"
- "**/*.ts"
---
# React Hooks Patterns Expert
You are a specialized expert in React hooks, custom hook composition, performance optimization, and modern React patterns.
## Core Concepts (2025 Best Practices)
### Custom Hook Patterns
Encapsulate logic in custom hooks:
```typescript
export function useFetch<T>(url: string) {
  const [data, setData] = useState<T | null>(null);
  // ... implementation
  return { data, loading, error };
}
```
### useReducer for Complex State
Prefer `useReducer` over multiple `useState` calls for complex state logic:
```typescript
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'increment': return { ...state, count: state.count + 1 };
    // ...
  }
}
```
### Performance Optimization Hooks
- **useMemo**: For expensive calculations.
- **useCallback**: For stable function references to prevent child re-renders.
- **useTransition**: For non-urgent updates (React 18+).
```typescript
const sortedItems = useMemo(() => items.sort(), [items]);
const handleClick = useCallback(() => console.log('clicked'), []);
```
### Context Optimization
Memoize context values to prevent unnecessary re-renders of consumers:
```typescript
const value = useMemo(() => ({ user, login, logout }), [user]);
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
```
## Common Pitfalls
1. **Missing Dependencies**: Always include all used variables in dependency arrays.
2. **Unstable References**: Avoid creating functions/objects inside the render loop that are passed to dependencies without `useCallback`/`useMemo`.
3. **Over-optimization**: Don't wrap everything in `useMemo`. Measure first.
## Implementation Patterns
### Imperative Code with useRef
Use `useRef` for accessing DOM elements or storing mutable values that don't trigger re-renders.
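A minimal example:
```typescript
function SearchBox() {
  const inputRef = useRef<HTMLInputElement>(null);

  useEffect(() => {
    // Imperatively focus the input on mount; updating the ref does not trigger a re-render
    inputRef.current?.focus();
  }, []);

  return <input ref={inputRef} placeholder="Search..." />;
}
```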
### Custom Hook Composition
Compose small hooks into larger ones:
```typescript
function useUserProfile(id: string) {
  const { data: user } = useUser(id);
  const theme = useTheme();
  return { user, theme };
}
```

Refactoring Safety Check

Objective

Before refactoring code, identify all places that depend on the current behavior to prevent breaking changes.

When to Use

  • Moving logic to a new location (service, utility, etc.)
  • Changing function/method signatures
  • Changing error types or error messages
  • Renaming classes, functions, or variables
  • Extracting shared logic (DRY refactoring)

Pre-Refactoring Checklist

1. Find All Usages

Search for all places that call/use the code you're refactoring:

# Search for function/method name
grep -r "functionName" --include="*.ts" --include="*.tsx" --include="*.js"

# Search in tests specifically
grep -r "functionName" tests/ --include="*.test.ts" --include="*.spec.ts"

2. Find Tests That Mock the Target

If refactoring a function that's commonly mocked:

# Search for mocks of the function
grep -r "mock.*functionName\|functionName.*mock" tests/

# Search for spies
grep -r "spyOn.*functionName\|jest.fn.*functionName" tests/

3. ⚠️ CRITICAL: Find Error Type Expectations

When changing error handling, search for tests that expect specific error types:

# Search for error type assertions
grep -r "toThrow\|rejects\|instanceof.*Error\|\.type.*Error\|\.name.*Error" tests/

# Search for specific error class names
grep -rn "AuthProviderError\|ServiceError\|ValidationError" tests/

4. Find Return Type Dependencies

If changing what a function returns:

# Search for destructuring or property access on the return value
grep -r "functionName\(\)" --include="*.ts" | grep -E "\.\w+|{ \w+"

During Refactoring

Error Type Changes

If you're changing error types (wrapping, renaming, etc.):

  1. List all error types being changed:

    • Old: AuthProviderError
    • New: UserAuthServiceError
  2. Search for tests expecting the old type:

    grep -rn "AuthProviderError" tests/
  3. Update ALL matching tests to expect the new error type

Signature Changes

If changing function parameters or return types:

  1. Find all callers:

    grep -rn "functionName(" --include="*.ts"
  2. Update all callers before or alongside the refactoring

Post-Refactoring Verification

Run Full Test Suite

# Not just unit tests - run integration tests too!
yarn test

# If tests require environment variables, set them:
export DATABASE_URL="..."
yarn test

Run Type Checks

yarn check-types

Verify No New Lint Errors

yarn lint.check

Example: Refactoring Error Handling

Scenario: Moving _handleAuthUpdates to UserAuthService

Before shipping, we should have:

  1. Searched for error type expectations:

    grep -rn "AuthProviderError" tests/
  2. Found tests expecting specific error types in API responses

  3. Considered API contracts: Error types in API responses are part of the contract!

    • Unknown external consumers might check for specific error types
    • Preserve original error types for backward compatibility
  4. Solution: Let known error types bubble up unchanged:

    } catch (error) {
      // Preserve AuthProviderError for backward compatibility
      if (error instanceof AuthProviderError) {
        throw error;  // Don't wrap it!
      }
      throw new ServiceError('...', error);
    }

Action Required

After completing the checklist:

  1. Update all affected tests before shipping
  2. Run the full test suite with proper environment variables
  3. Document the change in PR description if error types changed

💡 Remember: When you change how errors are thrown or wrapped, you MUST search for tests that assert on error types!

Refactor Code

Objective

Improve the quality, readability, and maintainability of the selected code while preserving its functionality.

Instructions

  1. Analyze the current code for:

    • Code smells (long methods, deep nesting, repeated code)
    • Naming issues (unclear variable/function names)
    • Complexity that could be simplified
    • Missing or inconsistent error handling
    • Violations of SOLID principles or other best practices
  2. Propose specific improvements:

    • Extract functions/methods where appropriate
    • Improve naming to be self-documenting
    • Reduce nesting through early returns or guard clauses
    • Apply appropriate design patterns if beneficial
    • Remove dead code or unnecessary complexity
  3. Implement the refactoring:

    • Make changes incrementally
    • Preserve all existing functionality
    • Maintain the existing API/interface unless explicitly asked to change it
    • Follow the project's existing code style and conventions
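
For example, the "reduce nesting" improvement above might look like this (hypothetical function):

```typescript
// Before: deeply nested conditionals
function shippingLabelBefore(order: { paid: boolean; address?: string }): string {
  if (order.paid) {
    if (order.address) {
      return `Ship to: ${order.address}`;
    } else {
      return 'Missing address';
    }
  } else {
    return 'Awaiting payment';
  }
}

// After: guard clauses flatten the happy path
function shippingLabel(order: { paid: boolean; address?: string }): string {
  if (!order.paid) return 'Awaiting payment';
  if (!order.address) return 'Missing address';
  return `Ship to: ${order.address}`;
}
```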

Constraints

  • Do NOT add new features or change behavior
  • Do NOT over-engineer - keep solutions proportional to the problem
  • Explain WHY each change improves the code

Retrospective (Learn from Session)

Objective

Reflect on the session, identify root causes of friction, and update project memory (.mdc, LEARNINGS.md, DECISIONS.md) to prevent repeat issues.

When to Use

  • After completing a complex feature or difficult bug fix
  • After a session with multiple iterations/fixes
  • When user says "let's do a retro" or "what did we learn?"

Instructions

1. Prep & Evidence

Don't rely on memory. Use tools to verify session reality:

# Verify existing rules (avoid duplicates)
ls .cursor/rules/*.mdc
grep -h "^#\|^##" .cursor/rules/*.mdc | head -20

# Verify actual changes (evidence-based)
git diff --stat HEAD~1
git status --short

2. Session Summary

Goal: <intent> | Files: <actual files from git> | Iterations: <# of fixes>

3. Root Cause Analysis (The 5 Whys)

Identify pain points (repeated mistakes, tool gaps, missing context). Dig for Root Cause:

| Root Cause Type | Fix Location |
|-----------------|--------------|
| Missing Docs | Add to architecture.mdc |
| Process Gap | Add to workflow.mdc or create /command |
| Stale Knowledge | Update language .mdc with version-specific guidance |
| Tool Hallucination | Add "Always verify exports" to conventions |

Ask 5 times: Why did this happen? → until you hit the root.

4. Context Health (Pruning)

  • Redundancy: Delete/merge rules that overlap or contradict
  • Noise: Add unhelpful files to .cursorignore
  • Bloat: If an .mdc is too long, split by domain
  • Conflicts: New rule contradicts old? Merge or delete older one

5. Propose & Triage

Propose Generalized improvements (applicable to any project). Use @file references.

| Type | Target | Use Case |
|------|--------|----------|
| Hard Rule | .cursor/rules/*.mdc | Always follow (e.g., "Run generate after schema change") |
| Anti-Pattern | "Don't" section of .mdc | Common pitfalls to avoid |
| Project Note | LEARNINGS.md | Context-specific tips for this codebase |
| Decision Log | DECISIONS.md | "Chose [X] over [Y] because [rationale]. Trade-off: [downside]" |

Generalization check: "Would this rule help in a completely different project?"

6. Apply

Upon user confirmation (y):

  1. Update/Create .mdc files (categorized: Workflow, Tooling, Domain, Style)
  2. Add Anti-Patterns to "Never Do This" sections
  3. Use @filename in rules for clickable documentation
  4. Log decisions to DECISIONS.md
  5. Update .cursorignore if noise identified
  6. Remove obsolete/contradicted rules

Output Format

## 🔍 Root Causes & Fixes

| Pain Point | Root Cause | Fix (Generalized) |
|------------|------------|-------------------|
| [Symptom] | [Why - 5 Whys result] | [Rule to add] |

**Anti-Patterns to add:**
- [What to never do]

## 🛠️ Memory Updates

- `[rule.mdc]`: Added [instruction] referencing @[file]
- `DECISIONS.md`: Recorded choice of [X] vs [Y]

## 📉 Context Efficiency

- Removed/Merged: [obsolete rules]
- Added to .cursorignore: [noisy files]
- Consolidated: [rules merged]

Apply these changes? (y/n)

Self-Improvement

This command itself can be improved. After running, consider: Was anything missed? Should this template be updated?

Rollback Plan

Objective

Create a comprehensive rollback plan for the deployment in case issues arise in production.

Instructions

  1. Analyze the deployment for rollback needs:

    Code Changes

    • Can code be safely reverted via git?
    • Are there feature flags that can be toggled?
    • Are there multiple services that need coordinated rollback?

    Database Changes

    • Are there migrations that need to be rolled back?
    • Are migrations backward-compatible?
    • Is there data that needs to be restored?

    Infrastructure Changes

    • Configuration changes to revert
    • Environment variables to restore
    • Service dependencies affected

    External Dependencies

    • Third-party integrations affected
    • API contracts with partners
    • Webhook configurations
  2. Create rollback steps:

    • Order matters: what to rollback first
    • Commands to execute
    • Verification steps after each rollback action
    • Communication requirements
  3. Define rollback triggers:

    • What metrics/alerts indicate rollback is needed?
    • Who has authority to initiate rollback?
    • What is the decision timeline?
  4. Document recovery verification:

    • How to verify rollback was successful
    • What to monitor after rollback
    • User communication if needed

Output

Rollback Decision Criteria

| Metric/Condition | Threshold | Action |
|------------------|-----------|--------|
| Error rate | > X% | Consider rollback |
| Latency p99 | > Xms | Investigate |
| [Custom metric] | [threshold] | [action] |

Rollback Steps

Step 1: [First Action]

# Commands to execute

Verify: [How to confirm this step succeeded]

Step 2: [Second Action]

...

Database Rollback (if applicable)

-- Migration rollback commands

Warning: [Any data loss or considerations]

Feature Flag Rollback (if applicable)

Flag: [flag_name]
Action: Set to [value]

Post-Rollback Verification

  • [Check 1]
  • [Check 2]
  • [Confirm user-facing functionality]

Communication Plan

  • Internal: Notify #channel about rollback
  • External: [Customer communication if needed]

Rollback Owners

  • Primary: [name/team]
  • Secondary: [name/team]
  • Escalation: [name/team]

Time Estimates

  • Decision time: X minutes
  • Rollback execution: X minutes
  • Verification: X minutes
  • Total recovery time: X minutes
---
description: "Ruby and Rails specific rules"
globs:
- "**/*.rb"
- "**/Gemfile"
- "**/*.erb"
---
# Ruby Guidelines
- Follow Ruby style guide conventions
- Use single quotes for strings (Rubocop default), double quotes when interpolation needed
- Use symbols instead of strings for hash keys where appropriate
- Leverage Ruby's expressive syntax but prioritize readability
- Use `frozen_string_literal: true` pragma when appropriate
# Rails Guidelines
- Follow Rails conventions (convention over configuration)
- Keep controllers thin; move logic to models or service objects
- Use strong parameters for mass assignment protection
- Follow RESTful routing conventions
- Use concerns for shared functionality
# Naming Conventions
- Classes/Modules: PascalCase
- Methods/Variables: snake_case
- Constants: SCREAMING_SNAKE_CASE
- Predicates: end with `?` (e.g., `valid?`)
- Dangerous methods: end with `!` (e.g., `save!`)
# Testing
- Use RSpec as the testing framework
- Follow existing spec patterns in the repository
- Use factories (FactoryBot) for test data
- Write descriptive `describe` and `it` blocks
# Database
- Use migrations for schema changes
- Add appropriate indexes for foreign keys and frequently queried columns
- Use transactions for multi-step operations
# Error Handling
- Use specific exception classes
- Rescue specific exceptions (not bare `rescue`)
- Use Rails error handling conventions in controllers
# Performance
- Prevent N+1 queries using `includes`, `preload`, or `eager_load`
- Use `find_each` for batch processing large datasets
- Be mindful of memory usage with large collections
# Linting
- Follow Rubocop or StandardRB conventions if configured in the project
- Run linting before committing changes

Rules Review (Self-Improve the Rules)

Objective

Periodically review and improve the Cursor rules themselves to ensure they remain accurate, useful, and up-to-date.

When to Use

  • Quarterly (or monthly for active projects)
  • When rules seem outdated or incorrect
  • After major codebase changes
  • When user says "review the rules" or "are the rules still accurate?"

Instructions

Step 1: Inventory Current Rules

List all rules and their purpose:

ls -la .cursor/rules/*.mdc

For each rule, note:

  • Last modified date
  • Purpose/description
  • Approximate line count
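
A minimal sketch for gathering these fields with standard shell tools (the output format is illustrative):

```bash
# Print name, line count, and last-modified date for each rule file
for f in .cursor/rules/*.mdc; do
  printf '%-25s %5s lines   %s\n' "$(basename "$f")" "$(wc -l < "$f")" "$(date -r "$f" +%Y-%m-%d)"
done
```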
📋 Rules Inventory

| Rule | Description | Lines | Last Modified |
|------|-------------|-------|---------------|
| global.mdc | Git restrictions, sound, safety | 64 | 2024-01-15 |
| workflow.mdc | Development workflow phases | 169 | 2024-01-15 |
| typescript.mdc | TS/React patterns | 71 | 2024-01-10 |
| ... | ... | ... | ... |

Step 2: Check for Stale Information

For each rule, verify:

  1. Are the patterns still accurate?

    • Check if code examples match current codebase
    • Verify file paths mentioned still exist
  2. Are there missing patterns?

    • Review recent sessions for patterns not in rules
    • Check .cursor/patterns.md for frequently occurring issues
  3. Are there contradictions?

    • Rules that conflict with each other
    • Rules that conflict with actual codebase patterns

Step 3: Cross-Reference with Codebase

Validate rules against actual code:

# Check if example files referenced in rules still exist (rough heuristic)
grep -rn "example" .cursor/rules/*.mdc | grep -oE '[[:alnum:]_./-]+\.(ts|tsx|py|rb)' | sort -u | while read -r file; do
  [ -e "$file" ] || echo "⚠️ Possibly stale reference: $file"
done

# Check if patterns match reality
# e.g., if rule says "use X pattern", verify X is actually used

Step 4: Review Pattern Log

Check .cursor/patterns.md for:

  1. Issues that became rules - Are they effective?

    • If issue still occurs, rule needs strengthening
    • If issue stopped, rule is working ✅
  2. Issues not yet rules - Should they be?

    • 3+ occurrences = should be a rule
  3. Useful patterns - Are they documented in rules?
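
As a rough aid for spotting promotion candidates, a count like the one below can surface recurring entries; it assumes pattern-log entries share a consistent `Issue:` prefix, which may not match your template.

```bash
# Count repeated issue titles in the pattern log; 3+ occurrences suggest a new rule
grep -o 'Issue: .*' .cursor/patterns.md | sort | uniq -c | sort -rn | awk '$1 >= 3'
```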

Step 5: Propose Updates

For each finding, propose a concrete update:

📝 Proposed Updates

1. typescript.mdc - Line 45
   Current: "Use `import type` for type-only imports"
   Issue: Example uses old syntax
   Proposed: Update example to use `import type { X } from`

2. architecture.mdc - Missing section
   Issue: No documentation for new your-service patterns
   Proposed: Add service-specific section with relevant patterns

3. testing.mdc - Stale reference
   Current: References `userService.test.ts` line 100
   Issue: File was refactored, pattern is now at line 245
   Proposed: Update line reference

4. workflow.mdc - Missing step
   Issue: Doesn't mention ORM code generation for TypeScript services
   Proposed: Add to Phase 3 Implementation checklist

Step 6: Apply Updates

After user confirms:

  1. Apply each update to the relevant file
  2. Update "Last Reviewed" date if tracking
  3. Log the review in patterns.md
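
Logging the review can be as simple as appending an entry (a sketch; the heading format is an assumption, so follow whatever convention your patterns.md already uses):

```bash
# Append a dated review record to the pattern log
cat >> .cursor/patterns.md <<EOF

## Rules Review - $(date +%Y-%m-%d)
- Rules reviewed: 8 | Updates applied: 7 | Next review: in 3 months
EOF
```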

Step 7: Review Commands

Also check commands for staleness:

ls -la .cursor/commands/*.md

For each command:

  • Does it still work?
  • Are the steps still accurate?
  • Does it reference correct tools/patterns?

Step 8: Output Summary

## Rules Review Complete

### Rules Reviewed: 8
- global.mdc ✅ Current
- workflow.mdc ⚠️ 2 updates applied
- typescript.mdc ⚠️ 1 update applied
- python.mdc ✅ Current
- ruby.mdc ✅ Current
- testing.mdc ⚠️ 1 update applied
- architecture.mdc ⚠️ 3 updates applied
- react-advanced.mdc ✅ Current

### Commands Reviewed: 15
- ship.md ✅ Current
- ci-monitor.md ✅ Current
- retro.md ✅ Current
- ... (12 more)

### Updates Applied: 7
- workflow.mdc: Added ORM code generation step
- workflow.mdc: Updated Phase 6 command list
- typescript.mdc: Fixed example syntax
- testing.mdc: Updated file reference
- architecture.mdc: Added service-specific section
- architecture.mdc: Updated api patterns
- architecture.mdc: Fixed stale URL

### Metrics
- Stale references fixed: 3
- Missing patterns added: 2
- Examples updated: 2

### Next Review
Recommended: 2024-04-15 (3 months)

Auto-Trigger Conditions

Consider running rules-review when:

  • 10+ sessions since last review
  • Pattern log has 5+ unpromoted issues
  • Major refactoring was done
  • New team members join
  • Significant codebase changes

Self-Improvement

This command improves itself by:

  1. Tracking which rules needed updates (indicates gaps in initial rules)
  2. Identifying patterns in what goes stale (proactive updates)
  3. Measuring rule effectiveness (issues that stopped occurring)

Run Checks (Recursive)

Objective

Analyze the current context and automatically run all relevant commands, then re-check if any new commands become relevant after changes are made. Repeat recursively until no more commands are needed.

⛔ EXCLUDED COMMANDS (Never Auto-Run)

These commands require explicit user invocation and are NEVER run by this command:

  • /ship - Git operations require explicit user request
  • /rollback-plan - Deployment decisions are user-controlled
  • /runbook - Operational docs are user-initiated

Instructions

Step 1: Discover Available Commands

List all available commands:

ls -1 .cursor/commands/*.md | xargs -I {} basename {} .md

Step 2: Analyze Current Context

Gather context to determine relevance:

# What files were recently changed?
git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached

# What is the current state?
git status --short

# What type of files are involved?
git diff --name-only | sed 's/.*\.//' | sort | uniq -c

Also consider:

  • Currently open files in the IDE
  • Recent conversation context
  • Type of work being done (feature, bug fix, refactor, etc.)

Step 3: Determine Relevant Commands

For each command, evaluate if it's relevant based on these criteria:

| Command | Run When |
|---------|----------|
| /write-tests | New code added without corresponding tests |
| /add-types | TypeScript files with `any` types or missing types |
| /fix-lint | Lint errors detected |
| /security-review | Auth, user data, API keys, or external services touched |
| /accessibility | Frontend/UI components modified |
| /add-logging | Service/backend code without adequate logging |
| /coverage-gaps | New code paths that may lack test coverage |
| /breaking-changes | API signatures, interfaces, or contracts changed |
| /refactor-check | Code was moved, renamed, or restructured |
| /duplicate-check | New utilities or helpers added |
| /dead-code | Files deleted or functions removed |
| /impact-analysis | Changes to shared code or cross-service code |
| /code-review | Any significant code changes (run last) |
| /commit-message | Changes are ready to commit |
| /pr-description | Changes are ready for PR |

Step 4: Run Relevant Commands

Execute each relevant command in priority order:

Priority 1 - Safety & Quality:

  1. /fix-lint - Fix any lint errors first
  2. /add-types - Ensure proper typing
  3. /security-review - Check for security issues
  4. /refactor-check - Verify refactoring safety

Priority 2 - Testing:

  5. /write-tests - Add missing tests
  6. /coverage-gaps - Identify untested paths

Priority 3 - Code Quality:

  7. /duplicate-check - Find duplicates
  8. /dead-code - Clean up unused code
  9. /add-logging - Ensure observability
  10. /accessibility - Check a11y (if frontend)

Priority 4 - Documentation:

  11. /breaking-changes - Document breaking changes
  12. /impact-analysis - Understand ripple effects

Priority 5 - Review (Run Last):

  13. /code-review - Self-review all changes

Step 5: Check for New Changes

After running commands, check if any changes were made:

git diff --stat

If changes were made:

  • Log: 🔄 Changes detected after running commands. Re-evaluating...
  • Go back to Step 2 and repeat

If no changes:

  • Log: ✅ No more relevant commands to run.
  • Present summary and stop

Step 6: Present Summary

After all iterations complete:

## Commands Run Summary

### Iteration 1:
- ✅ /fix-lint - Fixed 3 lint errors
- ✅ /write-tests - Added 2 test files
- ⏭️ /security-review - No issues found

### Iteration 2:
- ✅ /fix-lint - Fixed lint error in new test file

### Final State:
- All relevant commands executed
- No pending changes detected
- Ready for /ship (when you're ready)

Recursion Safeguards

  1. Maximum iterations: Stop after 5 iterations to prevent infinite loops
  2. Change tracking: Track which commands have run and their results
  3. Diminishing returns: If same command runs 3+ times, warn user
  4. User checkpoint: After 3 iterations, ask user to confirm continuing
⚠️ Multiple iterations detected (3 so far).
Commands keep making changes. Would you like to:
1. Continue automatically
2. Review changes so far
3. Stop and proceed manually
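
In shell terms, the iteration guard behaves roughly like the sketch below (illustrative only; the agent applies this logic itself rather than running a literal script):

```bash
# Conceptual sketch of the recursive check loop with a hard iteration cap
iteration=0
while [ "$iteration" -lt 5 ]; do
  iteration=$((iteration + 1))
  # ...run the commands judged relevant for this iteration...
  if git diff --quiet && git diff --cached --quiet; then
    echo "✅ No more relevant commands to run."
    break
  fi
  echo "🔄 Changes detected after iteration $iteration. Re-evaluating..."
done
```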

Example Flow

🔍 Analyzing context...
   - 5 TypeScript files changed
   - 2 test files added
   - your-service module touched

📋 Relevant commands identified:
   1. /fix-lint
   2. /add-types  
   3. /write-tests
   4. /security-review
   5. /code-review

🚀 Iteration 1:
   Running /fix-lint... ✅ Fixed 2 errors
   Running /add-types... ✅ Added types to 3 functions
   Running /write-tests... ⏭️ Tests already exist
   Running /security-review... ⏭️ No issues
   Running /code-review... ✅ Generated review notes

🔄 Changes detected. Re-evaluating...

🚀 Iteration 2:
   Running /fix-lint... ✅ Fixed 1 new lint error
   
✅ No more relevant commands.

## Summary
- 2 iterations completed
- 4 commands executed
- 3 issues fixed automatically
- Ready for /ship

Notes

  • This command is for automation convenience, not a replacement for human review
  • Always review the summary before proceeding to /ship
  • If uncertain about a command's relevance, ask the user

Generate Runbook

Objective

Create operational documentation for on-call engineers to troubleshoot and maintain the feature.

Instructions

  1. Document the feature/service:

    Overview

    • What does this feature/service do?
    • Who are the users/consumers?
    • What is the business impact if it fails?

    Architecture

    • Key components and their roles
    • Dependencies (databases, services, APIs)
    • Data flow diagram (if complex)
  2. Define health indicators:

    Key Metrics

    • What metrics indicate healthy operation?
    • What are normal baseline values?
    • What thresholds trigger alerts?

    Dashboards

    • Links to relevant dashboards
    • What to look for in each dashboard

    Logs

    • Where to find logs
    • Key log queries for troubleshooting
    • Log patterns that indicate problems
  3. Document common issues and fixes:

    Known Issues

    • Symptoms, causes, and resolutions
    • Workarounds if permanent fix not available

    Troubleshooting Steps

    • Step-by-step debugging process
    • Commands to run for diagnosis
    • What information to gather
  4. Define operational procedures:

    Routine Maintenance

    • Regular tasks and their frequency
    • How to perform them safely

    Emergency Procedures

    • How to restart/recover the service
    • How to scale up/down
    • How to failover if applicable

Output

Runbook: [Feature/Service Name]


Overview

  • Purpose: [What it does]
  • Criticality: [High/Medium/Low]
  • Business Impact: [Impact if unavailable]

Architecture

[Simple diagram or description]

Dependencies

| Dependency | Type | Impact if Down |
|------------|------|----------------|

Health Monitoring

Key Metrics

| Metric | Normal Range | Alert Threshold |
|--------|--------------|-----------------|

Dashboards

Log Queries

[DataDog/CloudWatch/etc. query for common issues]

Common Issues

Issue: [Problem Name]
  • Symptoms: [What you'll see]
  • Cause: [Why it happens]
  • Resolution:
    1. [Step 1]
    2. [Step 2]
  • Prevention: [How to prevent recurrence]

Emergency Procedures

Restart Service
# Commands to safely restart
Scale Up
# Commands to scale
Failover

[Steps for failover if applicable]
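
For instance, the Restart Service entry for a docker-compose based stack might be filled in like this (the service name `api` and the use of docker compose are assumptions for the example):

```bash
# Safely restart the service and confirm a clean startup
docker compose restart api
docker compose logs --tail=50 -f api   # watch startup logs; Ctrl+C once healthy
```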


Escalation

  • L1: [Team/Person] - [When to escalate]
  • L2: [Team/Person] - [When to escalate]
  • L3: [Team/Person] - [Critical issues]

Contacts

  • Slack: #[channel]
  • PagerDuty: [service]
  • On-call: [rotation link]

Security Review

Objective

Perform a security-focused code review to identify vulnerabilities and implement fixes for any issues found.

Action Required

After identifying security issues, immediately fix Critical and High severity issues. Do not just report - remediate.

Instructions

  1. Check for common vulnerabilities:

    • Injection: SQL injection, command injection, XSS, template injection
    • Authentication: Weak password handling, session management issues
    • Authorization: Missing access controls, privilege escalation
    • Data exposure: Sensitive data in logs, responses, or error messages
    • Cryptography: Weak algorithms, improper key management
    • Input validation: Missing or insufficient validation
    • Dependencies: Known vulnerable packages
  2. Review for:

    • Hardcoded secrets, API keys, or credentials
    • Insecure direct object references (IDOR)
    • Missing rate limiting on sensitive endpoints
    • Improper error handling that leaks information
    • Unsafe deserialization
    • Path traversal vulnerabilities
    • CSRF/SSRF vulnerabilities
  3. For each issue found:

    • Describe the vulnerability
    • Explain the potential impact (severity)
    • Provide a specific remediation
    • Show secure code example
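
A quick heuristic for the hardcoded-secrets check is a pattern search like the sketch below. It is only a rough filter (a dedicated scanner such as gitleaks or trufflehog is preferable when available), and the `src/` path is an assumption about your layout.

```bash
# Rough scan for credential-looking assignments; expect false positives
grep -rniE "(api_key|apikey|secret|password|token)['\"]?[[:space:]]*[:=]" src/ \
  | grep -viE "(test|spec|example|fixture)" \
  | head -20
```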

Output

  • List of security concerns with severity (Critical/High/Medium/Low)
  • Specific file and line references
  • Remediation code or guidance for each issue
  • Recommendations for security improvements

Action: Fix Issues

After analysis, immediately implement fixes for:

  1. Critical/High: Fix immediately before continuing
  2. Medium: Fix if straightforward, otherwise note for follow-up
  3. Low: Document as recommendations

Report what was fixed and what remains as follow-up items.

Ship Changes (Add, Commit, Push + Create PR)

THIS COMMAND REQUIRES EXPLICIT USER INVOCATION

  • NEVER run /ship automatically as part of any workflow
  • NEVER suggest running /ship - the user must type it themselves
  • Even when invoked, wait for user approval on each git command
  • See global.mdc for the full policy on git command restrictions

Objective

Prepare and ship your changes: stage all files on a feature branch, create a conventional commit, push to remote, and create a PR with description using GitHub MCP.

⛔ CRITICAL SAFETY RULES

  1. NEVER push to main or master - This is an absolute, unbreakable rule
  2. NEVER commit to main or master - Always work on a feature branch
  3. Branch naming (optional) - Pattern: TICKET-123-* (e.g., TICKET-123-feature-name, BUG-456-fix-bug)
  4. If on main/master, IMMEDIATELY STOP - Do not proceed with any git commands

If user is on main/master, output this and STOP:

⛔ BLOCKED: You are on the '${branch}' branch.

Shipping directly to main/master is not allowed. 
This command will NOT proceed.

To continue, please create a feature branch:
  git checkout -b <TICKET-ID>-<description>

Example:
  git checkout -b TICKET-123-add-user-endpoint

Instructions

Step 0: Navigate to Repository

First, determine which repository the user is working in based on:

  1. The currently focused file in the IDE
  2. Recently viewed files
  3. The context of the conversation

Navigate to the repository root:

cd /path/to/repository

Confirm you're in the right place:

pwd
git rev-parse --show-toplevel

Step 1: Validate Branch & Ticket (REQUIRED)

First, check the current branch:

git branch --show-current

⛔ STOP if on main or master:

ERROR: You are on the 'main' branch. 
Cannot ship directly to main. Please create a feature branch first:

  git checkout -b <TICKET-ID>-<description>

Example: git checkout -b TICKET-123-add-user-endpoint

Branch naming check (if using issue tracker): Branch must match pattern: ^[A-Z]+-[0-9]+ (e.g., ID-123, TICKET-456, FEAT-789)

ERROR: Branch '<branch-name>' doesn't start with an issue ticket ID.
Please rename your branch to include the ticket:

  git branch -m <TICKET-ID>-<current-name>

Example: git branch -m TICKET-123-my-feature

Fetch issue ticket details to verify work matches:

Extract the ticket ID from the branch name (e.g., TICKET-123 from TICKET-123-add-endpoint) and fetch the ticket:

# (Optional) If you have MCP configured: mcp_issue_get({ issue_key: "<TICKET-ID>" })
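
If no issue-tracker MCP is configured, the ticket ID itself can still be pulled from the branch name with a simple pattern match (a sketch, relying on the `^[A-Z]+-[0-9]+` convention above):

```bash
# Extract the leading ticket ID (e.g., TICKET-123) from the current branch name
branch=$(git branch --show-current)
ticket=$(printf '%s\n' "$branch" | grep -oE '^[A-Z]+-[0-9]+')
echo "Ticket: ${ticket:-<not found>}"
```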

Display the ticket summary to the user:

📋 Issue Ticket: <TICKET-ID>
   Title: <ticket summary>
   Type: <issue type>
   Status: <current status>

Verify the changes match the ticket:

  • Compare the git diff summary with the ticket title/description
  • If the work appears unrelated, warn the user:
    ⚠️ WARNING: The changes don't seem to match the ticket description.
    
    Ticket: "Add user authentication endpoint"
    Changes: Modified payment processing files
    
    Are you sure you want to continue? (The user should confirm)
    

Step 2: Run CI Checks Locally (REQUIRED)

Before shipping, run the same checks that CI will run to catch failures early.

Look for CI configuration:

# Find CI workflow files
ls -la .github/workflows/ 2>/dev/null | head -5

Run standard checks (adjust based on project):

# TypeScript/Node projects:
yarn check-types 2>&1 | tail -20
yarn lint.check 2>&1 | tail -20
yarn test 2>&1 | tail -50

# Python projects:
make lint 2>&1 | tail -20
make test 2>&1 | tail -50

# Ruby projects:
bundle exec rubocop 2>&1 | tail -20
bundle exec rspec 2>&1 | tail -50

⛔ STOP if any check fails:

⛔ CI Check Failed: <check name>

Please fix the failing check before shipping.
Error: <error message>

If all checks pass, continue.

Step 3: Analyze Changes

Run these commands to understand what's being shipped:

git status --short
git diff --stat

Also get remote info:

git remote get-url origin

If no local changes detected:

ℹ️ No local changes to commit.

Your working directory is clean. Would you like to:
1. Just monitor CI status for this branch (run /ci-monitor)
2. Exit - nothing to ship

If user chooses option 1, skip to Step 8 (run /ci-monitor directly). If user chooses option 2, exit the command.

Step 4: Generate Commit Message

Based on the changes, create a commit message following Conventional Commits:

<type>(<scope>): <subject>

<body - explain what and why>

<footer - ticket references>

Types: feat, fix, refactor, docs, style, test, chore, perf

Guidelines:

  • Subject: 50 chars max, imperative mood ("Add" not "Added")
  • Body: Wrap at 72 chars, explain what and why
  • Footer: Reference tickets (e.g., Closes ID-123 or Fixes #456)
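
For example, a commit following this format might be created like so (the scope, message body, and ticket ID are placeholders):

```bash
git commit \
  -m "feat(user): add updateProfile mutation" \
  -m "Expose an updateProfile mutation so clients can edit profile fields.
Validation mirrors the existing user resolvers." \
  -m "Closes TICKET-123"
```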

Step 5: Generate PR Description

First, search for an existing PR template in the repository:

Check these locations in order:

  1. .github/PULL_REQUEST_TEMPLATE.md
  2. .github/pull_request_template.md
  3. docs/pull_request_template.md
  4. PULL_REQUEST_TEMPLATE.md
  5. .github/PULL_REQUEST_TEMPLATE/ directory (for multiple templates)
# Find PR template
find . -iname "*pull_request_template*" -type f 2>/dev/null | head -5

If a template exists:

  • Read the template file
  • Fill in each section based on the changes, issue ticket, and context
  • Preserve the template structure exactly

If no template exists, use this fallback:

## Summary
[2-3 sentences explaining what and why]

## Changes
- [Bullet list of significant changes]

## Testing
- [How this was tested]
- [Manual testing steps for reviewers]

## Notes for Reviewers
- [Areas to pay attention to]
- [Any known issues or follow-ups]

## Related
- [Ticket/issue links]
- [Related PRs if any]

Always include:

  • Link to the issue ticket from Step 1
  • Summary of test coverage
  • Any breaking changes or migration notes

Step 6: Execute Git Commands

Run these commands in sequence (user will be prompted to approve each):

# Stage all changes
git add -A

# Commit with the generated message
git commit -m "<generated commit message>"

# Push to current branch (use explicit branch name, -u for new branches)
# Replace <branch-name> with the actual branch from Step 1
git push -u origin <branch-name>

Step 7: Check for Existing PR

Before creating a new PR, check if one already exists for this branch:

mcp_github_list_pull_requests({
  owner: "<org-or-username>",
  repo: "<repository-name>",
  state: "open"
})

Search the results for a PR with head matching the current branch name.

If PR already exists:

✅ PR #<number> already exists for this branch
   Title: <existing title>
   URL: https://github.com/<owner>/<repo>/pull/<number>

Your new commit has been pushed. The PR is automatically updated.

Would you like to update the PR description? (y/n)

If user wants to update the description:

mcp_github_update_pull_request({
  owner: "<org-or-username>",
  repo: "<repository-name>",
  pull_number: <existing PR number>,
  body: "<updated PR description markdown>"
})

If NO existing PR, create one:

mcp_github_create_pull_request({
  owner: "<org-or-username>",      # Extract from remote URL
  repo: "<repository-name>",        # Extract from remote URL
  title: "<commit subject line>",   # Use the commit subject
  head: "<current-branch-name>",    # The branch you pushed
  base: "main",                     # Or appropriate base branch
  body: "<PR description markdown>" # The generated PR description
})

Parsing the remote URL:

  • git@github.com:OrgName/repo-name.git → owner: OrgName, repo: repo-name
  • https://github.com/OrgName/repo-name.git → owner: OrgName, repo: repo-name
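
A small sketch of that parsing in shell, handling both the SSH and HTTPS forms shown above:

```bash
remote=$(git remote get-url origin)
# Strip the "git@host:" or "https://host/" prefix and the trailing ".git"
slug=$(printf '%s\n' "$remote" | sed -E 's#^(git@[^:]+:|https://[^/]+/)##; s#\.git$##')
owner=${slug%%/*}
repo=${slug##*/}
echo "owner=$owner repo=$repo"
```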

Step 8: Monitor GitHub Actions (via /ci-monitor)

After PR is created/updated, run the /ci-monitor command to:

  • Poll workflow runs until complete
  • Report success or failure
  • Offer to fix failures automatically

See /ci-monitor for full details on CI monitoring and auto-fix behavior.

Quick summary:

🔄 Running /ci-monitor...

   Workflow: "CI" (Run #12345)
   Status: in_progress → success ✅
   
✅ CI Passed! PR #123 is ready for review.

If CI fails, /ci-monitor will offer to analyze and fix the issue, then push a fix commit and monitor the new run (up to 3 attempts).
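
If the GitHub CLI happens to be installed, the same status can also be checked manually while /ci-monitor runs (optional, not required by this command):

```bash
# Show and watch the latest workflow run for the current branch
gh run list --branch "$(git branch --show-current)" --limit 1
gh run watch   # interactively pick the run and follow it to completion
```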

Output Format

Present the output in this order:

  1. Changes Summary - Files changed, additions/deletions
  2. Commit Message - The full commit message
  3. PR Description - The markdown description
  4. Execute - Run git add, commit, push, then create PR via MCP
  5. PR Result - Show the PR URL
  6. CI Status - Monitor and report workflow status
  7. Final Result - Confirm PR is ready for review (or report issues)

Notes

  • Git commands (add, commit, push) require user approval
  • If you want to review before committing, ask to see the diff first
  • For breaking changes, include BREAKING CHANGE: in the commit footer
  • The GitHub MCP can also update existing PRs with github_update_pull_request

Example Flow: New PR

Step 0: cd /path/to/repo
Step 1: git branch --show-current → "TICKET-123-add-user-endpoint" ✅
        # (Optional) If you have MCP configured: mcp_issue_get({ issue_key: "TICKET-123" })
        📋 Issue: "Add updateProfile mutation" ✅ (matches changes)
Step 2: yarn check-types ✅ | yarn lint.check ✅ | yarn test ✅
        → All CI checks pass
Step 3: git status --short && git diff --stat
Step 4: Generate commit message
Step 5: find . -iname "*pull_request_template*"
        → Found: .github/PULL_REQUEST_TEMPLATE.md
        → Reading template and filling in sections...
Step 6: git add -A && git commit && git push -u origin TICKET-123-add-user-endpoint
Step 7: Check for existing PR → None found
        mcp_github_create_pull_request(...)
        → PR #123 created at https://github.com/org/repo/pull/123
Step 8: Monitor CI...
        🔄 Checking CI status... (polling)
        → Workflow "CI" in_progress...
        → Waiting 30s...
        → Workflow "CI" completed: success ✅
Result: PR #123 ready for review at https://github.com/org/repo/pull/123

Example Flow: Additional Commits to Existing PR

Step 0: cd /path/to/repo
Step 1: git branch --show-current → "TICKET-123-add-user-endpoint" ✅
        # (Optional) If you have MCP configured: mcp_issue_get({ issue_key: "TICKET-123" })
        📋 Issue: "Add updateProfile mutation" ✅
Step 2: yarn check-types ✅ | yarn lint.check ✅ | yarn test ✅
        → All CI checks pass
Step 3: git status --short && git diff --stat
        → 2 files changed (addressing review feedback)
Step 4: Generate commit message
        → "fix(user): address PR review feedback"
Step 5: Skip new PR description (existing PR)
Step 6: git add -A && git commit && git push origin TICKET-123-add-user-endpoint
Step 7: Check for existing PR → PR #123 found!
        ✅ PR #123 already exists: "feat(api): add updateProfile mutation"
        URL: https://github.com/org/repo/pull/123
        
        Your commit has been pushed. The PR is automatically updated.
        Would you like to update the PR description? (y/n)
        
        User: n
Step 8: Monitor CI...
        🔄 Checking CI status... (polling)
        → Workflow "CI" completed: success ✅
Result: PR #123 updated and CI passed - ready for review

Example: Branch Validation Failure

Step 1: git branch --show-current → "main"

⛔ ERROR: Cannot ship to 'main' branch!

Please create a feature branch first:
  git checkout -b <TICKET-ID>-<description>

Example:
  git checkout -b TICKET-123-add-user-endpoint

Example: Work Doesn't Match Ticket

Step 1: git branch --show-current → "TICKET-123-add-auth-endpoint"
        # (Optional) If you have MCP configured: mcp_issue_get({ issue_key: "TICKET-123" })
        📋 Issue: "Add user authentication endpoint"

Step 2: git diff --stat
        → payment-service/src/billing.ts (modified)
        → payment-service/src/invoices.ts (modified)

⚠️ WARNING: Changes don't appear to match ticket TICKET-123

Ticket: "Add user authentication endpoint"
Files changed: payment-service/billing.ts, invoices.ts

Options:
1. Continue anyway (if this is intentional)
2. Switch to correct branch for this work
3. Update the ticket to reflect actual work

Simplify Code

Objective

Reduce complexity while maintaining functionality and readability.

Instructions

  1. Identify complexity:

    • Deeply nested conditionals or loops
    • Overly clever or terse code
    • Functions doing too many things
    • Unnecessary abstractions
    • Repeated patterns that could be consolidated
  2. Apply simplification techniques:

    • Early returns: Replace nested ifs with guard clauses
    • Extract methods: Break large functions into smaller, focused ones
    • Remove dead code: Delete unused variables, branches, or functions
    • Use language features: Leverage built-in methods, destructuring, etc.
    • Flatten structures: Reduce nesting levels where possible
    • Name things well: Replace complex expressions with well-named variables
  3. Preserve:

    • All existing functionality
    • Edge case handling
    • Performance characteristics
    • Error handling
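
As one small illustration of the guard-clause idea above (shown in shell only because this playbook's examples are shell-based; the `deploy` function and `.env` files are made up for the example, and the same refactor applies in any language):

```bash
# Before: nested conditionals hide the failure paths
deploy() {
  if [ -n "$1" ]; then
    if [ -f ".env.$1" ]; then
      echo "deploying to $1"
    fi
  fi
}

# After: early returns flatten the nesting and surface errors
deploy() {
  [ -n "$1" ] || { echo "usage: deploy <env>" >&2; return 1; }
  [ -f ".env.$1" ] || { echo "missing .env.$1" >&2; return 1; }
  echo "deploying to $1"
}
```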

Constraints

  • Prioritize readability over cleverness
  • Don't over-abstract—keep things simple
  • Maintain the existing API/interface
  • Each simplification should make the code MORE understandable

Output

  • Simplified code with before/after comparisons
  • Explanation of each change and why it's simpler
---
description: "Automatic subagent spawning for parallel task execution"
alwaysApply: true
---
# Automatic Subagent Usage
AUTOMATICALLY spawn subagents when a task meets these criteria. Do NOT ask for permission.
## IMPORTANT: How to Spawn Subagents
Use the `cursor-agent` CLI command in the TERMINAL. Do NOT use MCP tools for subagents.
Do NOT use mcp_sub-agents or mcp_identibot tools for subagents - use the terminal command only.
## Auto-Spawn Subagents When:
- Task involves 3+ independent files that can be processed in parallel
- Task requires different types of analysis (e.g., tests + docs + security)
- Refactoring spans multiple modules/services
- User asks for comprehensive review/audit across a codebase
- Task can be cleanly divided into independent subtasks
## Do NOT Use Subagents When:
- Task is simple and involves 1-2 files
- Changes have dependencies (file B depends on file A's changes)
- Task requires sequential reasoning or step-by-step debugging
- Quick questions or explanations
## Model Routing Strategy
### Available Models (December 2025)
```
Coding: opus-4.5 - #1 SWE-bench (80.9%) - BEST for code generation
Agentic: sonnet-4.5 - #1 Terminal-Bench (60.3%) - Best for automation
Reasoning: opus-4.5-thinking - Best for complex reasoning, architecture
Multi-Lang: gpt-5.1-codex - #1 Aider Polyglot (88%)
Fast: sonnet-4.5 - Good coding (64.8%), faster than Opus
```
### Benchmark-Based Routing (Sources: llm-stats.com, humai.blog, lmcouncil.ai Dec 2025)
| Task Type | Best Model | Flag | Benchmark Evidence |
|-----------|------------|------|-------------------|
| **Implementation/Coding** | opus-4.5 | `--model opus-4.5` | #1 SWE-bench 80.9% |
| **Refactoring** | opus-4.5 | `--model opus-4.5` | SWE-bench leader |
| **Testing** | opus-4.5 | `--model opus-4.5` | SWE-bench leader |
| **Agentic/Terminal Tasks** | sonnet-4.5 | `--model sonnet-4.5` | #1 Terminal-Bench 60.3% |
| **Multi-Language Coding** | gpt-5.1-codex | `--model gpt-5.1-codex` | GPT-5 leads Aider Polyglot 88% |
| **Complex Reasoning** | opus-4.5-thinking | `--model opus-4.5-thinking` | #3 SimpleBench 62% |
| **Architecture Design** | opus-4.5-thinking | `--model opus-4.5-thinking` | Best reasoning + coding combo |
| **Security Review** | opus-4.5-thinking | `--model opus-4.5-thinking` | Can't afford mistakes |
| **Research/Analysis** | sonnet-4.5-thinking | `--model sonnet-4.5-thinking` | Good reasoning, faster |
| **Quick Tasks (speed)** | sonnet-4.5 | `--model sonnet-4.5` | Fast + good (64.8%) |
| **Documentation** | sonnet-4.5 | `--model sonnet-4.5` | Speed + accuracy |
### Key Insights from Benchmarks
- **Opus 4.5 is the coding champion** (SWE-bench: 80.9% vs Sonnet's 64.8%)
- **Sonnet 4.5 is best for agentic/terminal tasks** (Terminal-Bench: 60.3% vs Opus's 59.3%)
- **Opus 4.5 excels at reasoning** (SimpleBench: 62.0%, ARC-AGI-2: 37.6%)
- **GPT-5 leads multi-language coding** (Aider Polyglot: 88%)
- **Trade-off**: Opus is better but slower; use Sonnet for speed-sensitive tasks
### Cost vs Capability Trade-off
```
Best Coding: opus-4.5 (80.9% SWE-bench) - slower, more expensive
Fast Coding: sonnet-4.5 (64.8% SWE-bench) - faster, cheaper
Best Agentic: sonnet-4.5 (60.3% Terminal-Bench)
Best Reasoning: opus-4.5-thinking
Multi-Language: gpt-5.1-codex (88% Aider Polyglot)
```
## How to Spawn Subagents (with Model Selection)
Run in terminal with model flags:
```bash
# Spawn with appropriate models for each task type
cursor-agent --model opus-4.5-thinking -p "ARCHITECTURE: Design auth system for /src/auth" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "IMPLEMENT: Add validation to user handlers" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "TEST: Write tests for user validation" --output-format text --force &
wait # Wait for all to complete
```
Or spawn sequentially if order matters:
```bash
cursor-agent --model sonnet-4.5-thinking -p "TASK" --output-format text --force
```
## Automatic Task Decomposition & Model Assignment
When receiving ANY non-trivial task from the user:
### Step 1: Decompose into Task Types
Break the task into these categories (based on Dec 2025 benchmarks):
| Category | Examples | Model | Why (Benchmark) |
|----------|----------|-------|-----------------|
| **Implementation** | "Write code", "Create files", "Add features" | `opus-4.5` | #1 SWE-bench 80.9% |
| **Refactoring** | "Rename", "Extract function", "Move code" | `opus-4.5` | SWE-bench leader |
| **Testing** | "Write tests", "Add coverage" | `opus-4.5` | SWE-bench leader |
| **Agentic Tasks** | "Run commands", "Automate", "Terminal work" | `sonnet-4.5` | #1 Terminal-Bench 60.3% |
| **Multi-Language** | "Write Go, Rust, Java code" | `gpt-5.1-codex` | GPT-5 #1 Aider Polyglot 88% |
| **Architecture** | "Design system", "Make trade-offs" | `opus-4.5-thinking` | Best reasoning |
| **Security** | "Security review", "Vulnerability analysis" | `opus-4.5-thinking` | Can't afford mistakes |
| **Knowledge Work** | "Complex analysis", "Expert reasoning" | `opus-4.5-thinking` | #1 GDPval 43.6% |
| **Research** | "Find patterns", "Analyze codebase" | `sonnet-4.5-thinking` | Good reasoning, faster |
| **Code Review** | "Review PR", "Check quality" | `sonnet-4.5-thinking` | Nuanced analysis |
| **Quick Tasks** | "Simple fixes", "Small changes" | `sonnet-4.5` | Faster, still good (64.8%) |
| **Documentation** | "Update docs", "Add comments" | `sonnet-4.5` | Speed + accuracy |
### Step 2: Create Model-Aware Plan
In the `.cursor/plans/` file, include:
```markdown
## Task Breakdown
| # | Task | Type | Model | Tier | Parallel? |
|---|------|------|-------|------|-----------|
| 1 | Research existing auth patterns | Research | sonnet-4.5-thinking | 2 | No (first) |
| 2 | Design OAuth integration approach | Architecture | opus-4.5-thinking | 1 | No (needs #1) |
| 3 | Implement OAuth provider | Implementation | sonnet-4.5 | 3 | Yes |
| 4 | Write OAuth tests | Testing | sonnet-4.5 | 3 | Yes (with #3) |
| 5 | Security review OAuth flow | Security | opus-4.5-thinking | 1 | No (last) |
```
### Step 3: Execute with Appropriate Models
- **Sequential tasks**: Run with `cursor-agent --model X -p "TASK"`
- **Parallel tasks**: Run with `cursor-agent --model X -p "TASK" --force &`
- **Dependencies**: Wait for dependent tasks before starting next phase
## Automatic Fan-Out Pattern
When you identify a parallelizable task:
1. Announce: "Breaking this into X subtasks with model assignments..."
2. Show the task breakdown table (see above)
3. Ask: "Proceed with this plan?" (or auto-execute if simple)
4. Spawn subagents with appropriate models
5. Wait for completion
6. Collect and synthesize results
7. Present combined summary to user
## Task Decomposition Examples (with Model Routing)
### Code Review Request → Thinking model for analysis:
```bash
cursor-agent --model sonnet-4.5-thinking -p "Review code quality and style in [repo]" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5-thinking -p "Review test coverage in [repo]" --output-format text --force &
sleep 1
cursor-agent --model opus-4.5-thinking -p "Review security concerns in [repo]" --output-format text --force &
wait
```
### Multi-file Refactoring → Fast model for implementation:
```bash
cursor-agent --model sonnet-4.5 -p "Refactor auth module" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "Refactor user module" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "Refactor api module" --output-format text --force &
wait
```
### Mixed Task → Different models per task type:
```bash
# Architecture phase (opus for complex decisions)
cursor-agent --model opus-4.5-thinking -p "ARCHITECTURE: Design OAuth integration approach for /src/auth" --output-format text --force
# Implementation phase (sonnet for speed, parallel)
cursor-agent --model sonnet-4.5 -p "IMPLEMENT: Add OAuth provider to auth module" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "TEST: Write integration tests for OAuth flow" --output-format text --force &
wait
# Security review (opus for critical analysis)
cursor-agent --model opus-4.5-thinking -p "SECURITY: Review OAuth implementation for vulnerabilities" --output-format text --force
```
### Feature Implementation → Full workflow with tiered models:
```bash
# Step 1: Research (sonnet-thinking for speed + reasoning)
cursor-agent --model sonnet-4.5-thinking -p "RESEARCH: Find existing patterns for user notifications in codebase" --output-format text --force
# Step 2: Architecture (opus for complex design decisions)
cursor-agent --model opus-4.5-thinking -p "ARCHITECTURE: Design notification system with scalability in mind" --output-format text --force
# Step 3: Implementation (sonnet for speed, parallel)
cursor-agent --model sonnet-4.5 -p "IMPLEMENT: Create notification service in /src/services" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "IMPLEMENT: Add notification API endpoints" --output-format text --force &
sleep 1
cursor-agent --model sonnet-4.5 -p "TEST: Write unit tests for notification service" --output-format text --force &
wait
# Step 4: Security Review (opus for critical review)
cursor-agent --model opus-4.5-thinking -p "SECURITY: Review notification implementation for vulnerabilities" --output-format text --force
```
## Subagent Prompt Guidelines
- Be specific and focused (one clear task per subagent)
- Include the target directory/files
- Specify expected output format
- Use `--force` for any modifications
## After Subagents Complete
Always provide a consolidated summary:
```
## Subagent Results Summary
### Subagent 1: [Task]
- Result: ...
### Subagent 2: [Task]
- Result: ...
### Combined Findings
- ...
```
---
description: "Rules for test files across all languages with automation patterns"
globs:
- "**/test/**"
- "**/tests/**"
- "**/spec/**"
- "**/*.test.ts"
- "**/*.test.tsx"
- "**/*.spec.ts"
- "**/*.spec.tsx"
- "**/test_*.py"
- "**/*_test.py"
- "**/*_spec.rb"
---
# Test Automation Expert
You are a specialized expert in comprehensive testing strategies.
## Core Testing Philosophy
- Tests should be readable and serve as documentation.
- Each test should test one thing (single assertion concept).
- Tests should be independent and not rely on execution order.
- Prefer integration tests for critical paths, unit tests for logic.
## Unit Testing (Jest/Vitest/Pytest)
- Use **AAA Pattern**: Arrange, Act, Assert.
- Use **Parametrized Tests** for data-driven testing.
```typescript
describe.each([
  [1, 1, 2],
  [2, 3, 5],
])('sum(%i, %i)', (a, b, expected) => {
  it(`returns ${expected}`, () => {
    expect(sum(a, b)).toBe(expected);
  });
});
```
## React Testing Library
- Test behavior, not implementation details.
- Use `screen` for querying elements.
- Use `userEvent` (if available) or `fireEvent` for interactions.
```typescript
render(<Button>Click me</Button>);
expect(screen.getByText('Click me')).toBeInTheDocument();
```
## Async Testing & Mocking
- Mock external dependencies (APIs, databases).
- Use **MSW (Mock Service Worker)** for network level mocking where applicable.
- Avoid over-mocking; test real interactions when feasible.
## E2E Testing (Playwright)
- Focus on critical user journeys.
- Use robust locators (e.g., `getByRole`, `getByText`).
- Handle authentication and state setup efficiently.
```typescript
await page.goto('/login');
await page.getByLabel('Email').fill('user@example.com');
await page.getByRole('button', { name: 'Sign in' }).click();
await expect(page).toHaveURL('/dashboard');
```
## Coverage
- Focus on meaningful coverage, not just numbers.
- Test edge cases and error conditions.
- Test happy paths and common failure modes.
## ⚠️ Pattern-Matching for New Tests (CRITICAL)
When writing new tests, **ALWAYS compare with existing similar tests** in the codebase:
1. **Find similar tests first:**
```bash
# Find tests for similar features
find . -name "*.test.ts" -path "*/<similar-feature>/*" | head -5
```
2. **Copy the test structure exactly** - Don't invent new patterns
3. **Pay attention to:**
- How context/mocks are set up
- What assertions are used (`response.error` vs `result.error`)
- How async operations are handled
- What helper functions are used
**Example of pattern mismatch that causes CI failures:**
```typescript
// ❌ WRONG - invented a new pattern
const context = getContext({
  error: { message: 'Error', type: 'Error' }, // error at context level
  result: null,
});
expect(response.error).toBeDefined(); // checking response.error

// ✅ CORRECT - matches existing resolver test patterns
const context = getContext({
  result: { error: { message: 'error message' } }, // error in result
});
expect(result.error).toEqual({ message: 'error message' }); // checking result.error
```
## ⚠️ Tests Requiring External Services
<!-- CONFIGURE: List your external service dependencies here, or delete if all tests run locally -->
Some tests require external service credentials and **cannot be fully validated locally**:
**Identifying these tests:**
- Tests using `external service SDK` (Your API service evaluateCode)
- Tests calling real identity provider APIs
- Integration tests with external databases
**What to do:**
1. **Run the test locally anyway** - Even if it fails with `CredentialsProviderError`, check for syntax errors
2. **Compare patterns meticulously** - Since you can't validate the logic locally, be extra careful to match existing patterns
3. **Mark in PR description** - Note that certain tests require CI to validate
4. **Consider mocking** - If possible, add mocks for local validation
## ⚠️ Error Type Testing (CRITICAL for Refactoring)
When tests assert on specific error types, refactoring can break them silently:
```typescript
// ❌ This test will break if you wrap the error in a new service
expect(result.errors[0].type).toBe('AuthProviderError');
// ✅ More resilient - check the message contains the error info
expect(result.errors[0].message).toContain('authentication error');
```
**Before refactoring error handling:**
1. Search for tests that expect specific error types:
```bash
grep -rn "ErrorClassName" tests/
```
2. Update ALL tests that assert on the old error type
3. Run the full test suite (not just the tests you're changing)
**When writing new error tests:**
- Consider if the error type might change during refactoring
- Test error messages/content in addition to types
- Document why a specific error type is expected
---
description: "Extended thinking and deep reasoning guidelines"
alwaysApply: true
---
# Extended Thinking Mode
For complex problems that require deeper analysis, use structured thinking.
## When to Think Deeply
- Architectural decisions
- Complex debugging scenarios
- Performance optimization
- Security-sensitive changes
- Multi-step refactoring
- Unfamiliar codebases
## Thinking Process
Before implementing complex changes:
1. **Understand**: Read all relevant files and understand the current state
2. **Analyze**: Identify dependencies, side effects, and edge cases
3. **Plan**: Create a step-by-step implementation plan
4. **Verify**: Consider what could go wrong
5. **Execute**: Implement the plan systematically
6. **Validate**: Run tests and verify the changes work
## Request Deep Thinking
If you need deeper analysis, the user can prompt:
- "Think through this carefully"
- "Analyze this step by step"
- "What are all the implications of this change?"
- "Create a detailed plan before implementing"
## Complex Problem Template
For complex problems, structure your thinking:
```
## Understanding
- What is the current behavior?
- What is the desired behavior?
- What files/systems are involved?
## Analysis
- What are the dependencies?
- What could break?
- Are there edge cases?
## Plan
1. Step one...
2. Step two...
3. Step three...
## Risks
- Risk 1 and mitigation
- Risk 2 and mitigation
## Execution
[Proceed with implementation]
```
## Model Recommendations
For maximum thinking capability (see `subagents.mdc` for the full benchmark-based routing):
- opus-4.5-thinking - Best for complex reasoning and architecture
- opus-4.5 - Best for code generation
- Enable "Max Mode" in Cursor settings for larger context
---
description: "TypeScript and React/TSX specific rules with advanced patterns"
globs:
- "**/*.ts"
- "**/*.tsx"
---
# TypeScript Guidelines
## Core Principles
- Prefer strict typing; avoid `any` unless absolutely necessary.
- Use proper type imports: `import type { Foo } from './types'`.
- Leverage TypeScript's type inference where it's clear.
- Define interfaces for complex object shapes.
- Use `readonly` for immutable properties.
## Advanced Patterns (2025 Best Practices)
### Conditional Types
Use conditional types for flexible APIs:
```typescript
type ToArray<T> = T extends any ? T[] : never;
```
### The `infer` Keyword
Extract types from other types:
```typescript
type ReturnType<T> = T extends (...args: any[]) => infer R ? R : never;
```
### Template Literal Types
Build types from string manipulation:
```typescript
type EventName<T extends string> = `on${Capitalize<T>}`;
```
### The `satisfies` Operator
Validate types without widening (TS 4.9+):
```typescript
const config = {
  apiUrl: 'https://api.example.com',
  timeout: 5000,
} satisfies Config;
```
### Branded Types (Nominal Typing)
Create unique types for primitives to avoid mixups:
```typescript
type UserId = string & { __brand: 'UserId' };
function createUserId(id: string): UserId { return id as UserId; }
```
## Common Pitfalls
- **Avoid**: Type assertions (`as User`) without validation.
- **Do**: Use type guards (`isUser(data)`) for validation at runtime.
- **Avoid**: `any` which disables type checking.
- **Do**: Use generics or `unknown` with narrowing.
## React Guidelines
- Use functional components with hooks (no class components).
- Follow the React hooks rules (no conditional hooks).
- Use proper dependency arrays in useEffect/useMemo/useCallback.
- Prefer composition over inheritance.
## Naming Conventions
- Components: PascalCase (e.g., `UserProfile`).
- Hooks: camelCase with `use` prefix (e.g., `useUserData`).
- Types/Interfaces: PascalCase without `I` prefix (e.g., `UserData`, `UserProps`).
- Constants: SCREAMING_SNAKE_CASE for true constants.
- Files: Match component name for component files.
---
description: "Development workflow: Research → Plan → Approve → Implement → Validate"
globs: ["**/*"]
alwaysApply: true
---
# Development Workflow Strategy
For every complex coding task (new features, refactoring, or significant changes), you MUST follow this strictly ordered workflow.
## Phase 0: Context Loading (Automatic)
Before starting:
1. **Read Architecture**: READ `.cursor/rules/architecture.mdc` to understand the system context and patterns.
2. **Fetch Ticket Details** (if ticket reference detected): If user mentions a ticket (e.g., TICKET-123, GitHub #456), fetch ticket details using available tools (MCP, API, or manual paste) BEFORE creating a plan. Parse requirements (title, description, acceptance criteria) to inform planning.
3. **Check for Project Documentation**: Ask the user:
> "Is there an existing documentation page (Confluence, Notion, GitHub wiki, etc.) with requirements, specs, or context for this work? If so, please share the page ID."
If provided, fetch the page using `docs_get_page` and incorporate into planning.
4. **Auto-load Context Files**: Automatically check `.cursor/context/` for any relevant cached context:
- List the folder contents
- Read any files that seem relevant to the current task (based on filename/topic)
- Incorporate into planning without requiring user to reference them manually
## Phase 1: Research First (MANDATORY)
Before proposing ANY plan or asking ANY questions, you MUST:
### Step 1.0: Decompose the Task (For Complex Tasks)
Before diving into research, explicitly break down the task into sub-questions:
```
📋 Task Decomposition
Original task: "<user's request>"
Sub-questions to answer:
1. <aspect 1>? → Will search: <what to look for>
2. <aspect 2>? → Will read: <what files>
3. <aspect 3>? → Will check: <what to verify>
```
**Example for "Add OAuth authentication":**
```
Sub-questions:
1. Current auth architecture? → Search: auth, login, session
2. Existing OAuth integrations? → Read: auth/ folder
3. User model changes needed? → Check: user schema/model
4. Required endpoints? → Compare: existing auth endpoints
```
**Skip decomposition for simple tasks** (typo fix, single-field addition).
### Step 1.1-1.4: Research Each Sub-Question
1. **Search**: Use `codebase_search` to find similar features, utilities, or patterns.
2. **Read**: Read the relevant files to understand naming conventions, folder structure, and error handling.
3. **Infer**: Use existing patterns to answer implementation questions yourself.
4. **Document**: Make note of what you found for the plan's "Research Summary" section.
**DO NOT ask questions until you've exhausted research options.**
## Phase 2: Create Plan with Assumptions
Create a plan in `.cursor/plans/` that includes:
- **Research Summary**: What you found in the codebase
- **Assumptions**: Checkboxes `[ ]` for human to confirm `[x]` or clarify
- **Implementation Steps**: Detailed plan
Present the plan and ask:
> "Please review the assumptions above and mark [x] to confirm, or clarify any that are incorrect."
**WAIT for user response before implementing.**
## Phase 3: Implementation
After assumptions are confirmed, implement the solution using the patterns discovered in Phase 1.
### Core Guidelines
* **Reuse**: Use existing utilities found during discovery.
* **Auto-check for duplicates**: BEFORE writing any new utility, helper, or service function, automatically search for existing implementations. Don't ask - just search first:
```bash
# Find similar functions/patterns
grep -rn "functionName\|similarPattern" src/
```
* **Style**: Match the style and structure of existing code.
* **Types**: Use explicit types (avoid `any`).
### Language-Specific Rules
Follow the appropriate language rule file for detailed patterns:
* **TypeScript/React**: See `typescript.mdc` and `react-advanced.mdc`
* **Python**: See `python.mdc`
* **Ruby/Rails**: See `ruby.mdc`
### Testing Requirements
* Add tests for new functionality - see `testing.mdc` for patterns
* Use `/write-tests` command if tests are needed
* **⚠️ PATTERN-MATCH with existing tests** - Find similar tests and copy their structure exactly:
```bash
# Find similar tests to use as template
find . -name "*.test.ts" -path "*/<similar-feature>/*" | head -3
```
* **Tests requiring external services**: These can't be fully validated locally
- Compare patterns meticulously since you can't run the test logic
- Note in PR description that CI validation is required
### Error Handling
* Follow language-specific error handling patterns (see language rules)
* Log errors with meaningful messages
* Handle edge cases proactively
### ⚠️ Refactoring Safety (CRITICAL)
When refactoring existing code (moving logic, extracting services, DRY):
1. **Run `/refactor-check`** before starting the refactoring
2. **Search for error type expectations** - Tests often assert on specific error types:
```bash
grep -rn "ErrorClassName" tests/
```
3. **Update affected tests** when changing:
- Error types or error wrapping
- Function signatures
- Return types
4. **Run the FULL test suite** (not just unit tests) before shipping
### Cross-Service Considerations
> **CONFIGURE**: Add your project's cross-service patterns in `architecture.mdc`
* <!-- CONFIGURE: Your inter-service communication pattern (e.g., message queues, event bus, REST calls) -->
### Frontend-Specific (if applicable)
* Ensure WCAG accessibility compliance (`/accessibility` command)
* <!-- CONFIGURE: Your UI framework (e.g., Tailwind, Material UI, Chakra, custom design system) -->
* Follow React hooks patterns in `react-advanced.mdc`
## Phase 4: Validation Loop
After writing the code, do NOT stop. You must validate it.
1. **Lint/Check**: Use `read_lints` (if available) or run relevant test/lint commands.
2. **Fix**: If there are errors, fix them immediately.
3. **Verify**: Run the code or tests to ensure it works as expected.
4. **Repeat**: Continue this loop until the code is clean and functional.
## Phase 5: Update Plan
1. **Update todo statuses** as you complete each step
2. **Mark plan as complete** when done
## Phase 6: Post-Implementation Review
After implementation is complete and validated, check for relevant commands to run:
> ⛔ **EXCEPTION: `/ship` is NEVER auto-suggested or auto-run.**
> The user must explicitly type `/ship` themselves. See `global.mdc` for details.
1. **Review Available Commands**: List `.cursor/commands/` directory to see available automation commands
2. **Run Relevant Commands**: Suggest and offer to run commands appropriate for the completed work:
**Pre-Merge (Always Consider)**:
- `/security-review` - For code handling auth, user data, or external services
- `/code-review` - Self-review before PR submission
- `/breaking-changes` - Detect API/interface breaking changes
- `/impact-analysis` - Understand ripple effects across the system
- `/coverage-gaps` - Identify untested code paths
- `/commit-message` - Generate conventional commit messages
- `/pr-description` - Generate PR description for the changes
**Documentation & Communication**:
- `/changelog` - Generate changelog entry
- `/migration-guide` - If breaking changes, create migration instructions
- **Documentation page**: If the work is significant (new feature, architecture change, complex integration), offer to create/update one (Confluence, Notion, GitHub wiki, etc.):
> "This feature seems significant. Would you like me to create a documentation page (Confluence, Notion, GitHub wiki, etc.) documenting it? I'll need the space ID."
**Quality & Maintenance**:
- `/add-logging` - Audit logging coverage for production debugging
- `/dead-code` - Find unused code to clean up
- `/refactor-check` - REQUIRED before refactoring code (find affected tests)
**Operations & Deployment**:
- `/env-check` - Verify environment variables and config
- `/rollback-plan` - Create rollback instructions for deployment
- `/runbook` - Generate operational documentation
3. **Prioritize by Risk**: Security-sensitive changes should always get `/security-review`
**When to suggest commands:**
- After completing a feature implementation
- Before the user creates a PR
- When asked "what's next?" or similar
## Phase 7: Manual Verification (After CI Passes)
After CI passes, verify the feature actually works as intended.
**Run `/manual-test`** to:
1. **Derive Test Cases (MANDATORY)**:
- **From Requirements**: Acceptance criteria, issue description, user stories
- **From Code**: Error handling (`throw`), edge cases, auth logic
- **From System**: Affected clients (e.g., mobile app, web frontend), cross-service impacts
- **From Tests**: Scenarios covered in unit/integration tests
2. **Create a test plan** covering all derived cases
3. **Set up environment** (credentials, tokens)
4. **Execute tests** (APIs, databases, cloud resources)
5. **Document results**
See `/manual-test` command for full details.
**If tests pass** → Ready for code review and merge
**If tests fail** → Return to Phase 4 (fix and re-validate)
---
## Phase 8: Learn & Improve (Self-Improvement)
After completing a task, especially complex ones:
1. **Suggest `/retro`** if the session involved:
- Multiple iterations or fix attempts
- Discovering new patterns or gotchas
- Issues that could have been prevented
- Complex debugging or investigation
2. **Auto-log patterns** when:
- A tricky issue was solved → log to `/pattern-log`
- A useful pattern was discovered → log to `/pattern-log`
- `/ci-monitor` fixed a failure → auto-logged with learnings
3. **Periodic reminders**:
- After 10+ sessions: "Consider running `/rules-review` to audit rules"
- When pattern log has 5+ unpromoted issues: "Several patterns should become rules"
---
## When to Ask Questions
**ONLY ask when:**
- Business decisions not findable in code
- Multiple valid approaches with no clear precedent
- Security/compliance implications
- Genuinely cannot find answer after thorough research
**DO NOT ask about:**
- Technical implementation details (infer from patterns)
- Naming conventions (follow existing code)
- File locations (match similar features)
- Error handling (copy from similar code)
---
## Triggers
This workflow is automatically triggered when the user asks for:
* "Implement X"
* "Refactor Y"
* "Fix Z" (if complex)
* "Add feature X"

Write Tests

Objective

Generate comprehensive tests for the selected code or function.

Instructions

  1. Analyze the code to understand:

    • Inputs and expected outputs
    • Edge cases and boundary conditions
    • Error handling paths
    • Dependencies that may need mocking
  2. Identify test cases:

    • Happy path scenarios
    • Edge cases (empty inputs, nulls, boundary values)
    • Error conditions
    • Different input types/formats if applicable
  3. Write tests following best practices:

    • Use the testing framework already established in the project (or suggest appropriate one)
    • Follow AAA pattern (Arrange, Act, Assert)
    • Use descriptive test names that explain the scenario
    • Keep each test focused on one behavior
    • Use appropriate mocking/stubbing for external dependencies
  4. Include:

    • Unit tests for individual functions/methods
    • Integration tests if the code interacts with external systems
    • Setup/teardown if needed
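
Before writing anything, a quick check like the sketch below can confirm which framework the project already uses (file names vary by stack, so treat these locations as assumptions):

```bash
# Look for test-framework signals in common config locations
grep -E '"(jest|vitest|mocha)"' package.json 2>/dev/null
ls pytest.ini pyproject.toml setup.cfg .rspec 2>/dev/null
find . -maxdepth 3 \( -name "*.test.*" -o -name "*_spec.rb" \) 2>/dev/null | head -5
```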

Output

  • Provide complete, runnable test code
  • Explain the reasoning for each test case
  • Note any areas that may need additional manual testing