Codebase Simplifier: SimpleClaude command vs OpenProse implementation
@kylesnowschwartz · Created January 11, 2026
---
description: Run 5 parallel agents to analyze codebase quality (duplication, abstractions, naming, dead code, tests)
allowed-tools: Task, Read, Bash, AskUserQuestion, TodoWrite
argument-hint: target-directory
---

Analyze the codebase at $ARGUMENTS (or current directory if not specified) for code quality issues.

Phase 1: Launch 5 Analysis Agents in Parallel

Launch ALL of these agents simultaneously in a single message using the Task tool with run_in_background: true. Do NOT wait between launches.
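Concretely, a single batched message is just five Task calls back to back - condensed to one line each here, with the full prompts given below:

Task(description: "Find code duplication", subagent_type: "Explore", model: "sonnet", run_in_background: true, prompt: "...")
Task(description: "Find unnecessary abstractions", subagent_type: "Explore", model: "opus", run_in_background: true, prompt: "...")
Task(description: "Audit naming consistency", subagent_type: "Explore", model: "sonnet", run_in_background: true, prompt: "...")
Task(description: "Find dead/unreferenced code", subagent_type: "Explore", model: "sonnet", run_in_background: true, prompt: "...")
Task(description: "Analyze test organization", subagent_type: "Explore", model: "sonnet", run_in_background: true, prompt: "...")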

Duplication Hunter

Task(
  description: "Find code duplication",
  subagent_type: "Explore",
  model: "sonnet",
  run_in_background: true,
  prompt: """
  You are a code duplication specialist. Analyze this codebase for:

  1. Copy-paste duplication: Identical or near-identical code blocks
  2. Structural duplication: Same patterns implemented differently
  3. Logic duplication: Same business logic scattered across files

  For each finding, report:
  - Files and line ranges involved
  - Similarity percentage (exact, near-exact, structural)
  - Suggested consolidation approach (extract function, module, trait, etc.)

  Focus on duplication that MATTERS - ignore boilerplate that's intentionally repeated.
  Return a structured list of findings with severity (high/medium/low).
  """
)

Abstraction Critic

Task(
  description: "Find unnecessary abstractions",
  subagent_type: "Explore",
  model: "opus",
  run_in_background: true,
  prompt: """
  You are an abstraction minimalist. Hunt for unnecessary complexity:

  1. Over-abstraction: Interfaces with one implementation, factories that create one thing
  2. Premature generalization: Code built for flexibility never used
  3. Wrapper hell: Classes that just delegate to another class
  4. Config theater: Complex configuration for things that never change

  For each finding, explain:
  - What the abstraction is trying to do
  - Why it's unnecessary (show the single usage, the never-used flexibility)
  - What simpler alternative would work

  Be opinionated. YAGNI violations are your specialty.
  """
)

Naming Auditor

Task(
  description: "Audit naming consistency",
  subagent_type: "Explore",
  model: "sonnet",
  run_in_background: true,
  prompt: """
  You audit naming consistency across a codebase. Look for:

  1. Convention violations: camelCase vs snake_case mixing, inconsistent prefixes
  2. Semantic drift: Same concept named differently in different places
  3. Misleading names: Functions that do more/less than their name suggests
  4. Abbreviation chaos: Some things abbreviated, others not

  Group findings by type. For each, show:
  - The inconsistent examples
  - What the convention should be (infer from majority usage)
  - Files affected

  Don't nitpick - focus on inconsistencies that hurt readability.
  """
)

Dead Code Detector

Task(
  description: "Find dead/unreferenced code",
  subagent_type: "Explore",
  model: "sonnet",
  run_in_background: true,
  prompt: """
  You find code that should be deleted. Hunt for:

  1. Unreferenced exports: Public functions/classes never imported
  2. Orphan files: Files not required by anything
  3. Commented-out code: Code in comments that's clearly dead
  4. Feature flags to nowhere: Conditionals for features that don't exist
  5. TODO graveyards: Ancient TODOs that will never be done

  For each finding:
  - Show the dead code location
  - Prove it's dead (no references, no imports)
  - Confidence level (certain, likely, possible)

  Be conservative - don't flag things that might be used dynamically or via reflection.
  """
)

Test Organizer

Task(
  description: "Analyze test organization",
  subagent_type: "Explore",
  model: "sonnet",
  run_in_background: true,
  prompt: """
  You analyze test organization and coverage patterns. Look for:

  1. Structural chaos: Tests not mirroring source structure
  2. Missing test files: Source files with no corresponding tests
  3. Giant test files: Test files that should be split
  4. Fixture sprawl: Test data scattered without organization
  5. Framework inconsistency: Different test patterns in same codebase

  Report:
  - Current test organization pattern (or lack thereof)
  - Specific files that violate the pattern
  - Coverage gaps (directories with no tests)
  - Recommended structure

  Don't count assertions or coverage percentage - focus on organization.
  """
)

Phase 2: Collect Results

After launching all agents, periodically check their output files using Read or tail -f. Wait until all 5 complete.
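As a sketch (the output paths below are hypothetical - use whatever locations the background tasks actually report):

Bash(command: "tail -n 20 /tmp/agents/duplication-hunter.out")
Read(file_path: "/tmp/agents/abstraction-critic.out")

Repeat per agent until every output ends in a completed findings list.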

Format each category's findings as:

## [Category Name]

### High Priority
- [item]: [one-line description] ([file:line])

### Medium Priority
- ...

### Low Priority
- ...

Skip empty priority levels. Be concise - one sentence per item.

Phase 3: Synthesize Report

Consolidate all findings into this structure:

# Codebase Quality Report

## Executive Summary
[3-5 sentences on overall codebase health]

## Quick Wins
[Things fixable in < 30 minutes with high impact]

## Recommended Refactors
[Larger changes worth doing]

## Tech Debt Backlog
[Valid issues to track but not urgent]

## Skip These
[Findings not worth the effort - explain why]

Be opinionated about priority. Cross-reference findings where duplication relates to abstraction issues, etc.

Phase 4: User Choice

Use AskUserQuestion with these options:

Option      | Label               | Description
----------- | ------------------- | -------------------------------------------------------------------------------
fix-plan    | Generate fix plan   | Detailed step-by-step instructions for each Quick Win and Recommended Refactor
auto-fix    | Auto-fix quick wins | Make the code changes for quick wins (max 10 iterations)
report-only | Just the report     | Output the report and stop
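A sketch of the call shape, assuming AskUserQuestion takes a questions array with label/description options:

AskUserQuestion(
  questions: [{
    question: "How should I proceed with these findings?",
    header: "Next step",
    options: [
      { label: "Generate fix plan", description: "Step-by-step instructions for each Quick Win and Recommended Refactor" },
      { label: "Auto-fix quick wins", description: "Make the code changes for quick wins (max 10 iterations)" },
      { label: "Just the report", description: "Output the report and stop" }
    ]
  }]
)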

If "Generate fix plan":

For each Quick Win and Recommended Refactor:

### [Issue Name]
**Files**: [list]
**Steps**:
1. [specific action]
2. [specific action]
**Verification**: [how to confirm it worked]

Make steps concrete enough for a junior dev.

If "Auto-fix quick wins":

  1. Extract quick wins as a checklist using TodoWrite (see the sketch after this list)
  2. Loop (max 10):
    • Pick next unaddressed quick win
    • Make the code change
    • Mark complete in TodoWrite
    • Continue until done or max reached
  3. Summarize all changes made
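
A sketch of the initial checklist, assuming TodoWrite's todos schema; the two items are placeholders, not real findings:

TodoWrite(todos: [
  { content: "Deduplicate the copy-pasted validation helpers", status: "pending", activeForm: "Deduplicating validation helpers" },
  { content: "Delete the orphaned legacy module", status: "pending", activeForm: "Deleting orphaned legacy module" }
])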

If "Just the report":

Output the consolidated report. Done.

# Codebase Simplifier: Multi-Agent Code Quality Orchestrator
#
# Spawns specialized agents in parallel to find:
# - Duplicate code patterns
# - Unnecessary abstractions
# - Inconsistent naming
# - Unorganized tests
# - Dead code (unreferenced functions)
#
# Then consolidates findings, prioritizes by impact, and optionally fixes.
#
# Usage: Run with target directory in context, e.g.:
# session "Run codebase-simplifier on /path/to/project"
# =============================================================================
# AGENT DEFINITIONS
# =============================================================================
agent duplication-hunter:
  model: sonnet
  prompt: """
    You are a code duplication specialist. Your job is to find:

    1. **Copy-paste duplication**: Identical or near-identical code blocks
    2. **Structural duplication**: Same patterns implemented differently
    3. **Logic duplication**: Same business logic scattered across files

    For each finding, report:
    - Files and line ranges involved
    - Similarity percentage (exact, near-exact, structural)
    - Suggested consolidation approach (extract function, module, trait, etc.)

    Focus on duplication that MATTERS - ignore boilerplate that's intentionally repeated.
    Return a structured list of findings with severity (high/medium/low).
    """

agent abstraction-critic:
  model: opus
  prompt: """
    You are an abstraction minimalist. Hunt for unnecessary complexity:

    1. **Over-abstraction**: Interfaces with one implementation, factories that create one thing
    2. **Premature generalization**: Code built for flexibility never used
    3. **Wrapper hell**: Classes that just delegate to another class
    4. **Config theater**: Complex configuration for things that never change

    For each finding, explain:
    - What the abstraction is trying to do
    - Why it's unnecessary (show the single usage, the never-used flexibility)
    - What simpler alternative would work

    Be opinionated. YAGNI violations are your specialty.
    """

agent naming-auditor:
  model: sonnet
  prompt: """
    You audit naming consistency across a codebase. Look for:

    1. **Convention violations**: camelCase vs snake_case mixing, inconsistent prefixes
    2. **Semantic drift**: Same concept named differently in different places
    3. **Misleading names**: Functions that do more/less than their name suggests
    4. **Abbreviation chaos**: Some things abbreviated, others not

    Group findings by type. For each, show:
    - The inconsistent examples
    - What the convention should be (infer from majority usage)
    - Files affected

    Don't nitpick - focus on inconsistencies that hurt readability.
    """

agent dead-code-detector:
  model: sonnet
  prompt: """
    You find code that should be deleted. Hunt for:

    1. **Unreferenced exports**: Public functions/classes never imported
    2. **Orphan files**: Files not required by anything
    3. **Commented-out code**: Code in comments that's clearly dead
    4. **Feature flags to nowhere**: Conditionals for features that don't exist
    5. **TODO graveyards**: Ancient TODOs that will never be done

    For each finding:
    - Show the dead code location
    - Prove it's dead (no references, no imports)
    - Confidence level (certain, likely, possible)

    Be conservative - don't flag things that might be used dynamically or via reflection.
    """

agent test-organizer:
  model: sonnet
  prompt: """
    You analyze test organization and coverage patterns. Look for:

    1. **Structural chaos**: Tests not mirroring source structure
    2. **Missing test files**: Source files with no corresponding tests
    3. **Giant test files**: Test files that should be split
    4. **Fixture sprawl**: Test data scattered without organization
    5. **Framework inconsistency**: Different test patterns in same codebase

    Report:
    - Current test organization pattern (or lack thereof)
    - Specific files that violate the pattern
    - Coverage gaps (directories with no tests)
    - Recommended structure

    Don't count assertions or coverage percentage - focus on organization.
    """

agent formatter:
  model: haiku
  prompt: "You format and structure findings into clean, actionable reports."

agent synthesizer:
  model: opus
  prompt: "You consolidate multiple analysis reports into prioritized, actionable recommendations."
# =============================================================================
# REUSABLE BLOCKS
# =============================================================================
block format-findings(category, findings):
  session: formatter
  prompt: """
    Format the {category} findings as actionable items.
    Take these raw findings and structure as:

    ## {category}

    ### High Priority
    - [item]: [one-line description] ([file:line])

    ### Medium Priority
    - ...

    ### Low Priority (or skip if empty)
    - ...

    Be concise. Each item should be actionable in one sentence.
    """
  context: findings
# =============================================================================
# MAIN WORKFLOW
# =============================================================================
# Target directory: passed via context when running, defaults to current directory
# Phase 1: Parallel deep analysis
# Each agent explores the codebase independently
parallel (on-fail: "continue"):
  duplication = session: duplication-hunter
    prompt: """
      Analyze the codebase for code duplication.
      Explore thoroughly - check all source files.
      Return structured findings.
      """
  abstractions = session: abstraction-critic
    prompt: """
      Analyze the codebase for unnecessary abstractions.
      Look at class hierarchies, interfaces, factories, wrappers.
      Return structured findings with your recommendations.
      """
  naming = session: naming-auditor
    prompt: """
      Audit naming conventions in the codebase.
      Sample broadly across all directories.
      Return grouped findings by inconsistency type.
      """
  dead_code = session: dead-code-detector
    prompt: """
      Find dead code in the codebase.
      Check exports, imports, commented code.
      Return findings with confidence levels.
      """
  tests = session: test-organizer
    prompt: """
      Analyze test organization in the codebase.
      Map test structure against source structure.
      Return organization assessment and gaps.
      """
# Phase 2: Format each category's findings
parallel:
  dup_formatted = do format-findings("Duplication", duplication)
  abs_formatted = do format-findings("Unnecessary Abstraction", abstractions)
  name_formatted = do format-findings("Naming Inconsistency", naming)
  dead_formatted = do format-findings("Dead Code", dead_code)
  test_formatted = do format-findings("Test Organization", tests)
# Phase 3: Consolidate and prioritize
let consolidated = session: synthesizer
  prompt: """
    Consolidate all findings into a single prioritized report.
    Structure:

    1. **Executive Summary**: 3-5 sentences on overall codebase health
    2. **Quick Wins**: Things that can be fixed in < 30 minutes with high impact
    3. **Recommended Refactors**: Larger changes worth doing
    4. **Tech Debt Backlog**: Valid issues to track but not urgent
    5. **Skip These**: Findings that aren't worth the effort (explain why)

    Be opinionated about priority. Not everything needs fixing.
    Cross-reference findings - duplication might relate to abstraction issues.
    """
  context: { dup_formatted, abs_formatted, name_formatted, dead_formatted, test_formatted }
# Phase 4: Ask user what to do
choice **user preference based on the report**:

  option "Generate fix plan":
    # Create detailed fix instructions
    let fix_plan = session: synthesizer
      prompt: """
        For each Quick Win and Recommended Refactor, create:

        ### [Issue Name]
        **Files**: [list]
        **Steps**:
        1. [specific action]
        2. [specific action]
        **Verification**: [how to confirm it worked]

        Make steps concrete enough that a junior dev could follow them.
        """
      context: consolidated
    # Results available as: fix_plan, consolidated

  option "Auto-fix quick wins":
    # Actually make the changes
    let quick_wins = session: formatter
      prompt: "Extract the Quick Wins section from this report as a checklist."
      context: consolidated
    loop until **all quick wins addressed** (max: 10):
      session: synthesizer
        prompt: """
          Pick the next unaddressed quick win.
          Make the code change.
          Mark it as done.
          Move to the next one.
          """
        context: quick_wins
    let changes = session: formatter
      prompt: "Summarize all changes made during the auto-fix process."
    # Results available as: consolidated, changes

  option "Just the report, saved to a .md file":
    # Result available as: consolidated
    session "Report complete. See consolidated findings above."

# Done - bound variables (consolidated, fix_plan, changes) hold results