@bradleygolden
Created January 22, 2026 15:42
# Ralph Loop Reference Guide
A conceptual guide to autonomous AI coding loops.
---
## Core Philosophy
### The Problem: Context Rot
Long-running AI sessions accumulate noise in the context window:
- Failed attempts and dead ends
- Stale information about file states
- Debugging artifacts
- Irrelevant conversation history
As context fills with rot, output quality degrades.
### The Solution: Files Are Memory
Ralph's insight: **Don't use the context window as memory. Use the filesystem.**
- Each iteration starts with **fresh context**
- State persists in **git history** and **progress files**
- The agent reads previous work from files, not conversation history
- Context rotates **before** pollution accumulates
Git is the memory layer. The context window is disposable.
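The rotation described above can be sketched as a plain shell loop. Everything here is illustrative: `run_agent` is a hypothetical stub standing in for your real CLI agent invocation, and the iteration cap is just a safety valve. The point is that each pass is a fresh session whose only inputs are PROMPT.md, progress.txt, and the repo.

```sh
#!/bin/sh
# Minimal Ralph loop sketch. `run_agent` is a hypothetical placeholder;
# substitute your actual AI CLI invocation.
run_agent() {
  # A FRESH session each call: its only inputs are files, never prior chat.
  echo "agent run $i: read $(wc -l < PROMPT.md) prompt lines" >> progress.txt
}

[ -f PROMPT.md ] || printf '# Task\n' > PROMPT.md   # demo fixture
touch progress.txt

i=0
while [ ! -f DONE ] && [ "$i" -lt 10 ]; do   # stop on a DONE marker or safety cap
  i=$((i + 1))
  run_agent < PROMPT.md
  # Commit so the NEXT fresh session can see progress (no-op outside a repo).
  git add -A >/dev/null 2>&1 && git commit -q -m "ralph: iteration $i" || true
done
```

A real loop would replace the DONE-marker check with a machine-verifiable completion script, as described below under Completion Criteria.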
---
## What the Loop Depends On
### PROMPT.md
The task definition. Read at the start of each iteration. Contains:
- What needs to be accomplished
- Completion criteria (machine-verifiable)
- Any constraints or conventions
### progress.txt
Append-only log of learnings and status. Each iteration:
1. Reads this to understand current state
2. Appends what was accomplished and learned
3. Notes any blockers
This short-circuits codebase exploration—the agent sees what's done without re-discovering it.
### AGENTS.md / CLAUDE.md
Persistent patterns and gotchas that survive across all iterations:
- Project conventions
- Known pitfalls
- Discoveries from previous runs
Most coding agents read this file automatically at the start of a session. It becomes institutional knowledge.
### Git History
The source of truth. Each iteration should commit meaningful progress. Future iterations see what changed by examining commits, not by relying on context.
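The whole of an iteration's "memory" is readable in two commands. This sketch builds a throwaway demo repo (names and commit messages invented for illustration) so it runs standalone:

```sh
#!/bin/sh
# Demo: a fresh iteration rebuilds its picture of the work from files,
# never from prior conversation. The demo repo exists only so this runs.
git init -q demo
git -C demo -c user.email=ralph@example.com -c user.name=ralph \
  commit -q --allow-empty -m "iteration 1: scaffold API routes"
printf -- '- Next: add request validation\n' > demo/progress.txt

# What the agent actually reads at the start of an iteration:
git -C demo log --oneline -5     # what changed, from commits
tail -n 20 demo/progress.txt     # what remains, from the progress log
```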
---
## Scaffolding PROMPT.md
### Structure
```markdown
# [Task Name]
## Objective
[What needs to be accomplished—one clear goal]
## Context
[Relevant background the agent needs to know]
## Requirements
[Specific deliverables, bullet points]
## Completion Criteria
[Machine-verifiable conditions for "done"]
## Constraints
[What NOT to do, boundaries, conventions to follow]
```
### Completion Criteria
The loop needs to know when to stop. Criteria must be **verifiable without human judgment**:
- "All tests pass"
- "No TypeScript errors"
- "Coverage >80%"
- "Endpoint returns 200 for valid input"
Avoid: "Code is clean" / "API is good" / "Works well"
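One way to make criteria enforceable is a single check script that exits 0 only when every criterion holds; the loop (or the agent) runs it to decide whether to stop. The checks below are demo stand-ins with fixture files created inline so the sketch runs standalone; the commented-out lines show where real verifiers would go.

```sh
#!/bin/sh
# check_done.sh (hypothetical name) -- exit 0 only if ALL criteria pass.
set -e

# Demo fixtures so this sketch runs standalone; a real project has these.
printf '# Task\n## Objective\nShip the feature\n' > PROMPT.md
touch progress.txt

check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"; exit 1
  fi
}

check "progress log exists"     test -f progress.txt
check "prompt has an objective" grep -q "## Objective" PROMPT.md
# Real, project-specific criteria go here, e.g.:
#   check "all tests pass"   npm test --silent
#   check "no type errors"   npx tsc --noEmit
echo "all criteria met"
```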
### Task Sizing
Each task must be completable in **one context window**. If it's too big:
- Quality degrades as context fills
- Agent loses track mid-implementation
Break large work into sequential tasks, each with its own completion criteria.
### Escape Hatches
Include instructions for when the agent gets stuck:
```markdown
## If Blocked
- Document the blocker in progress.txt
- List what was attempted
- Note what a human should investigate
```
---
## Progress File Pattern
```markdown
=== Iteration 1 ===
- Completed: [what was done]
- Learned: [discovery that helps future iterations]
- Next: [what remains]
=== Iteration 2 ===
- Completed: [what was done]
- Blocked: [if applicable, what's preventing progress]
- Learned: [new discovery]
```
The agent reads this, skips to "Next", and continues. No archaeology required.
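Keeping the log append-only takes little more than a redirect. A small helper like this (the script name and usage are hypothetical; defaults fill in when arguments are omitted) is enough for the agent or a wrapper script to call after each run:

```sh
#!/bin/sh
# log_iteration.sh (hypothetical name) -- append one entry to progress.txt.
# Usage: log_iteration.sh <n> "<completed>" "<learned>" "<next>"
{
  echo "=== Iteration ${1:-1} ==="
  echo "- Completed: ${2:-nothing recorded}"
  echo "- Learned: ${3:-nothing recorded}"
  echo "- Next: ${4:-unknown}"
  echo
} >> progress.txt   # always append, never overwrite
```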
---
## AGENTS.md Pattern
```markdown
# Project Conventions
## Patterns
[How things are done in this codebase]
## Gotchas
[Pitfalls to avoid—things that look right but break]
## Learnings
[Accumulated discoveries from iterations]
```
Update this when the agent discovers something that would trip up future iterations.
---
## When Ralph Works Well
- Clear, measurable completion criteria
- Tasks with built-in verification (tests, linters, type checking)
- Greenfield work with defined specs
- Refactoring with existing test coverage
- Mechanical transformations across many files
## When Ralph Doesn't Work
- Subjective goals ("make it better")
- Tasks requiring human judgment or taste
- Unclear or shifting requirements
- Security-sensitive code needing careful review
- Exploratory work without defined endpoints
---
## Troubleshooting Concepts
### Loop won't converge
- Criteria may not be achievable
- Task may be too large for single context
- Missing information the agent can't discover
### Same errors repeating
- Learnings not being recorded to progress.txt or AGENTS.md
- Fundamental blocker needs human intervention
### Quality degrading within iterations
- Task too large, context filling up
- Break into smaller pieces
---
## Key Principles
1. **Files are memory** — Context is disposable, filesystem persists
2. **Fresh starts beat accumulated context** — Rotate before rot
3. **Verifiable criteria** — The loop must know when to stop
4. **Right-sized tasks** — Fit in one context window
5. **Record learnings** — Future iterations benefit from past discoveries
---
## Resources
- [ghuntley.com/ralph/](https://ghuntley.com/ralph/) — Original concept
- [ghuntley.com/loop/](https://ghuntley.com/loop/) — Philosophy deep-dive
- [awesome-ralph](https://github.com/snwfdhmp/awesome-ralph) — Curated resources