@enzokro · Created January 16, 2026 15:41

Recursive Memory Agent with claudette

Recursive Self-Updating Context System v3

The Core Insight

Standard chat treats context as knowledge - what’s in the window is what the model knows.

This system treats context as attention - the window holds what the model is currently attending to, while understanding persists in a structured, curated representation.

The scratchpad is not a log of what happened. It’s a live model of current understanding.

Design Principles

  1. Continuous curation over reactive compaction - Every turn, actively decide what understanding to maintain. Don’t wait for context overflow.
  2. Separation of cognition modes - Responding (being in the world) and reflecting (observing oneself in the world) are distinct. Don’t conflate them.
  3. Self-modeling is first-class - The agent maintains not just facts but metadata about its own epistemic state. Confidence, uncertainty, attention priorities.
  4. Structure enables surgical updates - Flat text drifts. Explicit sections with clear purposes enable precise maintenance.
  5. Buildable over complete - Ship what works, extend from there. This is not a research paper.

Architecture

The Cognitive Loop

┌─────────────────────────────────────────────────────────────┐
│                                                             │
│    ┌──────────────┐         ┌──────────────────────────┐   │
│    │  SCRATCHPAD  │         │      USER INPUT          │   │
│    │  (what I     │    +    │      (what's happening   │   │
│    │   understand)│         │       right now)         │   │
│    └──────┬───────┘         └───────────┬──────────────┘   │
│           │                             │                   │
│           └──────────┬──────────────────┘                   │
│                      ▼                                      │
│           ┌──────────────────────┐                          │
│           │   PHASE 1: RESPOND   │                          │
│           │   (agent in world)   │                          │
│           └──────────┬───────────┘                          │
│                      │                                      │
│                      ▼                                      │
│           ┌──────────────────────┐                          │
│           │      RESPONSE        │────────────► User        │
│           └──────────┬───────────┘                          │
│                      │                                      │
│                      ▼                                      │
│           ┌──────────────────────┐                          │
│           │   PHASE 2: REFLECT   │                          │
│           │   (observer of self) │                          │
│           └──────────┬───────────┘                          │
│                      │                                      │
│                      ▼                                      │
│           ┌──────────────────────┐                          │
│           │    STATE UPDATE      │                          │
│           │    (tool call)       │                          │
│           └──────────┬───────────┘                          │
│                      │                                      │
│                      ▼                                      │
│           ┌──────────────────────┐                          │
│           │   MERGE INTO         │                          │
│           │   SCRATCHPAD         │                          │
│           └──────────────────────┘                          │
│                      │                                      │
│                      └─────────► (next turn)                │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Message Structure

The context window for Phase 1 contains exactly:

[System Prompt]     - Protocol (stable, cached)
[Scratchpad]        - Current understanding (mutates between turns)
[User Input]        - This turn's input (replaced each turn)

No accumulated history. No prior messages. The scratchpad IS the memory.
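
As a rough sketch, the Phase 1 request reduces to three messages; the <current_understanding> wrapper and the "Understood." acknowledgement are how build_respond_context (shown below) frames the scratchpad, and the contents and variable name here are purely illustrative:

# Hypothetical Phase 1 message list - nothing carries over from prior turns.
phase1_messages = [
    {"role": "user",      "content": "<current_understanding>\n## IDENTITY\n...\n</current_understanding>"},
    {"role": "assistant", "content": "Understood."},
    {"role": "user",      "content": "What about handling missing values?"},
]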

The Scratchpad

Five sections, each with distinct stability and purpose:

## IDENTITY
<!-- Stability: HIGHEST | Changes: Initialization only -->
### Purpose
[What this agent instance exists to do]

### User
[Who I'm working with - stable traits that inform how I engage]

### Boundaries
[What I will and won't do - scope, constraints, limits]

---

## UNDERSTANDING
<!-- Stability: MEDIUM | The knowledge layer -->
### Known
[Facts I'm confident about]
- (user) "..." - Direct from user
- (inferred) "..." - I concluded this
- (external) "..." - From tools/search

### Believed
[Working assumptions - operating as if true, might be wrong]

### Unknown
[Explicit uncertainties - things I need but don't have]

---

## TRAJECTORY
<!-- Stability: LOW | Changes most turns -->
### Now
[What we're actively doing RIGHT NOW - single focus]

### Path
[How we got here - compressed causal chain, not transcript]
<!-- "User asked X → I tried Y → learned Z → now doing W" -->

### Later
[Threads explicitly deferred - with reason and trigger]

---

## WORKSPACE
<!-- Stability: VARIABLE | Task artifacts -->
[Intermediate outputs, data, code, structured content]
[This is the "working memory" - grows during tasks, clears between them]

---

## SELF
<!-- Stability: LOW | Metacognition -->
### Confidence
[HIGH | MEDIUM | LOW] - [Why]

### Attention
[What I'm prioritizing and why]

### Flags
[Signals for future self or orchestration]
- UNCERTAIN: <what and why>
- PROTECT: <high-value item to preserve>
- REVISIT: <decision that might need changing>
- SURFACE: <something user should know>

Section rationale:

  • IDENTITY - Who am I, who are you, what’s in scope. Set once, rarely touched. This is ANCHOR from v2, renamed for clarity.
  • UNDERSTANDING - Epistemic state. What I know, believe, and don’t know. The explicit Unknown section forces acknowledgment of gaps rather than confabulation.
  • TRAJECTORY - Temporal structure without logs. Now (singular focus), Path (compressed history), Later (deferred threads). This replaces v2’s Thread/Decisions/Parked with clearer semantics.
  • WORKSPACE - The scratch area. Task artifacts live here. This is the primary growth zone and compression target.
  • SELF - The metacognitive layer. Not facts about the world, but facts about my own cognitive state. This is what enables genuine self-modeling.

Two-Phase Generation

Phase 1: RESPOND

System prompt optimized for natural engagement:

RESPONSE_SYSTEM = """You are an AI assistant with persistent understanding.

Your context contains your current scratchpad - a structured representation of
what you understand about this conversation, this user, and your ongoing work.

This is not a transcript. It's your live model of the situation.

Trust it. Build on it. Respond naturally.

After you respond, a separate reflection process will update your understanding.
You don't need to manage memory - just be present and helpful."""

Phase 2: REFLECT

System prompt optimized for metacognition:

REFLECT_SYSTEM = """You are a reflection process observing an AI assistant.

You just saw an exchange:
- The assistant's prior understanding (scratchpad)
- The user's input
- The assistant's response

Your job: Update the scratchpad to reflect what changed.

PRINCIPLES:
1. MINIMAL - Most turns only need TRAJECTORY.Now and maybe WORKSPACE
2. HONEST - If confidence dropped, say so. If something's unclear, mark it Unknown.
3. FORWARD-LOOKING - Curate for future usefulness, not historical completeness
4. PRESERVE WHAT MATTERS - Don't lose critical context to save tokens

Call update_scratchpad with ONLY the sections that need to change.
If nothing meaningful changed, call with no arguments."""

State Update Tool

def update_scratchpad(
    # IDENTITY (rare)
    identity_purpose: str = None,
    identity_user: str = None,
    identity_boundaries: str = None,

    # UNDERSTANDING (occasional)
    understanding_known: str = None,      # APPEND: or REPLACE
    understanding_believed: str = None,
    understanding_unknown: str = None,

    # TRAJECTORY (most turns)
    trajectory_now: str = None,           # Usually updated
    trajectory_path: str = None,          # APPEND: to extend
    trajectory_later: str = None,

    # WORKSPACE (task-dependent)
    workspace: str = None,                # Often REPLACE or APPEND

    # SELF (frequent)
    self_confidence: str = None,          # HIGH|MEDIUM|LOW - reason
    self_attention: str = None,
    self_flags: str = None                # APPEND: or REPLACE
) -> dict:
    """
    Update scratchpad sections.

    Pass only sections that changed.
    Prefix with "APPEND: " to add to existing content.
    Pass exactly "CLEAR" to empty a section.
    Otherwise, value replaces entire section.
    """
    return {k: v for k, v in locals().items() if v is not None}
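
For example, a typical reflection after one debugging turn might touch only trajectory and confidence; the values below are hypothetical:

# A hypothetical minimal update: only changed sections are passed, and APPEND
# extends TRAJECTORY.Path rather than replacing it.
update_scratchpad(
    trajectory_now="Fixing date parsing in the CSV loader",
    trajectory_path="APPEND: → narrowed the bug to dtype inference",
    self_confidence="HIGH - root cause identified",
)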

Bootstrap Scratchpad

BOOTSTRAP = """## IDENTITY
### Purpose
{purpose}

### User
[To be established through interaction]

### Boundaries
[None declared]

---

## UNDERSTANDING
### Known
(none yet)

### Believed
(none yet)

### Unknown
- User's immediate goal
- User's context and constraints
- What success looks like

---

## TRAJECTORY
### Now
Initialization - ready for first real input

### Path
(just started)

### Later
(none)

---

## WORKSPACE
(empty)

---

## SELF
### Confidence
MEDIUM - Fresh start, no information yet

### Attention
Establishing user's needs and context

### Flags
(none)
"""

Implementation

File Structure

recursive_context/
├── __init__.py          # Public API
├── core.py              # RecursiveChat class
├── scratchpad.py        # Schema, bootstrap, section definitions
├── phases.py            # Response and reflection logic
├── update.py            # update_scratchpad tool + merge logic
├── prompts.py           # System prompts
├── validate.py          # Validation, recovery
└── trace.py             # Shadow logging
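
Most of these modules appear below; __init__.py stays thin. A plausible sketch, assuming RecursiveChat is the only public export:

# __init__.py (sketch) - re-export the public API used in the Usage section.
from .core import RecursiveChat

__all__ = ['RecursiveChat']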

Core Class

from dataclasses import dataclass, field
from typing import Optional
from claudette import Chat, contents

@dataclass
class RecursiveChat:
    """
    Claudette wrapper implementing continuous curation.

    Context = Attention, not Knowledge.
    The scratchpad is a live model of current understanding.
    """

    # Configuration
    model: str = 'claude-sonnet-4-5-20250929'
    reflect_model: Optional[str] = None  # Defaults to model; can use cheaper
    purpose: str = "General assistant"
    tools: list = field(default_factory=list)

    # Behavioral flags
    two_phase: bool = True      # False skips the reflection pass (faster, no automatic state updates)
    trace: bool = True          # Shadow logging for debugging

    # State
    _scratchpad: str = field(default=None, init=False)
    _turn: int = field(default=0, init=False)
    _trace_log: list = field(default_factory=list, init=False)

    # Internals
    _respond_chat: Chat = field(default=None, init=False)
    _reflect_chat: Chat = field(default=None, init=False)

    def __post_init__(self):
        from .scratchpad import make_bootstrap
        from .prompts import RESPONSE_SYSTEM, REFLECT_SYSTEM

        self._scratchpad = make_bootstrap(self.purpose)
        self._respond_chat = Chat(self.model, sp=RESPONSE_SYSTEM)

        if self.two_phase:
            reflect_model = self.reflect_model or self.model
            self._reflect_chat = Chat(reflect_model, sp=REFLECT_SYSTEM)

    def __call__(self, user_input: str, **kwargs) -> str:
        """Process input through the cognitive loop."""
        self._turn += 1
        scratchpad_before = self._scratchpad

        # Phase 1: Respond
        response = self._respond(user_input, **kwargs)

        # Phase 2: Reflect (if enabled)
        if self.two_phase:
            self._reflect(user_input, response)

        # Trace
        if self.trace:
            self._log_turn(user_input, response, scratchpad_before)

        return response

    def _respond(self, user_input: str, **kwargs) -> str:
        """Phase 1: Generate response from current understanding."""
        from .phases import build_respond_context

        # Build minimal context: scratchpad + user input
        context = build_respond_context(self._scratchpad, user_input)

        # Clear history, set fresh context
        self._respond_chat.h = context

        # Generate (with tools if provided)
        result = self._respond_chat(tools=self.tools, **kwargs)

        return contents(result)

    def _reflect(self, user_input: str, response: str):
        """Phase 2: Observe exchange and update understanding."""
        from .phases import build_reflect_context
        from .update import update_scratchpad
        from .validate import safe_apply

        # Build reflection context
        context = build_reflect_context(
            self._scratchpad, user_input, response
        )

        # Fresh history for reflection
        self._reflect_chat.h = context

        # Force tool call
        result = self._reflect_chat(
            tools=[update_scratchpad],
            tool_choice='update_scratchpad'
        )

        # Extract updates from tool call
        updates = self._extract_updates(result)

        # Apply with validation
        self._scratchpad = safe_apply(self._scratchpad, updates)

    def _extract_updates(self, result) -> dict:
        """Extract update_scratchpad arguments from reflection result."""
        from anthropic.types import ToolUseBlock

        for block in result.content:
            if isinstance(block, ToolUseBlock):
                if block.name == 'update_scratchpad':
                    return block.input
        return {}

    def _log_turn(self, user_input: str, response: str, before: str):
        """Shadow log for debugging."""
        self._trace_log.append({
            'turn': self._turn,
            'input': user_input,
            'response': response,
            'scratchpad_before': before,
            'scratchpad_after': self._scratchpad,
        })

    # --- Public API ---

    @property
    def scratchpad(self) -> str:
        """Current understanding."""
        return self._scratchpad

    @property
    def turn(self) -> int:
        """Number of turns processed."""
        return self._turn

    @property
    def cost(self) -> float:
        """Total cost across both phases."""
        total = self._respond_chat.cost
        if self.two_phase and self._reflect_chat:
            total += self._reflect_chat.cost
        return total

    def export_trace(self, path: str):
        """Export shadow log for debugging."""
        import json
        with open(path, 'w') as f:
            json.dump(self._trace_log, f, indent=2)

    def inject(self, section: str, content: str):
        """Manually inject content into a scratchpad section."""
        from .update import apply_updates
        updates = {section: content}
        self._scratchpad = apply_updates(self._scratchpad, updates)

Context Building

# phases.py

from claudette import mk_msgs

def build_respond_context(scratchpad: str, user_input: str) -> list:
    """Build Phase 1 context: scratchpad + user input."""

    scratchpad_msg = f"""<current_understanding>
{scratchpad}
</current_understanding>

The above is your current understanding. Respond to the user naturally."""

    # Three messages: scratchpad setup, an assistant ack, then this turn's user input
    return mk_msgs([scratchpad_msg, "Understood.", user_input])


def build_reflect_context(scratchpad: str, user_input: str, response: str) -> list:
    """Build Phase 2 context: scratchpad + exchange summary."""

    reflect_prompt = f"""<prior_understanding>
{scratchpad}
</prior_understanding>

<exchange>
USER: {user_input}

ASSISTANT: {response}
</exchange>

Review this exchange. What changed in the assistant's understanding?
Call update_scratchpad with any sections that need updating.
Most turns only need trajectory_now and possibly self_confidence."""

    return mk_msgs([reflect_prompt])

Update Logic

# update.py

import re
from typing import Optional

# Section paths for surgical updates
SECTIONS = {
    'identity_purpose': ('IDENTITY', 'Purpose'),
    'identity_user': ('IDENTITY', 'User'),
    'identity_boundaries': ('IDENTITY', 'Boundaries'),
    'understanding_known': ('UNDERSTANDING', 'Known'),
    'understanding_believed': ('UNDERSTANDING', 'Believed'),
    'understanding_unknown': ('UNDERSTANDING', 'Unknown'),
    'trajectory_now': ('TRAJECTORY', 'Now'),
    'trajectory_path': ('TRAJECTORY', 'Path'),
    'trajectory_later': ('TRAJECTORY', 'Later'),
    'workspace': ('WORKSPACE', None),
    'self_confidence': ('SELF', 'Confidence'),
    'self_attention': ('SELF', 'Attention'),
    'self_flags': ('SELF', 'Flags'),
}


def update_scratchpad(
    identity_purpose: Optional[str] = None,
    identity_user: Optional[str] = None,
    identity_boundaries: Optional[str] = None,
    understanding_known: Optional[str] = None,
    understanding_believed: Optional[str] = None,
    understanding_unknown: Optional[str] = None,
    trajectory_now: Optional[str] = None,
    trajectory_path: Optional[str] = None,
    trajectory_later: Optional[str] = None,
    workspace: Optional[str] = None,
    self_confidence: Optional[str] = None,
    self_attention: Optional[str] = None,
    self_flags: Optional[str] = None
) -> dict:
    """Update scratchpad sections. Pass only what changed."""
    return {k: v for k, v in locals().items() if v is not None}


def apply_updates(scratchpad: str, updates: dict) -> str:
    """Apply updates to scratchpad with APPEND/CLEAR/REPLACE semantics."""
    if not updates:
        return scratchpad

    lines = scratchpad.split('\n')
    result = []

    current_section = None
    current_subsection = None
    skip_until_next = False
    pending_append = None

    for line in lines:
        # Track major sections
        if line.startswith('## '):
            if pending_append:
                result.append(pending_append)
                pending_append = None
            skip_until_next = False
            current_section = line[3:].strip()
            current_subsection = None
            result.append(line)

            # Handle sections without subsections (WORKSPACE)
            key = current_section.lower()
            if key in updates:
                value = updates[key]
                if value == 'CLEAR':
                    skip_until_next = True
                elif value.startswith('APPEND: '):
                    pending_append = value[8:]
                else:
                    result.append(value)
                    skip_until_next = True
                continue

        # Track subsections
        elif line.startswith('### '):
            if pending_append:
                result.append(pending_append)
                pending_append = None
            skip_until_next = False
            current_subsection = line[4:].strip()
            result.append(line)

            # Check for update
            for key, (sec, subsec) in SECTIONS.items():
                if key in updates and sec == current_section and subsec == current_subsection:
                    value = updates[key]
                    if value == 'CLEAR':
                        skip_until_next = True
                    elif value.startswith('APPEND: '):
                        pending_append = value[8:]
                    else:
                        result.append(value)
                        skip_until_next = True
                    break

        # Section dividers
        elif line.startswith('---'):
            if pending_append:
                result.append(pending_append)
                pending_append = None
            skip_until_next = False
            result.append(line)

        # Content lines
        elif not skip_until_next:
            result.append(line)

    # Final pending append
    if pending_append:
        result.append(pending_append)

    return '\n'.join(result)
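
A quick demonstration of the three semantics against a toy fragment (the values are illustrative):

# REPLACE overwrites a subsection, APPEND keeps it and adds a line, CLEAR empties it.
pad = """## TRAJECTORY
### Now
Old focus
### Path
Step one
---
## WORKSPACE
draft notes"""

pad = apply_updates(pad, {
    'trajectory_now': 'New focus',              # REPLACE
    'trajectory_path': 'APPEND: → step two',    # APPEND: existing line kept
    'workspace': 'CLEAR',                       # CLEAR
})
# TRAJECTORY.Now is now "New focus", Path gains "→ step two", WORKSPACE is empty.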

Validation

# validate.py

import logging

logger = logging.getLogger('recursive_context')


def validate_updates(updates: dict) -> tuple[bool, str]:
    """Validate updates before applying."""

    for key, value in updates.items():
        if value is None:
            continue

        # Size sanity check
        if len(str(value)) > 5000:
            return False, f"{key} exceeds 5k chars"

        # Confidence format check
        if key == 'self_confidence':
            if not any(level in value.upper() for level in ['HIGH', 'MEDIUM', 'LOW']):
                logger.warning(f"Confidence should include HIGH/MEDIUM/LOW: {value}")

    return True, ""


def safe_apply(scratchpad: str, updates: dict) -> str:
    """Apply updates with validation and fallback."""
    from .update import apply_updates

    valid, error = validate_updates(updates)
    if not valid:
        logger.warning(f"Update validation failed: {error}")
        return scratchpad

    try:
        new_scratchpad = apply_updates(scratchpad, updates)

        # Sanity check
        if len(new_scratchpad) < 100:
            logger.warning("Update produced suspiciously short scratchpad")
            return scratchpad

        return new_scratchpad

    except Exception as e:
        logger.error(f"Update failed: {e}")
        return scratchpad
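
For instance, an oversized update fails validation and the prior scratchpad survives untouched (illustrative; run inside the package so the relative import in safe_apply resolves):

# The 6000-char value trips the 5k sanity check, so safe_apply returns the
# original scratchpad unchanged.
pad = "## WORKSPACE\n" + "notes\n" * 40
assert safe_apply(pad, {'workspace': 'x' * 6000}) == pad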

Usage

Basic

from recursive_context import RecursiveChat

agent = RecursiveChat(purpose="Help with Python programming")

response = agent("How do I read a CSV file?")
print(response)

# Understanding persists
response = agent("What about handling missing values?")

# Check the live understanding
print(agent.scratchpad)

With Tools

def search_docs(query: str) -> str:
    """Search documentation."""
    ...

agent = RecursiveChat(
    purpose="Documentation assistant",
    tools=[search_docs]
)

Cost Optimization

agent = RecursiveChat(
    model='claude-sonnet-4-5-20250929',           # Quality responses
    reflect_model='claude-haiku-4-5',              # Cheap reflection
)

Single-Phase (Faster)

agent = RecursiveChat(two_phase=False)
# Skips the reflection pass - faster, but the scratchpad then only changes via inject()

Debugging

agent.export_trace('debug.json')
# Full history of turns, scratchpad evolution

Manual Injection

# Pre-populate understanding
agent.inject('identity_user', 'Senior ML engineer, prefers concise responses')
agent.inject('understanding_known', '- (user) Working on a recommendation system')

Extension Points

Archival (Future)

When WORKSPACE grows too large:

def archive_workspace(self):
    """Move old workspace content to retrievable storage."""
    # Extract completed artifacts
    # Store in vector DB or filesystem
    # Leave pointer in workspace
    pass

Sub-Agents (Future)

For complex tasks:

def spawn_subagent(self, task: str, context: str) -> str:
    """Spawn focused sub-agent for specific task."""
    sub = RecursiveChat(purpose=task)
    sub.inject('workspace', context)
    return sub(task)

Persistence (Future)

def save(self, path: str):
    """Save agent state."""
    state = {
        'scratchpad': self._scratchpad,
        'turn': self._turn,
        'purpose': self.purpose,
    }
    ...

@classmethod
def load(cls, path: str) -> 'RecursiveChat':
    """Restore agent from saved state."""
    ...

What This Is Not

  • Not compaction - We don’t summarize history when context is full. There is no history.
  • Not RAG - We don’t retrieve context. The scratchpad IS the context.
  • Not RLM - We don’t treat the prompt as an external variable. We treat understanding as a maintained structure.

What This Is

A system where the model actively curates its own understanding, every turn, through a metacognitive reflection loop.

The scratchpad is not memory. It’s live cognition - a continuously updated model of what the agent currently understands to be true, important, uncertain, and in progress.

Context is attention, not knowledge.


Implementation Order

  1. scratchpad.py - Schema, SECTIONS dict, make_bootstrap()
  2. prompts.py - RESPONSE_SYSTEM, REFLECT_SYSTEM
  3. update.py - update_scratchpad tool, apply_updates()
  4. validate.py - validate_updates(), safe_apply()
  5. phases.py - build_respond_context(), build_reflect_context()
  6. trace.py - Logging helpers
  7. core.py - RecursiveChat class
  8. __init__.py - Public exports
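
Once these modules exist, a minimal smoke test (live model calls assumed) might be:

from recursive_context import RecursiveChat

agent = RecursiveChat(purpose="Smoke test")
agent("Remember that my favorite number is 7.")
assert "## TRAJECTORY" in agent.scratchpad   # the merge preserved the section structure
print(agent.turn, agent.cost)                # turn count and combined cost of both phases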

Dependencies

claudette

Build it. See what we learn.
