@bigsnarfdude
Created March 9, 2026 05:55
super_memory.md

Now I have a clear picture of both. Here's the breakdown:

What's in this repo (openclaw-supermemory)

A plugin for OpenClaw that adds persistent memory via the Supermemory cloud service:

  • Auto-recall: Semantically searches past memories before each AI turn, injects relevant context
  • Auto-capture: Extracts lasting facts from conversations automatically
  • Deduplication: Prevents redundant context injection
  • Profile building: Maintains a persistent user profile (static facts, dynamic facts)
  • Smart filtering: Knows what to remember (preferences, decisions, entities) vs. what to forget (temporary intents, assistant actions)
  • Container routing: AI-driven categorization into separate memory namespaces

What's in autoresearch

An autonomous ML experimentation framework where Claude edits train.py, runs 5-minute experiments, logs results, and loops forever. Uses the "Ralph loop" pattern: markdown state files (progress.md, next_ideas.md, results.tsv) as persistent memory between iterations.

How supermemory ideas can level up autoresearch

Here are the concrete cross-pollination opportunities:

  1. Semantic recall instead of flat file reads

Right now autoresearch reads progress.md and results.tsv linearly every iteration. After 500+ experiments, these files bloat and waste context. Supermemory's approach: semantically search past experiments based on the current idea, retrieving only the 5-10 most relevant prior results. Instead of "read all history", it becomes "what happened last time we tried adjusting RoPE base frequency?"
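A minimal sketch of that retrieval step, using `difflib` string similarity as a cheap stand-in for real embeddings. The history rows are invented examples, not actual `results.tsv` data:

```python
import difflib

# Invented examples standing in for rows of results.tsv.
PAST_EXPERIMENTS = [
    "adjust RoPE base frequency from 10000 to 500000: val loss 3.21",
    "increase learning rate warmup steps to 2000: val loss 3.18",
    "swap GELU for SwiGLU in the MLP blocks: val loss 3.15",
    "tune RoPE base frequency to 100000: val loss 3.19",
]

def recall(query: str, history: list[str], k: int = 2) -> list[str]:
    """Return the k history entries most similar to the query.

    difflib's ratio is a stand-in for embedding cosine similarity;
    a real index would embed each row once and query a vector store.
    """
    return sorted(
        history,
        key=lambda e: difflib.SequenceMatcher(None, query.lower(), e.lower()).ratio(),
        reverse=True,
    )[:k]

top = recall("what happened when we tried adjusting RoPE base frequency?", PAST_EXPERIMENTS)
```

With this query, both RoPE rows outrank the unrelated experiments, so the agent sees only the relevant history instead of the whole file.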

  2. Auto-capture for experiment insights

Currently the agent manually updates progress.md with "what works/fails". Supermemory's auto-capture pattern could automatically extract insights from each experiment's logs and diffs, building a searchable knowledge base without relying on the agent to summarize correctly every time.
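One way this could look, with simple pattern heuristics doing the extraction (the log format and patterns below are assumptions; real auto-capture would use an LLM call, as supermemory does):

```python
import re

# Hypothetical raw experiment log; not the real autoresearch log format.
LOG = """\
step 1000 | train loss 3.402
step 2000 | train loss 3.217
OOM retry: reduced batch size 64 -> 48
final val loss: 3.185 (best so far: 3.190)
"""

def capture_insights(log: str) -> list[str]:
    """Keep lines that state lasting facts and drop transient
    step-by-step noise, in the spirit of supermemory's auto-capture."""
    patterns = [
        r"final val loss",   # outcome worth remembering
        r"OOM",              # hardware constraint discovered
        r"best so far",
    ]
    return [
        line for line in log.splitlines()
        if any(re.search(p, line) for p in patterns)
    ]

insights = capture_insights(LOG)
```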

  3. Deduplication of experiment ideas

next_ideas.md can accumulate duplicate or near-duplicate ideas across iterations. Supermemory's deduplication logic (similarity-based) would prevent the agent from re-proposing experiments that are essentially the same as previous ones.
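A sketch of that similarity gate, again with string similarity standing in for embedding distance (the 0.8 threshold is an arbitrary illustrative choice):

```python
import difflib

def is_duplicate(new_idea: str, existing: list[str], threshold: float = 0.8) -> bool:
    """Reject an idea that is near-identical to one already queued.

    A real implementation would compare embeddings; difflib's
    ratio is a rough proxy that still catches close paraphrases.
    """
    return any(
        difflib.SequenceMatcher(None, new_idea.lower(), old.lower()).ratio() >= threshold
        for old in existing
    )

queued = ["try cosine LR schedule with warmup 1000 steps"]
```

Before appending to next_ideas.md, the agent would call `is_duplicate(candidate, queued)` and skip the write when it returns True.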

  4. Profile = "research state" that persists across sessions

Supermemory builds a user profile from conversations. Autoresearch could build a "research profile" — GPU constraints, confirmed good hyperparameter ranges, dead-end directions — that persists even when you switch GPUs or restart from scratch. Instead of encoding this in progress.md manually, it's extracted automatically.
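A minimal persistence sketch for such a profile. The field names (`gpu`, `dead_ends`, `good_lr_range`) are illustrative assumptions, and a JSON file is the simplest possible store:

```python
import json
import os
import tempfile

def update_profile(path: str, key: str, value) -> dict:
    """Load the profile, set one fact, and write it back, so the
    fact survives restarts and GPU switches."""
    profile = {}
    if os.path.exists(path):
        with open(path) as f:
            profile = json.load(f)
    profile[key] = value
    with open(path, "w") as f:
        json.dump(profile, f, indent=2)
    return profile

path = os.path.join(tempfile.gettempdir(), "research_profile_demo.json")
if os.path.exists(path):
    os.remove(path)  # start clean for the demo
update_profile(path, "gpu", "A100 40GB")
update_profile(path, "dead_ends", ["deeper-but-narrower at this budget"])
profile = update_profile(path, "good_lr_range", [1e-4, 6e-4])
```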

  5. Container tags for experiment categories

Supermemory routes memories into containers (work, personal). Autoresearch could use the same pattern to separate memory by experiment category: lr-tuning, architecture, optimizer, rope, wild-card. The coordinator could then query specific categories when generating new experiments.
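A keyword-routing sketch for those containers. Supermemory's routing is AI-driven; the keyword lists here are a deliberately dumb stand-in to show the shape of the idea:

```python
# Category names mirror the ones suggested above; keyword lists are assumptions.
CONTAINERS = {
    "lr-tuning": ["learning rate", "warmup", "schedule"],
    "architecture": ["layer", "mlp", "attention", "swiglu"],
    "optimizer": ["adam", "lion", "sgd", "weight decay"],
    "rope": ["rope", "base frequency", "position"],
}

def route(idea: str) -> str:
    """Pick a memory container for an idea; unmatched ideas go to wild-card."""
    text = idea.lower()
    for container, keywords in CONTAINERS.items():
        if any(kw in text for kw in keywords):
            return container
    return "wild-card"
```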

  6. Relative importance scoring

Supermemory returns similarity scores with each memory. Autoresearch could weight experiment history by relevance to current search direction rather than treating all past experiments equally. A RoPE experiment from 200 iterations ago is irrelevant when you're deep in LR tuning — unless you're about to try combining them.
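One plausible scoring function combining the two signals. The exponential half-life decay is my own choice, not supermemory's actual formula, and `difflib` again stands in for embeddings:

```python
import difflib

def relevance(query: str, entry: str, age: int, half_life: int = 100) -> float:
    """Weight a past experiment by similarity to the current search
    direction times a recency decay.

    age = iterations since the experiment ran; an entry loses half
    its weight every half_life iterations.
    """
    similarity = difflib.SequenceMatcher(None, query.lower(), entry.lower()).ratio()
    recency = 0.5 ** (age / half_life)
    return similarity * recency
```

Ranking history by this score makes a recent LR experiment dominate an ancient RoPE one when the agent is deep in LR tuning, while still letting the RoPE entry resurface if the query mentions it.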

  7. Forgetting mechanism

Supermemory has supermemory_forget. Autoresearch has no equivalent — failed experiments accumulate forever. A forgetting mechanism that prunes experiments below a relevance threshold would keep the agent's context focused on what matters.
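The pruning itself is trivial once each memory carries a relevance score (here precomputed by hand; in a real system it would come from the scoring step):

```python
def prune(memories: list[dict], threshold: float = 0.2) -> list[dict]:
    """Drop memories whose relevance has decayed below the threshold,
    analogous to supermemory_forget but automatic."""
    return [m for m in memories if m["relevance"] >= threshold]

# Hypothetical scored memories; ids and scores are invented.
memories = [
    {"id": "exp-041", "relevance": 0.81},
    {"id": "exp-003", "relevance": 0.05},  # long-dead experiment
]
kept = prune(memories)
```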


The bottom line: autoresearch's markdown-as-memory pattern is clever but doesn't scale past ~100 experiments. The supermemory approach of semantic search + auto-extraction + deduplication would let autoresearch run thousands of experiments without context degradation. The simplest integration path would be adding a local semantic index (embeddings over results.tsv entries + done/*.md reports) rather than requiring the Supermemory cloud service.

Want me to prototype any of these ideas into autoresearch?
