@karpathy
Created April 4, 2026 16:25

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
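To make this concrete, here is a minimal sketch of one possible layout (the names are illustrative, not prescriptive):

```
project/
├── CLAUDE.md        # the schema: structure, conventions, workflows
├── raw/             # immutable source documents
│   └── assets/      # downloaded images
└── wiki/            # LLM-maintained markdown pages
    ├── index.md     # catalog of every page
    ├── log.md       # append-only activity log
    ├── entities/
    ├── concepts/
    └── sources/     # per-source summary pages
```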

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.
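As one illustration, the ingest workflow could be written into the schema file roughly like this (hypothetical wording; adapt it to your domain):

```markdown
## Workflow: ingest
1. Read the new source in raw/ in full (text first, then referenced images).
2. Discuss key takeaways with me before writing anything.
3. Write a summary page under wiki/sources/.
4. Add the page to wiki/index.md with a one-line summary.
5. Update every entity and concept page the source touches.
6. If the source contradicts an existing claim, flag it; never silently overwrite.
7. Append an entry to wiki/log.md: ## [YYYY-MM-DD] ingest | <source title>
```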

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.
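The judgment-based checks (contradictions, stale claims) need the LLM, but the mechanical ones are scriptable. A minimal sketch for orphan pages and broken links, assuming Obsidian-style [[wikilinks]] and a wiki/ directory like the layout sketched above:

```python
import re
from pathlib import Path

WIKI = Path("wiki")  # assumed layout; adjust to your structure
LINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Page]], [[Page|alias]], [[Page#heading]]

pages = {p.stem: p for p in WIKI.rglob("*.md")}
inbound = {name: 0 for name in pages}
broken = []

for name, path in pages.items():
    for target in LINK.findall(path.read_text(encoding="utf-8")):
        target = target.strip()
        if target in inbound:
            inbound[target] += 1
        else:
            broken.append((name, target))  # link to a page that doesn't exist

orphans = [n for n, count in inbound.items() if count == 0 and n not in ("index", "log")]
print("Orphan pages (no inbound links):", sorted(orphans))
print("Broken links (page -> missing target):", broken)
```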

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
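A few entries from a hypothetical index, just to show the shape:

```markdown
# Index

## Entities
- [[acme-corp]] - hardware startup; 3 sources, updated 2026-04-02
- [[jane-doe]] - founder of Acme; appears in 5 sources

## Concepts
- [[solid-state-batteries]] - core technology thread; see [[acme-corp]]

## Sources
- [[2026-04-01-battery-market]] - overview of the solid-state market
```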

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
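For instance, a few hypothetical entries:

```
## [2026-04-01] ingest | Solid-State Battery Market Overview
Summary filed; updated 4 entity pages and the index.

## [2026-04-02] query | How do cost estimates compare across sources?
Answer filed as wiki/analysis/cost-comparison.md.

## [2026-04-03] lint | Full pass
Flagged 2 stale claims and 1 orphan page.
```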

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
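For a sense of scale, a naive keyword scorer is only a page of code. A sketch (a hypothetical helper, not qmd's interface):

```python
import re
import sys
from pathlib import Path

def search(query: str, root: str = "wiki", top_k: int = 5):
    """Rank wiki pages by raw occurrence counts of the query terms."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        score = sum(text.count(term) for term in terms)
        if score > 0:
            scored.append((score, str(path)))
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    # usage: python search.py solid state batteries
    for score, path in search(" ".join(sys.argv[1:])):
        print(f"{score:4d}  {path}")
```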

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists (see the sketch after this list).
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
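To illustrate the Dataview point from the list above: if the schema has the LLM write frontmatter like this on entity pages (the field names are made up for the example),

```markdown
---
tags: [entity]
updated: 2026-04-02
source_count: 3
---
```

then a query block such as

```dataview
TABLE updated, source_count
FROM "wiki/entities"
WHERE contains(tags, "entity")
SORT updated DESC
```

renders a live table of entity pages, newest first.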

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@ekadetov

ekadetov commented Apr 6, 2026

bundled as a claude plugin: https://github.com/ekadetov/llm-wiki

@GeminiLight

GeminiLight commented Apr 6, 2026

Love this pattern. I've been building something along these lines for the past year — started from the same pain point (context scattered across 5+ agents) and arrived at a very similar architecture.

A few things I found after productizing this into an open-source tool, MindOS:

1. Multi-agent is the real unlock. The gist describes one LLM maintaining the wiki. But most of us use 3-5 agents daily (Claude Code, Cursor, Gemini CLI, Codex...). The moment all of them read/write the same wiki, corrections compound across tools — fix a coding convention in Claude, Cursor already knows it next session.

2. Experience distillation > manual ingest. Rather than manually dropping files into raw/, conversations with agents can auto-distill into wiki entries. A correction you make ("use enums, not strings") becomes a persistent rule without you filing it.

3. The schema layer can be the wiki itself. Instead of a separate config telling the LLM how to behave, the wiki pages are the instructions. Notes naturally double as executable agent commands (CLAUDE.md / AGENTS.md).

The knowledge base homepage — everything is local Markdown, browsable and editable:

[screenshot: MindOS Knowledge Base]

19 agents connected to the same wiki — CLI-native, no MCP lock-in:

[screenshot: MindOS Agent Management]

Built this as MindOS — open source, local-first, pure Markdown. Would love feedback from anyone experimenting with this pattern.

@ajrmooreuk

Another gem from Andrej, thanks as always. Generous mindset and spirit, always appreciated.

We have the graph and some useful insight-discovering sub-agents, plus a body of ideas, strategy, and themes. Even with the LLM on its own, you suddenly realise you have 1k+ ideas in draft, all worthwhile, but not the time to track and trace through every line of enquiry. So we had built capture tools, citation trackers, QA, and threads, but this gave us the opportunity to build platform and instance wikis: the human-readable narrative for a real team to work from.

Teamed it up with a second team brain and wow, it's already leveraging auto-research and now the wiki ideas too.

Some brilliant threads in this chat too. Thanks to all: @eccoai, @ozdreamwalk, and @DavidJMoore56.

@tomjwxf

tomjwxf commented Apr 6, 2026

Following up on the epistemic integrity thread that @laphilosophia, @Jwcjwc12, @Paul-Kyle, and @bluewater8008 all raised from different angles. I think this is the most important unsolved problem in the LLM Wiki pattern.

The problem stated plainly: a wiki maintained by an LLM can synthesise without citing, drift from its sources without knowing it, and present false certainty where disagreement exists. Content hashing (Freelance) tells you when sources changed. Git blame (Palinode) tells you who edited. Neither tells a third party that the knowledge is trustworthy.

Three things that help, based on what I've been building:

1. Source provenance with content hashing (what @Jwcjwc12 built in Freelance)

Every knowledge artifact records which source documents produced it and their SHA-256 hashes at compile time. When you query, the system checks whether the sources still match. Hash match = valid. Mismatch = stale. This should be in the schema, not bolted on.

```json
{
  "sources": [
    { "uri": "paper.pdf", "content_hash": "sha256:a3f8...", "ingested_at": "2026-04-01" },
    { "uri": "article.md", "content_hash": "sha256:b7c2...", "ingested_at": "2026-04-03" }
  ]
}
```
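A minimal sketch of that validity check, assuming the source files are reachable at their recorded URIs (illustrative, not Freelance's actual code):

```python
import hashlib
import json
from pathlib import Path

def stale_sources(artifact_json: str) -> list[str]:
    """Return URIs whose current content no longer matches the recorded hash."""
    record = json.loads(artifact_json)
    stale = []
    for src in record["sources"]:
        digest = hashlib.sha256(Path(src["uri"]).read_bytes()).hexdigest()
        if f"sha256:{digest}" != src["content_hash"]:
            stale.append(src["uri"])
    return stale  # an empty list means the artifact is still valid
```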

2. Structured consensus instead of editorial synthesis (what @laphilosophia described as "separate facts, inferences, and open questions explicitly")

Instead of one model writing a summary, run the question through 4+ models independently, then cross-critique, then extract where they agree and disagree structurally. The output is not a synthesis paragraph but three arrays:

  • agreed: claims where all models converge
  • disputed: claims where models diverge, with per-model positions
  • uncertain: claims no model could resolve confidently

The synthesis paragraph is kept as editorial convenience (like a legal headnote) but explicitly marked as non-canonical. The arrays are the authoritative content.
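Illustratively, with made-up content:

```json
{
  "agreed": ["Scaling training compute has continued to improve benchmark scores."],
  "disputed": [
    {
      "claim": "Emergent capabilities are evidence for continued breakthroughs.",
      "positions": { "model_a": "supports", "model_b": "measurement artifact" }
    }
  ],
  "uncertain": ["Whether current architectures plateau before new paradigms arrive."]
}
```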

3. Cryptographic receipt binding (what @Paul-Kyle's git-commit-per-fact does, but with Ed25519 signatures)

Every round of the process (independent responses, critique, synthesis) produces a signed receipt. The receipt chain is independently verifiable offline without trusting the wiki operator. A third party can check that:

  • These specific models participated
  • They produced these specific responses
  • The responses were not modified after signing
  • The chain is intact (no rounds were inserted or removed)
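A toy sketch of the chain construction and the offline check, using Ed25519 from the Python cryptography package (illustrative only; the real receipt format is in the companion IETF draft mentioned below):

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_round(key: Ed25519PrivateKey, payload: dict, prev_hash: str) -> dict:
    """Sign one deliberation round and chain it to the previous receipt."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return {"body": body, "sig": key.sign(body.encode()).hex()}

def chain_hash(receipt: dict) -> str:
    return hashlib.sha256(receipt["body"].encode()).hexdigest()

def verify_chain(receipts: list[dict], public_key) -> bool:
    """Offline check: every signature is valid and every round points at its predecessor."""
    prev = "genesis"
    for r in receipts:
        public_key.verify(bytes.fromhex(r["sig"]), r["body"].encode())  # raises if forged
        if json.loads(r["body"])["prev"] != prev:
            return False  # a round was inserted, removed, or reordered
        prev = chain_hash(r)
    return True
```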

What this looks like in practice:

"Are LLMs approaching a capability plateau?" - 4 models deliberate independently, cross-critique with adversarial roles (verifier, devil's advocate), then synthesis extracts: 4 agreed points, 2 disputed (including whether emergent capabilities are real evidence for continued breakthroughs). Every round is signed. Anyone can verify the chain offline.

Live example: https://acta.today/s/ku-z36vuoreb2k3

I've formalised the schema as an IETF Internet-Draft (draft-farley-acta-knowledge-units-00) covering: the full field schema, deliberation process, consensus levels (unanimous/strong/split/divergent), lifecycle management (KEEP/UPDATE/SUPERSEDE/MERGE/ARCHIVE operations), canonical question resolution for deduplication, and receipt chain construction. The receipt format is a companion IETF draft (draft-farley-acta-signed-receipts).

@bluewater8008 your point about progressive disclosure is right. The spec defines four levels: L0 (~50 tokens, question + consensus + top claim) for search results, L1 (~200 tokens, all agreed/disputed) for agent context, L2 (~1K tokens, full synthesis + sources) for articles, L3 (complete deliberation) for audit. Agents should read L0 for all candidates and only drill into L2/L3 for the one they select.

Not every wiki page needs this level of rigour. Most don't. But for the entries that matter - contested topics, high-stakes decisions, knowledge that will be acted on - having a structured, signed, multi-perspective record is the difference between "the LLM said so" and "here's the math, check it yourself."
