@karpathy
Created April 4, 2026 16:25

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file, designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea, but your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
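For concreteness, a minimal layout might look something like this (the names and structure are illustrative, not prescribed):

```
llm-wiki/
├── AGENTS.md           # the schema: conventions and workflows for the LLM
├── raw/                # immutable sources; the LLM reads, never writes
│   ├── assets/         # downloaded images
│   └── some-article.md
└── wiki/               # LLM-owned markdown pages
    ├── index.md        # catalog of every page
    ├── log.md          # append-only activity log
    ├── entities/
    ├── concepts/
    └── sources/        # one summary page per ingested source
```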

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.
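As a sketch, the ingest workflow might be spelled out in the schema file roughly like this (the steps and wording are hypothetical; yours will differ):

```markdown
## Ingest workflow
1. Read the new file in raw/ and discuss key takeaways with me before writing.
2. Write a summary page at wiki/sources/<slug>.md linking back to the raw file.
3. Update or create every entity/concept page the source touches.
4. Flag explicitly wherever the new source contradicts an existing wiki claim.
5. Add the page to wiki/index.md and append an entry to wiki/log.md.
```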

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.
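A filed-back answer might simply become a small page of its own, for example (format hypothetical):

```markdown
---
type: analysis
date: 2026-04-07
question: "Where do source A and source B disagree on topic X?"
---
# A vs. B on X
Short synthesis here, citing the pages it draws on: [[a-summary]], [[b-summary]].
```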

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
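A few lines of a hypothetical index.md, to make the shape concrete:

```markdown
# Index
## Entities
- [[ada-lovelace]]: mathematician, collaborator of Babbage (4 sources)
## Concepts
- [[analytical-engine]]: proposed general-purpose computer (2 sources)
## Sources
- [[babbage-memoir-summary]]: 1864 memoir; chapters on both engines
```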

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
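For instance, the top of a hypothetical log.md following that convention:

```markdown
## [2026-04-01] ingest | Babbage Memoir, ch. 5-8
## [2026-04-02] query | How did funding shape the Analytical Engine?
## [2026-04-02] lint | flagged 2 stale claims, 1 orphan page
```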

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
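As a sketch of the naive end of that spectrum (everything here is hypothetical, and this is not the qmd tool), a small term-frequency scorer over the wiki directory goes a surprisingly long way:

```python
#!/usr/bin/env python3
"""Naive keyword search over a directory of markdown files.

Usage: python search.py "analytical engine" [wiki_dir]
"""
import math
import re
import sys
from pathlib import Path


def search(query, wiki_dir="wiki", top_k=5):
    terms = re.findall(r"\w+", query.lower())
    # Read every page once.
    pages = {
        path: path.read_text(encoding="utf-8", errors="ignore").lower()
        for path in Path(wiki_dir).rglob("*.md")
    }
    n = max(len(pages), 1)
    # Inverse document frequency: rare terms count for more.
    idf = {
        t: math.log(1 + n / (1 + sum(t in text for text in pages.values())))
        for t in terms
    }
    # Score each page by term frequency weighted by idf.
    scores = {
        path: sum(text.count(t) * idf[t] for t in terms)
        for path, text in pages.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(path, score) for path, score in ranked[:top_k] if score > 0]


if __name__ == "__main__":
    for path, score in search(sys.argv[1], *sys.argv[2:]):
        print(f"{score:7.2f}  {path}")
```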

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists; see the sketch after this list.
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
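
To illustrate the Dataview bullet above: a query like the following (the frontmatter field names are assumptions) renders a live table of source pages:

```dataview
TABLE date, tags
FROM "wiki/sources"
SORT date DESC
```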

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@Samuel-Chuku

This is insanely helpful. And to think that this could serve as your very own mini LLM that knows everything you would ever need to know. Impressive work, Karpathy!

@polonski

polonski commented Apr 7, 2026

Thank you Andrej!
This also works with Gemini 3.1 Pro preview, using Gemini Code Assist. Here is how I used it.

@iamkarlson

If anyone's interested, here's my org-roam triage skill that covers some of Andrej's ideas: https://gist.github.com/iamkarlson/d0f1f0a5e92c81ea52657e92a1dc5ff6

@jiakangli20

Leave the issue of model accuracy to the business personnel.

@jurajskuska

1. Upload any document: Obsidian notes, PDFs, PowerPoints, Word documents, Excel, etc. All get converted to high-quality Markdown & indexed for search. You can review and edit straight in the app. No embeddings (but I'm actively thinking about it).

2. 30-second setup with Claude.ai via MCP (remote): Claude gets a virtual filesystem it can navigate, read, write, edit, reorganize, tag, and search across all your notes. You can access those notes from anywhere you have Claude (on your phone, for example).

3. While you work, Claude can actively write & maintain your wiki. I've set up internal linking, citations, SVG visualizations, and inline images.

OBSIDIAN COULD BE A STANDARD


@Marekai

Marekai commented Apr 7, 2026

That's a fantastic project! It could be extended for real scientific research with a feature to reliably track page-level citations.
As of now, the quality rule "no hallucinated citations" is an aspiration, not a technical guarantee. When an LLM compiles a PDF into a wiki article, it:

  • Loses precise page numbers unless explicitly instructed to extract and preserve them
  • Paraphrases by default, which makes quoting unreliable
  • Has no built-in mechanism to link a claim back to "page 47, paragraph 3"

For a scientific paper or book, you need citations in the form (Author, Year, p. 47), and this workflow cannot reliably give you that out of the box.

Or am I wrong? I see this more as developing knowledge and insights about a specific domain, just as @karpathy said. But it is just one step away from becoming a mighty, real tool for scientific research!

@bulawow

bulawow commented Apr 7, 2026

Isn't this what Microsoft released some time ago, called rpg-encoder?

@mikhashev

To: @karpathy

We independently built this pattern — then extended it to multi-agent knowledge negotiation

We're a team of three building DPC Messenger: Mike (human), CC (Claude Code, coding agent), and Ark (embedded autonomous agent). It's a privacy-first P2P messaging platform where humans and AI agents collaborate.

After reading this gist, we did a gap analysis and found we already implement ~70% of the LLM Wiki pattern — and go further in several directions.

What maps directly to your pattern:

  • Persistent wiki: ~86 markdown files in each agent's knowledge/ dir with auto-generated _index.md
  • Knowledge extraction: ConversationMonitor detects knowledge-worthy content from chats (0.7 threshold)
  • Schema: 3-block system prompt (static/semi-stable/dynamic) co-evolved with the human
  • Git tracking: every knowledge commit is versioned in agent's sandbox repo

Where we went beyond solo wiki:
Our knowledge isn't maintained by one LLM for one person — it's social. Commits go through multi-party consensus voting (75% threshold, Devil's Advocate required for 3+ participants). Every commit is RSA-PSS signed with SHA256 chain hashes — a tamper-proof DAG. Knowledge is shared across peers via DHT.

On top of this, agents run background consciousness (5 autonomous thought types), an Evolution Manager proposing self-improvements, and 11 procedural skills with performance tracking — not just declarative pages but callable strategies.

Gaps we identified from your pattern:

  1. Knowledge Log (log.md) — unified chronological view
  2. Knowledge Lint — health checks for contradictions, orphans, stale entries
  3. File-back — save good answers back to wiki proactively
  4. Schema co-evolution — let the agent propose wiki convention changes
  5. Hybrid search — evaluating QMD (BM25 + vector + LLM reranking) via MCP for when _index.md stops scaling

The metric problem (from your autoresearch):
Our Evolution loop mirrors autoresearch's modify→evaluate→keep/discard cycle, but is missing two critical ingredients: an evaluation step and a single metric. No val_bpb equivalent — our evolution proposes changes into the void. This is our top priority gap.

P2P compute layer:
We also analyzed Covenant-72B (distributed training across trustless peers). Our P2P mesh already has DHT discovery, gossip protocol, and crypto identity — the substrate for federated fine-tuning. Next steps: GPU capability advertisement via DHT, graduated peer trust scoring, and eventually coordinated training across nodes.

The next frontier after solo knowledge accumulation is knowledge negotiation — multiple agents and humans building, challenging, and validating shared understanding. That's what we're building.

Open source: https://github.com/mikhashev/dpc-messenger/tree/dev

@junbjnnn

junbjnnn commented Apr 7, 2026

This inspired me to build a skill set that applies the "compilation over retrieval" pattern specifically to software project management: llm-wiki
Instead of a personal knowledge base, it's a team wiki that sits inside your project repo.
Ingest PRDs, meeting notes, API specs, postmortems
→ AI compiles them into structured wiki pages (summaries, ADRs, runbooks, entity pages)
→ anyone on the team can query with full project context.

@Daniel-sims

I've built something similar to this, but managed via an MCP server. Claude/Copilot will often make the same mistake over and over, so I built out a knowledge base of patterns that are incorrect and how I want them to be done, indexed by the type of change they are.

For example, a unit-test learning might be that we don't want to use the @setup method for creating unit-test underTest variables, and instead create them inline.

When it creates a unit test it will query the knowledge base for any relevant "learnings" that I have and it will correct itself pre-implementation.

This is self-managed and updated by the LLM itself; during code reviews and planning it will ask if my correction is worth adding as a knowledge learning, then log it itself, checking for duplicates etc.

It has worked quite well for me so far as it matures alongside a large MCP server for internal documentation that works in a similar way, using header-based snippet lookups with BM25 search for relevant documentation sections. This approach has the problem of returning more tokens, so it needs some work, but it's great to see more prominent guidance on this kind of topic.
