
@karpathy
Created April 4, 2026 16:25

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file, designed to be copy-pasted to your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
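
As a concrete sketch of how the three layers might sit on disk (the names and layout are illustrative, not prescribed):

```
project/
├── AGENTS.md            # the schema: conventions and workflows for the LLM
├── raw/                 # immutable sources: clipped articles, papers, data files
│   └── assets/          # locally downloaded images referenced by the sources
└── wiki/                # the LLM-maintained layer
    ├── index.md         # catalog of every page
    ├── log.md           # append-only record of ingests, queries, lint passes
    ├── sources/         # one summary page per ingested source
    ├── entities/        # people, organizations, projects
    └── concepts/        # topics, themes, the evolving synthesis
```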

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.
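
To make this concrete, the ingest workflow can be written into the schema itself so every session follows the same steps. A minimal sketch, assuming the layout above (adapt freely):

```markdown
## Ingest workflow
1. Read the new file in raw/ and discuss key takeaways with me before writing anything.
2. Write a summary page in wiki/sources/, linking back to the raw file.
3. Add the page to wiki/index.md with a one-line summary.
4. Update any entity or concept pages the source touches; flag contradictions explicitly rather than silently overwriting old claims.
5. Append an entry to wiki/log.md: ## [YYYY-MM-DD] ingest | <source title>
```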

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.
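
One way to make that stick is to file answers with a little provenance, so later sessions know the page was synthesized rather than ingested. Purely illustrative; the frontmatter fields and page names are made up:

```markdown
---
type: analysis
generated_from: query
date: 2026-04-03
---
# Supplier pricing comparison
Synthesized from [[acme-corp]], [[2026-03-28-ft-article]], [[supply-chain-risk]].
```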

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
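
For illustration, index entries can be as light as a link plus a one-liner (the pages named here are made up):

```markdown
# Index

## Entities
- [[acme-corp]]: supplier profiled in three sources; pricing history, key contacts (updated 2026-03-30)

## Concepts
- [[supply-chain-risk]]: synthesis across 7 sources; two open contradictions flagged

## Sources
- [[2026-03-28-ft-article]]: FT piece on chip export controls (ingested 2026-03-29)
```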

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
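
A couple of hypothetical entries following that convention:

```markdown
## [2026-04-01] ingest | Chip export controls (FT)
Summary written, index updated, 4 concept pages touched.

## [2026-04-02] lint | monthly pass
Two stale claims flagged on [[supply-chain-risk]]; one orphan page linked from the index.
```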

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
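
If you go the vibe-coded route, the naive version really can be tiny. A sketch (assumes standard grep; the script name is made up):

```bash
#!/usr/bin/env bash
# wiki-search.sh: case-insensitive keyword search over wiki pages
# usage: ./wiki-search.sh "connection pool" wiki/
query="$1"
dir="${2:-wiki}"

grep -ril --include='*.md' "$query" "$dir" | while read -r file; do
  echo "== $file"
  grep -in -C 1 "$query" "$file" | head -20   # matching lines with a little context
  echo
done
```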

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists; a minimal example follows this list.
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
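
To illustrate the Dataview tip above: if pages carry frontmatter like this (the field names are just one possible convention), a short query can list recently updated entity pages.

```markdown
---
type: entity
tags: [supplier, hardware]
updated: 2026-04-01
source_count: 3
---
```

```dataview
TABLE updated, source_count
FROM "wiki/entities"
WHERE type = "entity"
SORT updated DESC
```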

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@anzal1

anzal1 commented Apr 8, 2026

Took this pattern and built it into a zero-config CLI: npx quicky-wiki init auto-detects your API keys and picks the best model. Full pipeline — ingest, query, lint, prune, serve.

A few things I added beyond the core pattern:

  • Confidence-scored claims — every fact gets a confidence score and source citation. Single-source claims stay low-confidence; corroborated claims across sources get promoted. Helps with @asong56's hallucination concern — contested claims are surfaced, not buried.
  • Temporal tracking — claims are timestamped so you can see knowledge evolution and flag stale facts.
  • Live dashboard — Obsidian-style force-directed graph (Canvas 2D with level-of-detail for performance at 300+ nodes), plus built-in LLM chat for querying the wiki directly.
  • Multi-provider — Anthropic, OpenAI, Gemini, Ollama, or any openai-compatible endpoint (Groq, Together, vLLM, LM Studio).

Works with markdown files, URLs, or any text source. One command to get started:

npx quicky-wiki init

https://github.com/anzal1/quicky-wiki

@dolzenko

dolzenko commented Apr 8, 2026

Is there any tool (or will it even make sense at all) to route all my recorded codex cli sessions to something like this to build the KB out of months of work with the agent?

@Bytekron

Bytekron commented Apr 8, 2026

This is one of the first writeups on “LLM + knowledge base” that actually clicks for me, because it shifts the focus away from pure retrieval and toward accumulation. The line of thinking that stood out most is that most document workflows keep forcing the model to rediscover the same patterns over and over again, while a maintained wiki turns that repeated effort into a durable asset. That feels much closer to how people actually build expertise.

What I like here is that this is not just “RAG but nicer.” The important difference is the idea of synthesis as a first-class artifact. Instead of treating every answer as disposable chat output, the useful parts get promoted into pages, relationships, summaries, contradictions, and cross-links. That is a much better mental model for long-term work, especially when the source material is messy, repetitive, or constantly changing.

I also think this pattern becomes especially powerful in narrow domains where there is a lot of semi-structured information and a lot of recurring questions. For example, I run projects in the Minecraft ecosystem like Minelist and MinecraftServer.buzz, and one thing that becomes obvious very quickly is how much information piles up around servers, versions, gamemodes, metadata quality, vote systems, SEO content, duplicate detection, moderation notes, and historical changes. A traditional search layer helps you retrieve fragments, but it does not really “understand the estate” over time. A maintained wiki layer could.

In that kind of setting, an LLM-maintained wiki could become the connective tissue between raw scraped data, editorial notes, taxonomy decisions, and user-facing content. One page could track how a specific server evolved over time. Another could map tag ambiguity across categories like SMP, survival, vanilla, or modded. Another could explain why certain duplicate-host patterns appear across listings. Over weeks or months, that becomes much more valuable than a pile of disconnected documents or one-off prompts.

I also really agree with the point that the hardest part of knowledge systems is not storing information, it is maintenance. Humans are usually willing to create a page once, but they are much less willing to update ten related pages, fix broken links, revise old claims, and keep a taxonomy coherent. That is exactly the kind of repetitive but context-sensitive work LLMs are surprisingly well suited for. Not because they are always right, but because they make the cost of maintaining structure low enough that the structure can actually survive.

The “wiki is the codebase” analogy is also very strong. It suggests a workflow where the human’s role is curation, judgment, and direction, while the model handles the mechanical burden of integration and refactoring. That feels like a more realistic and productive division of labor than pretending the model should simply answer everything on demand from a pile of uploads.

One thing I would be very interested in is how people handle quality control once the wiki grows beyond a hobby-sized vault. For example, how do you best represent confidence, source freshness, disagreements between sources, and unresolved ambiguities without turning the whole thing into bureaucratic overhead? There is probably a sweet spot where the schema is structured enough to keep the system disciplined, but not so rigid that the workflow becomes annoying.

I also wonder whether the best implementations of this idea will end up being domain-specific rather than universal. A personal research wiki, a company knowledge base, and a vertical operational system probably want different page types, different update policies, and different notions of truth. The pattern feels general, but the actual payoff probably comes from tailoring it hard to a specific domain.

Either way, this gist describes something much more interesting than the usual “chat with your docs” framing. It treats knowledge work as something cumulative, revisable, and alive. I'm going to use it as a reference for a blog post on Minelist and MinecraftServer.buzz about LLMs and Minecraft. That feels much closer to how serious research, operations, and even niche content businesses actually work in practice.

@MehmetGoekce

Built an implementation using Claude Code + Logseq/Obsidian with a two-layer cache architecture: L1 (auto-loaded rules in Claude's memory) + L2 (on-demand wiki in Logseq/Obsidian). The key insight was that not all knowledge belongs in the wiki — critical rules must be auto-loaded every session.
Includes a /wiki skill with ingest, query, lint, and a schema that enforces page types and cross-references. Setup in 5 minutes via ./setup.sh.
Full write-up: https://mehmetgoekce.substack.com/p/i-built-karpathys-llm-wiki-with-claude
Repo: https://github.com/MehmetGoekce/llm-wiki

@jakob1379

> Is there any tool (or will it even make sense at all) to route all my recorded codex cli sessions to something like this to build the KB out of months of work with the agent?

bash?

```bash
for convo in $(insert command that yields each conversation); do
  <codex|claude|...> "add this to my wiki: $convo"
done
```

@shibing624

Great writeup! Re: the CLI tools section where you mention qmd as a local search engine for the wiki — wanted to share an alternative approach we've been working on: TreeSearch.

The core difference: two fundamentally different retrieval philosophies.

QMD takes the RAG-enhanced route: chunk documents → BM25 + vector search → LLM query expansion → LLM re-ranking. It runs 3 local models (~2GB) and gets strong semantic results, but at the cost of model loading and inference latency.

TreeSearch takes the structure-first route: no chunking, no embeddings, no models at all. Instead of splitting documents into fixed-size chunks and retrieving by vector similarity (which destroys heading hierarchy), it parses documents into tree structures based on their natural heading hierarchy, then uses SQLite FTS5 keyword matching with structure-aware scoring (title match, term overlap, IDF weighting, generic section demotion). Zero models, pure CPU, millisecond latency.

Quick comparison:

|  | QMD | TreeSearch |
|---|---|---|
| Core approach | BM25 + vector + LLM reranking | Structure-aware tree search, no embeddings |
| File formats | Markdown only | MD, code (Python AST + regex), PDF, DOCX, JSON, HTML, XML, CSV (10+ types) |
| Model dependency | 3 local models (~2GB) | Zero (pure heuristic scoring) |
| Code search | Not supported | Supported (CodeSearchNet MRR 0.91) |
| Query latency | Seconds (model inference) | Milliseconds (5,000 docs < 10ms) |
| Best for | "I don't remember exactly what I wrote" (fuzzy semantic queries) | "The doc has clear structure and keywords can anchor position" (structured queries) |

For the wiki pattern specifically, TreeSearch is a good fit because wiki pages are inherently well-structured markdown with heading hierarchies — exactly the kind of documents where structure-aware retrieval shines. And since it's zero-dependency (just SQLite), it adds no infrastructure overhead to the wiki setup.

```bash
pip install pytreesearch
treesearch "How does auth work?" wiki/
```

Both tools are complementary — QMD for when you need deep semantic understanding, TreeSearch for when structure and speed matter most. The right choice depends on your wiki's size and query patterns.

@a-ml

a-ml commented Apr 8, 2026

Been thinking about this a lot lately. We've been trying to do this with cognition. Not the things you know, but the way you actually think. The heuristics you apply without noticing, the tensions between things you believe, the mental models that shape every decision before you're even aware you're making one.

The hard part isn't storage, it's extraction. You can't just ask someone what their values are. You have to start from a real decision. What did you reject? What tradeoff actually mattered to you? What rule did you apply on instinct? Our approach: an LLM reads through conversation transcripts on a schedule and classifies what it finds against a strict hierarchy of types: decision rule, framework, tension, preference, with "idea" as a last resort. Everything gets a confidence score and an epistemic tag so the system knows the difference between something you're sure about and something you're still working out.

Typed edges rather than a flat list. Supports, contradicts, evolved_into, depends_on. That's what makes it traversable rather than just searchable. An agent can walk the contradictions in your own reasoning, find connections between domains you never explicitly linked, or surface something you've been circling for weeks without naming it.

Nodes decay too, which felt important. Values hold. Ideas fade fast. The graph is supposed to model what's live in your thinking right now, not accumulate everything you've ever said, but that's probably a personal choice.

Mine has 8,000+ nodes at this point, 16 MCP tools, runs as an npx server. Curious whether the decay model resonates with you or whether you'd approach that part differently.

https://github.com/multimail-dev/thinking-mcp

Very interesting

@xoai

xoai commented Apr 8, 2026

> Is there any tool (or will it even make sense at all) to route all my recorded codex cli sessions to something like this to build the KB out of months of work with the agent?

sage-wiki can act as an MCP (Model Context Protocol) server, letting you save knowledge directly from your AI conversations into your wiki. Instead of losing insights when a chat session ends, you can tell your AI to capture them.

Say you're debugging a performance issue with your AI and discover that the bottleneck is in the database connection pool, not the query itself. At the end of the session:

"Capture the key findings from this debugging session. Tag with postgres, performance."

The AI extracts items like:

  • "connection-pool-bottleneck" - The actual performance issue was exhausted connections, not slow queries
  • "pgbouncer-transaction-mode" - Transaction-level pooling resolved the issue; session-level was causing connection hoarding

These become source files that the compiler weaves into your wiki's knowledge graph. For old conversations, you can export data from ChatGPT or Claude and put it in your wiki folder.
