
@karpathy
Created April 4, 2026 16:25
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then kept current, not re-derived on every query.

This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.

You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.

This can apply to a lot of different contexts. A few examples:

  • Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
  • Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
  • Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like Tolkien Gateway — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
  • Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
  • Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.

Architecture

There are three layers:

Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.

The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.

The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
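As a hedged illustration (directory names and conventions here are placeholders — you and your agent will settle on your own), a minimal schema file might look something like this:

```markdown
# Wiki schema (CLAUDE.md / AGENTS.md)

## Layout
- raw/      — immutable sources; read but never edit
- wiki/     — LLM-maintained pages; cross-reference with [[wikilinks]]
- index.md  — catalog of all pages, updated on every ingest
- log.md    — append-only history, entries prefixed `## [YYYY-MM-DD] op | title`

## On ingest
1. Read the new source in raw/ and discuss key takeaways with the user.
2. Write a summary page in wiki/, linking related entity and concept pages.
3. Update index.md and any affected pages across the wiki.
4. Flag contradictions with existing claims rather than silently overwriting.
5. Append an entry to log.md.
```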

Operations

Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.

Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.

Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.
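Some of these lint checks are easy to mechanize. A minimal sketch of an orphan-page detector, assuming Obsidian-style `[[wikilinks]]` and pages passed in as a name-to-text mapping (the function and page names are hypothetical; a real version would walk the wiki directory):

```python
import re

# Matches [[Page]], [[Page|alias]], and [[Page#heading]], capturing the page name.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(pages):
    """Given {page_name: markdown_text}, return pages with no inbound links."""
    linked = set()
    for name, text in pages.items():
        for target in WIKILINK.findall(text):
            target = target.strip()
            if target != name:  # self-links don't count as inbound
                linked.add(target)
    return set(pages) - linked
```

The LLM can then triage the resulting list: link the page in from somewhere, merge it into another page, or delete it.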

Indexing and logging

Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:

index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
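As a hypothetical illustration (the categories and metadata are placeholders), an index.md might look like:

```markdown
# Index

## Entities
- [[Sauron]] — primary antagonist; referenced by 12 sources (updated 2026-04-02)
- [[The Ring]] — central artifact; contradictions between sources noted on page

## Concepts
- [[Ring-lore]] — evolving synthesis of everything read about the Rings of Power

## Sources
- [[Chapter 01 Summary]] — ingested 2026-03-30
```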

log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. ## [2026-04-02] ingest | Article Title), the log becomes parseable with simple unix tools — grep "^## \[" log.md | tail -5 gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
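That consistent prefix also makes the log trivial to parse programmatically, not just with unix tools. A small sketch, following the entry format above (the function name is hypothetical):

```python
import re

# Matches headers of the form: ## [2026-04-02] ingest | Article Title
ENTRY = re.compile(r"^## \[(\d{4}-\d{2}-\d{2})\] (\w+) \| (.+)$", re.MULTILINE)

def recent_entries(log_text, n=5):
    """Parse '## [date] op | title' headers out of log.md; return the last n."""
    entries = [
        {"date": date, "op": op, "title": title}
        for date, op, title in ENTRY.findall(log_text)
    ]
    return entries[-n:]
```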

Optional: CLI tools

At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. qmd is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
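For flavor, here is what such a vibe-coded naive search script might look like — a rough TF-IDF-style scorer over in-memory pages, with hypothetical names throughout. It is a sketch under stated assumptions, not a substitute for a real engine like qmd:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase alphanumeric tokens; good enough for a small markdown wiki."""
    return re.findall(r"[a-z0-9]+", text.lower())

def search(pages, query, top_k=3):
    """Rank {page_name: text} by query-term frequency, weighted so that
    rarer terms (low document frequency) count for more."""
    docs = {name: Counter(tokenize(text)) for name, text in pages.items()}
    n_docs = len(docs)
    doc_freq = Counter()
    for counts in docs.values():
        doc_freq.update(counts.keys())

    scores = {}
    for name, counts in docs.items():
        score = 0.0
        for term in tokenize(query):
            if counts[term]:
                score += counts[term] * math.log(1 + n_docs / doc_freq[term])
        if score > 0:
            scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

At the scale of a personal wiki, a brute-force pass over every page like this is plenty fast; proper BM25 and vector search only start to matter as the corpus grows.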

Tips and tricks

  • Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
  • Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
  • Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
  • Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
  • Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists.
  • The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
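The Dataview tip above can be made concrete. Assuming the LLM writes frontmatter fields such as `tags`, `date`, and `source_count` (hypothetical field names — match whatever your schema defines), a query embedded in any wiki page could render a live table of entity pages:

```dataview
TABLE date, source_count
FROM "wiki"
WHERE contains(tags, "entity")
SORT date DESC
```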

Why this works

The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.

The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.

The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.

Note

This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.

@jmcastagnetto

Using an LLM as an assistant to organize one's digital mess is a good idea. Perhaps it could be compounded with the ideas/framework of the Zettelkasten method (https://zettelkasten.de/introduction/) - I've tried to do this manually but never had the time required to organize all the digital minutiae that live on my computer.

@jurajskuska

Hi Andrej, here are some points we reached a bit earlier using Obsidian. Greetings from Bratislava.

The Destination: Self-Improving Multi-Agent System

What the architecture becomes when the loop matures:

🤖 Specialised agents, not one generalist — small agents each own a narrow task: research, audit, context update, safety check. Scoped context means fewer mistakes, faster execution, no overload.

Parallel execution — agents run simultaneously. Research agent finds information while audit agent checks integrity while context agent updates gaps. Human is not the bottleneck.

🔁 Self-improving context loop — agents report what was missing or wrong. Context is updated. Next run starts better than the last. The loop runs until context is sufficient — then agents operate with minimal human input.

🛡️ Human + safety agents as overseers — the human is not doing the work but overseeing it: reviewing flagged weaknesses, approving context updates, watching for injection or drift. Specialised safety agents run the 4-eyes check automatically.

🧠 Autoresearch as natural output — when context is rich enough and agents are specialised enough, research loops run autonomously. Human sets the question, agents find the answer, safety layer validates, context is updated with findings.

📈 Self-learning by design — every session adds to the indexed layer. Every gap found improves the next run. The system learns from its own history without anyone explicitly teaching it.

🎯 Human role shifts — from operator to architect. From doing to directing. From fixing gaps manually to reviewing what the system flagged and approving the fix.

🐜 Small models as executors, large models as architects — bigger models are not always desired. With proper context equipment, smaller models execute reliably and cheaply — like ants working the same target in parallel. Larger models are reserved for what they do best: creating new solutions, designing better approaches, solving novel problems. The division is natural: architect once, execute many times.

🔨 "You wouldn't nail pins with a big hammer." — Juraj, 2026-04-05

💡 Human creativity is the multiplier — the system amplifies what humans bring, it doesn't replace it. When the human is properly involved — setting direction, spotting what agents miss, injecting creative leaps — the synergy produces results none of the parts could reach alone. Agents execute with precision. Humans provide the spark.

🌐 End state: a parallel, self-correcting, context-driven agent network — where MD files are the shared language, Obsidian is the human dashboard, SQLite is the speed layer, large models design, small models execute, human creativity drives the direction, and the loop never stops improving.

@Nimo1987

Nimo1987 commented Apr 6, 2026

This note was a big inspiration. I ended up building an open-source implementation of the idea here:

https://github.com/Nimo1987/atomic-knowledge

I pushed it in the direction of a markdown-first work-memory protocol for existing agents: explicit ingest/query/writeback/maintenance flows, a provisional candidate buffer before durable pages, and a small example KB plus evals.

Thanks for the original framing.

@romgenie

romgenie commented Apr 6, 2026

@karpathy, WOW, this was such an amazing setup. I've revised it heavily from the initial version, but I'm deeply impressed with it. This would have taken me months to organize.

https://github.com/CompleteTech-LLC-AI-Research/beyond-the-token-bottleneck


@jurajskuska

Hi Andrej, we were also trying to take care of safety. Please read this too if you want. Juraj from Bratislava and Vienna.

Synergies — What the System Solved Together

Things that emerged from the combined human + AI layer working in tandem:

Safety and sandboxing per agent
The architecture enables each AI agent to operate in its own sandbox — isolated context, isolated tool access. Safety is structural, not dependent on trust alone. The 4-eyes / CLAUDE.md injection work extended this: the system now has a model for detecting when the soft layer is compromised.

Avoiding wrong assumptions on both sides
Shared session MDs mean neither side is operating on a private mental model of where things stand. Misalignment is surfaced early — in the Decisions Made section, in Known Issues — rather than discovered mid-task after wasted work. Both sides stay on the same branch.

Speed of search via SQLite + context-mode
ctx_search against indexed SQLite replaces manual digging through raw files. Deep recall that would have taken many Read calls and minutes of context loading now takes one query. Speed compounds: faster recall → more time for actual work.

Incremental context management — collaborative
The startup context loop (Tokens + Missing) is not a one-shot setup — it's an iterative system that both sides improve. Human adjusts what goes in, Claude reports what was missing. Neither side can optimise this alone. The collaboration is the mechanism.

Second deeper level — JSONL indexing
Claude's own conversation transcripts are indexed and made searchable. The knowledge Claude generated is not lost between sessions — it becomes a queryable layer. Deep questions ("what exactly did we decide about X three weeks ago?") are answerable without human memory or manual search. The system's own history becomes an asset.

[!note] Through-line
The move was from Claude as a tool the human operates, to Claude as a collaborator with shared state. The Obsidian layer is what made that possible.

@jurajskuska

Dear Andrej, greetings from Bratislava. The main communication file between the human and the AI agent in our setup is a session MD in Obsidian. Here is a breakdown of what it currently provides to the human to help improve the effectiveness and quality of the context provided.

This file captures Claude's evaluation of the session MD format — what each section does, why it exists, and what helps most for effective human-AI collaboration.


Section-by-Section Breakdown

Frontmatter (date, tools_used, files_changed, related)
Machine-readable index. Lets future sessions and search tools instantly know what happened without reading the full note. Human doesn't need to scan — agent can query it.

SessionStart sources block
Tells both parties exactly what Claude knew at the start. Eliminates the "did you already know X?" ambiguity. Human doesn't need to re-explain context that was already injected.

JSONL sources block
Links to raw transcripts. If something is disputed or needs deep recall, the source is right there — agent can index and search it without human having to dig through history.

Startup Context Tokens table
Measures the cost of what was pre-loaded. Human can see if they're over-loading Claude (wasted tokens) or under-loading (Claude will be asking for things). Makes startup calibration a data decision, not a guess.

Missing From Startup Context
The most valuable feedback loop section. Claude reports what it had to search for mid-session that should have been pre-loaded. Human adjusts startup files. Over cycles, the startup converges — Claude arrives ready to work, fewer interruptions asking "where is X?"

Summary
3-sentence state of play. Human can read one paragraph and know if they agree with what happened. Quick alignment check, no need to read everything.

Decisions Made
Explicit record of what was decided and why. Prevents re-litigating the same questions next session. Agent can reference this instead of asking human to re-explain a past choice.

State After Session
Snapshot of what's actually running/configured right now. Agent starts next session knowing current reality, not assumed reality. Human doesn't need to answer "what state are we in?"

Known Issues / Warnings
Deferred problems, explicitly labeled. Human can prioritize — nothing silently forgotten. Agent won't accidentally assume something works when it's flagged here.

Next Steps
Handoff list from this session to next. Human doesn't carry it in their head. Agent reads it at startup and knows where to begin without a briefing.

Related wikilinks
Graph of what connects to what. Agent can pull adjacent context on demand without asking human "what did we do last time about X?"

Stats (ctx_stats)
Token/context efficiency report. Shows whether context-mode was actually protecting the context window. Over time: evidence that the tooling is working or needs adjustment.

[!note] Core Pattern
Every section moves information out of the human's head and into a queryable record. The bottleneck in human-AI collaboration is usually the human having to re-orient, re-explain, or re-decide something already settled. These sections are all attempts to make that unnecessary.

@chipsageSupport

So many experts here. Can I get some advice? My PC: Intel Core Ultra 7 155H with 32 GB RAM.
If I want to build such a wiki for the semiconductor industry locally (starting with my manually written knowledge base doc), which LLM should I download and run locally? Qwen2.5-7B-Instruct?

@denniscarpio30-jpg

Been running this pattern in production for months, but from a non-engineering context - enterprise service delivery management (client stakeholder coordination, ticket tracking, document generation across multiple clients). Claude Code + Obsidian.

Three things that made the biggest difference:

Entity pages for people, not just concepts. I maintain wiki pages for ~15 key stakeholders with communication preferences and decision patterns. The LLM checks these before drafting any email or meeting prep. Immediate quality jump in client communications.

The schema file is the real flywheel. Every correction I give the LLM gets filed back into CLAUDE.md so it never repeats the same mistake. Over months this compounds into something surprisingly sophisticated - tone rules per client, anticipation protocols, agent dispatch logic.

Automate the maintenance or it dies. Scheduled agents run nightly - meeting prep generation, stale ticket scanning, dashboard updates - all writing directly into the wiki. The knowledge base stays current not because I remember to update it, but because the system does it on a schedule. This is what makes the pattern sustainable long-term.

You don't need to be a developer to build this. The LLM builds and maintains the whole thing. You just need to be disciplined about feeding corrections back into the schema.

@bashiraziz

Based on this idea, I have created a repo: https://github.com/bashiraziz/llm-wiki-template. I used Claude for it and am now working on creating a Claude skill as well, for others to use if they so desire.
