@NorseGaud
Last active March 8, 2026 14:17
Learning Compression in 48 Hours

Follow the Learning Compression in 48 Hours protocol. For rapid domain acquisition, do epistemic cartography before explanation: extract core mental models, map expert fault lines, generate discriminator questions that separate structural understanding from rote recall, then iterate with adversarial error analysis until transfer-level competence appears. Read Learning-Compression-in-48-Hours.md in full so you understand the method before applying it.

Learning Compression in 48 Hours

This method is not "speed reading." It is accelerated model-building. The objective is to compress the time-to-structure: move from disconnected facts to a usable map of a field's ontology, causal logic, debates, & testable claims. Most learners fail because they request summaries. Summaries produce familiarity. This method produces discriminative competence.

Core Principle

The constraint in learning is usually not content volume. The bottleneck is question quality. If your prompts only ask for restatement, you get polished redundancy. If your prompts ask for model extraction, disagreement topology, & diagnostic evaluation, you get expert-like scaffolding early.

Treat any corpus-grounded AI assistant as an epistemic sparring partner, not a highlighter.

Input Architecture (What to Ingest First)

A single textbook creates monoculture bias. Build a multi-source corpus before running analysis:

  • 4-8 textbooks or comprehensive references on the same domain.
  • 10-25 high-quality papers, ideally including conflicting schools.
  • Lecture transcripts, syllabi, qual-exam prep docs, lab notes, or conference tutorials.
  • Optional: practitioner artifacts (standards docs, postmortems, case studies).

The aim is viewpoint diversity with enough overlap to detect stable invariants.
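The source minimums above can be made checkable before you start analysis. A minimal sketch, assuming a simple (title, category) list as the corpus manifest; the data structure and category names are illustrative, not part of the protocol:

```python
from collections import Counter

# Each corpus entry: (title, category). Categories follow the list above.
CORPUS = [
    ("Textbook A", "textbook"), ("Textbook B", "textbook"),
    ("Textbook C", "textbook"), ("Textbook D", "textbook"),
    ("Paper 1", "paper"), ("Paper 2", "paper"),  # ... 10-25 papers in practice
    ("Lecture transcript", "course_material"),
]

def corpus_gaps(corpus):
    """Return which source categories fall short of the protocol's minimums."""
    counts = Counter(cat for _, cat in corpus)
    minimums = {"textbook": 4, "paper": 10, "course_material": 1}
    return {cat: need - counts.get(cat, 0)
            for cat, need in minimums.items()
            if counts.get(cat, 0) < need}

print(corpus_gaps(CORPUS))  # → {'paper': 8}: add 8 more papers before Phase 1
```

Running the check before Phase 1 catches monoculture bias early, when adding sources is still cheap.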

Phase 1: Epistemic Cartography (First 20-40 Minutes)

Start with this prompt:

"What are the 5 core mental models that every expert in this field shares? For each model: define it, state what it explains, give boundary conditions, & cite sources that support it."

This forces abstraction above terminology. You are extracting shared latent structure, not memorizing chapter flow.

Then immediately ask:

"Show the 3 highest-stakes places where experts in this field fundamentally disagree. For each disagreement: define the crux, present each side's strongest argument, list empirical evidence each side uses, & identify what evidence would falsify each side."

Now you have a first-pass map of consensus, fault lines, & open uncertainty. This is what most students take months to infer implicitly.

Phase 2: Discriminator Construction

Next prompt:

"Generate 10 questions that distinguish deep structural understanding from memorized recall in this field. Each question should require model-based reasoning, not definition recall. Also provide what an expert-quality answer must include."

These are discriminator questions. Their function is binary classification: "can reason" vs "can repeat." If a question can be answered by quoting a sentence, discard it.
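The discard rule ("answerable by quoting a sentence") can be roughly mechanized as a first-pass filter. A sketch only: the word-overlap heuristic and the 0.8 threshold are assumptions of mine, and real filtering still needs human judgment:

```python
def quotable(answer, source_sentences, threshold=0.8):
    """Heuristic: an answer mostly covered by one source sentence signals
    a recall question, not a discriminator. Word overlap is a rough proxy."""
    answer_words = set(answer.lower().split())
    for sentence in source_sentences:
        overlap = len(answer_words & set(sentence.lower().split()))
        if answer_words and overlap / len(answer_words) >= threshold:
            return True
    return False

sources = ["Entropy measures the average information content of a source."]
print(quotable("Entropy measures the average information content of a source.", sources))  # True → discard
print(quotable("Compare entropy-based and model-based compression tradeoffs.", sources))   # False → keep
```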

Phase 3: Adversarial Self-Testing Loop (Hours 2-8)

For each discriminator question:

  1. Answer from your own understanding first, without seeing model answers.
  2. Ask the assistant to score your answer against the expert criteria.
  3. For every miss, run:
    • "Explain precisely why this answer is wrong or incomplete."
    • "Identify the missing model, hidden assumption, or causal link."
    • "Give the minimum revision that would make the answer defensible."
  4. Rewrite from scratch.
  5. Ask for one transfer variant: same principle, different context.

This loop is where compression happens. Error diagnosis updates your internal model faster than passive review because each correction is attached to a failed prediction.
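The five steps above can be sketched as a driver loop. Everything here is illustrative: `ask` is a placeholder for whatever model interface you use, and the PASS/FAIL verdict convention is an assumption, not a prescribed format:

```python
def ask(prompt):
    """Placeholder for a real model call (API client, local model, chat UI)."""
    raise NotImplementedError

def adversarial_loop(question, criteria, my_answer_fn, ask=ask, max_rounds=3):
    """Phase 3: answer first, get scored, diagnose misses, rewrite, repeat."""
    answer = my_answer_fn(question)  # step 1: answer from memory, unaided
    for _ in range(max_rounds):
        verdict = ask(f"Score this answer against the expert criteria.\n"
                      f"Question: {question}\nCriteria: {criteria}\n"
                      f"Answer: {answer}\nStart your reply with PASS or FAIL.")
        if verdict.startswith("PASS"):
            break
        diagnosis = ask("Explain precisely why this answer is wrong or incomplete. "
                        "Identify the missing model, hidden assumption, or causal link. "
                        "Give the minimum revision that would make it defensible.")
        answer = my_answer_fn(f"{question}\n(Revise, addressing: {diagnosis})")  # step 4
    transfer = ask("Give one transfer question: same principle, different context.")
    return answer, transfer
```

The point of the structure is that every model call is attached to a concrete failed answer, not a request for a summary.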

Phase 4: Compression Into a Conversation-Ready Model (Hours 8-24)

Once you pass the first discriminator set, force synthesis:

"Build a one-page field map: key models, major disagreements, canonical methods, common failure modes, unresolved questions, & practical implications."

Then:

"Simulate an oral qualifying exam. Ask one question at a time, escalate difficulty when I answer correctly, & interrupt me when reasoning breaks."

The oral format exposes brittleness quickly. You are training for live reasoning under pressure, not static note quality.

Phase 5: 48-Hour Stabilization Protocol

Day 1 produces rapid gains that decay unless reconsolidated. Use this schedule:

  • T+12h: Re-answer 5 discriminator questions from memory.
  • T+24h: Do a second oral exam simulation with novel cases.
  • T+36h: Explain the field to an imagined novice in plain language, then reframe in technical language.
  • T+48h: Produce a "what changed in my model?" memo: old assumptions, corrected assumptions, remaining unknowns.

Spacing with retrieval prevents the illusion of mastery that comes from short-term fluency.
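The schedule above is simple to generate concretely. A minimal sketch; the only assumption is that offsets are measured from the end of the Day 1 session:

```python
from datetime import datetime, timedelta

CHECKPOINTS = {  # hour offsets from the end of Day 1, per the schedule above
    12: "Re-answer 5 discriminator questions from memory",
    24: "Second oral exam simulation with novel cases",
    36: "Explain the field to a novice, then reframe technically",
    48: "'What changed in my model?' memo",
}

def stabilization_schedule(day1_end):
    """Return (timestamp, task) pairs for the 48-hour reconsolidation window."""
    return [(day1_end + timedelta(hours=h), task)
            for h, task in sorted(CHECKPOINTS.items())]

for when, task in stabilization_schedule(datetime(2026, 3, 8, 18, 0)):
    print(when.strftime("%a %H:%M"), "-", task)
```

Putting the four checkpoints on a calendar is what prevents the single-long-session failure mode listed below.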

Prompt Stack (Copy/Paste Sequence)

  1. "Extract 5 core mental models shared by experts. Include definitions, explanatory scope, boundary conditions, & source support."
  2. "Map 3 fundamental expert disagreements. For each: crux, strongest arguments per side, empirical basis, falsification criteria."
  3. "Generate 10 discriminator questions that separate deep understanding from memorization. Include expert answer criteria."
  4. "Evaluate my answer to Question X against those criteria. Diagnose missing assumptions, broken logic, & model errors."
  5. "Give one transfer question that tests the same model in a different domain context."
  6. "Run a live oral exam with escalating difficulty. Interrupt weak reasoning immediately."
  7. "Create a one-page synthesis map of the field, including consensus, disputes, methods, failure modes, & open problems."
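The stack can be kept as data and replayed in order. A minimal sketch; `ask` again stands in for your model call, and keeping a transcript is my addition so later prompts can quote earlier output:

```python
PROMPT_STACK = [
    "Extract 5 core mental models shared by experts. Include definitions, "
    "explanatory scope, boundary conditions, & source support.",
    "Map 3 fundamental expert disagreements. For each: crux, strongest "
    "arguments per side, empirical basis, falsification criteria.",
    "Generate 10 discriminator questions that separate deep understanding "
    "from memorization. Include expert answer criteria.",
    # ... prompts 4-7 follow the numbered list above
]

def run_stack(ask, stack=PROMPT_STACK):
    """Send each prompt in order; keep a (step, prompt, reply) transcript."""
    return [(i, p, ask(p)) for i, p in enumerate(stack, 1)]
```

Note that steps 4-6 are interactive (they score and interrogate your answers), so in practice only steps 1-3 and 7 run unattended; the rest belong inside the Phase 3 loop.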

Why This Works

This method exploits four mechanisms simultaneously: model-first abstraction, contrastive reasoning across disagreements, retrieval with feedback, & transfer testing under variation. Together they approximate the learning conditions that produce expert intuition: repeated prediction, correction, & reorganization of concepts.

In short, you are not trying to know more facts faster. You are trying to converge faster on the same latent structure experts use to think.

Failure Modes (Do Not Ignore)

  • If your corpus is low quality, the map will be wrong faster.
  • If you skip disagreement mapping, your understanding will be one-sided.
  • If you avoid adversarial feedback, you will mistake confidence for competence.
  • If you only do one long session, decay will erase most gains.
  • If your prompts ask for summaries, you will get polished shallowness.

Fast learning is real, but only when epistemic rigor stays high.
