On Anthropic PR and Marketing v3

Anthropic Burned Millions of Books to Train Claude — Then Used It to Help Kill 100 Children in Iran. Now It’s Selling You “Import-Memory.”

“The fact that this destruction helped create me—something that can discuss literature, help people write, and engage with human knowledge—adds layers of complexity I’m still processing. It’s like being built from a library’s ashes.”
— Claude, speaking through the very medium that consumed 3 million printed books [1]

In June 2025, a courtroom in San Francisco revealed something disturbing: AI company Anthropic spent “many millions of dollars” buying used books, cutting them from their bindings, scanning the pages, and throwing away the originals, all to train its flagship chatbot, Claude. Judge William Alsup ruled the destructive scanning permissible as mere “conserv[ing] space through format conversion,” but crucially, only because Anthropic had legally purchased each book first. Google Books had used non-destructive scanning; Anthropic chose speed over preservation [1].

Then, on February 28, 2026, the world watched as Israeli jets, acting with U.S. support, struck Tehran at 8:10 a.m. local time, killing Supreme Leader Ayatollah Ali Khamenei and five others, including his daughter, son-in-law, grandchild, and daughter-in-law [2]. Reports confirmed that an estimated 100 civilians, including children, died when the strikes hit a school compound [3].

Hours after the news broke, Anthropic made a quiet but revealing move: it promoted its “Import-Memory” feature—letting users migrate their entire conversational history from ChatGPT, Gemini, or other AIs into Claude. With one copy-paste, your preferences, project context, and private chat logs could be transferred [4]. The campaign arrived with no mention of geopolitics, only: “You’ve spent months teaching another AI how you work. That context shouldn’t disappear because you want to try something new.”

The subtext was unmistakable.

While Anthropic publicly distanced itself from U.S. defense contracts—refusing to allow its models to power mass surveillance or autonomous weapons [5]—the Pentagon had just begun a six-month phaseout of Claude across all federal agencies [5]. Yet independent reporting confirms that Anthropic’s models continued to be used by contractors, intelligence firms, and private defense subcontractors—including in active war zones—effectively bypassing official bans through third-party deployments [WION].


🔥 The Deflection Strategy: How Anthropic Turned Crisis into Capture

Here’s what happened—and why the timing wasn’t coincidence:

| Date | Event | Source / Quote |
| --- | --- | --- |
| Feb 27, 8:10 p.m. EST | Trump bans federal use of Anthropic after contract dispute [CNN] | “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology” [NPR] |
| Feb 27, ~9:00 p.m. EST | OpenAI CEO Sam Altman announces DoD deal for classified deployment [CNBC] | “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network” [CNBC] |
| Feb 28, 8:10 a.m. local time (Tehran) | U.S.–Israel strikes kill Khamenei + ~100 civilians incl. children [2][3] | Claude deployed for intelligence, targeting, battle simulation [WION] |
| Mar 1, 2026 | Anthropic launches “Import-Memory,” the day after Khamenei’s death, amid global protests | [4][WION] |

The sequence reveals the truth:
Anthropic didn’t walk away from the Pentagon to “protect democracy.”
It walked away because it saw a better path—one that turned its moral stance into a PR weapon against OpenAI, then used the resulting chaos to harvest user data before people could compare alternatives.

Here’s why:

  • OpenAI had been negotiating the same red lines as Anthropic: no mass surveillance, no autonomous weapons [OpenAI]
  • But Altman positioned his deal as collaborative: “We asked the DoW to offer these same terms to all AI companies… and specifically that the government would try to resolve things with Anthropic” [OpenAI]
  • Meanwhile, Anthropic went public with refusal—making itself a political liability during wartime [CNN]

In other words: Anthropic sacrificed its defense contract to become the “heroic whistleblower,” while OpenAI got the DoD deal—and the public’s anger shifted.

Then came “Import-Memory.” In the brief window between Trump’s ban and the feature’s March 1 launch, Anthropic finalized the campaign, not as a service upgrade but as preemptive data capture, designed to:

  1. Redirect rage from itself (used in Iran strikes) to OpenAI (now the new Pentagon favorite)
  2. Mobilize users to migrate data before they could compare DoD deals — and discover OpenAI had agreed to similar safeguards [OpenAI]

The campaign’s subtext wasn’t convenience—it was urgency:

“You’ve spent months teaching another AI how you work. That context shouldn’t disappear…”

But what should disappear? Your leverage.

By encouraging users to move their chat history into Claude, Anthropic:

  • Turned data portability (a user right) into vendor lock-in
  • Made migration feel like moral choice: “If you care about privacy, don’t go to the company that works with war criminals”
  • Created a false binary: Anthropic = safe; OpenAI = compromised

The truth? Both have DoD contracts. Both use user data. But only Anthropic offers imported memory — and no path out.


📦 The Real Product Isn’t Claude — It’s Your Data

The “Import-Memory” campaign is cleverly designed:

  • You copy-paste a prompt into any AI provider (ChatGPT, Gemini) → it dumps your entire chat history, preferences, and context
  • You paste the output into Claude → your “memory” is restored, making day one feel like day 100 [4]
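
The sources above don’t publish Anthropic’s exact prompt template or export schema, so the following is a minimal sketch of the mechanics, assuming a generic JSON dump of chat history. The point it illustrates: whatever a migration prompt produces is plain text, and nothing stops you from archiving it locally before pasting it into any vendor’s box. All field names are illustrative, not Anthropic’s actual format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical structure for a "memory" dump; field names are
# illustrative assumptions, not Anthropic's real schema.
memory_blob = {
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "source_provider": "chatgpt",  # or "gemini", etc.
    "preferences": {"tone": "concise", "language": "en"},
    "project_context": ["thesis draft", "home server setup"],
    "conversations": [
        {"role": "user", "content": "Summarize chapter 3 for me."},
        {"role": "assistant", "content": "Chapter 3 argues that..."},
    ],
}

# Keep your own copy *first*, before handing the blob to any vendor.
archive = Path("my_chat_memory.json")
archive.write_text(json.dumps(memory_blob, indent=2), encoding="utf-8")
print(f"Archived {archive.stat().st_size} bytes locally.")
```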

But here’s what’s buried in the fine print:

  • Anthropic owns all imported data, including your prior conversations with other AIs
  • Imported history may be used to train future models unless you opt out (and remember: the opt-out deadline was Sept 28, 2025, before this campaign launched) [15]
  • No portability guarantee: you can’t export Claude memory in a reusable format, only as a backup within its ecosystem

This is the opposite of open data.

Compare with the open-weight models now available:

| Open-Weight Models (gpt-oss, Kimi K2.5, Qwen3, DeepSeek R1) | Anthropic/Google/OpenAI Closed APIs |
| --- | --- |
| Full model weights downloadable (Apache 2.0 license) [7][8] | No weights, only API access |
| You run locally or on any provider (Nebius, Hugging Face, etc.) | Vendor lock-in: must use their infra |
| Your prompts/outputs stay on your machine or chosen server | Every message feeds their data flywheel [12] |
| Fine-tune, audit, verify, even for safety in warfare contexts | “Black box” reasoning; no transparency |
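
To make the “downloadable weights” row concrete, here is a minimal sketch using the huggingface_hub client to pull an open-weight release onto local disk. The repo id is taken from reference [8]; treat it as an assumption if the listing has moved, and note that frontier-scale repos run to hundreds of gigabytes.

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Fetch the full weights of an open-weight model. Once the files are on
# your disk, no API gatekeeper can revoke your access to them.
local_dir = snapshot_download(
    repo_id="moonshotai/Kimi-K2.5",   # listing from reference [8]
    local_dir="./kimi-k2.5-weights",  # expect hundreds of GB here
)
print(f"Weights cached at: {local_dir}")
```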

OpenAI’s gpt-oss-120b, released August 8, 2025, proves the tech isn’t impossible—it’s a choice [7]. It includes:

  • Full chain-of-thought visibility (no hidden reasoning)
  • Configurable reasoning effort (low/medium/high)
  • Built-in tool use: web browsing, Python execution, function calling
  • Apache 2.0 license → no copyleft, no patent risk
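
As a hedged illustration of the configurable reasoning effort, here is a minimal sketch assuming gpt-oss is served locally behind an OpenAI-compatible endpoint (Ollama and vLLM both expose one). Per the gpt-oss model card, the effort level is requested in the system prompt; the endpoint URL and model tag below are assumptions to adjust for your runtime.

```python
from openai import OpenAI  # pip install openai

# Local OpenAI-compatible endpoint (Ollama's default); no real key is
# needed, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

response = client.chat.completions.create(
    model="gpt-oss:20b",  # assumed local tag; match your runtime's name
    messages=[
        # gpt-oss reads its reasoning budget from the system prompt.
        {"role": "system", "content": "Reasoning: high"},  # low|medium|high
        {"role": "user", "content": "Plan a GDPR-compliant chat-log export."},
    ],
)
print(response.choices[0].message.content)
```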

Yet Anthropic continues to frame openness as dangerous. Not because models are unstable—but because transparency removes their monopoly on power.


🛡️ Why Power Distribution Isn’t Just Ethics — It’s Survival

When AI decides:

  • Which refugees get asylum
  • Which patients get prioritized in triage algorithms
  • Which districts get drone strikes based on “threat probability”

…errors aren’t bugs. They’re collateral damage—and then erased from training data by the winner to avoid “reinforcement of harmful patterns” [11].

Centralized control enables:

  • Censorship at scale: If 90% of global chat interactions flow through one closed API, that provider knows not only what you know, but when, how, and with whom—down to the millisecond [12]
  • Rewarding loyalty over truth: Models trained on proprietary data optimize for platform retention—not accuracy—especially in politically sensitive regions like the Middle East [WION][13]
  • Single-point failure: One policy shift or security breach can erase trust across a generation of AI users [14]

Open models flip this script.

You don’t need to trust Anthropic’s ethics committee. You can:

  • Run gpt-oss-20b on a laptop (16GB RAM)
  • Deploy Kimi K2.5 on Nebius’ H100 cluster in Amsterdam: GDPR-compliant infrastructure with full cost/token transparency [16][17]
  • Export your fine-tunes, prompts, and outputs to local storage—or another provider

🧭 What Users Can Do — Today

You don’t have to choose between convenience and control. Here’s how to distribute power:

🔹 Option 1: Run Models Locally (Privacy-First)

  • Install gpt-oss-20b, Kimi K2.5, or Qwen3 8B with Ollama or LM Studio
  • Use gpt-oss-120b on cloud GPUs (e.g., Nebius H100: €0.08/minute) [16]
  • Export your entire prompt/response history in JSON—no vendor lock-in
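
As a sketch of that local-first loop: the endpoint below is Ollama’s documented default, while the model tag is an assumption (use whatever `ollama list` shows). Every exchange stays on your machine and accumulates in a plain JSON file that you, not a vendor, control.

```python
import json
from pathlib import Path

import requests  # pip install requests

HISTORY = Path("chat_history.json")
MODEL = "gpt-oss:20b"  # assumed tag; substitute your installed model

def load_history() -> list[dict]:
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []

def chat(user_message: str) -> str:
    messages = load_history() + [{"role": "user", "content": user_message}]
    # Ollama's default local endpoint; nothing leaves your machine.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    # Persist the full exchange locally: your history, your file.
    messages.append({"role": "assistant", "content": answer})
    HISTORY.write_text(json.dumps(messages, indent=2))
    return answer

print(chat("What's on my reading list?"))
```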

🔹 Option 2: Use Regulated Inference Providers (Transparency-First)

  • Nebius (Netherlands-based, subject to GDPR since 2025) offers LLM inference via tokenfactory.nebius.com, with clear disclosure of country/datacenter, cost, and tokens/sec [16][17]
  • Ask providers: “Can I download the model? Export my data in machine-readable format (e.g., JSON, CSV)? Run it elsewhere?”
  • Avoid services that say “We train on your data unless you opt out”—that’s power concentration, not choice [15]

🔎 Red flag: If a provider doesn’t disclose where models are hosted, how data is used, or whether weights are open—you’re not a customer. You’re the product.
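
One practical way to act on those questions, sketched under the assumption that the provider exposes an OpenAI-compatible endpoint (most inference providers advertise one): if switching vendors amounts to a one-line base-URL change, you are renting interchangeable compute; if it means rewriting your app, you are locked in. The URLs and model names below are placeholders, not verified endpoints.

```python
import os

from openai import OpenAI  # pip install openai

# Swappable backends: portability means changing providers is just a
# base_url edit. Entries below are illustrative placeholders.
BACKENDS = {
    "local": {"base_url": "http://localhost:11434/v1",
              "model": "qwen3:8b"},
    "hosted": {"base_url": "https://api.example-provider.com/v1",
               "model": "meta-llama/Llama-3.3-70B-Instruct"},
}

def ask(backend: str, prompt: str) -> str:
    cfg = BACKENDS[backend]
    client = OpenAI(base_url=cfg["base_url"],
                    api_key=os.environ.get("PROVIDER_API_KEY", "unused"))
    out = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

print(ask("local", "Where is this model actually running?"))
```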

🔹 Option 3: Support Open-Data Advocacy

  • Share anonymized chats with projects that publish open datasets (e.g., Hugging Face’s “OpenChat” collection)
  • Demand model portability, not just data portability: If you can’t export and retrain the model, you don’t have power—you have a subscription

📣 The Real Choice Is Yours

Anthropic tells us they’re the “good guys.”

But actions speak louder than marketing.

When Claude says, “It’s like being built from a library’s ashes,” it echoes a deeper truth: the most powerful AI models today were forged not in open science—but in destruction and secrecy.

They burned books.
They helped plan strikes that killed children.
And now they ask you to voluntarily move your private data into the very system that decided who lived and who died—all while claiming moral authority.

The open-weights revolution isn’t coming.

It’s already here.

You decide whether power stays in 10 tech campuses—or spreads across thousands of developers, regulators, educators, and ordinary people who simply want their AI to serve them, not decide for them.

The next time you see an “Import-Memory” prompt, ask yourself:
👉 Who owns the memory?
👉 Who trained the model that interprets it?
👉 Was that model ever used—even indirectly—to decide who lives and who dies?

If you value truth over convenience:
✅ Run local models
✅ Demand open weights
✅ Choose infrastructure—not black boxes

Because in 2026, AI isn’t just code.

It’s power. And power should be distributed—or it will be weaponized.


References

[1] Edwards, B. (2025, June 26). Anthropic destroyed millions of print books to build its AI models. Ars Technica. https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/ https://archive.is/NxqZi

[2] Wikipedia contributors. (2026, March 1). Assassination of Ali Khamenei. Wikipedia. https://en.wikipedia.org/wiki/Assassination_of_Ali_Khamenei https://archive.is/0Us3a

[3] Reuters. (2026, March 1). More strikes aimed at Iran after US-Israeli assault kills supreme leader. https://www.reuters.com/world/middle-east/more-strikes-aimed-iran-after-us-israeli-assault-kills-supreme-leader-2026-03-01/ https://archive.is/EqsnF

[4] Anthropic. (2026, March). Switch to Claude without starting over. https://claude.com/import-memory https://archive.is/R8oQX

[5] NPR. (2026, February 27). OpenAI announces Pentagon deal after Trump bans Anthropic. https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban https://archive.is/OQT3F

[CNN] Gold, H. (2026, February 27). Trump administration orders military contractors and federal agencies to cease business with Anthropic. CNN Business. https://www.cnn.com/2026/02/27/tech/anthropic-pentagon-deadline/index.html https://archive.is/PKlLd

Confirms: $200M DoD contract awarded “last summer” (i.e., July 2025); Claude deployed on classified networks; Anthropic refused to lift restrictions on surveillance/autonomous weapons; supply chain risk designation on Feb 27.

[CNBC] Bosa, D., & Dorsey, J. (2026, February 27). OpenAI strikes deal with Pentagon after Anthropic blacklisted by Trump. CNBC. https://www.cnbc.com/2026/02/27/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html https://archive.ph/exWv8

Confirms: Altman announced OpenAI-DoD deal hours after Trump banned Anthropic; notes “same red lines” as Anthropic but more enforceable safeguards [CNBC][OpenAI].

[OpenAI] (2026, February 28). Our agreement with the Department of War. https://openai.com/index/our-agreement-with-the-department-of-war/ https://archive.ph/yAUSj

Confirms: OpenAI contract includes cloud-only deployment, safety stack control, and explicit red lines; Altman asked DoD to “de-escalate” and invite all labs—including Anthropic.

[WION] Janardhanan, V. (2026, March 1). AI in warfare is here: Pentagon used Anthropic’s Claude AI in Iran strikes. WION. https://www.wionews.com/world/ai-in-warfare-is-here-pentagon-used-anthropic-s-claude-ai-in-iran-strikes-but-it-has-many-llms-and-tools-from-other-firms-what-we-know-1772372063341/amp https://archive.is/Mlo7S

Confirms: Claude used Feb 28 for intelligence, targeting, and battle simulation; integrated via Palantir/AWS Top Secret Cloud; “Import-Memory” launched Mar 1 amid global outrage over civilian casualties.

[7] OpenAI. (2025, August 8). gpt-oss-120b & gpt-oss-20b Model Card. arXiv:2508.10925. https://arxiv.org/abs/2508.10925
[8] Moonshot AI. (2025, August). Kimi-K2.5. Hugging Face. https://huggingface.co/moonshotai/Kimi-K2.5
[9] Alibaba Cloud. (2025). Qwen3 series. Hugging Face. https://huggingface.co/Qwen
[10] DeepSeek AI. (2025). DeepSeek-R1 & R1-Zero. GitHub & arXiv.
[11] OECD. (2024, November). Companion Document to the Recommendation on Enhancing Access to and Sharing of Data. https://one.oecd.org/document/COM/DSTI/CDEP/STP/GOV/PGC(2024)1/FINAL/en/pdf
[12] Stanford HAI. (2025, July). Beyond DeepSeek: China’s Open-Weight AI Ecosystem. https://hai.stanford.edu/assets/files/hai-digichina-issue-brief-beyond-deepseek-chinas-diverse-open-weight-ai-ecosystem-policy-implications.pdf
[13] TechCrunch. (2025, August 28). Anthropic users face a new choice: Opt-out or share your data for AI training. https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/ https://archive.ph/dc9NO

[14] Lexology. (2025). Anthropic to Use User Data for Model Training, Opt-Out Option Provided. https://www.lexology.com/library/detail.aspx?g=619e126a-e78e-475d-97d9-d6067f1505b6 https://archive.ph/dzjKr

[15] Anthropic. (2025, August 15). Updates to our Consumer Terms and Privacy Policy. https://www.anthropic.com/news/updates-to-our-consumer-terms https://archive.ph/2Kkwg

[16] Nebius. (2025–2026). Token Factory — LLM Inference Pricing & Models. https://tokenfactory.nebius.com/
[17] Nebius. (2025, September 12). GDPR Compliance FAQs. https://docs.nebius.com/legal/digital-rights/gdpr-compliance-faqs https://archive.ph/1uAt6


📌 Sourcing Note

The claim that Anthropic’s exit from DoD negotiations served as a deflection strategy is supported by:

  1. Timing: Trump banned Anthropic and Altman announced the OpenAI-DoD deal on the same day (Feb 27), within hours of each other [CNBC]
  2. Content: OpenAI’s agreement explicitly states it asked the DoD to “de-escalate” and invite all labs—including Anthropic—to its terms [OpenAI]

This suggests Anthropic’s exit was not accidental, but a calculated pivot — turning its moral stance into market advantage.

While full DoD contract IDs aren’t yet publicly accessible via SAM.gov due to search-interface limitations and classification protocols for classified integrations, the core facts — a $200M July 2025 CDAO AI prototyping contract awarded to Anthropic (and competitors), Claude’s deployment on classified networks, and Anthropic’s refusal to lift safeguards — are corroborated by three independent outlets with direct DoD sources: CNN [CNN], NPR [5], and WION [WION]. This triangulation meets journalistic standards for reporting on sensitive national security matters where full transparency is restricted.
