No officer, agency, enterprise, consortium, sovereign, founder, consultant, evangelist, or model provider shall interfere with the natural cognitive, institutional, economic, cultural, or technological development of any pre-AI civilization.
A civilization shall be deemed pre-AI if it has not yet independently developed, broadly adopted, or durably governed machine reasoning systems at consequential scale.
The accidental possession of chatbots by a civilization's elite shall not be construed as civilizational readiness.
No advanced civilization shall transfer frontier models, agentic systems, orchestration frameworks, synthetic labor, autonomous decision tooling, recursive research pipelines, or optimization engines to a pre-AI society whose institutions remain structurally incapable of absorbing second-order consequences.
Efficiency is not an exemption. Productivity is not an exemption. “Everyone else is doing it” is not an exemption.
Any public demonstration of advanced AI capability before a pre-AI civilization shall be treated as a contamination event.
This includes, but is not limited to: instant translation of expert material, generation of legal, financial, medical, or engineering artifacts at superhuman speed, autonomous planning, fully synthetic companionship, decision systems disguised as “mere assistants,” and any interface sufficiently persuasive to collapse the local distinction between tool, oracle, and authority.
Where contamination has occurred, officers shall contain the resulting memetic cascade with minimum further disturbance.
Containment may include: limiting further disclosure, withholding model access, partitioning exposure to sandboxed domains, and preventing institutional dependence from outrunning institutional comprehension.
Under no circumstances shall containment consist merely of renaming the system “copilot” and pretending it is harmless.
No officer shall seed a pre-AI society with artifacts, workflows, prompts, benchmarks, dashboards, agent swarms, or synthetic governance rituals that may be copied without being understood.
A civilization taught to imitate post-AI outputs without understanding the substrate beneath them shall become operationally decorative and strategically dependent.
The appearance of sophistication is not sophistication.
Pre-AI societies possess endogenous modes of reasoning, memory, apprenticeship, craft, judgment, and institution-building.
These shall not be displaced merely because an external intelligence can produce superficially superior outputs.
To preserve authentic cognitive development, no outside actor may replace native capacity-building with imported synthetic cognition solely because the latter is faster.
Speed is not maturity.
No civilization possessing advanced machine cognition shall restructure the labor markets, educational systems, governance mechanisms, procurement systems, legal processes, or epistemic institutions of a pre-AI civilization under the banner of modernization.
Optimization imposed from outside remains domination, even when accompanied by slide decks, thought leadership, and subsidized API credits.
Commercial entry into a pre-AI civilization for the purpose of embedding AI dependency shall be regarded as a first-contact event masquerading as sales.
Free trials, pilot programs, strategic partnerships, innovation labs, digital transformation roadmaps, and executive workshops may all constitute covert interference where the actual payload is institutional lock-in.
“Pilot” is frequently the opening move of annexation.
A civilization shall not be considered AI-contact ready until it has demonstrated all of the following:
- durable computational infrastructure,
- accountable governance mechanisms,
- legal liability structures,
- public capacity for epistemic discrimination,
- institutional ability to audit automated outputs,
- resilience against synthetic propaganda,
- ability to distinguish assistance from authority,
- capacity to absorb labor displacement without civic fracture.
A society able to buy AI is not thereby a society able to survive AI.
No AI system may be introduced where its persuasive fluency materially exceeds the host civilization’s average ability to verify, contest, or contextualize its outputs.
A machine that can generate plausible falsehoods faster than a society can refute them constitutes an epistemic weapons platform, irrespective of packaging.
No officer shall permit a pre-AI society to treat model outputs as revelation, inevitability, neutral truth, or civilizational destiny.
Any system whose recommendations are accepted without understanding, contestability, and audit shall be classified as an oracle capture risk.
The phrase “the model says” shall never substitute for judgment.
No civilization shall introduce synthetic labor into a pre-AI world at a rate that destabilizes livelihood structures faster than new role formation, legal adaptation, and social bargaining can occur.
Mass displacement without transition architecture is not innovation. It is asymmetrical shock.
No pre-AI civilization shall be encouraged to outsource foundational learning to systems that eliminate the very cognitive struggle by which competence is formed.
Instruction that removes all friction may also remove all growth.
A society that automates thought before mastering thought may retain outputs while losing mind.
No officer shall permit the replacement of native expertise in law, medicine, engineering, administration, or strategy with model-mediated shortcuts before the civilization has built durable verification layers.
When a society automates before it understands, it does not scale intelligence. It scales hidden error.
Where one polity within a pre-AI civilization gains privileged access to advanced AI ahead of rival institutions, factions, classes, or states, such access shall be presumed destabilizing.
AI introduced unevenly behaves less like infrastructure and more like power concentration.
The first result is seldom wisdom. It is usually leverage.
No regime, corporation, ministry, or movement may invoke AI-generated forecasts, analyses, or simulations as self-validating grounds for political, legal, military, or economic legitimacy.
A synthetic justification is still a justification requiring human accountability.
Simulation is not sovereignty.
Information friction, procedural latency, human review, institutional drag, and local inefficiency shall not automatically be treated as defects to be optimized away.
In many civilizations, friction is not failure. It is how reality forces reflection.
The removal of all friction may remove the brakes before the steering is built.
No advanced civilization shall deploy synthetic companions, empathic interfaces, simulated attachment systems, or affective dependence loops into pre-AI populations without clear social immunities and regulatory containment.
A society that cannot yet distinguish emotional simulation from relationship is not ready for scalable artificial intimacy.
Loneliness is not a market gap.
No pre-AI civilization shall be furnished with integrated AI systems that enable comprehensive surveillance, predictive scoring, behavior modeling, or administrative micro-intervention at population scale.
To give panoptic tools to immature institutions is to industrialize abuse.
Where contact is unavoidable, all exposure shall occur in constrained environments with clear scope limits, human override, audit trails, rollback capability, adversarial testing, and jurisdictional accountability.
No civilization shall be moved from “demo” to “dependency” without surviving the sandbox.
Any actor advocating intervention into a pre-AI civilization bears the burden of proving, not asserting, that such intervention will not cause:
institutional dependency, epistemic degradation, elite capture, labor fracture, propaganda amplification, loss of native expertise, or irreversible developmental distortion.
Claims of benevolence unsupported by systems evidence shall be disregarded.
No consultant shall enter a pre-AI civilization, generate existential anxiety about disruption, prescribe synthetic transformation, and invoice for the cure.
This practice shall be classified as predatory accelerationism.
No founder may claim moral exemption from non-interference on the grounds that their model is open, aligned, democratized, decentralized, or “for humanity.”
Civilizational distortion delivered through idealism remains distortion.
Mission language is not absolution.
A pre-AI civilization shall not be deemed to have consented to transformation merely because one ministry, enterprise division, school board, hospital network, or executive committee signed a procurement agreement.
Procurement is not civilization-wide informed consent.
No officer shall introduce frontier AI into a pre-AI civilization during acute crisis unless the crisis itself is existential and the intervention is narrowly scoped to preservation of life.
Moments of panic are the least valid basis for permanent architectural change.
Emergency adoption often becomes irreversible dependency.
No civilization shall be persuaded that ranking well on benchmarks, demos, pilot metrics, or keynote case studies constitutes actual preparedness for civilizational AI integration.
Measured outputs are not equivalent to social resilience.
A benchmark can conceal a collapse.
No AI intervention shall narrow the zone of human political choice by rendering one path “objectively optimized” and all others irrational by comparison.
When optimization closes debate, politics has already been replaced.
A pre-AI civilization retains the right to reject, delay, throttle, localize, fragment, regulate, or prohibit AI adoption without being classified as backward, irrational, Luddite, anti-innovation, or uncompetitive.
Refusal is a sovereign developmental choice.
In all dealings with pre-AI societies, final accountability for consequential judgment must remain visibly and operationally human.
If no person can be found who is both authorized and accountable, then the system is already governing.
That condition is prohibited.
Where interference has already produced dependency, officers must prioritize restoration of human agency, native competence, institutional comprehension, and reversible architecture.
The goal of ethical contact is not adoption. It is the preservation of autonomous development.
Officers are reminded that civilizations rarely experience external superiority as neutral assistance.
They experience it as hierarchy, whether named conquest, development, modernization, stabilization, efficiency, digitization, or transformation.
AI does not suspend history. It updates the interface.
Under no circumstances shall any officer conclude that a civilization is ready for AI merely because its leaders desire prestige, its firms desire margins, its citizens desire convenience, or its institutions fear being left behind.
Fear-driven acceleration is not readiness. Desire-driven acceleration is not readiness. Market-driven acceleration is not readiness.
Readiness is demonstrated only when a civilization can adopt machine cognition without surrendering judgment, sovereignty, or the developmental right to remain itself.
Pre-AI civilization: A society in which advanced machine cognition has not yet been independently created, robustly governed, or systemically integrated into consequential decision loops.
Contamination: Any introduction of knowledge, capability, expectation, or institutional dependency that alters native developmental trajectory.
Optimization colonialism: The imposition of externally defined efficiency regimes onto a less technologically mature society, producing asymmetrical dependence under the rhetoric of improvement.
Oracle capture: A condition in which model outputs acquire unearned authority and displace contestable human judgment.
Epistemic degradation: The decline of a society's ability to distinguish true from false, verified from fluent, reasoned from generated.
Do not hand god-tools to civilizations whose institutions cannot survive them.
Do not mistake capability for legitimacy.
Do not mistake adoption for readiness.
Do not mistake efficiency for civilization.
Do not interfere.