In which humanity repeatedly chooses the mirage of quick profits over the hard work of long-term thinking—and why our relationship with AI is following the same tragic script
In ancient Athens, when Socrates confronted the then-revolutionary technology of writing, he was deeply troubled. Not because he opposed innovation, but because he understood something that modern innovators have forgotten: every technology is a Faustian bargain.
"If men learn this," Socrates warned of writing in Plato's Phaedrus, "it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written." He wasn't entirely wrong. We did lose something when we externalized memory to text. But we gained something else: the ability to build knowledge across generations in ways no purely oral culture ever could.
Socrates grasped what philosopher Neil Postman would later call a fundamental truth about technological change: every new technology gives something and takes something away. The winners and losers are never the same people. And most critically, we rarely understand what we're trading until it's too late.
This isn't the story of a Luddite philosopher's fears. This is the story of wisdom—the kind of deep thinking about consequences that every healthy culture needs to survive. It's the story of what happens when we abandon that wisdom, choosing instead the seductive myth that innovation is always progress, that moving fast is always better than thinking clearly, and that the market will magically sort out any problems our technologies create.
There's a community in America that has mastered something Silicon Valley never learned: how to evaluate technology deliberately. They're not anti-technology—that's a myth. They're the most sophisticated technology evaluators on Earth. They ask a simple question before adopting any innovation: "Will this strengthen or weaken our community?"
They are the Amish.
While tech entrepreneurs race to deploy AI without understanding its implications, while venture capitalists demand "move fast and break things," the Amish practice something revolutionary: they pause. They trial new technologies in controlled environments. They observe effects on social cohesion, family life, and spiritual well-being. Then they make collective decisions about adoption, modification, or rejection.
This isn't romantic primitivism. It's the kind of wisdom that comes from understanding what David Collingridge identified as technology's fundamental dilemma: when we can control a technology's development (when it's new), we don't yet know enough about its implications to make good decisions. When we finally do understand the consequences, the technology has become so entrenched that control becomes nearly impossible.
This is the tragedy at the heart of human innovation: we consistently choose speed over wisdom, profits over precaution, and efficiency over ethics. The pattern repeats with maddening consistency across industries, decades, and cultures. We build first, ask questions later, and then wonder why we're always surprised by the consequences.
But this time—with artificial intelligence—the stakes are different. This time, the broken things might include the foundations of human society itself.
David Collingridge's 1980 insight cut to the heart of technological policy: there's a cruel timing mismatch in how innovation unfolds. In the early stages of development, when we have maximum control over how a technology develops, we have minimal knowledge about its long-term implications. By the time we accumulate enough knowledge to make wise decisions, the technology has become so embedded in economic, social, and political systems that changing course becomes extraordinarily difficult.
Consider the internal combustion engine. In 1900, electric vehicles were outselling gasoline cars. The roads were quieter, the air was cleaner, and cities were more livable. But gas cars had a crucial advantage: oil was cheap and infrastructure was rapidly expanding. By the time we understood the full environmental and health costs of burning fossil fuels—air pollution, climate change, oil wars—we had built entire civilizations around gas-powered transportation. The "lock-in" was complete.
This isn't just about individual technologies. It's about the fundamental structure of innovation in market economies. Companies face massive pressure to commercialize discoveries quickly, before competitors can catch up. Patent systems give temporary monopolies to first movers. Venture capital flows to startups that can scale fastest, not those that think most carefully about implications.
What if we built systems that rewarded long-term thinking instead?
The Collingridge dilemma explains why the Amish approach is so radical: they deliberately extend the early evaluation period, creating social structures that resist premature lock-in. They understand that the moment to reject a harmful technology is before it becomes "too big to fail."
But in Silicon Valley, where "disruption" has become a secular religion, the Collingridge dilemma is treated as an obstacle to overcome rather than wisdom to embrace. The prevailing mythology is that speed is everything, that any delay hands advantage to competitors, and that the net benefits of innovation always outweigh the costs.
History suggests otherwise.
Johns Manville executives called it the "magic mineral"—fireproof, cheap, versatile, and seemingly perfect for the industrial age. By the 1940s, asbestos was everywhere: insulation, brake pads, floor tiles, even children's toys. The company's president, Lewis H. Brown, boasted of reaching "No. 1 in worldwide asbestos product sales."
But the magic was built on a lie that would kill millions.
As early as 1932, Johns Manville knew their product was causing a deadly lung disease called asbestosis. Internal company documents reveal a chilling calculation: it was cheaper to let workers die than to protect them. In a 1984 deposition, company employee Charles Roemer recalled a conversation with Brown about workers showing signs of lung disease:
"I'll never forget... I said, 'Mr. Brown, do you mean to tell me you would let them work until they dropped dead?' He said, 'Yes. We save a lot of money that way.'"
The cover-up was systematic and sophisticated. In 1949, Dr. Kenneth Smith advised Johns Manville against informing sick workers about their condition: "As long as the man is not disabled, it is felt that he should not be told of his condition so that he can live and work in peace, and the company can benefit by his many years of experience." Smith was promptly hired as the company's medical director—a perfect example of what Nassim Taleb calls iatrogenic harm: damage caused by the very institutions supposedly protecting us.
This wasn't ignorance. This was deliberate deception. By 1933, company officials were objecting to hanging warning signs because of the potential "legal situation." Raybestos-Manhattan president Sumner Simpson wrote in 1935: "The less said about asbestos, the better off we are."
Here we see Neil Postman's second principle in action: the winners and losers of technological change are never the same people. Asbestos company executives grew wealthy selling a product they knew was deadly. Workers and their families paid with their health and lives. The costs were socialized while the profits were privatized—a pattern that would repeat across industries for decades to come.
The industry perfected what would become the standard playbook for suppressing inconvenient truths: fund research designed to create doubt rather than find answers, recruit credible scientists to muddy the waters, attack critics personally rather than addressing their evidence, and maintain "scientific uncertainty" for as long as possible.
Today, asbestos-related diseases kill an estimated 15,000 Americans annually. Globally, more than 107,000 people die each year from exposure to a material that was known to be deadly eight decades ago. The World Health Organization estimates 125 million people remain exposed to asbestos at work.
What if companies faced real liability for the long-term health effects of their products?
The asbestos tragedy illustrates something deeper than corporate greed. It reveals how easy it is for institutions to become disconnected from the human consequences of their actions. When decisions are made in boardrooms by people who will never be exposed to the risks they're imposing on others, moral reasoning breaks down.
But perhaps most tragically, the asbestos playbook didn't die with the asbestos industry. It was systematically copied and refined by tobacco companies, chemical manufacturers, pharmaceutical companies, and tech giants. The same strategies that kept deadly asbestos in American buildings for forty years are being used today to delay action on climate change, addiction-inducing social media algorithms, and unsafe AI systems.
Thomas Midgley Jr. stands as one of history's most destructive inventors—not because he intended harm, but because he prioritized short-term profits over long-term consequences. His invention of leaded gasoline would poison the brains of entire generations, stealing billions of IQ points and fueling decades of violent crime.
The tragedy wasn't that lead's dangers were unknown. Ancient Romans understood that lead was toxic—they called lead poisoning "saturnism" and documented its symptoms in detail. By the 1920s, when General Motors needed an anti-knock compound for engines, safer alternatives like ethanol were available. But lead was cheaper and, critically, could be patented. Ethanol couldn't.
Midgley knew lead was dangerous—he'd been hospitalized after poisoning himself during research. Yet at a press conference in 1924, he poured tetraethyl lead over his hands and inhaled its vapors for a full minute, claiming it was perfectly safe. He spent the next year in Florida recovering from lead poisoning.
This performance perfectly embodies what Taleb identified as the "intervention bias"—the compulsive need to do something, even when doing nothing would be better. Instead of choosing the safer alternative (ethanol), the industry chose the more profitable one (lead), then engaged in elaborate theater to convince the public it was safe.
For decades, the lead industry fought research showing the devastating effects on children's cognitive development. Geochemist Clair Patterson, who developed modern methods for measuring environmental contamination, discovered that 20th-century Americans had 1,000 times more lead in their bodies than their ancestors. When he published these findings, the lead industry attacked his credibility and attempted to cut his funding.
The human cost was staggering. Studies revealed that people with higher lead concentrations in their baby teeth were many times more likely to drop out of high school. According to a 2022 study, more than half the current U.S. population—170 million people—were exposed to harmful lead levels in early childhood, resulting in a combined loss of more than 800 million IQ points.
Think about that number: 800 million IQ points stolen from American children. How many potential scientists, artists, entrepreneurs, and leaders never realized their capabilities because a company chose profits over safety?
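The arithmetic implied by those figures is worth making explicit. A rough average, using the study's own numbers (actual exposure varied widely by birth cohort and region):

$$\frac{8 \times 10^{8}\ \text{IQ points}}{1.7 \times 10^{8}\ \text{people}} \approx 4.7\ \text{IQ points per exposed person}$$

A deficit of a few points sounds small for any individual, but shifting the average of an entire population by that much substantially thins the right tail of exceptional ability while swelling the left tail of cognitive impairment.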
The correlation with violent crime was equally disturbing. Multiple countries showed the same pattern: rising crime rates through the 1970s-1990s, then an abrupt decline exactly 20 years after lead was removed from gasoline. A study of 340 teenagers found that those who were arrested were four times more likely to have elevated lead levels in their bones.
By some estimates, lead poisoning has caused as many as 25 million deaths in the U.S. alone over the past century. Current annual deaths from lead exposure range from 500,000 to 900,000 worldwide. Yet the lead industry successfully delayed regulation for decades using the tobacco playbook: fund doubt-mongering research, attack critics, and claim "no definitive proof" of harm.
How many other technologies are we deploying today without understanding their effects on human development?
The lead story reveals how technological choices made by small groups of decision-makers can shape the destiny of entire civilizations. The executives who chose lead over ethanol weren't evil—they were operating within a system that rewarded short-term profits over long-term consequences. But their decision rippled through generations, affecting the life trajectories of hundreds of millions of people.
When DDT was introduced during World War II, it was hailed as a miracle that would end the ancient scourge of insect-borne disease. The U.S. military declared it "the most powerful of the new weapons the army is now using in its war on insect-borne diseases." After the war, with massive stockpiles to dispose of, the chemical industry promoted DDT as a solution to agricultural pests.
What followed was an environmental disaster that nearly drove American icons like the bald eagle to extinction.
This case perfectly illustrates Postman's fourth principle: technological change is ecological, not additive. DDT didn't just kill target insects—it transformed entire ecosystems. It bioaccumulated in the food chain, with concentrations becoming more deadly at each level. Predatory birds at the top of the food web received lethal doses that caused eggshell thinning, population crashes, and near-extinctions.
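To see how that compounding works, here is a toy model in Python. Every number in it is a hypothetical placeholder—real uptake factors vary by ecosystem—but the multiplicative structure is the point:

```python
# Toy model of biomagnification: a persistent, fat-soluble compound like DDT
# is stored rather than excreted, so its concentration multiplies at each
# step up the food chain. All factors below are hypothetical illustrations.

water_ppm = 0.000003  # assumed trace concentration in the water

food_chain = [
    ("plankton",         800),  # hypothetical uptake factor from water
    ("small fish",        30),  # each predator eats many prey...
    ("large fish",        15),  # ...and keeps the toxin in its fat
    ("fish-eating bird",  20),
]

concentration = water_ppm
for species, factor in food_chain:
    concentration *= factor
    print(f"{species:>17}: {concentration:10.4f} ppm")
```

Starting from a trace level in the water, the bird at the top of this hypothetical chain ends up carrying a concentration roughly seven million times higher—which is why the predators died while the water itself tested "safe."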
By 1963, only 417 breeding pairs of bald eagles remained in the lower 48 states. The California condor was extinct in the wild by 1987, with just 27 birds alive in captivity. Peregrine falcons vanished from entire regions. Rachel Carson's 1962 book Silent Spring documented the carnage with haunting precision, describing a fictional spring morning: "On the mornings that had once throbbed with the dawn chorus of robins, catbirds, doves, jays, wrens, and scores of other bird voices there was now no sound; only silence lay over the fields and woods and marsh."
The chemical industry's response was vicious and personal. Carson was dismissed as "an alarmist, mystic and hysterical woman." Critics claimed her work was "more poisonous than the chemicals she condemns." The industry funded research designed to discredit her findings and recruited scientists to attack her credibility.
But Carson understood something her critics missed: the interconnectedness of natural systems. She grasped that you cannot intervene in complex ecological relationships without triggering unexpected consequences. This isn't mysticism—it's systems thinking, the recognition that seemingly isolated actions can cascade through networks in unpredictable ways.
The DDT story also reveals the power of Chesterton's Fence—the principle that before removing any long-established system (like natural pest control), we should understand why it exists. Millions of years of evolution had created intricate relationships between insects, birds, and other species that kept ecosystems in balance. DDT disrupted these relationships without understanding their function, leading to unexpected ecological collapses.
What complex systems are we disrupting today with similar confidence and ignorance?
The ban on DDT in 1972 led to remarkable recoveries. More than 300,000 bald eagles now fill U.S. skies. Brown pelicans and ospreys were removed from the endangered species list. Over 300 California condors fly wild again. But the broader pattern persists: DDT has been replaced by a growing arsenal of other harmful pesticides, many banned in other countries but still legal in the United States.
The DDT tragedy teaches us about the hubris of technological optimism—the belief that human ingenuity can improve on billions of years of evolutionary refinement without unintended consequences. It reveals how industries can capture the scientific process, funding research designed to support predetermined conclusions rather than seek truth. And it shows how long it takes for natural systems to recover from technological mistakes—if they can recover at all.
These early cases—asbestos, lead, DDT—reveal patterns that would repeat with maddening consistency across industries and decades. They show us that the problem isn't individual bad actors or corporate greed (though both exist). The problem is structural: we've created economic and regulatory systems that consistently prioritize short-term profits over long-term consequences.
Consider the similarities:
The Collingridge Dilemma in Action: In each case, there was a brief window when safer alternatives were available and technological choices remained open. But economic pressures—patent advantages, manufacturing costs, competitive dynamics—drove decisions toward more profitable but less safe options. Once these technologies became entrenched in infrastructure and supply chains, changing course became extremely difficult.
Postman's Principles at Work: Every technology embedded a philosophy. Asbestos embodied the philosophy that worker safety was less important than production efficiency. Lead gasoline reflected the belief that short-term convenience justified long-term environmental contamination. DDT represented the hubris that human engineering could improve on natural systems without understanding their complexity.
The Failure of Second-Order Thinking: Companies focused obsessively on first-order effects—preventing engine knock, killing insects, fireproofing buildings—while ignoring second-order consequences. They asked "Does this solve the immediate problem?" but never "If everyone uses this at scale, what happens then?"
Iatrogenic Harm: In each case, institutions supposedly protecting public welfare—government agencies, medical associations, scientific organizations—were captured or compromised by the industries they were meant to oversee. The healers became sources of harm.
The Myth of Technological Neutrality: Each technology was marketed as neutral—a tool that could be used safely if proper precautions were taken. But as Postman observed, technologies are never neutral. They embody values, reshape behaviors, and transform societies in ways their creators never anticipated.
The question these cases pose is uncomfortable: If we consistently get it wrong when technologies seem simple and contained, how can we possibly evaluate complex, interconnected systems like social media algorithms or artificial intelligence?
By late 1953, the tobacco industry faced an existential crisis. Scientific evidence definitively linking smoking to lung cancer was published in major medical journals and covered extensively in the media. Cigarette sales were plummeting. Public health officials were calling for action. The writing was on the wall.
The industry's response became the template for all future corporate disinformation campaigns.
At a December meeting at New York's Plaza Hotel, tobacco CEOs hired public relations firm Hill & Knowlton to engineer doubt about the scientific consensus. John Hill's strategy was brilliant in its cynicism: instead of denying the science outright, embrace it—but fund research designed to muddy the waters and maintain the appearance of legitimate scientific debate.
"The goal," according to Hill, "would be to build and broadcast a major scientific controversy."
The Tobacco Industry Research Committee (TIRC) was born, announcing its mission in a full-page ad in over 400 newspapers: "We accept an interest in people's health as a basic responsibility, paramount to every other consideration in our business."
It was a masterpiece of deception. Internal documents later revealed the TIRC's true purpose was to fund research on anything except the direct link between cigarettes and disease. As one industry evaluation concluded: "Most of the TIRC research has been of a broad, basic nature not designed to specifically test the anti-cigarette theory."
This strategy reveals something profound about how doubt can be manufactured in societies that depend on science. The tobacco industry understood that the general public couldn't evaluate scientific evidence directly—people relied on scientific institutions and media coverage to interpret complex research. By identifying and funding scientific skeptics, the industry could maintain the appearance of legitimate debate long after the scientific consensus had solidified.
As one American Cancer Society official observed: "When the tobacco companies say they're eager to find out the truth, they want you to think the truth isn't known... They want to be able to call it a controversy."
The human cost was catastrophic. Cigarette sales rose from 369 billion annually in 1954 to 488 billion in 1961. Per capita consumption reached its highest level ever. Meanwhile, lung cancer rates soared, and millions died from smoking-related diseases that could have been prevented if the industry had acknowledged what they knew in the 1950s.
The tobacco industry's own documents, released through litigation in the 1990s, revealed the full scope of the deception. For over 40 years, tobacco companies had known their products were deadly while publicly maintaining there was "no proof" of harm. Internal memos showed executives joking about their customers as "younger adult starters" (teenagers) and discussing how to make cigarettes more addictive.
Why does this matter for understanding AI and other emerging technologies?
Because the tobacco playbook has been systematically copied by every industry that profits from harmful products. The same lawyers, PR firms, and research institutions that defended tobacco have been hired to defend fossil fuels, processed foods, pesticides, pharmaceuticals, and social media platforms. The strategies are remarkably consistent:
- Fund research designed to create uncertainty, not find truth
- Recruit credible scientists to serve as industry spokespeople
- Attack critics personally rather than addressing their evidence
- Claim that individual choice and personal responsibility matter more than product design
- Fight regulation by claiming economic necessity and job losses
- Settle lawsuits quietly while admitting no wrongdoing
What makes this playbook so effective is that it exploits legitimate features of the scientific process—peer review, skepticism, and the demand for certainty—to manufacture false doubt about real harms. It transforms the scientific method from a tool for discovering truth into a weapon for protecting profits.
The tobacco industry spent approximately $45 million funding doubt-mongering research over four decades. They made hundreds of billions in profits while their products killed over 100 million people in the 20th century alone. From a purely financial perspective, it was one of the most successful disinformation campaigns in history.
The question we must ask about every new technology is: Are we seeing genuine scientific uncertainty, or manufactured doubt designed to delay regulation?
In a cruel irony, Thomas Midgley Jr.—already responsible for leaded gasoline—invented chlorofluorocarbons (CFCs) in 1930. Originally developed as a safer refrigerant to replace toxic ammonia and sulfur dioxide, CFCs seemed like a genuine improvement. They were non-toxic, non-flammable, and chemically inert under normal conditions. DuPont marketed them aggressively, and they became ubiquitous in refrigeration, air conditioning, and aerosol sprays.
The problem was that nobody thought to investigate what happened when CFCs reached the stratosphere.
In 1974, chemists Mario Molina and F. Sherwood Rowland made a discovery that would reshape environmental science: ultraviolet radiation in the upper atmosphere breaks down CFCs, releasing chlorine atoms that destroy ozone molecules. A single chlorine atom can destroy thousands of ozone molecules before being neutralized. The ozone layer, which protects life on Earth from deadly UV radiation, was under systematic attack from human-made chemicals.
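The mechanism Molina and Rowland described is a catalytic cycle—the reason one atom does so much damage is that chlorine emerges from each pass unchanged, ready to attack again:

$$\mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2}$$

$$\mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2}$$

$$\text{net:}\quad \mathrm{O_3} + \mathrm{O} \rightarrow 2\,\mathrm{O_2}$$

The destruction stops only when the chlorine is eventually locked into a stable reservoir compound such as hydrogen chloride.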
The industry's response followed the familiar script. DuPont's chairman called the ozone depletion theory "a science fiction tale" and "utter rubbish." The company spent millions funding research to cast doubt on the science while continuing to expand CFC production. Industry-funded scientists questioned the computer models, demanded more research, and suggested that ozone depletion might be natural variation.
It took the discovery of the Antarctic ozone hole in 1985—a massive gap in the ozone layer the size of North America—to galvanize international action. The Montreal Protocol of 1987 phased out CFCs globally, and the ozone layer has slowly begun to recover.
But the damage was nearly irreversible. At its peak, the Antarctic ozone hole covered an area three times the size of Australia. Without the Montreal Protocol, models suggest that by 2065, two-thirds of the world's ozone would have been destroyed, leading to millions of additional skin cancer cases and the collapse of marine ecosystems dependent on UV-sensitive phytoplankton.
The CFC story illustrates the limits of the Lindy Effect: a practice that has survived a long time has demonstrated its robustness, but a new technology carries hidden risks precisely because it hasn't been time-tested. CFCs had been in use for only 44 years when Molina and Rowland uncovered the danger, and they had already set in motion atmospheric changes that would persist for decades.
What other human innovations are triggering irreversible changes in complex systems we don't yet understand?
The irony of Midgley's legacy is profound. A single individual, working within the incentive structures of 20th-century capitalism, created two technologies that caused more environmental damage than any other human inventions. Midgley himself died in 1944, strangled by a pulley system he invented to help him get out of bed after contracting polio. He had unintentionally killed more people through his inventions than died from polio in the entire 20th century.
The CFC story also reveals the power of global cooperation when institutions function properly. The Montreal Protocol worked because it was based on clear scientific evidence, included all major producers, provided alternatives for developing countries, and created enforcement mechanisms. It remains one of the most successful international environmental agreements in history.
But it almost came too late. What if Molina and Rowland had made their discovery in 1984 instead of 1974? What if the Antarctic ozone hole had been discovered in 1995, when CFC concentrations were even higher?
In the 1960s, as evidence mounted linking sugar consumption to heart disease and obesity, the sugar industry faced a public relations crisis. Their solution was to fund research that would shift blame from sugar to dietary fat, fundamentally distorting American nutritional policy for generations.
Internal documents revealed that in 1967, the Sugar Research Foundation (now the Sugar Association) paid Harvard researchers $6,500 (equivalent to $50,000 today) to publish a literature review downplaying sugar's role in heart disease while emphasizing the dangers of saturated fat. The researchers, including future heads of nutrition at Harvard, didn't disclose their funding source.
Their review, published in the prestigious New England Journal of Medicine, concluded that the only dietary intervention needed to prevent heart disease was to reduce cholesterol and saturated fat—not sugar. This fraudulent research influenced decades of public health policy.
The U.S. government's dietary guidelines, developed in the 1970s and 1980s, recommended reducing fat intake while saying little about sugar. The food industry responded predictably: they removed fat from products and replaced it with sugar and refined carbohydrates, marketing these new products as "low-fat" and "heart-healthy."
The results were catastrophic for public health. Obesity rates soared, diabetes became epidemic, and heart disease remained the leading killer. Meanwhile, sugar consumption increased dramatically, hidden in thousands of processed foods now marketed as healthier alternatives.
This case study perfectly illustrates how industry-funded research can corrupt the scientific process. The sugar industry didn't just fund biased research—they funded research designed to reach predetermined conclusions. As one internal memo stated: "Our objective is to establish sugar's place in the diet as a valuable contributor to good nutrition."
The sugar industry followed the tobacco playbook so closely that some of the same researchers and PR firms were involved in both campaigns. They funded studies designed to create confusion, recruited credible scientists to muddy the waters, and attacked researchers who found links between sugar and disease.
How many other scientific "controversies" are actually the result of industry manipulation rather than genuine uncertainty?
The sugar conspiracy reveals something deeply troubling about how knowledge is produced in modern societies. When industries can capture academic research, regulatory agencies, and medical associations, the boundary between science and marketing disappears. What appears to be objective research becomes a tool for protecting profits, and public health suffers the consequences.
The parallels to current debates about social media addiction, processed food, and AI safety are unmistakable. In each case, industries fund research designed to cast doubt on evidence of harm while continuing to expand production and marketing. They claim that more research is needed while taking actions that make future regulation more difficult.
The deeper question this raises is: How can democratic societies make good decisions about technology when the research process itself can be corrupted by the very industries whose products are being evaluated?
The tobacco and sugar cases reveal something more insidious than corporate greed. They show how entire knowledge-production systems—universities, medical associations, regulatory agencies, scientific journals—can be systematically corrupted when industries have enough money and motivation to distort the truth.
This isn't about individual scientists being bought off (though that happens). It's about structural problems in how research is funded, published, and translated into policy:
Funding Bias: When industries fund research at universities, they often retain the right to review results before publication and suppress studies that don't support their interests. This creates systematic bias in the literature, with positive results more likely to be published than negative ones.
Career Incentives: Academic researchers depend on grants, consulting fees, and industry connections for career advancement. Those who produce results favorable to industry interests are more likely to receive continued funding than those who challenge profitable products.
Regulatory Capture: Government agencies responsible for protecting public health often hire from the industries they regulate and vice versa. This "revolving door" creates conflicts of interest that bias regulatory decisions toward industry preferences.
Media Manipulation: Industries with resources can flood media with competing narratives, creating the appearance of scientific debate even when scientific consensus exists. The general public, unable to evaluate complex research directly, relies on media coverage that can be systematically distorted.
Legal Intimidation: Industries can use lawsuits and threats of lawsuits to silence critics, particularly individual researchers or small organizations that lack resources for extended legal battles.
The result is what philosophers of science call "manufactured uncertainty"—the strategic creation of doubt about scientific findings in order to delay regulatory action and maintain profitable business models.
This has profound implications for how we evaluate emerging technologies like AI, where the companies developing the technology have massive resources and incentives to shape public understanding of the risks.
By the 1990s, the "build first, think never" pattern had found its perfect home in Silicon Valley, where it evolved from a bug into a feature. The tech industry didn't just repeat the old mistakes—it elevated them into a business philosophy, creating a culture where "moving fast and breaking things" became a badge of honor rather than a warning sign.
This transformation reflects Postman's fifth principle: technology becomes mythic—invisible, unquestioned, "natural." In Silicon Valley, the myth is that speed equals progress, that any delay hands advantage to competitors, and that the benefits of innovation always outweigh the costs. These beliefs have become so embedded in tech culture that questioning them feels like heresy.
The late 1990s internet bubble was more than just financial speculation—it was the first time an entire industry organized around the principle that thinking was optional. Companies with no revenue, no business model, and sometimes no actual product were valued at billions of dollars based purely on having ".com" in their name.
Pets.com spent $11.8 million on advertising—including a famous Super Bowl spot—while generating just $619,000 in revenue. The company's business model—selling heavy bags of pet food online at below-cost prices with free shipping—was obviously unsustainable to anyone who thought about it for five minutes. But thinking had become unfashionable. What mattered was "getting big fast" and capturing "eyeballs" and "mind share."
Webvan raised $375 million to deliver groceries and burned through it all in 18 months. Boo.com blew $188 million in just six months trying to sell fashion online. The poster child for absurdity was Flooz.com, a company that created its own digital currency backed by nothing, promoted by Whoopi Goldberg, and used primarily by money launderers before collapsing in fraud.
The numbers were staggering. By March 2000, 280 internet companies had a combined market value of $2.948 trillion. When the bubble burst, their value collapsed to $1.193 trillion—a loss of $1.755 trillion in just two years. By 2002, individual investors had lost $5 trillion in the stock market.
But here's the crucial insight: the technology was real. The internet did transform commerce, communication, and culture in revolutionary ways. The problem wasn't the innovation—it was the ideology that innovation must happen at maximum speed, regardless of consequences.
What if the internet had been developed more thoughtfully, with stronger protections for privacy, democracy, and human wellbeing built in from the beginning?
The tech industry learned the wrong lesson from the crash. Instead of questioning the "growth at all costs" mentality, they doubled down on it, convinced that moving faster and taking bigger risks was the path to success. They treated the bubble as a temporary setback rather than evidence that their fundamental approach was flawed.
Mark Zuckerberg made "Move fast and break things" Facebook's official company motto, enshrining it in his 2012 letter to prospective IPO investors. The phrase perfectly captured Silicon Valley's approach to innovation: deploy first, deal with consequences later. It wasn't just a slogan—it was a worldview that would reshape global communication with devastating consequences.
Zuckerberg intended it to encourage rapid experimentation and bold risk-taking. But words matter, especially when they come from the CEO of one of the world's most powerful companies. The motto spread throughout Silicon Valley like a virus. Startups adopted it as their ethos. Venture capitalists demanded it from their portfolio companies. "Disruption" became the ultimate goal, regardless of what was being disrupted or who was harmed in the process.
The results speak for themselves: social media platforms that spread misinformation and radicalize users, ride-sharing apps that exploit workers, cryptocurrency exchanges that steal billions from customers, smart devices that spy on users, and AI systems deployed without adequate safety testing.
Consider what "move fast and break things" actually means when applied to technologies that shape human behavior and social institutions:
Move fast with democracy → Information bubbles, polarization, election interference
Move fast with teenage psychology → Mental health crises, anxiety epidemics, eating disorders
Move fast with labor relations → Gig economy exploitation, union-busting, wage theft
Move fast with financial systems → Crypto fraud, market manipulation, consumer losses
Move fast with privacy → Surveillance capitalism, data breaches, authoritarian control
The motto reveals something deeper than reckless ambition. It embodies what Taleb calls the "intervention bias"—the compulsive need to do something, even when doing nothing (or doing something more slowly) would be better. In tech culture, patience became equated with failure and caution with cowardice.
When Facebook finally retired the motto in 2014, replacing it with "Move fast with stable infrastructure," the damage was done. The culture of reckless innovation had become embedded not just at Facebook, but throughout the entire tech industry. The new motto was just better PR—the underlying philosophy remained unchanged.
What would Silicon Valley look like if its defining motto had been "Think carefully and build responsibly"?
Perhaps no case study better illustrates the dangers of "move fast and break things" than social media's impact on teenage mental health. The Facebook Papers, released by whistleblower Frances Haugen in 2021, revealed that social media companies have known for years that their products harm young people—yet they've continued optimizing for engagement regardless of the consequences.
Internal research at Facebook (now Meta) showed that Instagram use was linked to increased rates of anxiety, depression, and eating disorders among teenage girls. One internal study found that 32% of teen girls said Instagram made them feel worse about their bodies when they already felt bad. The company's own research revealed: "We make body image issues worse for one in three teen girls."
Perhaps most damning was the company's research on eating disorders. In 2021, an Instagram employee created a fake account as a 13-year-old girl interested in dieting. Within days, the algorithm was serving up content about extreme weight loss, self-harm, and eating disorders. As one internal researcher noted: "It took only six minutes for the account to be shown content promoting eating disorder behaviors."
This perfectly illustrates second-order thinking failure: the first-order effect of social media algorithms was increased engagement (users spent more time on the platform). The second-order effect was psychological manipulation of vulnerable adolescents during critical developmental periods.
Despite knowing about these harms, Facebook executives regularly gave contradictory testimony to Congress. CEO Mark Zuckerberg testified that the research showed social media had positive effects on teen wellbeing—the opposite of what the company's own studies concluded.
The mental health crisis among teenagers has coincided precisely with the rise of social media. Since 2007, rates of depression among teens have increased by more than 60%. Suicide rates for young girls have tripled. Hospital admissions for self-harm among teenage girls have doubled.
Yet social media companies continue to optimize their algorithms for maximum engagement, knowing that negative emotions drive more interaction. As one Facebook executive wrote in an internal memo: "The algorithms exploit the human brain's attraction to divisiveness."
How is this different from tobacco companies continuing to sell cigarettes after discovering they cause cancer?
The social media case reveals how the Collingridge dilemma operates in the digital age. When these platforms were small startups, they could have been designed with different incentive structures—subscription models instead of advertising, chronological feeds instead of algorithmic manipulation, privacy by design instead of surveillance capitalism. But once they achieved massive scale and economic entrenchment, changing course became nearly impossible.
The platforms now have billions of users, hundreds of billions in revenue, and enormous political influence. They've become "too big to fail"—not just economically, but socially. Millions of businesses depend on them for marketing, millions of people use them for communication, and entire political movements organize through them. The lock-in is complete.
What if we had paused to think about the psychological and social implications of algorithmic content curation before deploying it on billions of people?
When Uber launched in 2010, it promised to "be your own boss." Lyft, DoorDash, TaskRabbit, and dozens of others followed with the same pitch: flexibility, independence, entrepreneurship. The marketing was brilliant, tapping into deep American myths about self-reliance and the dignity of hard work.
The reality was regulatory arbitrage dressed up as innovation.
By classifying workers as "independent contractors" instead of employees, gig companies avoided paying minimum wage, overtime, health insurance, workers' compensation, and unemployment insurance. A 2020 UC Berkeley analysis estimated that, after accounting for vehicle expenses and unpaid waiting time, Uber and Lyft drivers could effectively earn as little as $5.64 per hour—below the minimum wage in every U.S. state.
This perfectly illustrates Chesterton's Fence: before disrupting taxi regulations, these companies should have understood why those regulations existed. Taxi licensing systems, fare controls, and employment protections weren't arbitrary bureaucracy—they were hard-won safeguards developed after decades of worker exploitation and consumer abuse.
The promise of "flexible work" proved largely illusory. Platforms use algorithmic management to control drivers' behavior: surge pricing to encourage work during peak times, acceptance-rate requirements that force drivers to take unprofitable trips, and rating systems that can deactivate drivers without due process. As legal scholar Veena Dubal observes, this isn't freedom—it's "algorithmic tyranny."
These companies burned through billions in venture capital subsidizing artificially low prices to destroy traditional competitors, with the explicit plan to raise prices once competition was eliminated. Uber alone has accumulated more than $30 billion in losses since its founding. The strategy was predatory pricing on a global scale, subsidized by some of the world's wealthiest investors.
The societal costs are enormous but largely invisible in traditional economic metrics. Traditional taxi companies have gone bankrupt, destroying stable middle-class jobs and union protections. Public transportation ridership has declined as subsidized ride-sharing cannibalizes transit systems. Traffic congestion and pollution have increased in major cities as more people shift from public transportation to ride-sharing.
What does it say about our economic system that destroying good jobs and replacing them with bad jobs can be celebrated as innovation?
The gig economy reveals how technological disruption can be used to circumvent democratic decisions about worker protection and fair wages. Instead of arguing for policy changes through normal political processes, companies simply built systems that made existing laws irrelevant, then spent millions on lobbying to prevent new regulations.
The result is a race to the bottom for working conditions, justified by the language of freedom and choice. Workers bear all the risks—vehicle maintenance, insurance costs, income volatility—while platforms capture the profits. It's a perfect example of socializing costs while privatizing benefits.
Bitcoin launched in 2009 with a genuinely revolutionary promise: decentralized, trustless digital money that couldn't be controlled by governments or traditional financial institutions. The underlying blockchain technology was elegant, and the vision of financial freedom resonated with millions of people frustrated by the 2008 financial crisis.
What followed was one of the most spectacular examples of technological promise corrupted by speculative frenzy and outright fraud.
By 2017, "blockchain" had become the magic word that could add zeros to any company's valuation. Long Island Iced Tea Corp renamed itself "Long Blockchain Corp" and saw its stock price triple overnight—despite having no blockchain technology whatsoever. The Initial Coin Offering (ICO) craze raised over $20 billion, with the vast majority of projects delivering nothing.
A study by Satis Group found that 81% of ICOs were scams from the start, 6% had failed, and only 8% made it to trading on exchanges. Bitconnect, which promised 1% daily returns through a "trading bot," was obviously a Ponzi scheme from day one—yet it raised over $2.5 billion before collapsing in 2018.
The environmental cost was staggering. At its peak, Bitcoin mining alone consumed more electricity than entire countries like Argentina or Norway—roughly 150 terawatt-hours annually. The carbon footprint was equivalent to burning 84 billion pounds of coal per year, all to process a handful of transactions per second that traditional payment systems handle thousands of times more efficiently.
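That efficiency gap is easy to sanity-check with rough arithmetic. A minimal sketch, assuming the ~150 TWh/year figure above and Bitcoin's commonly cited base-layer ceiling of about seven transactions per second:

```python
# Rough order-of-magnitude estimate of energy per Bitcoin transaction.
# Both inputs are approximations cited in the text, not measured constants.

ANNUAL_ENERGY_KWH = 150e9      # ~150 TWh/year, expressed in kWh
TX_PER_SECOND = 7              # approximate base-layer throughput ceiling
SECONDS_PER_YEAR = 365 * 24 * 3600

tx_per_year = TX_PER_SECOND * SECONDS_PER_YEAR   # ~220 million transactions
kwh_per_tx = ANNUAL_ENERGY_KWH / tx_per_year     # ~680 kWh each

print(f"~{tx_per_year / 1e6:.0f} million transactions per year")
print(f"~{kwh_per_tx:.0f} kWh per transaction")
```

Roughly 680 kWh per transaction—on the order of what a typical U.S. household consumes in three weeks—against the watt-hours a conventional card network spends on the same job.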
The NFT bubble of 2021-2022 epitomized crypto's transformation from revolutionary technology to speculative casino. Digital images of cartoon apes sold for hundreds of thousands of dollars, with buyers convinced they were investing in the future of digital ownership. When the bubble burst, most NFTs became worthless, revealing them to be nothing more than expensive receipts for hyperlinks to images that could disappear at any moment.
Then came FTX—the $32 billion crypto exchange that collapsed in November 2022 when it was revealed that founder Sam Bankman-Fried had been using customer deposits to fund his hedge fund, Alameda Research. Over $8 billion in customer funds vanished overnight. Bankman-Fried, once hailed as the "King of Crypto" and a poster child for "effective altruism," was convicted on seven counts of fraud.
How did a technology designed to eliminate the need for trust become a playground for some of the most brazen financial fraud in history?
The crypto collapse reveals the danger of technological utopianism—the belief that clever code can solve social and political problems without addressing underlying human nature. Bitcoin's pseudonymous creator, Satoshi Nakamoto, designed an elegant solution to the technical problem of double-spending in digital currencies. But no amount of cryptographic sophistication can prevent human greed, manipulation, and fraud.
The promise of "decentralization" proved hollow. A handful of mining pools control most Bitcoin production. Major crypto exchanges like Coinbase and Binance became more powerful and less regulated than traditional banks. Venture capital firms and institutional investors dominate the market, not the "ordinary people" crypto was supposed to empower.
Most tellingly, the crypto industry recreated every financial scam from the 1920s, just on a blockchain: Ponzi schemes, pump-and-dumps, insider trading, market manipulation, and outright theft—except with less regulation and less recourse for victims. As Warren Buffett observed, crypto became "rat poison squared."
The Silicon Valley cases reveal how the "move fast and break things" philosophy scales destructively when applied to technologies that shape human behavior and social institutions. Unlike earlier industrial disasters that primarily caused physical harm, digital technologies can cause psychological, social, and political damage that's harder to measure but potentially more profound.
The Myth of Technological Neutrality: Each platform claimed to be a neutral tool—just connecting people, just providing transportation, just facilitating transactions. But as Postman observed, technologies are never neutral. They embody values, reshape behaviors, and transform societies. Social media algorithms embody the value that engagement matters more than truth. Gig economy platforms embody the value that efficiency matters more than worker security. Crypto embodies the value that financial speculation matters more than environmental sustainability.
Scaling Before Understanding: Unlike previous technologies that scaled gradually, digital platforms can reach billions of users almost overnight. This creates unprecedented opportunities for harm when psychological or social effects aren't understood until after massive deployment. We're essentially conducting uncontrolled experiments on human psychology and social organization at planetary scale.
The Addiction Economy: Digital technologies enable new forms of behavioral manipulation that exploit psychological vulnerabilities in ways that physical products cannot. Variable reward schedules, social validation feedback loops, and fear of missing out can be programmed into software with scientific precision. We've created an entire economy based on capturing and monetizing human attention.
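The core trick is old behavioral psychology dressed in software. A minimal sketch of a variable-ratio reward schedule—the probability here is arbitrary, and nothing about any platform's actual code is implied:

```python
import random

def check_feed(reward_probability: float = 0.3) -> bool:
    """One 'pull to refresh': the user is rewarded (new likes, messages,
    novel content) unpredictably -- the same intermittent-reinforcement
    pattern a slot machine uses."""
    return random.random() < reward_probability

# Simulate a user compulsively checking the feed.
for attempt in range(1, 21):
    if check_feed():
        print(f"check {attempt:2d}: new content!")
    else:
        print(f"check {attempt:2d}: nothing yet... check again soon")
```

Operant-conditioning research going back to B.F. Skinner found that this unpredictable schedule sustains behavior more persistently than any fixed reward—the subject keeps responding long after the payoffs thin out.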
Regulatory Lag: Government institutions designed for physical world regulation struggle to understand, let alone govern, digital technologies that evolve at exponential speeds. By the time regulators understand a technology well enough to regulate it effectively, the technology has often evolved into something entirely different.
Network Effects and Lock-in: Digital platforms benefit from network effects—they become more valuable as more people use them. This creates winner-take-all dynamics that lead to monopolization faster than traditional industries. Once a platform achieves dominance, users become locked in even if they're unhappy with the service.
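One common heuristic for this dynamic is Metcalfe's law: a network of $n$ users contains $n(n-1)/2$ possible connections, so its potential value grows roughly with the square of its size:

$$V \propto \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}$$

Double the user base and the web of possible connections roughly quadruples, which is why a challenger with better features but a tenth of the users starts at a hundredfold structural disadvantage—and why leaving a dominant platform means abandoning a whole network, not just a product.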
The fundamental question Silicon Valley's failures raise is: Can human societies govern technologies that evolve faster than human institutions can adapt?
The opioid crisis represents the convergence of several destructive patterns: pharmaceutical industry deception, regulatory capture, and data-driven marketing tactics that targeted vulnerable populations with scientific precision. It's a case study in how modern corporations can weaponize both technology and medicine against public health.
When Purdue Pharma introduced OxyContin in 1996, they launched what the company called an "aggressive" marketing campaign that would become a masterclass in using data analytics to spread addiction. Sales grew from $48 million in 1996 to almost $1.1 billion in 2000—a 2,200% increase in just four years.
The marketing was unprecedented for a controlled substance. Purdue spent $200 million in 2001 alone on promotion, conducting over 40 national conferences at luxury resorts where more than 5,000 healthcare providers were recruited for the company's speaker bureau. They distributed promotional materials that would be shocking for any schedule II opioid: OxyContin fishing hats, stuffed toys, and music CDs titled "Get in the Swing With OxyContin."
But the real innovation was in data-driven targeting. Purdue identified which physicians prescribed the most opioids using sophisticated data mining and sent sales representatives to visit them repeatedly. They analyzed prescription patterns, insurance reimbursement rates, and demographic data to optimize their marketing approach. A lucrative bonus system encouraged representatives to push sales aggressively, with annual bonuses averaging $71,500 and ranging up to $240,000.
Most insidiously, Purdue systematically misrepresented the addiction risk using carefully crafted lies. Sales representatives were trained to tell doctors that addiction risk was "less than one percent"—a claim based on a misrepresented study of hospitalized patients receiving acute pain treatment, not chronic daily use by outpatients.
Marketing materials claimed the risk was "extremely small" and "very rare." Patient starter coupons provided free 7- to 30-day supplies, with approximately 34,000 coupons redeemed by 2001. The company was literally giving away addictive drugs to create new customers.
This represents a perfect example of iatrogenic harm—damage caused by the medical system itself. Purdue weaponized doctors' trust and patients' pain against them, turning the healing relationship into a vector for addiction.
The consequences were devastating and geographically concentrated. In regions where OxyContin was heavily promoted—Maine, West Virginia, eastern Kentucky, southwestern Virginia, and Alabama—prescribing rates reached 5-6 times the national average. These same areas saw the first outbreaks of abuse, addiction, and overdose deaths.
From 1997 to 2002, OxyContin prescriptions for non-cancer pain increased nearly tenfold, from 670,000 to 6.2 million. Emergency department mentions for oxycodone increased 346% during the same period. In southwest Virginia, opioid-related deaths increased 830% from 1997 to 2003. Kentucky saw a 500% increase in patients entering methadone treatment programs, with 75% dependent on OxyContin.
The human toll was catastrophic. By 2004, OxyContin had become the most abused prescription drug in America. Today, the opioid crisis kills more than 70,000 Americans annually, with total deaths exceeding 500,000 since 1999. To put this in perspective, that's more American deaths than World War I, World War II, and the Vietnam War combined.
How is it possible that a single pharmaceutical product could cause more American deaths than major wars?
In 2007, Purdue Pharma and three executives pled guilty to criminal charges of misbranding OxyContin and paid $634 million in fines. But this was just the cost of doing business—the company had made billions in profits. The Sackler family, which owned Purdue, extracted over $10 billion from the company while it was fueling the addiction crisis.
The opioid case reveals how the marriage of technology and medicine can create unprecedented capacities for harm. Data analytics allowed Purdue to identify and target vulnerable physicians and communities with surgical precision. Electronic medical records provided detailed information about prescribing patterns. Direct-to-consumer marketing bypassed traditional medical gatekeepers.
What other industries are using similar data-driven approaches to target vulnerable populations?
Per- and polyfluoroalkyl substances (PFAS) represent perhaps the most complete example of corporate environmental crime in American history. These "forever chemicals" don't break down naturally and accumulate in the environment and human bodies indefinitely. Yet DuPont—and later other chemical companies—continued producing them for decades after knowing they caused cancer, liver damage, and birth defects.
DuPont began producing PFAS in the 1940s, and by the 1960s, company studies showed the chemicals were accumulating in workers' blood and causing serious health problems. Internal documents revealed that DuPont knew PFAS could cause cancer, but instead of stopping production or warning the public, they launched a systematic cover-up that would last decades.
In 1981, DuPont learned that PFAS could cross the placental barrier and potentially harm developing fetuses. The company's response was chillingly calculated: they removed women of childbearing age from jobs with high PFAS exposure—but they continued producing the chemicals and didn't warn pregnant women in surrounding communities who were drinking contaminated water.
By the 1990s, DuPont's own studies showed PFAS contamination in local drinking water supplies around their West Virginia plant exceeded the company's internal safety guidelines by hundreds of times. Rather than alert health authorities, the company dumped contaminated sludge on farmland and built higher smokestacks to disperse emissions over a wider area.
The cover-up began to unravel when cattle started dying on farms near the DuPont plant. Farmer Wilbur Tennant noticed his cows developing strange tumors and dying after drinking from a creek that received runoff from a DuPont landfill. Attorney Rob Bilott filed suit and eventually forced DuPont to release internal documents revealing decades of deception.
The documents were damning. DuPont executives knew that PFAS were "highly toxic when inhaled" and could cause "substantial danger to the health of employees." They knew the chemicals were accumulating in workers' organs and causing liver damage. They knew pregnant employees were being exposed to chemicals that could harm their unborn children.
Yet they said nothing to workers, communities, or regulators for decades.
Today, PFAS contamination is found in the blood of 95% of Americans and in drinking water supplies across the country. The chemicals have been linked to cancer, liver damage, immune system dysfunction, and birth defects. Cleanup costs are estimated in the hundreds of billions of dollars, and the health effects will persist for generations because these chemicals don't break down naturally.
DuPont's response to the crisis followed the familiar pattern: settle lawsuits quietly, spin off PFAS liabilities into a separate company (Chemours), and continue producing the chemicals under different names in other countries. The company paid $671 million to settle contamination claims but admitted no wrongdoing.
How many other "forever chemicals" are being produced today without adequate safety testing?
The PFAS case reveals how the chemical industry has systematically failed to apply the precautionary principle—the idea that we should avoid actions with potentially catastrophic consequences even if scientific understanding is incomplete. Instead, they've operated under the principle that chemicals are innocent until proven guilty beyond a reasonable doubt, even when that proof might take decades to develop and the harm might be irreversible.
This represents a fundamental philosophical choice about how societies should handle uncertainty and risk. Do we require proof of safety before allowing widespread use of new chemicals, or do we require proof of harm before restricting them? The PFAS case suggests that the latter approach is a recipe for disaster when dealing with persistent, bioaccumulative toxins.
These more recent cases—social media, gig economy, crypto, opioids, PFAS—reveal how the "build first, think never" pattern has evolved and intensified in the modern era. They show us something darker than the historical cases: the systematic industrialization of harm through data analytics, behavioral manipulation, and scientific deception.
Precision Targeting of Vulnerability: Modern companies don't just ignore harm—they use sophisticated analytics to identify and target the most vulnerable populations. Purdue Pharma used prescription data to target high-prescribing doctors. Social media companies use psychological profiling to target users most susceptible to addictive design patterns. Payday lenders use credit data to target financially desperate consumers.
Behavioral Manipulation at Scale: Unlike physical products, digital technologies enable real-time manipulation of human behavior through algorithmic systems that learn and adapt. Social media algorithms exploit psychological vulnerabilities with scientific precision. Dating apps use variable reward schedules, the same intermittent-payoff mechanism that makes slot machines compulsive, to maximize user engagement (a minimal code sketch appears after this list of patterns). Gaming companies borrow psychological techniques from casinos to encourage addiction.
Regulatory Arbitrage: Modern companies don't just break rules—they structure their operations to make existing rules irrelevant. Gig economy platforms classify employees as contractors to avoid labor protections. Crypto exchanges operate offshore to avoid financial regulations. Tech platforms claim to be neutral conduits to avoid content liability.
Scientific Capture: The corruption of research institutions has become more sophisticated and systematic. Industries don't just fund favorable research—they fund entire research programs, academic centers, and scientific conferences designed to produce predetermined results. They recruit prestigious researchers to serve as credible voices for industry positions.
Global Scale, Local Accountability Gap: Modern corporations operate globally while responsibility remains local. Social media platforms based in California influence elections worldwide. Chemical companies produce PFAS in one country while contamination spreads globally. Pharmaceutical companies test drugs in developing countries with weaker regulatory oversight.
The troubling pattern is that each generation of harmful technologies becomes more sophisticated at avoiding accountability while causing more systemic damage.
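To ground the reward-schedule claim, here is a minimal Python sketch of a variable-ratio reinforcement schedule, the intermittent-payoff pattern from operant-conditioning research that slot machines use and that engagement-driven apps are widely reported to emulate. Everything here is illustrative: the function names and probabilities are invented, not any company's actual code.

```python
import random

def check_app(reward_probability: float = 0.3) -> bool:
    """One 'open the app' action. A reward (a like, a match, a novel
    post) appears only sometimes -- the unpredictability is the point."""
    return random.random() < reward_probability

def session(max_checks: int = 20) -> int:
    """Simulate a user who keeps checking until rewarded."""
    for attempt in range(1, max_checks + 1):
        if check_app():
            return attempt  # intermittent payoff reinforces the habit loop
    return max_checks

# Rewards arrive at unpredictable intervals; behavioral research since
# Skinner's conditioning experiments shows this produces more persistent
# checking than a reliable, predictable reward ever would.
print([session() for _ in range(10)])
```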
As artificial intelligence reshapes every aspect of human society, we're witnessing all these historical patterns converge in real-time with unprecedented stakes. AI represents not just another technology to be regulated, but potentially the last technology humans will ever invent—because sufficiently advanced AI systems will invent all future technologies themselves.
Yet the development of AI is following the same destructive script we've seen play out with tobacco, social media, and every other consequential technology: prioritize speed over safety, profits over precaution, and competitive advantage over collective wisdom.
With AI, the Collingridge dilemma reaches its logical extreme. We are in the final window where human societies can meaningfully influence AI development. Once artificial general intelligence (AGI) is achieved—systems that match or exceed human cognitive abilities across all domains—the power dynamics between humans and technology will fundamentally shift.
Right now, while AI systems are powerful but limited, we could choose to develop them cautiously. We could require extensive safety testing, implement robust governance frameworks, and prioritize human flourishing over raw capability. We could apply the Amish model—asking whether AI strengthens or weakens human communities before deploying it at scale.
But that window is closing rapidly. Major AI companies are locked in what they explicitly call an "AI race," driven by the belief that whoever achieves AGI first will gain overwhelming economic and geopolitical advantage. This creates enormous pressure to deploy systems quickly, before their implications are fully understood.
As Anthropic CEO Dario Amodei observed: "We're in a race where the finish line is a cliff, and instead of slowing down, everyone is pressing the accelerator harder."
Once AGI is achieved, the power to control further AI development may slip from human hands entirely. Superintelligent systems could improve themselves at exponential rates, potentially leading to what researchers call an "intelligence explosion"—a rapid acceleration of AI capabilities that human institutions couldn't govern even if they wanted to.
This is the ultimate example of technological lock-in: if we get AI development wrong, there may be no opportunity to correct course later.
The AI industry demonstrates classic failure of second-order thinking. Companies focus obsessively on first-order effects—making systems more capable, more efficient, more profitable—while ignoring second-order consequences that could reshape civilization.
- First-order effect: AI increases productivity and automates tedious tasks. Second-order effect: mass unemployment, economic inequality, social unrest.
- First-order effect: AI systems become better at generating content. Second-order effect: information ecosystems flooded with synthetic content, erosion of shared truth.
- First-order effect: AI assistants become more helpful and responsive. Second-order effect: humans become dependent on AI for thinking, decision-making, and creativity.
- First-order effect: AI systems become more powerful and autonomous. Second-order effect: gradual transfer of decision-making authority from humans to machines.
- First-order effect: AI development provides competitive advantage to early adopters. Second-order effect: authoritarian governments use AI for surveillance and population control.
The pattern is strikingly similar to social media development. The first-order effect of algorithmic content curation was increased user engagement. The second-order effects included political polarization, teenage mental health crises, and the breakdown of democratic discourse. But by the time these consequences became apparent, billions of people were locked into systems designed around engagement maximization.
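To see the mechanism rather than the metaphor, consider a minimal sketch of engagement maximization as an objective function. The fields and weights below are invented, since no platform's ranking model is public, but the structural point survives the simplification: nothing in the objective penalizes outrage or falsehood, so content that triggers them rises.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float   # predicted probability of a click
    p_share: float   # predicted probability of a share
    arousal: float   # emotional-intensity signal; correlates with sharing

def predicted_engagement(post: Post) -> float:
    # First-order objective: maximize engagement. Polarization and
    # misinformation are simply absent from the objective.
    return 1.0 * post.p_click + 3.0 * post.p_share + 0.5 * post.arousal

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("calm policy explainer", p_click=0.10, p_share=0.01, arousal=0.1),
    Post("outrage bait",          p_click=0.30, p_share=0.12, arousal=0.9),
])
print([p.text for p in feed])  # the outrage bait ranks first
```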
What second-order effects of AI development are we systematically ignoring because they're inconvenient for current business models?
Tech companies are pushing AI systems into production with minimal safety testing, driven by competitive pressure and venture capital demands that mirror every previous technological disaster. OpenAI deployed GPT-4 to a user base of over a hundred million people while independent safety evaluation was still in its early stages. Google rushed out Bard to compete with ChatGPT despite internal concerns about accuracy and bias.
The economic incentives are identical to previous cases: first-mover advantage, network effects, and winner-take-all markets that reward speed over safety. Companies that pause to consider consequences risk being overtaken by competitors willing to "move fast and break things."
Microsoft invested $13 billion in OpenAI with the explicit goal of challenging Google's dominance in search and cloud computing. Google responded by accelerating its own AI deployments, with CEO Sundar Pichai stating that the company needed to "move with urgency" to maintain competitive position. Meta, Amazon, and Chinese companies like ByteDance and Baidu have joined what industry insiders describe as an "AI arms race."
Venture capitalists are pouring hundreds of billions into AI startups, creating enormous pressure to achieve rapid growth and market capture. Many of the same investment firms that funded social media companies during their reckless growth phase—and pharmaceutical companies during the opioid crisis—are now leading AI funding rounds.
The result is eerily similar to the dot-com bubble, crypto boom, and social media explosion: technologies with transformative potential being deployed at maximum speed with minimal consideration for consequences. The difference is that AI systems have the potential to reshape not just markets or communication, but the fundamental nature of human agency and autonomy.
We're essentially in a race to see who can build the most powerful system without understanding what "most powerful" might mean for human civilization.
Some economic studies suggest AI could automate up to 40% of current jobs within two decades, yet there has been virtually no serious discussion of retraining programs, social safety nets, or economic transitions. The focus is entirely on maximizing AI capabilities, not managing societal impacts.
This represents the automation paradox writ large. Every technology deskills humans in some way: GPS deskilled navigation, calculators deskilled mental arithmetic, spell-check deskilled spelling. The question is whether what we gain compensates for what we lose.
AI is different because it's deskilling thinking itself. Not just routine cognitive tasks, but creative problem-solving, strategic planning, and even emotional intelligence. As AI systems become more capable, humans risk becoming less capable—not because our brains change, but because we lose practice with skills that "machines can do better."
The implications go beyond economics. Work isn't just about income—it's about purpose, identity, social connection, and human dignity. What happens to societies where most people feel economically redundant? What happens to human agency when machines make most decisions? What happens to human creativity when algorithms can generate art, music, and literature on demand?
Historical precedent suggests that technological unemployment can be managed through new job creation, shorter work weeks, and social support systems. But AI may be different because it's not just replacing human muscle (like industrial automation) or human memory (like computers)—it's potentially replacing human cognition across broad domains.
If AI can do most cognitive work better than humans, what uniquely human roles will remain? And will there be enough of them to provide meaningful employment for billions of people?
AI systems consistently exhibit racial, gender, and socioeconomic biases that perpetuate discrimination in hiring, lending, law enforcement, and healthcare. These biases aren't bugs—they're features that emerge predictably from training data that reflects historical patterns of discrimination.
But unlike human bias, algorithmic bias operates at massive scale with the veneer of scientific objectivity. When a human loan officer discriminates, it affects individual cases. When an AI system used by hundreds of banks discriminates, it affects millions of loan applications. When humans make biased hiring decisions, qualified candidates can sometimes appeal to human judgment. When AI systems make biased decisions, the discrimination is hidden behind proprietary algorithms that can't be challenged or audited.
Studies have documented these problems for years:
- Facial recognition systems have error rates 10-100 times higher for dark-skinned women than light-skinned men
- Resume screening algorithms discriminate against candidates with names that sound African American or female
- Predictive policing systems reinforce racial profiling by directing more police attention to communities that are already over-policed
- Healthcare algorithms provide worse care recommendations for Black patients because they use healthcare spending as a proxy for health needs (a proxy-label failure sketched in code below)
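The last item on that list deserves a concrete illustration, because the mechanism generalizes well beyond healthcare. The published finding (Obermeyer et al., Science, 2019) was that a widely used care-management algorithm predicted spending as a stand-in for need. The synthetic simulation below, with all numbers invented, shows how even a perfectly accurate spending predictor discriminates when one group has less access to care:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # 0 = well-served, 1 = under-served
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, identical across groups

# Spending tracks need, but the under-served group spends less at every
# level of need (less access to care, not less sickness).
access = np.where(group == 1, 0.6, 1.0)
spending = need * access + rng.normal(0, 0.1, n)

# The 'algorithm': select the top 10% by spending -- a perfect spending
# predictor, to isolate the proxy problem from any modeling error.
selected = spending >= np.quantile(spending, 0.9)

for g in (0, 1):
    print(f"group {g}: mean need of selected patients = "
          f"{need[(group == g) & selected].mean():.2f}, "
          f"selection rate = {selected[group == g].mean():.1%}")
# The under-served group is selected less often and must be sicker to be
# selected, despite identical distributions of true need. The model is
# not 'wrong' by its own metric; the metric itself encodes the inequity.
```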
Yet biased systems continue being deployed because the incentives favor speed over fairness. Companies face competitive pressure to launch products quickly, and comprehensive bias testing takes time and money. Regulatory agencies lack the technical expertise to audit complex algorithmic systems. Legal frameworks haven't adapted to address algorithmic discrimination.
The result is the industrialization of bias—discrimination that operates automatically, at scale, with minimal human oversight. This represents a fundamental threat to equal protection principles that democratic societies have spent centuries developing.
How can societies maintain commitments to equal treatment and human dignity when decision-making is increasingly delegated to systems that embed historical patterns of discrimination?
AI systems require vast amounts of personal data to function, accelerating the surveillance capitalism model that social media companies pioneered. Every interaction with AI systems—the questions you ask, the content you create, the decisions you make—generates data that can be used to build psychological profiles, predict behavior, and influence future choices.
This creates unprecedented surveillance capabilities that make previous authoritarian systems look primitive by comparison. The Stasi in East Germany employed 90,000 officers and 189,000 informants to monitor 16 million people, roughly one watcher for every 57 citizens. Modern AI systems can monitor billions of people simultaneously, analyzing patterns in their communications, movements, purchases, and relationships.
China's social credit system demonstrates how AI can be used for population control, combining facial recognition, digital payment monitoring, and behavioral analysis to create comprehensive surveillance networks. Citizens who engage in "undesirable" behavior—political dissent, religious practice, or even jaywalking—face restrictions on travel, employment, and education.
Western companies are building similar capabilities, justified by promises of better services and personalized experiences. Amazon's Alexa records conversations in homes. Google Maps tracks location history. Social media platforms analyze communication patterns. Credit card companies monitor spending behavior. These data streams are increasingly integrated through AI systems that can detect patterns invisible to human analysis.
The convergence of AI and surveillance represents what could be the end of privacy as a meaningful concept. When algorithms can infer political beliefs from Facebook likes, sexual orientation from shopping patterns, and health conditions from search histories, the boundary between public and private collapses.
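That inference step is neither science fiction nor exotic machine learning. The original likes-to-traits result (Kosinski, Stillwell, and Graepel, PNAS, 2013) used little more than linear models over sparse vectors of liked items. The toy reconstruction below runs on simulated data, with every value invented, and shows how hundreds of individually weak signals aggregate into a confident prediction of a hidden attribute:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_users, n_items = 2_000, 300
trait = rng.integers(0, 2, n_users)  # hidden attribute to be inferred

# Each item has only a mild statistical association with the trait;
# no single like is revealing, but the aggregate is.
item_bias = rng.normal(0, 0.5, n_items)
logits = 0.9 * trait[:, None] * item_bias[None, :] - 2.0
likes = (rng.random((n_users, n_items)) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit a plain linear classifier on 1,500 users; test on the rest.
model = LogisticRegression(max_iter=1000).fit(likes[:1500], trait[:1500])
print(f"held-out accuracy: {model.score(likes[1500:], trait[1500:]):.2f}")
```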
What does human freedom mean in a world where AI systems can predict and potentially manipulate individual behavior with scientific precision?
AI-generated content is already being used to create convincing fake videos, audio recordings, and images that undermine the possibility of shared truth. During the 2024 election cycle, deepfake political ads and robocalls appeared in the wild, most notoriously a synthetic Joe Biden voice urging New Hampshire voters to skip the primary, yet platforms have been slow to implement effective detection systems.
This represents more than just technological sophistication—it's an assault on the epistemological foundations of democratic society. Democracy requires informed citizens capable of distinguishing truth from falsehood, evidence from propaganda, reality from manipulation. When AI systems can generate convincing fake evidence at scale, these foundational assumptions collapse.
The implications extend beyond electoral politics. Deepfake technology can be used for financial fraud, personal harassment, and international disinformation campaigns. Foreign governments can generate synthetic content to inflame social tensions. Bad actors can create fake evidence to destroy reputations or manipulate markets. Criminal organizations can impersonate trusted figures to commit fraud.
Unlike previous forms of media manipulation, AI-generated content can be personalized and targeted with surgical precision. Instead of broadcasting the same fake news to everyone, AI systems can generate customized disinformation designed to be maximally persuasive to specific individuals or groups.
The traditional response to misinformation—fact-checking, source verification, media literacy—may be inadequate when synthetic content becomes indistinguishable from authentic content. If AI systems can generate fake evidence faster than humans can debunk it, information warfare tactics could overwhelm democratic institutions.
How can societies maintain democratic deliberation when the basic distinction between authentic and synthetic content becomes impossible for ordinary citizens to discern?
The AI industry exhibits classic patterns of iatrogenic harm—damage caused by interventions supposedly designed to help. AI systems deployed to reduce bias in hiring end up encoding new forms of discrimination. AI content moderation systems designed to reduce hate speech end up suppressing legitimate political discourse. AI surveillance systems designed to enhance security end up enabling authoritarian control.
This pattern reflects what Taleb calls the "intervention bias"—the compulsive need to do something technologically complex even when simpler solutions would work better, or when doing nothing would be preferable. The AI industry has embraced complexity as a virtue, building systems so sophisticated that even their creators don't fully understand how they work.
Consider AI content moderation on social media platforms. The original problem was manageable: too much content for human moderators to review effectively. A simple solution might have been hiring more human moderators or reducing the scale of platforms to manageable sizes. Instead, companies deployed AI systems that make millions of automated decisions about speech with minimal human oversight.
The results are predictably problematic: AI systems that can't understand context, sarcasm, or cultural nuance making decisions about complex social and political communications. Important news stories get censored, while sophisticated disinformation campaigns slip through. The cure (AI moderation) creates new problems worse than the original disease (too much content to moderate).
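A deliberately crude sketch makes the failure mode visible. Real moderation systems use learned classifiers rather than keyword lists, but they share the same structural blindness: they score surface features, not intent. The blocklist and examples below are invented for illustration.

```python
BLOCKED_TERMS = {"attack", "shoot", "kill"}

def naive_moderate(text: str) -> str:
    """Flag any post containing a blocked term, context be damned."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return "REMOVE" if tokens & BLOCKED_TERMS else "ALLOW"

examples = [
    "Soldiers shoot at civilians in war-crime footage",   # news report
    "This set will kill at the festival",                 # harmless slang
    "You should unalive yourself",                        # coded harassment
]
for text in examples:
    print(f"{naive_moderate(text):6}  {text}")
# Legitimate news and slang get removed, while coded harassment
# ('unalive') sails through. The filter sees words, not meaning.
```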
Similarly, AI systems deployed to make healthcare more efficient often reduce the quality of human interaction between doctors and patients. Electronic health records with AI-powered interfaces force physicians to spend more time entering data and less time listening to patients. AI diagnostic systems that flag potential problems create alert fatigue, causing real problems to be overlooked.
When does technological sophistication become a form of civilizational self-harm?
Perhaps most troubling, the dominant narrative in Silicon Valley is that AI development is inevitable and that any attempt to slow down hands advantage to competitors or authoritarian governments. Calls for caution are dismissed as "AI doomerism" or unrealistic given global competitive dynamics.
This reflects the technological determinism that has justified every previous rush toward deployment: the idea that technological development follows its own logic independent of human choice. But this is a myth. Technologies are shaped by human decisions about funding, regulation, market structure, and social values.
The "China threat" narrative—that America must race toward AGI to prevent authoritarian control of the technology—mirrors Cold War justifications for nuclear weapons development. But just as the nuclear arms race made the world less safe by creating weapons capable of destroying civilization, the AI race may create systems that human institutions cannot govern.
Leading AI researchers, including many working at major companies, have signed statements calling for AI development to be slowed down to allow governance frameworks to catch up. The Future of Humanity Institute, the Center for AI Safety, and hundreds of AI researchers have warned about existential risks from uncontrolled AI development.
Yet these concerns are consistently downplayed by companies with billions invested in AI development. Just as tobacco companies funded research to cast doubt on cancer links, AI companies fund research to minimize concerns about safety and control. Just as social media companies claimed their platforms would strengthen democracy while deploying engagement-maximizing algorithms, AI companies claim their systems will benefit humanity while optimizing for capability and market capture.
Why do we treat the pace of AI development as if it were a law of physics rather than a choice made by specific people and institutions?
AI represents the convergence of every destructive pattern we've seen throughout history, but with qualitatively different stakes. Unlike previous technologies that affected specific industries or regions, AI has the potential to reshape the fundamental relationship between humans and technology globally and irreversibly.
The Ultimate Collingridge Dilemma: We are in the last window where human societies can meaningfully influence AI development. Once systems achieve human-level general intelligence, they may be able to improve themselves at rates that human institutions cannot govern.
Postman's Principles at Unprecedented Scale: AI isn't just another technology with embedded values—it's a meta-technology that could reshape how all other technologies are developed. The philosophy embedded in AI systems—surveillance capitalism, efficiency above all else, algorithmic rather than human judgment—could become the philosophy of civilization itself.
Beyond the Amish Model: The Amish approach of community-controlled technology adoption may be impossible with AI because the effects are global rather than local. If some communities adopt AGI systems, they may gain such significant advantages that non-adopting communities become economically or militarily obsolete.
Systematic Deskilling of Human Judgment: Previous technologies deskilled specific human capabilities while leaving others intact. AI potentially deskills human judgment itself—the meta-skill that allows us to evaluate when and how to use other tools.
The End of Chesterton's Fence: AI systems can change so rapidly that traditional conservative principles—understanding existing systems before changing them—become impossible to apply. By the time we understand what AI systems are disrupting, they may have evolved into something entirely different.
Beyond the Lindy Effect: AI development explicitly aims to create technologies that haven't been time-tested. The goal is to build systems that surpass human intelligence, something that has never existed in the history of life on Earth.
Exponential Second-Order Effects: Unlike previous technologies whose impacts scaled linearly with adoption, AI systems could trigger exponential changes that human societies cannot adapt to quickly enough.
This suggests that the wisdom accumulated from previous technological disasters may be inadequate for governing AI development. We may need entirely new frameworks for thinking about technology that grows and changes faster than human institutions.
Neil Postman observed that every technological innovation should be interrogated, not celebrated automatically. Drawing from the historical patterns and wisdom traditions we've explored, here are five questions every technology developer—and every society—should ask before deploying transformative innovations:
Before building any new technology, assume it will be wildly successful and widely adopted. Then ask: if everyone uses this at scale, what problems does that create?
Social media companies asked: "How do we help people connect?" They didn't ask: "If billions of people get their information through algorithmic feeds optimized for engagement, what happens to democratic discourse?"
Gig economy companies asked: "How do we make transportation more convenient?" They didn't ask: "If all drivers become independent contractors without benefits, what happens to economic security?"
Crypto companies asked: "How do we create decentralized money?" They didn't ask: "If millions of people speculate on digital assets backed by nothing, what happens to financial stability?"
AI companies are asking: "How do we make AI systems more capable?" They're not asking: "If AI systems become more capable than humans at most cognitive tasks, what happens to human agency and meaning?"
The first-order problem is always obvious. The second-order problem is where the real consequences lie.
Before disrupting any existing system, understand why it exists in its current form. Chesterton's Fence applies: don't remove a fence until you understand why it was built.
Uber disrupted taxi regulations without understanding that those regulations protected both drivers and passengers from exploitation and safety risks. Financial regulations that crypto aims to circumvent were developed after generations of fraud and market manipulation. Privacy laws that social media platforms resist were written to protect human dignity and democratic institutions.
The question to ask is not "How do we eliminate this inefficient system?" but "What essential function does this system serve, and how do we preserve that function while improving performance?"
Existing systems are usually inefficient for good reasons. Disrupting the inefficiency often eliminates the good reasons too.
Every technology creates winners and losers, but they're rarely the same people. Before deployment, honestly assess who will benefit and who will bear the costs.
Leaded gasoline benefited oil companies and car manufacturers but poisoned children's developing brains. Social media benefits platform owners and advertisers but degrades users' mental health and democratic institutions. The gig economy benefits venture capitalists and consumers seeking cheap services but exploits workers and degrades labor protections.
AI development benefits technology companies and investors but may displace millions of workers, increase inequality, and concentrate unprecedented power in the hands of a few corporations.
The ethical question is: Are the people making deployment decisions the same people who will bear the consequences? If not, how can those consequences be properly weighed?
Technologies that socialize costs while privatizing benefits should face special scrutiny.
The Amish ask a simple question before adopting any technology: "Will this strengthen or weaken our community?" This isn't anti-progress fundamentalism—it's sophisticated systems thinking about technology's social effects.
Applied to modern innovations:
- Would social media algorithms strengthen or weaken community bonds? (Evidence suggests weaken)
- Would gig economy labor models strengthen or weaken worker solidarity and economic security? (Clearly weaken)
- Would AI systems that can perform most cognitive tasks strengthen or weaken human agency and self-reliance? (Probably weaken)
- Would AI surveillance systems strengthen or weaken trust and social cohesion? (Almost certainly weaken)
The Amish model doesn't require rejecting all technology—it requires evaluating technology based on values beyond efficiency and profit. It means prioritizing community welfare over individual convenience, social cohesion over economic growth, and human dignity over technological capability.
What if we evaluated every innovation based on whether it strengthens or weakens the social bonds that make human flourishing possible?
The intervention bias leads people to do something just because doing nothing feels uncomfortable—even when doing nothing would be better. Applied to technology, this becomes "we built it because we could" rather than "we built it because it was wise."
Many technologies are built not to solve genuine human problems, but because the capability exists and there's money to be made. Crypto was built because blockchain technology was interesting, not because the financial system needed radical transformation. Many AI applications are built because large language models exist, not because they solve important problems better than existing solutions.
The crucial question is: What problem are we really solving, and is this technology the best way to solve it? Often, simpler, lower-tech solutions work better than complex technological interventions.
Just because we can build something doesn't mean we should. Capability is not wisdom.
These questions won't prevent all technological harm, but they would force developers and societies to grapple seriously with consequences before deployment rather than afterward. They would shift the burden of proof from "prove it's harmful" to "show that it's beneficial and that benefits outweigh risks."
Most importantly, they would restore human agency to technological development—treating innovation as a series of choices we make rather than forces that happen to us.
We stand at a crossroads unlike any in human history. For the first time, we're developing technologies that could surpass human intelligence across all domains. For the first time, we're creating systems that could make decisions about the future of civilization without human oversight. For the first time, the stakes of getting technology wrong could be existential rather than merely catastrophic.
Yet we're approaching this unprecedented challenge with the same reckless speed that has characterized every previous technological disaster. The same venture capital firms that funded social media's assault on teenage mental health are pouring billions into AI development. The same regulatory agencies that took decades to address obvious harms from tobacco and asbestos are being asked to govern technologies they barely understand. The same economic incentives that drove pharmaceutical companies to create an opioid crisis are pushing tech companies to deploy AI without adequate safety testing.
The pattern is so consistent it feels like a law of nature: humans develop powerful technologies, deploy them at maximum speed for maximum profit, suppress research about harms, and deal with consequences only after the damage becomes undeniable. From asbestos to social media, the script rarely changes.
But patterns aren't laws. They're choices—repeated choices made by people and institutions operating within specific incentive structures. Those structures can be changed if we have the wisdom to recognize that the current approach isn't working and the courage to demand better.
The wisdom traditions we've explored—the Collingridge dilemma, Postman's principles, the Amish model, Taleb's insights about iatrogenic harm—all point toward the same conclusion: the moment to pause and think deeply about technology is before it becomes too entrenched to control, not after the damage is done.
With AI, that moment is now.
We could choose to slow down. We could require rigorous safety testing before deployment. We could prioritize human flourishing over corporate profits. We could apply the precautionary principle, asking not just "Can we build this?" but "Should we build this?" and "What kind of world does this create?"
We could learn from the Amish and ask whether new technologies strengthen or weaken human communities. We could learn from the tobacco industry and recognize manufactured doubt when we see it. We could learn from social media and understand that technologies designed to maximize engagement rather than human wellbeing will predictably cause harm at scale.
Most importantly, we could reject the myth of technological inevitability—the idea that innovation follows its own logic independent of human choice. AI development is not a force of nature. It's a human project, funded by human institutions, guided by human values. It can be shaped by human wisdom if we choose to exercise it.
The alternative is to keep following the same script: build first, think never, and wonder why we're always surprised by the consequences. But this time, the consequences might not be reversible. This time, the broken things might include the foundations of human agency and meaning. This time, there might not be a chance to learn from our mistakes.
The tobacco industry spent decades claiming they needed more research while people died from lung cancer. Social media companies spent years claiming their platforms would strengthen democracy while teenage mental health collapsed and political discourse degraded. The AI industry is now claiming they need to move fast to stay competitive while building systems that could reshape civilization.
We've seen this movie before. We know how it ends.
The only question is whether we're finally ready to change the script—or whether we'll keep building first and thinking never, racing toward a future we haven't bothered to consider.
History is not destiny. Patterns can be broken. Wisdom can be applied.
But only if we choose it.
The clock is ticking. The AI revolution is underway. The decisions we make in the next few years will shape the trajectory of human civilization for generations—or perhaps determine whether there will be future generations to shape anything at all.
Will we pause long enough to think about where we're going? Will we finally learn to build deliberately rather than recklessly? Will we prioritize wisdom over speed, precaution over profits, and human flourishing over technological capability?
The choice is still ours. For now.
But not for much longer.