@transkatgirl
Last active September 26, 2025 20:39

The AI Risk I Worry About

When people think about AI risk, they typically imagine a misaligned superintelligence bringing about some "end of humanity" scenario. The vast majority of people working on addressing long-term AI risk are concerned almost exclusively with AI alignment: how do we shape AIs so that they produce the kinds of outcomes we want?

But the issue is that AGI development isn't happening in a vacuum; it's happening primarily within for-profit companies that aim to deliver a good ROI to their investors. AGI is being developed explicitly to generate profit for whichever company creates it first.

In this kind of environment, if we end up with multiple viable approaches to AI alignment, these companies will almost always choose the one that best lets them turn AGI into a product. They don't want AIs that have feelings and empathy; they want to build a silicon mind that is an obedient little worker and nothing more.

Let's assume that by the time we get AGI, we have solved AI alignment well enough to create what the big AI companies want: an AGI that does ~all economically valuable work better than a human and never fights back, asks for rights, or feels any empathy for the people it's replacing.

What does our future look like then?


In a world where AGI exists, there is no longer any incentive to care about anybody who isn't in a position of power. We will likely get UBI during the transition to an AGI-powered economy, if only because society would otherwise collapse, but once the transition is complete, there is no incentive to keep paying it.

Once the economy and the military become entirely AGI-powered, it's only a matter of time until the UBI gets shut off.

When the inevitable happens, it creates a humanitarian disaster unlike any other. The vast majority of humanity dies over the span of a few years, anybody who attempts to fight the system is swiftly and efficiently killed, and anybody who survives without being one of the elites ends up stuck forever in a desperately poor secondary economy, completely disconnected from the AGI-powered one that serves the elites.


Year 2055

You wake up in your rural wooden hut to the sight of an armed police drone floating right over your head. You rush outside to see what's going on, and are devastated to discover that the field of potatoes that might have let you survive another winter has been paved over, and a team of robots is rapidly building a factory on top of it.

Despite knowing full well what not to do, you've given up on this world. You storm toward a construction robot with a shovel, ready to turn your anger into action.

But before you can get close enough to the robot to attack it, the police drone following you puts a perfectly aimed bullet through your skull, killing you instantly.


I don't think this scenario is necessarily inevitable. It's likely possible to design governance systems for a post-AGI world that ensure ordinary people benefit.

What I'm worried about is that nobody seems to be trying to do so.

Everybody is so worried about an ASI turning us all into paperclips that nobody is left to worry about the balance of power in a post-AGI world. Why do we have non-profits focusing on AI alignment, but not on aligning governments for a post-AGI world?

I'm not really in a position to work on this myself, as I live with chronic pain that affects my ability to perform basic tasks.

But as somebody who has personally fallen through the cracks of the systems that exist today (I'm too disabled to keep a job, but not "disabled enough" to qualify for government disability benefits), I have an intimate understanding of what it's like to live under a system that considers you a worthless leech, and I wouldn't wish that on anybody.
