When people think about AI risk, they typically imagine a misaligned superintelligence bringing about some "end of humanity" style scenario. The vast majority of people working on long-term AI risk are concerned almost exclusively with AI alignment: how do we shape AIs so that they produce the kinds of outcomes we want?
But AGI development isn't happening in a vacuum; it's happening primarily within for-profit companies that aim to deliver a good ROI to their investors. AGI is being developed explicitly with the intention of turning a profit for whichever company creates it first.
In this kind of environment, if we end up with multiple viable approaches to AI alignment, these companies will almost always choose whichever one best lets them turn AGI into a product. They don't want AIs with feelings and empathy; they want to build a silicon mind that is an obedient little worker and nothing more.
Let's assume th