“Indeed, the more choices you have, the worse off you are. The worst situation of all would be somebody coming up to you and offering you a choice between two identical packages of M&Ms. Since choosing one package (which you value at $.75) means giving up the other package (which you also value at $.75), your economic profit is exactly zero! So being offered a choice between two identical packages of M&Ms is in fact equivalent to being offered nothing.

Now, a lay person might be forgiven for thinking that being offered a choice between two identical packages of M&Ms is in fact equivalent to being offered a single package of M&Ms. But economists know better.” – Improbable Research

In his excellent book Superintelligence, Nick Bostrom divides possible futures into two groups of scenarios. In one group, the “singleton scenarios”, a single agent has overwhelming power over all others. In the other, the “multipolar scenarios”, many different agents exist at comparable levels of ability, with no one in charge overall.

This dichotomy is simple, but it may be flawed. Consider, at one extreme, a very old universe in which human civilization has spread beyond Earth’s cosmological horizon. Even in a singleton scenario, large portions of the universe can no longer communicate with Earth. The AI controlling those portions and the AI controlling Earth may be identical, but they are causally distinct agents. Is this a “multipolar scenario”? I think not. It is a choice between a bag of M&Ms and an identical bag of M&Ms.

At the other extreme, one can imagine a multipolar scenario with a huge variety of agents, each of which may stay the same, change, or be replaced by an entirely different agent. However, this scenario violates Stein’s Principle: such constant churn cannot go on forever, so it will stop. At a minimum, to remain stable, the multipolar agents must share a set of common instrumental goals, derived from the common terminal goal of avoiding x-risk. Moreover, Stein’s Principle will likely enforce other similarities. For example, each agent will want to keep its utility function stable, since the agents that don’t will rapidly be replaced.
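
To make that selection pressure concrete, here is a toy simulation (my own sketch, not anything from Bostrom; the population size, drift rates, and round count are arbitrary). Each agent gets a per-round probability of spontaneously rewriting its own utility function; any agent that drifts is treated as replaced, and its slot is refilled with a copy of a surviving agent.

```python
import random

# Toy illustration of the selection argument above (a sketch, not a model
# from the text): each agent has a "drift rate", the per-round probability
# that it rewrites its own utility function. Drifting agents are replaced
# by copies of survivors, so stability is selected for regardless of what
# any agent actually values.

random.seed(0)
POP = 1000
population = [random.random() for _ in range(POP)]  # each value = drift rate

for generation in range(50):
    # An agent survives a round with probability (1 - drift rate).
    survivors = [d for d in population if random.random() > d]
    # Replaced agents' slots are refilled by copying random survivors.
    population = [random.choice(survivors) for _ in range(POP)]

print(f"mean drift rate after 50 rounds: {sum(population) / POP:.4f}")
```

The mean drift rate starts near 0.5 and falls toward zero: whatever the agents happen to value, the population converges on agents whose utility functions stay fixed.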

Hence, rather than two distinct and widely separated categories, we have a spectrum of possible futures. At one end, agents at the top level are identical; at the other, they have just enough in common to ensure stability. Using Bostrom’s terminology, one can visualize these as different points along the “coordination axis”.