In an idealized far future, would there be conflict? I think so. Competition is one of the thousand shards of human desire, and a lot of people would be sad if there were no more football games or chess or Team Fortress 2.

But such conflicts are not driven by universal goals. Here I must handwave a bit as to what “universal goal” means, but it is something in the neighborhood of a utilitarian-style drive rather than a biological-style drive. A human (or other animal) who wants a cheeseburger won’t, even if given the chance, obsessively optimize the atoms of Alpha Centauri to maximize cheeseburger probability. A naively constructed AI would, giving rise to the problem Nick Bostrom calls “infrastructure profusion”. A “universal goal” is, roughly, one that you would optimize everything in the Universe to meet, not a chess match whose loss you forget about a week later.
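To make the distinction concrete, here is a toy sketch in Python. The names (`bounded_drive`, `universal_goal`, the `world` dictionary) are made up for illustration; this is a caricature of the two goal styles, not a model of any real agent architecture.

```python
# Toy contrast between a bounded, biological-style drive and a "universal goal".
# Hypothetical names and numbers; purely illustrative.

def bounded_drive(world, hunger_threshold=1):
    """Biological-style drive: acquire cheeseburgers until satisfied, then stop."""
    cheeseburgers = 0
    while cheeseburgers < hunger_threshold and world["resources"] > 0:
        world["resources"] -= 1
        cheeseburgers += 1
    return cheeseburgers  # stops at the threshold; leaves the rest of the universe alone

def universal_goal(world):
    """Utilitarian-style drive: convert *all* available resources into
    cheeseburger-probability infrastructure (Bostrom's "infrastructure profusion")."""
    converted = 0
    while world["resources"] > 0:
        world["resources"] -= 1
        converted += 1
    return converted  # only stops when nothing is left to optimize

world = {"resources": 10**6}
print(bounded_drive(dict(world)))   # 1
print(universal_goal(dict(world)))  # 1000000
```

The point of the caricature is only that the stopping condition lives in different places: one drive terminates when the agent is satisfied, the other terminates when the world runs out.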

In a scenario high on the coordination axis, there would be no meaningful conflict over universal goals. Everything with the power to affect the entire universe would agree about the non-trivial aspects of how to do so. This is what Bostrom calls the “singleton” scenario, and it’s likely to obey Stein’s Principle: such a system would have both a strong motivation to prevent goal drift and the rise of competing systems, and the ability to act on that motivation to enforce long-term stability. Call this null case Type 0 conflict.

Go a bit lower on coordination, and you might encounter a universe with several different systems of comparable ability, which agree about basics like existential risk but disagree on other goals. For example, you could have AIs A, B, and C, which think the universe should be blue, red, and green, respectively. The simplest scenario is one where the conflict between them is static: each AI gets roughly one third of the universe, and this stays fixed over time, with all AIs having strong reason to expect it to remain fixed. This might be made to obey Stein’s Principle, but it is more of a risky bet: one would need strong reason to believe that no AI could ever grab a “bigger share”. If, for example, one AI could hack the others, in a manner similar to modern-day computer or social hacking, this would allow for “victory” and introduce instability. Ruling this out in the general case might be intractable, at least as hard as an NP-complete problem: if the AIs have any sensory input, you must somehow know that no point in an exponentially large input space will cause serious failure. Still, a solution might exist. Call this Type 1 conflict.
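To see why that guarantee is so demanding, here is a minimal sketch of the naive verification strategy, assuming a hypothetical `is_serious_failure` predicate that stands in for “this input makes the AI misbehave”: exhaustively checking every possible sensory input, a space that doubles with each additional input bit.

```python
from itertools import product

def is_serious_failure(inputs):
    """Hypothetical stand-in for 'this sensory input causes serious failure'.
    In reality this predicate is unknown and expensive to evaluate."""
    return sum(inputs) == len(inputs)  # placeholder: the all-ones input is "bad"

def exhaustively_safe(n_bits):
    """Naive verification: check every point in a 2**n_bits input space.
    Feasible for n_bits around 20; hopeless for realistic sensor streams."""
    return not any(is_serious_failure(bits) for bits in product((0, 1), repeat=n_bits))

print(exhaustively_safe(10))  # False, and already 1024 cases to check
```

Brute force is obviously not what a superintelligence would do, but any smarter method still has to cover the same exponentially large space of possible inputs.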

Going down further on coordination, one finds scenarios where there are still a fixed number of agents, but their relative positions change over time; call this Type 2 conflict. By Stein’s Principle, the only way to make this work is through negative feedback loops: if there is any case where winning a bit causes you to win more, the advantage will compound on itself until one agent ceases to exist. And (handwave) maintaining universal negative feedback seems quite hard. In the human world, advantage in conflict is a combination of many different factors; you would have to maintain negative feedback on every one of them, or else the unchecked factor would spiral, the oscillation would collapse, and a single winner would emerge.
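A minimal simulation, with toy parameters and no pretense of modeling real conflict, of why the sign of the feedback matters: with positive feedback a tiny initial edge compounds until one side holds everything, while with negative feedback advantages decay back toward parity.

```python
def simulate(share=0.51, feedback=+0.1, steps=200):
    """Evolve one agent's share of the contested resources.
    Positive feedback: winning a bit makes you win more.
    Negative feedback: any advantage is pushed back toward parity."""
    for _ in range(steps):
        advantage = share - 0.5
        share += feedback * advantage      # the sign of `feedback` decides the dynamics
        share = min(max(share, 0.0), 1.0)  # shares stay in [0, 1]
    return share

print(simulate(feedback=+0.1))  # -> 1.0: the slightly-ahead agent takes everything
print(simulate(feedback=-0.1))  # -> ~0.5: advantages damp out, the conflict stays static
```

The real-world analogue of “every factor” is that each term feeding into `advantage` would need its own damping; one runaway term is enough to tip the whole system.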

If we dare to go down even further, to worlds with stable long-term conflict in which it is still possible to “win”, we must also allow for the emergence of new players, or else the number of players will monotonically decrease to one (or zero, in an x-risk scenario). And all players should have a basic drive to prevent the creation of new players with differing goals. For this scenario (Type 3 conflict) to work, Stein’s Principle requires that the existing players be unable to prevent the creation of new players (at comparable levels of ability), and simultaneously be able to ensure with extremely high accuracy that no new player poses a threat of existential risk. This seems, a priori, extremely implausible.
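A back-of-the-envelope calculation, with purely illustrative numbers, of why the required accuracy is so extreme: if each new player independently carries even a small probability p of slipping past the screening, the chance that at least one of n new players poses existential risk is 1 − (1 − p)^n, which creeps toward certainty as n grows over long timescales.

```python
def cumulative_risk(p_per_player, n_players):
    """Probability that at least one of n independently screened new players
    poses existential risk, given a per-player screening failure rate p."""
    return 1 - (1 - p_per_player) ** n_players

# Illustrative numbers only: even a 0.1% per-player failure rate becomes
# near-certain catastrophe once enough new players have appeared.
for n in (10, 1_000, 100_000):
    print(n, round(cumulative_risk(0.001, n), 4))
# 10      0.01
# 1000    0.6323
# 100000  1.0
```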

And of course, we have Type 4 conflict, the sort present among humanity today, which is obviously not long-term stable. The strange thing is that almost nobody seems to realize it.