The economist Herbert Stein once observed that “if something cannot go on forever, it will stop”, a remark now known as Stein’s Law. We can generalize this to “Stein’s Principle”.

The universe will almost certainly last for many billions of years. In addition, let’s assume that the utility of a mind’s life doesn’t depend on the absolute time period in which that life occurs.

Logically, human-derived civilization either will exist for most of the universe’s lifespan, or it will not. If it does not, we are in what Nick Bostrom calls an existential risk scenario. But if it does, and if we (very reasonably) assume that the population is steady or increasing, then the vast majority of future utility lies in time periods more than a million years from now. This is Bostrom’s conclusion in ‘Astronomical Waste’.
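A back-of-the-envelope version of that claim, assuming for concreteness a total lifespan on the order of 10^10 years (a stand-in for “many billions of years”) and a roughly constant average utility per year, call it u-bar:

```latex
\frac{U(\text{first } 10^{6}\ \text{years})}{U(\text{total})}
  \;\approx\; \frac{10^{6}\,\bar{u}}{10^{10}\,\bar{u}}
  \;=\; 10^{-4}.
```

An increasing population only grows the denominator, so the true fraction would be smaller still.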

However, we can break the argument down further. Let X be the set of possible states of future civilization. We know that at least one x in X is stable over very long time periods: once humans and their progeny go extinct, we will stay extinct. We also know that at least one x is unstable. (For example, a world in which governments have just started a nuclear war will, with very high probability, rapidly become very different.) Hence, we can create a partition P of X, dividing it into “buckets” P_1, P_2, P_3, …, P_n, with each x in X falling into one and only one bucket. Some of the P_i are stable, like extinction, in that a state within P_i will predictably evolve only into other states in P_i. Other P_j are unstable, and may evolve outside their boundaries with nontrivial per-year probabilities.
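As a toy illustration of this setup (every bucket and every transition probability below is invented purely for the example), we can collapse each bucket to a single aggregate state and model the yearly dynamics as a Markov chain; a bucket is then stable exactly when none of its probability mass leaks out of it:

```python
import numpy as np

# Toy buckets of civilizational states (names and numbers invented for illustration).
BUCKETS = ["extinction", "locked-in singleton", "unstable multipolar"]

# P[i, j] = probability that a world in bucket i is in bucket j one year later.
# Rows sum to 1; each bucket is collapsed to a single aggregate state.
P = np.array([
    [1.0,    0.0,    0.0],     # extinction never leaves
    [0.0,    1.0,    0.0],     # a locked-in singleton persists (by assumption)
    [1e-4,   1e-4,   0.9998],  # the multipolar bucket leaks ~2e-4 per year
])

def is_stable(P, i):
    """A (collapsed) bucket is stable iff all of its probability stays inside it."""
    return np.isclose(P[i, i], 1.0)

for i, name in enumerate(BUCKETS):
    print(f"{name}: {'stable' if is_stable(P, i) else 'unstable'}")
# extinction: stable
# locked-in singleton: stable
# unstable multipolar: unstable
```

In a richer model each bucket would contain many states, and stability would mean that the rows of the corresponding sub-matrix sum to one; the collapsed version is enough to make the point.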

One can quickly see that, after a million years, human civilization will almost certainly have wound up in a stable bucket: so long as each unstable bucket has a nontrivial per-year probability of being left, the chance of still being in one decays exponentially with time. (Formally, one can prove this by modelling the buckets as a Markov chain with absorbing states.) But we already know that the vast majority of future utility occurs more than a million years from now. Hence, Stein’s Principle tells us that any unstable bucket must have very little intrinsic utility; its utility lies almost entirely in which stable bucket comes after it.
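A minimal sketch of that Markov-chain claim, reusing the invented transition matrix from the snippet above: even a tiny per-year leak out of the unstable bucket makes the probability of still being in it after a million years astronomically small.

```python
import numpy as np

# Same invented transition matrix as in the previous snippet.
P = np.array([
    [1.0,    0.0,    0.0],
    [0.0,    1.0,    0.0],
    [1e-4,   1e-4,   0.9998],
])

start = np.array([0.0, 0.0, 1.0])                     # begin in the unstable bucket
dist = start @ np.linalg.matrix_power(P, 1_000_000)   # evolve for a million years

print(f"P(still in the unstable bucket) = {dist[2]:.3e}")        # ~0.9998**1e6, about 1e-87
print(f"P(absorbed into a stable bucket) = {dist[0] + dist[1]:.3e}")  # essentially 1
```

The decay is exponential in both the time horizon and the per-year exit probability, so the exact numbers matter far less than the fact that the leak is bounded away from zero.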

Of course, one obvious consequence is Bostrom’s original argument: any bucket with a significant level of x-risk must be unstable, and so its intrinsic utility is relatively unimportant compared to the utility of reducing x-risk. But there are other consequences even once x-risk is set aside. For example, for a multipolar scenario to be stable, it must include some extremely reliable mechanism that prevents both any one agent from conquering the others and the emergence of a new agent more powerful than the existing ones. Without such a mechanism, the utility of any such world will be dominated by that of the stable scenario that inevitably succeeds it.

And further, each stable bucket might itself contain stable and unstable sub-buckets, where a stable sub-bucket locks the world in, while an unstable one allows movement elsewhere within the enclosing bucket. Hence, in a singleton scenario, sub-buckets in which the singleton might replace itself with dissimilar entities are unstable; sub-buckets in which the replacements are always similar in important respects are stable.