(Followup To: Simulations and the Epicurean Paradox)

Logically, one can divide ancestor simulations into two types: those which are “perfectly realistic”, simulating the full motions of every electron, and those which are not.

Call the first Type 1 simulations. It’s hard to bound the capabilities of future superintelligences, but it seems implausible that they would run large numbers of such simulations, simply because these are extremely computationally expensive. For Goedelian reasons, simulating a component perfectly is always more expensive than building it “for real”; if that were not the case, one could obtain infinite computing power by recursively simulating both yourself and a computer next to you. (For example, a perfect whole brain emulation will always require more computation than the brain it emulates actually performs.) Hence, simulating a galaxy requires more than a galaxy’s worth of matter, which is a rather large amount.
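
To make the recursion argument concrete, here is a back-of-the-envelope version (my own sketch, under the simplifying assumption that perfect simulation costs a fixed $\alpha$ operations per simulated operation). A machine with compute budget $B$ that simulates a copy of itself plus an extra computer of capacity $\Delta$ needs

$$ B \;\ge\; \alpha\,(B + \Delta) \quad\Longrightarrow\quad \Delta \;\le\; \frac{1-\alpha}{\alpha}\,B. $$

If $\alpha < 1$, then $\Delta > 0$ is allowed, the simulated copy can repeat the construction inside the simulation, and each level of nesting adds another effective computer of capacity $\Delta$ while the real hardware stays fixed at $B$; carried to arbitrary depth, that is unlimited computing power for free, which is absurd. So $\alpha \ge 1$, and once the simulator’s own bookkeeping is counted, the exchange rate is strictly worse than one-for-one.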

The second possibility involves imperfect simulations: those which “lower the resolution” on unimportant parts, like not simulating the motions of every atom in the Sun. These can be further subdivided into simulations where the ‘code’ just runs passively, and simulations where an active effort is made to keep simulated science from discovering the “resolution lowering”. Call these Type 2 and Type 3, respectively. (Logically, a Type 3b also exists, in which the simulators deliberately make noticeable interventions, like blotting out the Sun at semi-regular intervals. It is noted here for completeness, but it seems clear we don’t live in one of these.)

In a Type 2 simulation, once science advances far enough, it will almost certainly discover the “resolution lowering” and conclude that the world is being simulated. This possibility is hard to rule out completely, as there will always be one more decimal place to run experiments on. However, the weight of existing science, and the lack of any evidence pointing to simulation as a conclusion or even a reasonable hypothesis, is fairly strong evidence against our living in a Type 2.

A Type 3 simulation is one where the simulators actively seek to avoid detection by the simulated intelligences. Bostrom’s paper focuses mainly on these, noting that superintelligences could easily fool simulated beings if they so chose. However, this appears to be a form of the Giant Cheesecake Fallacy: a superintelligence surely could fool simulated beings, but it must also have some motive for doing so. What this motive might be has not, to my knowledge, been suggested anywhere (www.simulation-argument.com does not appear to address it).

One might suppose that such a discovery would “disrupt the simulation” by causing the simulatees to act differently and thereby “ruining the results”. However, any form of resolution lowering impacts the results in some way, and these impacts are not generally predictable, for halting-problem/grandfather-paradox reasons: in general, the only way to determine exactly what an approximation changes is to also run the unapproximated version. All one can say is that a given approximation will probably not have a significant impact, for some values of “probably” and “significant”. Hence, we already know the simulators are okay with such disruptions (perhaps below some significance bound).
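
As a toy illustration of why the effect of an approximation generally cannot be known without just running things (this example is entirely my own and has nothing to do with how a real ancestor simulation would be built; the map constant 3.9 and the 7-decimal rounding are arbitrary), compare a chaotic system run at full precision against the same run with its state rounded at each step:

```python
def logistic(x, steps, decimals=None):
    """Iterate the chaotic logistic map x -> 3.9 * x * (1 - x).

    If `decimals` is given, round the state after each step; the
    rounding plays the role of a "resolution lowering" approximation.
    """
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
        if decimals is not None:
            x = round(x, decimals)
    return x

full = logistic(0.5, 200)                 # the "perfect" run
lowered = logistic(0.5, 200, decimals=7)  # the approximated run
print(full, lowered)  # after 200 steps the two trajectories bear no relation
```

The only general way to find out what the rounding changed is to also run the full-precision version, which is exactly the expense the approximation was meant to avoid.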

Would “discovering the simulation” count as a “significant” impact? Here some handwaving is necessary, but it’s worth noting that virtually all civilizations have had some concept of god or gods, and that these concepts varied wildly. Apparently, large variations in religion did not produce correspondingly large changes in civilizational behavior; e.g., medieval Europe was still much more similar to Rome than to imperial China, despite the introduction of Christianity. In addition, the discovery of science which appeared to suggest atheism was, in a real sense, more disruptive than scientific validation of (some type of) religion would have been.

One might think that, to obtain as much knowledge as possible, simulators would want to try at least some Type 3 scenarios in addition to Type 2. (This would also apply to the previous post’s arguments, in that the first simulation to allow genocide might be much more informative than the marginal Type 3b simulation which did not allow it.) However, if one accepts that, say, only one in a billion simulations is of Type 3 and the rest are Type 2 or Type 3b, one faces an anthropic-like dilemma: why would we be so special as to live in one of the rare Type 3s? Such a claim smacks of Sagan’s invisible dragon.
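
To put a rough number on this (a naive self-sampling estimate of my own, taking the one-in-a-billion figure at face value): if a fraction $f = 10^{-9}$ of simulations are Type 3, then a simulated observer in our situation should assign roughly

$$ P(\text{Type 3} \mid \text{our evidence}) \;\approx\; \frac{f}{f + (1-f)\,q}, $$

where $q$ is the chance that a Type 2 would also look, so far, the way our world does. Unless $q$ is itself astronomically small, this stays close to $f = 10^{-9}$.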

Finally, one might suppose that the simulators live in a universe which allows hypercomputation beyond ordinary Turing machines, or some other sort of exotic physics. Such beings could likely create perfect simulations at trivial cost to themselves; call these Type 4. While theoretically interesting, Type 4s are not covered by Bostrom’s Simulation Argument, whose conclusion that “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation” applies only if the simulators and simulatees have comparable laws of physics. Such unobservable ‘super-universes’ may remain forever a matter of speculation.