(Followup To: Simulations and the Epicurean Paradox)
Logically, one can divide ancestor simulations into two types: those which are “perfectly realistic”, simulating the full motions of every electron, and those which are not.
Call the first Type 1 simulations. It’s hard to bound the capabilities of future superintelligences, but it seems implausible that they would run large numbers of such simulations, since each one is extremely computationally expensive. For Goedelian reasons, simulating a component perfectly is always more expensive than building it “for real”; if that were not the case, one could obtain infinite computing power by recursively simulating both yourself and a computer next to you. (For example, a perfect whole brain emulation will always require more clock cycles than the brain it emulates actually computes.) Hence, simulating a galaxy requires more than a galaxy’s worth of matter, which is a rather large amount.
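To spell the regress out (a rough sketch in my own notation; the constant $k$ appears nowhere in the original argument): suppose perfect simulation were cheaper than reality, so that simulating a machine which performs $N$ operations costs only $kN$ operations, for some $k < 1$. Then $N$ real operations buy $N/k$ simulated ones, and nesting the trick $d$ levels deep buys

$$N \;\to\; \frac{N}{k} \;\to\; \frac{N}{k^2} \;\to\; \cdots \;\to\; \frac{N}{k^d} \;\to\; \infty \quad \text{as } d \to \infty,$$

i.e. unbounded effective computation from a fixed physical budget. Since that is absurd, we must have $k \ge 1$: a perfect simulation costs at least as much as the real thing.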
The second possibility involves imperfect simulations: those which “lower the resolution” on unimportant parts, like not simulating the motions of every atom in the Sun. These can be further subdivided into simulations where the ‘code’ just runs passively, and simulations where an active effort is made to prevent simulated science from discovering the “resolution lowering” phenomenon. Call these Type 2 and Type 3, respectively. (Logically, a Type 3b also exists, in which the simulators deliberately make noticeable interventions, like blotting out the Sun at semi-regular intervals. It is noted for completeness, but it seems clear we don’t live in one.)
In a Type 2 simulation, once science advances far enough, it will almost certainly discover the “resolution lowering” and conclude that the world is being simulated. The possibility that we live in one is hard to rule out completely, as there will always be one more decimal place to run experiments on. However, the weight of existing science, and the absence of any evidence suggesting simulation as a conclusion or even a reasonable hypothesis, is fairly strong evidence against our living in a Type 2.
A Type 3 simulation is one where the simulators actively seek to avoid detection by the simulated intelligences. Bostrom’s paper focuses mainly on these, noting that superintelligences could easily fool simulated beings if they so chose. However, this appears to be a form of the Giant Cheesecake Fallacy: a superintelligence surely could fool simulated beings, but it must also have some motive for doing so. What this motive might be has not, to my knowledge, been suggested anywhere (www.simulation-argument.com does not appear to address it).
One might suppose that such a discovery would “disrupt the simulation” by causing the simulatees to act differently, thereby “ruining the results”. However, any form of resolution lowering will impact the results in some way, and these impacts are not in general predictable, by virtue of the halting problem/grandfather paradox: short of running the full simulation, the simulators cannot know exactly what their approximations change. All one can say is that a given approximation will probably not have a significant impact, for some values of “probably” and “significant”. Hence, we already know the simulators are okay with such disruptions (perhaps below some significance bound).
Would “discovering the simulation” count as a “significant” impact? Here some handwaving is necessary, but it’s worth noting that virtually all civilizations have had some concept of god or gods, and that these concepts have varied wildly. Apparently, large variations in religion did not produce correspondingly large changes in civilizational behavior; e.g., medieval Europe was still much more similar to Rome than to imperial China, despite the introduction of Christianity. In addition, the discovery of science which appeared to suggest atheism was in a real sense more disruptive than scientific validation of (some type of) religion would have been; and discovering that we are simulated would be closer to the latter.
One might think that, to obtain as much knowledge as possible, simulators would want to try at least some Type 3 scenarios in addition to Type 2. (This would also apply to the previous post’s arguments, in that the first simulation to allow genocide might be much more informative than the marginal Type 3b simulation which did not allow it.) However, if one accepts that, say, one in one billion simulations is of Type 3 and the rest are of Type 2 or Type 3b, one is faced with an anthropic-like dilemma: why should we be so special as to live in one of the rare Type 3s? Such an observation smacks of Sagan’s invisible dragon.
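As a back-of-the-envelope Bayesian sketch of that dilemma (the numbers here are purely illustrative): let $E$ be our observation that no evidence of simulation has turned up. Type 3 simulators hide by design, so $P(E \mid \text{Type 3}) \approx 1$, and the posterior odds are

$$\frac{P(\text{Type 3} \mid E)}{P(\text{Type 2 or 3b} \mid E)} = \frac{P(E \mid \text{Type 3})}{P(E \mid \text{Type 2 or 3b})} \cdot \frac{10^{-9}}{1 - 10^{-9}} \approx \frac{10^{-9}}{P(E \mid \text{Type 2 or 3b})}.$$

For the posterior to favor Type 3, a Type 2 at our stage of science would need to have already revealed itself with probability greater than about $1 - 10^{-9}$, a far stronger claim than the “fairly strong evidence” of the Type 2 discussion above.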
Finally, one might suppose that the simulators live in a universe which allows hyper-computation beyond ordinary Turing machines, or some other type of exotic physics. Such beings could likely create perfect simulations at trivial cost to themselves; call these Type 4. While theoretically interesting, Type 4s are not covered by Bostrom’s Simulation Argument: its claim that “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation” only applies if the simulators and simulatees have comparable laws of physics. Such unobservable ‘super-universes’ may forever remain speculation.
I think what makes the simulation argument interesting is that it explains our universe as a product of a powerful optimization process. Whether the mechanism of creation is simulation or physical construction seems less important to me. Admittedly, this might be because my main interest in ‘did an optimizer make our universe?’ doesn’t stem from Bostrom’s simulation argument (which I don’t find especially troubling). I’m just curious about how much of the habitable content of a Big Universe would be a byproduct of (not-merely-evolutionary) optimization. I’m interested in non-Bostrom-ish variants on Types 1, 2, and 4.
Powerful optimization (planning, predicting, etc.) is one of the basic ways that Really Interesting things happen, so reductionists about mind should probably be more interested in scenarios in which large portions of a Big Universe are experiments (or just resources or waste products) of minds.
I don’t think that variation among human religions having had little effect on civilizational behavior is strong enough evidence that discovering we’re in a simulation would have a negligible effect; it doesn’t let you then assign a one-in-one-billion chance that we’re in the kind of simulation that hides the fact that it is a simulation.
I think your Type 2 category misses something important: we might not notice the resolution lowering at all, and might instead view it as the fundamental shape of the universe.
Suppose a simulation were run with only atomic-level accuracy. Everything would be correct, except that atoms, instead of being made up of smaller constituents, would be simulated as the fundamental units of the simulation. My guess is that no one would say, “hmm, weird that the universe stops at atoms”; instead, everyone would conclude that “atoms plus some other stuff” are the theory of everything.
I think a fair question to ask, in assessing the likelihood of a Type 2 simulation, is: how likely is it that we could tell the difference between being in a low-resolution simulation and having found a genuine theory of everything?
Belated note:
Simulating a computer requires more computational capacity than that computer itself has. However, simulating the material predecessors of that computer, i.e. a lump of silicon and a larger lump of iron, may conceivably be vastly cheaper.
Similarly, simulating a galaxy that is a computer (and actively computing) requires a computer of equivalent power, which necessarily uses at least as much matter/energy. However, simulating a galaxy that is _not_ currently a computer may be vastly cheaper.
There’s another option: this universe is actually running at “low resolution”, and we have discovered it. The Uncertainty Principle looks an awful lot like a hack a clever engineer would introduce to prevent arbitrarily precise measurements and limit the amount of simulation to be done, while leaving macroscopic outcomes unaffected.
And if you weren’t constrained by this, you could probably build better and smaller computers that could run our “hacked” simulation.
(Yep, it sounds wacky and is a completely untestable claim, which is probably why nobody is seriously proposing it.)
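To make the “clever engineer’s hack” concrete, here is a toy sketch in Python (entirely my own illustration, not real physics; `RESOLUTION` and `quantize` are invented names): microscopic state gets snapped to a coarse grid, so measurements below the grid spacing become impossible while macroscopic averages barely move.

```python
# Toy illustration of "resolution lowering" as a compute-saving hack.
# Purely a cartoon: real quantum uncertainty does not work this way.
import random

RESOLUTION = 1e-3  # hypothetical smallest distinguishable position

def quantize(x, step=RESOLUTION):
    """Snap a coordinate to the simulation's coarse grid."""
    return round(x / step) * step

# Microscopic state: positions of many particles in a 1D box [0, 1).
random.seed(0)
exact = [random.random() for _ in range(100_000)]
coarse = [quantize(x) for x in exact]

# Macroscopic observable: the center of mass moves by far less than RESOLUTION.
print(f"center of mass, exact : {sum(exact) / len(exact):.6f}")
print(f"center of mass, coarse: {sum(coarse) / len(coarse):.6f}")

# But no experiment inside the simulation can probe below the grid:
# positions differing by less than the step size are rounded together.
print(quantize(0.12340002) == quantize(0.12340001))  # True: indistinguishable
```

The coarse state needs fewer bits and fewer update steps, yet observables like the center of mass are unchanged to well within the grid spacing, which is exactly the trade being described: macroscopic outcomes preserved, arbitrary precision forbidden.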