1950s America is a Special Case

“Advances invented either solely or partly by government institutions include, as mentioned before, the computer, mouse, Internet, digital camera, and email. Not to mention radar, the jet engine, satellites, fiber optics, artificial limbs, and nuclear energy. (…) Even those inventions that come from corporations often come not from startups exposed to the free market, but from de facto state-owned monopolies. For example, during its fifty years as a state-sanctioned monopoly, the infamous Ma Bell invented (via its Bell Labs division) transistors, modern cryptography, solar cells, the laser, the C programming language, and mobile phones…” – “Competence of Government”

I think it’s worth paying attention to the fact that, of this apparently arbitrary list of inventions, not one came from the current US political system (which I’ll abbreviate CUSPS). Every one of them was developed many decades ago; more specifically, between about 1930 and 1975. Electricity, automobiles, telephones, telegraphs, airplanes, movies, radios, and other much older inventions aren’t included either. Rather than general examples of innovative government across different cultures and time periods, these are all products of one specific political system (the immediate predecessor to CUSPS).

If we were merely using this time period as an example, to show that government could innovate, one data point would suffice. If we were economic historians, dispassionately debating how large the space of possible civilizations was, we could stop there. However, in what Scott calls the motte-and-bailey defense, that is never how this argument is used in practice. For one example, if you Google (e.g.) site:ycombinator.com “government” “arpanet”, every one of the first ten results is in the context of a policy debate about what CUSPS should do and what our attitude should be towards it. ARPANET and things in its category are invariably used, de facto, as justifications for CUSPS, despite not having been created by CUSPS.

Scott’s model of how the world operates here is (to quote) “de novo invention seems to come mostly from very large organizations that can afford basic research”; this is a much stronger claim than that the evidence shows “examples exist of large organizations which did well at de novo invention”, or (a quote from the introduction to the document) “we [can’t] be absolutely certain free market always works better than government before we even investigate the issue”. Given more detailed historical data, I would suggest an alternative model.

In the US, there was a great deal of innovation long before the federal government funded it en masse, and even before the federal government had much power at all. Railroads and steamships and telegraphs came from a time when Washington, D.C. could not prevent half the states from raising their own armies and fighting a bloody civil war, much as the government of 2014 Iraq could not.

Later, in the 1930s, the immensely more powerful federal government created policies (taxation, fixed regulatory costs, the SEC, etc.) that strongly favored large organizations over smaller ones. The pre-existing base of inventive individuals, like everyone else, then simply got sucked into large institutions for lack of anywhere else to go. This neatly explains the entire historical trajectory: there were smart guys who invented stuff; the government then hired most of them, so most inventions started coming out with a government logo stamped on them; and when CUSPS was created, its incompetence started driving the smart guys away, again increasing private innovation at the expense of public innovation.

Stein’s Principle

The economist Herbert Stein once said “whatever cannot continue forever must stop”, now called Stein’s Law. We can generalize this to “Stein’s Principle”.

The universe will almost certainly last for many billions of years. In addition, let’s assume that the utility of a mind’s life doesn’t depend on the absolute time period in which that life occurs.

Logically, either human-derived civilization must exist for most of the universe’s lifespan, or not. If it does not, this falls into what Nick Bostrom calls an existential risk scenario. But if it does, and if we (very reasonably) assume that the population is steady or increasing, then this implies the vast majority of future utility is in time periods over a million years from now. This is Bostrom’s conclusion in ‘Astronomical Waste’.

However, we can break it down further. Let X be the set of possible states of future civilization. We know that there is at least one x in X which is stable over very long time periods – once humans and their progeny go extinct, we will stay extinct. We also know there is at least one x which is unstable. (For example, the world where governments have just started a nuclear war will rapidly become very different, with very high probability.) Hence, we can create a partition P over X, with each x in X falling into one and only one of P_1, P_2, P_3… P_n. Some of the P_i are stable, like extinction, in that a state within P_i will predictably evolve only into other states in P_i. Other P_j are unstable, and may evolve outside of their boundaries, with nontrivial per-year probabilities.

One can quickly see that, after a million years, human civilization will wind up in a stable bucket with probability exponentially close to one. (Formally, one can prove this with Markov chains: the stable buckets are absorbing states.) But we already know that the vast majority of human utility occurs more than a million years from now. Hence, Stein’s Principle tells us that any unstable bucket must have very little intrinsic utility; its utility lies almost entirely in which stable bucket might come after it.
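As a toy sketch of that Markov-chain claim: even a tiny annual chance of leaving an unstable bucket compounds over a million years. The per-year escape probability below is an assumed, made-up number, not an estimate of anything.

```python
# One unstable bucket with a constant (hypothetical) per-year
# probability p of transitioning into some stable, absorbing bucket.
# This is the simplest possible two-state Markov chain.
p = 1e-5                       # assumed per-year escape probability
years = 1_000_000
still_unstable = (1 - p) ** years
print(still_unstable)          # ~4.5e-5: nearly all mass is absorbed
```

However small p is, the probability of remaining unstable decays exponentially in time, which is all the argument needs.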

Of course, one obvious consequence is Bostrom’s original argument: any bucket with a significant level of x-risk must be unstable, and so its intrinsic utility is relatively unimportant, compared to the utility of reducing x-risk. But even excluding x-risk, there are other consequences too. For example, for a multipolar scenario to be stable, it must include some extremely reliable mechanism for preventing both one agent from conquering the others, and the emergence of a new agent more powerful than the existing ones. Without such a mechanism, the utility of any such world will be dominated by that of the stable scenario which inevitably succeeds it.

And further, each stable bucket might itself contain stable and unstable sub-buckets, where a stable sub-bucket locks the world into it but an unstable one allows movement to elsewhere in the enclosing bucket. Hence, in a singleton scenario, buckets where the singleton might replace itself with dissimilar entities are unstable; buckets where the replacements are always similar in important respects are stable.

How To Detect Fictional Evidence

Based On: The Logical Fallacy of Generalization from Fictional Evidence

Some fictional evidence is explicit – “you mean the future will be like Terminator?”. But it can also be subtle. Predicting the future can be done “in storytelling mode”, using the tropes and methods of storytelling, without referring to a specific fictional universe like the Matrix; one obvious example is Hugo de Garis. How can we tell when “predictions” are just sci-fi, dressed up as nonfiction?

1. The author doesn’t use probability distributions.

Any interesting prediction is uncertain, to a greater or lesser extent. Even when we don’t have exact numbers, we still have degrees of confidence, ranging from “impossible under current physics” through to “extremely likely”. And many important predictions can be done as conditionals. We may have no idea how likely event B is, but we might be able to say it’s almost certain to follow event A.
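A minimal sketch of what such a conditional forecast looks like when made explicit; all the numbers here are placeholders, not real predictions of anything.

```python
# We may be quite unsure whether A happens, yet still commit to a
# confident conditional P(B | A). The law of total probability then
# combines the pieces into an overall credence for B.
p_a = 0.3                  # placeholder credence that event A happens
p_b_given_a = 0.95         # "B is almost certain to follow A"
p_b_given_not_a = 0.05     # B is unlikely without A

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(p_b)                 # 0.32
```

Updating any one input changes the output smoothly, which is exactly the behavior a single block of confident statements cannot exhibit.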

But stories aren’t like that. An author creates an “alternate world”, and any given fact (“Snape kills Dumbledore!”) is either true or false within that world. There’s no room for the shades of uncertainty one sees in technology forecasting, or for that matter in military planning; it would just leave readers confused. Hence, any author who presents “the future” as a single block of statements all treated as fact, rather than a set of possibilities and conditionals of varying likelihood (see Bostrom’s Superintelligence for an excellent example), is probably in ‘storytelling mode’.

Sometimes, an author will realize this, and tack “but of course, this is uncertain” onto the end. The author can then deflect any questions about relative odds by mentioning this disclaimer, and immediately resume sounding very certain as soon as the questions are over. But, as Eliezer discusses in his original post, this biases the playing field. If X is very complex, asking “X: Yes Or No?” ignores the hundreds or thousands of questions about which parts of X are more likely relative to other parts. An honest analysis will have uncertainty woven through it, with each burdensome detail matched by a diminishment of certainty.

2. The author doesn’t change their mind.

In fiction, inconsistency is bad. Every part of the “alternate world” must match every other part. Therefore, an author writing a sequel must carefully track the original, lest she introduce “plot holes”.

However, a realistic prediction must be continually updated in response to new information. From time to time, other people will give you ideas you hadn’t thought of yourself. And even if you were a supergenius who needed no advice, there is no way a single human mind can hold all the information which might help one make a prediction. For a general, for example, there are always new things to learn about the enemy’s forces.

Hence, if almost nothing has changed between someone’s old predictions and new predictions, they are probably being a ‘storyteller’. This is especially true for anyone predicting the next century, as the events of 2012 give us much more incremental evidence about 2032 than about 200 Billion AD.

3. The author creates and describes ‘characters’.

Characters are central to storytelling. Almost all sci-fi stories, at least in part, use characters who the reader can empathize with – the passion of a lover, the struggle of a worker, the fury of a warrior. The reasons for this lie deep in human evolution and psychology.

However, when making predictions, ‘characters’ are of very little relevance. When describing the futures of billions, one must aggregate to make the problem remotely tractable; “military strength” rather than soldiers, “economic conditions” rather than rich and poor, “transportation demand” rather than kids going on vacation. And of course, while certain individuals can have great influence over society, except for the very near term we can have no clue who they will be.

Therefore, when an author describes in great detail the lives of individual people – their emotions, their personalities, their hardships, their relationships and wants and needs – we should get suspicious. This can be great fun, but it isn’t always good for you, like riding a motorcycle at 200 MPH.

4. The author focuses exclusively on a single dynamic or trend.

The world is very big, and many important trends all happen simultaneously. Predicting how they interact is extremely difficult, like solving a many-variable differential equation. By contrast, it’s easier for a story to focus on an overarching ‘principle’ or ‘theme’, which drives the main events and actions of the key characters. This theme can be very specific (“revenge”), but it can also be a complex of different memes, like Lewis and Tolkien’s literary explorations of Catholicism.

An author in “storytelling” mode may observe trend X, and from X make predictions A, B, and C; and these predictions might be quite reasonable. However, it’s still fallacious not to account for the other things (at least the big ones) influencing A, B, and C. Y might cancel out X’s effect on A; Z might reverse X, and so cause B’s opposite; and Q might have the same effect on C, but a hundred times as strong, so X’s contribution is negligible.

Be extra suspicious if the chosen dynamic is one the author happens to be an expert in, and they don’t rely on experts in other fields to help fill in the blanks. Odds are, they’re missing something very important; in your own field you know when you’re lost, but in others there are many more unknown unknowns. And be extra extra suspicious if the chosen trend is a pet political cause (“Islam”, “taxes”, “global warming”, “government surveillance”, “inequality”… ). That subset is probably worth ignoring entirely.

5. The author predicts rapid change, but doesn’t discuss specific things changing.

Michael Vassar describes this as “everything should stay the same, including the derivatives”. In a story, whether it’s Star Wars or Game of Thrones, it’s usually good to fix a static “backdrop” of culture and technology and norms, as it’s less work for the audience to track unchanging scenery. But in real life, changing fundamental traits like military ability, economic ability, communication, intelligence or transportation has sweeping consequences through nearly every aspect of society. To name one example, the Chinese Empire was old as the hills, but the changes of the 20th century caused it to collapse, followed by a republic, a civil war, a military occupation by Japan, a brutal Stalinist regime, and finally the authoritarian capitalism of today. And needless to say, no historical example will capture the changes caused by going beyond normal biological humans.

What Is A Copy?

Much reasoning in anthropics, like the Sleeping Beauty problem or Eliezer’s lottery, relies on the notion of ‘copies’. However, to my (limited) knowledge, no one has seriously investigated what counts as a ‘copy’.

Consider a normal human undergoing an anthropic experiment. I instantly cool him/her to near absolute zero, so the atoms stop wiggling. I make one perfect copy, but then move one of the atoms a tiny amount, say a nanometer. I then make another copy, moving a second atom, make a third copy, and so on, moving one atom each step of the way, until I wind up at (his father/her mother) as they existed at the same age. At every step, I maintain a physically healthy, functioning human being. The number of copies needed should be on the order of 10^30. Large, but probably doable for a galactic superintelligence.

But then I repeat the process, creating another series of copies going back to (his grandfather/her grandmother). I then go back another generation, and so on, all the way back to the origin of Earth-based life. As the organisms become smaller, the generations grow shorter, but the number of copies per generation also becomes smaller. Let’s wave our hands, and say the total number of copies is about 10^40.
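A rough sanity check on these orders of magnitude: the atom count is a standard textbook figure, and the slack factor is pure hand-waving.

```python
# A ~70 kg human contains roughly 7e27 atoms (standard estimate).
# Moving one atom per copy, morphing one body into a relative's needs
# at least one copy per atom that differs; allowing each atom a few
# moves (an assumed slack factor) lands in the 1e28-1e30 range.
atoms_per_human = 7e27
moves_per_atom = 100            # hypothetical slack for repeated moves
copies_per_generation = atoms_per_human * moves_per_atom
print(f"{copies_per_generation:.0e}")   # 7e+29
```

So 10^30 copies per generation, and 10^40 overall after summing the shrinking generations, are at least not crazy as orders of magnitude.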

Now suppose that, before putting you to sleep, I entered you into a deterministic lottery drawing. I then warm up all the modified copies, and walk into their rooms with the lottery results. For the N copies closest to you, I tell them they have won, and hand them the pile of prize money. For the others, I apologize and say they have lost (and most of them don’t understand me). All copies exist in all universes, so SIA vs. SSA shouldn’t matter here.

Before you go to sleep, what should your subjective probability of winning the lottery be after waking up, as a function of N? When doing your calculations, it seems there are only two possibilities:

1. The cutoff between ‘copy’ and ‘not a copy’ is sharp, and you assign each organism a weighting of zero or one. That is, it’s possible to move an atom one nanometer, and thereby make an organism go from “not you” to “you”.

2. There exist ‘partial copies’ out in mind space, and you assign some organisms partial weightings. That is, there exist hypothetical entities which are two-thirds you.

Both seem problematic, for different reasons. Is there a third option?
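For concreteness, option 2 can at least be given a toy form. Everything here is a made-up assumption: the exponential weighting, the decay rate, and the scaled-down copy count.

```python
import math

N_COPIES = 10**6       # scaled-down stand-in for the ~1e40 copies
DECAY = 1e-5           # hypothetical weight decay per atom moved

def weight(d):
    """Anthropic weight of the copy at 'edit distance' d atoms from you."""
    return math.exp(-DECAY * d)

def p_win(n):
    """Subjective probability of winning, if the n closest copies win."""
    total = sum(weight(d) for d in range(N_COPIES))
    return sum(weight(d) for d in range(n)) / total

# Probability now varies smoothly with N instead of jumping at a cutoff.
print(p_win(100_000))
```

This trades the sharp-cutoff problem for a new one: nothing in the setup privileges this weighting function over any other, which is exactly the difficulty with option 2.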

Four Types of Simulation

(Followup To: Simulations and the Epicurean Paradox)

Logically, one can divide ancestor simulations into two types: Those which are “perfectly realistic”, simulating the full motions of every electron, and those which are not.

Call the first Type 1 simulations. It’s hard to bound the capabilities of future superintelligences, but it seems implausible that they would run large numbers of such simulations, since they are extremely computationally expensive. For Goedelian reasons, simulating a component perfectly is always more expensive than building it “for real”; if that were not the case, one could get infinite computing power by recursively simulating both yourself and a computer next to you. (For example, a perfect whole brain emulation will always require more clock cycles than the brain it emulates actually computes.) Hence, simulating a galaxy requires more than a galaxy’s worth of matter, which is a rather large amount.

The second possibility involves imperfect simulations: those which “lower the resolution” on unimportant parts, like not simulating the motions of every atom in the Sun. These can be further subdivided into simulations where the ‘code’ just runs passively, and simulations where an active effort is made to avoid simulated science discovering the “resolution lowering” phenomenon. Call these Type 2 and Type 3. (Logically, a Type 3b exists, where the simulators deliberately make noticeable interventions like blotting out the Sun at semi-regular intervals. Though noted for completeness, it seems clear we don’t live in one of these.)

In a Type 2 simulation, as science advances far enough, it will almost certainly discover the “resolution lowering”, and conclude the world was being simulated. This possibility is hard to rule out completely, as there will always be one more decimal place to run experiments on. However, the weight of existing science, and the lack of evidence suggesting simulations as a conclusion or even a reasonable hypothesis, is fairly strong evidence against us living in a Type 2.

A Type 3 simulation is one where the simulators actively seek to avoid detection by simulated intelligences. Bostrom’s paper focuses mainly on these, noting that superintelligences could easily fool simulated beings if they so chose. However, this appears to be a form of Giant Cheesecake Fallacy; a superintelligence surely could fool simulated beings, but must also have some motive for doing so. What this motive might be has not, to my knowledge, so far been suggested anywhere (www.simulation-argument.com does not appear to address it).

One might suppose that such a discovery would “disrupt the simulation”, by causing the simulatees to act differently and thereby “ruining the results”. However, any form of resolution lowering would impact the results in some way, and these impacts are not generally predictable by virtue of the halting theorem/grandfather paradox. All one can say is that a given approximation will probably not have a significant impact, for some values of “probably” and “significant”. Hence, we already know the simulators are okay with such disruptions (perhaps below some significance bound).

Would “discovering the simulation” count as a “significant” impact? Here some handwaving is necessary, but it’s worth noting that virtually all civilizations have had some concept of god or gods, and that these concepts varied wildly. Apparently, large variations in civilizational religion did not produce that large a change in civilizational behavior; e.g. medieval Europe was still much more similar to Rome than to imperial China, despite the introduction of Christianity. In addition, the discovery of science which appeared to suggest atheism was in a real sense more disruptive than scientific validation of (some type of) religion would have been.

One might think that, to obtain as much knowledge as possible, simulators would want to try at least some scenarios of Type 3 in addition to Type 2. (This would also apply to the previous post’s arguments, in that the first simulation to allow genocide might be much more informative than the marginal Type 3b simulation which did not allow it.) However, if one accepts that, say, one in one billion simulations are of Type 3 and the rest of Type 2 or Type 3b, one is faced with an anthropic-like dilemma: why are we so special as to live in the only Type 3? Such an observation smacks of Sagan’s invisible dragon.

Finally, one might suppose that simulators live in a universe which allows hyper-computation, beyond ordinary Turing machines, or some other type of exotic physics. Such beings could likely create perfect simulations at trivial cost to themselves; call these Type 4. While theoretically interesting, Bostrom’s Simulation Argument does not extend to cover Type 4s, stating that “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation”, which only applies if the simulators and simulatees have comparable laws of physics. Such unobservable ‘super-universes’ might remain forever speculation.

Simulations and the Epicurean Paradox

In a famous paper, Nick Bostrom outlines what he calls the Simulation Argument:

“A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

#1 and #2 seem unlikely. #1, because we haven’t found any strong reason to think existential risks are nearly impossible to avoid (see Scott’s Great Filter post). #2, because independent convergence across many possible worlds generally requires world trajectories to be very predictable, and we don’t observe that on Earth. (For example, small timeline changes might have created sentient dolphins, and dolphins have very different drives and moral systems from humans.) Therefore, most attention has focused on option #3.

As Bostrom says:

“In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens.”

However, like the gods of mythology, these gods run into what is called the Epicurean paradox, after the Greek philosopher who invented it. The paradox runs:

“Is God willing to prevent evil, but not able? Then he is not omnipotent.
Is he able, but not willing? Then he is malevolent.
Is he both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him God?”

For convenience, let’s number each of these possibilities 1 through 4. For a race of posthuman simulators, we can essentially rule out #1 and #4; they can probably do whatever they please.

The cynically minded might jump to #2 – the “gods” (simulators) are real but malevolent, willingly allowing disease, famine, the Holocaust, and all of humanity’s ills. But this presents another problem. A truly malevolent god could create much more suffering than we actually see. For example, it is widely agreed that electricity makes human life more pleasant. A malevolent simulator could then cause all power plants to be economically impractical, which worsens human life without affecting most other goals it might also be pursuing.

One might then postulate a simulator as merely indifferent to suffering, with goals that are entirely orthogonal. But this too would likely wipe out all the good things in human life, not just a few of them. A full explanation of why would be very lengthy, but Bostrom has described the various issues in his book Superintelligence, coining the terms “perverse instantiation” and “infrastructure profusion” for two of the most serious. In essence, almost all apparently “neutral” goals would, if carried to completion, create a universe morally indistinguishable from one where humanity is extinct. Humanity is not extinct, so we can likely also rule out #2.

For #3, one can try to invent various explanations for why the evil we see is an illusion. Perhaps only you are ‘fully’ simulated, and starving children are merely ‘zombies’ without moral value. Perhaps the simulated world was created recently, and so the world wars and other disasters never really happened. Perhaps the simulator “switches off” consciousness when people are suffering too much. However, all of these suffer from issues of Occam’s Razor: they postulate additional complexity which is inherently unobservable. The problems here are those which cause us to disbelieve the theory of Last Thursdayism, which postulates the universe was created last Thursday, but with memories and other signs of older age already in place.

In fact, observing a large number of ancestor simulations places extremely strong constraints on the goals of the simulator – essentially an even stronger version of the Epicurean paradox, or for that matter of the FAI problem. Solving the FAI problem requires formally specifying a utility function which doesn’t wipe out humanity, a tiny target in a vast space. Creating a simulator requires specifying a utility function which literally never intervenes across a vast variety of simulated situations, a much smaller target still. (One can of course speculate that the simulators intervene and wipe our memories afterwards, or some such, but this shares the problems of Last Thursdayism.)

(Another possibility, more fun to think about, has occurred to me. It seems likely that the space of human values is not large enough to fully satisfy our novelty desires over the next eleven trillion years. Since evolution is the ultimate source of our values, I have wondered if future civilizations might simulate new species evolving to sentience, so as to acquire a richer set of values than they started with. However, on reflection, it is extremely unlikely that an ancestor simulation is the best way to achieve this. Some form of directed evolution, or possibly an even more complex optimization process not yet known to us, would almost certainly be more efficient.)

By itself, this seems to be an argument for Bostrom’s scenario #2. A perfect ancestor simulation, with no intervention by the simulators, requires hitting an extraordinarily small target in utility function space. Hence, it’s not surprising that many different dissimilar worlds failed to hit it, any more than it’s surprising if a thousand gangsters shooting at random fail to hit an acorn seven hundred meters off.

However, believing in Bostrom’s scenario #2 presents a different challenge, outlined by Jaan Tallinn in his talk at the Singularity Summit. If we are not being simulated, then we are some of the very first beings to ever exist, part of the tiny fraction to live before the creation of self-modifying intelligence. The number of beings which might ever exist is truly vast, possibly on the order of 10^70. This creates another conundrum. Why should we be living now? What makes us so privileged?

I have a speculation which addresses this question. Suppose there is a shortage of bread, in your city of two million people. The city government creates a giant queue to buy bread, and assigns each resident a place in it at random. If you are placed at the very head of the line, you would think this demanded explanation; it is very unusual. Perhaps your brother is the mayor, and rigged the lottery. Perhaps you are religious, and prayed very fervently. Something must be going on; conditioning on all your other life experiences, you having this one experience is still very unlikely.

On the other hand, suppose you are graduating from college. Proudly, you walk across the stage, shake the dean’s hand, and receive your diploma. By itself, this is just as unusual as the first scenario. A college education has about two million minutes, of which only one is the one when you receive your degree. Yet, even though you may be very emotional, you don’t see the fact of living out this minute as something that demands explanation. You don’t postulate divine intervention, or an unknown friend in the administration. (Unless, of course, you are a very poor student!)

Even though it is extremely improbable, that one minute where you get your diploma is made logically necessary by the other four years of your education. Conditional on all your other experiences happening, it is extremely likely that you experience this one too; every four-year project must have a first minute and a last minute. Therefore, you are not surprised.
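The asymmetry is purely one of conditioning, which a few lines of arithmetic make explicit (numbers rounded as in the text above).

```python
# Unconditionally, both events are roughly 1-in-2-million. Conditional
# on the rest of your experiences, only the bread queue stays surprising.
minutes_in_degree = 4 * 365 * 24 * 60      # ~2.1 million minutes
p_head_of_queue = 1 / 2_000_000            # random draw among residents
p_diploma_minute = 1 / minutes_in_degree   # unconditional: equally tiny
p_diploma_given_degree = 1.0               # necessary, given the other minutes
print(p_head_of_queue, p_diploma_minute, p_diploma_given_degree)
```

Winning the queue lottery stays improbable no matter what else you condition on; the diploma minute does not.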

Our standard, ancestral view of life sees people as discrete entities. A person is born, lives for a while, and then dies, with inter-human bandwidth of about 300 baud being negligible compared to intra-human bandwidth. One might envision each life as strands of spaghetti, strewn throughout a football field of time. Each strand is distinct, each has a beginning and end, and if you select one at random it is very surprising to pick the first.

However, there is no reason for posthuman civilization to be like this. When humans have a very complex computer program, it is already rare to just throw it out entirely. More likely, one creates a new version, ports it to new hardware, or adapts it for a new purpose (Windows 8’s ancestry goes back thirty-five years, and Linux’s over twenty), because code is easy to copy. In addition, when we do throw code out, almost always the motivation comes from the extremely rapid changes in computers produced by Moore’s Law; it might, for example, be easier to rewrite from scratch than to alter code to handle 100x the previous number of requests. In a world of static computers, such things would become rarer still, and even rarer if one assigned code moral value and the code did not want to ‘die’. Posthuman life would look like a single, continuous river, twisting and branching and growing through the eons.

If we suppose this, then living in the first few years of the river does not seem so surprising; every stream of consciousness must have its beginning, just as every college degree or career or sea voyage must have its first ten seconds, and the existence of the first year is necessarily implied by all of the others. Moreover, if one supposes the posthuman transition (or aging escape velocity) is likely to occur soon, this appears to solve another paradox: why we exist in the year 2014, rather than as one of the hundred billion primitive humans who lived millennia ago. If we are the first generation to be uploaded, then the stream starts with us, rather than all our ancestors who were unlucky enough to have their brains rot in the ground.

Global Warming Numbers

“Across the Narrow Sea, your books are filled with words like ‘usurper’, and ‘mad man’, and ‘blood right’. Here our books are filled with numbers; we prefer the stories they tell. More plain. Less… open to interpretation.” – The Iron Bank of Braavos

(Graphs from Tol, R.S.J. (2005). ‘The marginal damage costs of carbon dioxide emissions: an assessment of the uncertainties’, Energy Policy vol. 33(16), pp. 2064-2074.)

