Much reasoning in anthropics, like the Sleeping Beauty problem or Eliezer’s lottery, relies on the notion of ‘copies’. However, to my (limited) knowledge, no one has seriously investigated what counts as a ‘copy’.
Consider a normal human undergoing an anthropic experiment. I instantly cool him/her to near absolute zero, so the atoms stop wiggling. I make one perfect copy, but then move one of the atoms a tiny amount, say a nanometer. I then make another copy, moving a second atom, make a third copy, and so on, moving one atom each step of the way, until I wind up at (his father/her mother) as they existed at (his/her) age. At every step, I maintain a physically healthy, functioning human being. The number of copies needed should be on the order of 10^30. Large, but probably doable for a galactic superintelligence.
But then I repeat the process, creating another series of copies going back to (his grandfather/her grandmother). I then go back another generation, and so on, all the way back to the origin of Earth-based life. As the organisms become smaller, the generations grow shorter, but the number of copies needed per generation also decreases. Let’s wave our hands and say the total number of copies is about 10^40.
Now, before putting you to sleep, I entered you into a deterministic lottery drawing. I then warm up all the modified copies, and walk into their rooms with the lottery results. For the N copies closest to you, I tell them they have won, and hand them the pile of prize money. For the others, I apologize and say they have lost (and most of them don’t understand me). All copies exist in all universes, so SIA vs. SSA shouldn’t matter here.
Before you go to sleep, what should your subjective probability of winning the lottery be after waking up, as a function of N? When doing your calculations, it seems there are only two possibilities:
1. The cutoff between ‘copy’ and ‘not a copy’ is sharp, and you assign each organism a weighting of zero or one. That is, it’s possible to move an atom one nanometer, and thereby make an organism go from “not you” to “you”.
2. There exist ‘partial copies’ out in mind space, and you assign some organisms partial weightings. That is, there exist hypothetical entities which are two-thirds you.
Both seem problematic, for different reasons. Is there a third option?
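To make the contrast concrete, here is a toy calculation of the pre-sleep probability of winning as a function of N under each option. This is purely my own illustration: the weight functions are invented, and the copy count is scaled down from 10^40 so it actually runs.

```python
import math

# Toy model: copy i is the organism with i atoms moved; copy 0 is the unmodified you.
# The real experiment has ~10^40 copies; this is scaled down so the sums are feasible.
TOTAL_COPIES = 10**6

def sharp_weight(i, cutoff=10**3):
    """Option 1: an organism either counts as 'you' (weight 1) or doesn't (weight 0)."""
    return 1.0 if i < cutoff else 0.0

def partial_weight(i, scale=10**3):
    """Option 2: weight falls off smoothly with distance from the original."""
    return math.exp(-i / scale)

def p_win(n_winners, weight):
    """Subjective P(win): weighted fraction of copies who are handed the prize."""
    total = sum(weight(i) for i in range(TOTAL_COPIES))
    winning = sum(weight(i) for i in range(min(n_winners, TOTAL_COPIES)))
    return winning / total

for n in (10, 10**3, 10**5):
    print(f"N = {n}: sharp {p_win(n, sharp_weight):.4f}, partial {p_win(n, partial_weight):.4f}")
```

The point is only that under option 1 the answer jumps as N crosses the cutoff, while under option 2 it varies smoothly with N; nothing here says which weighting, if either, is right.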
Another option is that “subjective probability” is a malformed concept in places like this. It can sometimes be reduced to “how would you bet?”, but in those cases partially weighting people who are sort of you makes sense.
Are you suggesting the third branch of Eliezer’s trilemma?
“And the third horn of the trilemma is to reject the idea of the personal future – that there’s any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there’s any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer “continuing on” as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.”
Or did you have something different in mind?
This seems like a personal identity experiment more than an anthropic experiment (as suggested by both major anthropic principles being irrelevant).
In doing anthropic updating, I would advocate ignoring ‘copies’ and using information sets, i.e. the set of people you might be given your current knowledge. This might have a sharp cutoff somewhere, or there might be people you are less likely to be given your observations, in which case you would weight them lower. If you want to use SSA, you will also need a ‘reference class’, but there is no reason for this to be copies.
I don’t see any problem with considering identity as a continuum instead of binary. For example, I make two exact copies of myself and put them in different environments. Soon after, they will drift apart from me. I see no problem considering them different people now, albeit very similar ones. In fact, I see no problem saying that yesterday’s me was a different person from today’s me (with less than 10^-5 difference).
An interesting consequence is that I care about myself, and that I would care equally about my exact copies. And I should care almost as much about my almost-copies. In fact, you can think of minds as points in an R^n space, with my care a function of the distance from me. Expanding this concept, I would care more about similar people than about distant people. And that is exactly what happens in the real world!
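As a rough sketch of that picture (the coordinates, the distance measure, and the decay constant below are all made-up illustrations, not anything principled):

```python
import math

def care(me, other, length_scale=1.0):
    """Care weight in (0, 1]: 1 for an exact copy, smaller for more distant minds."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(me, other)))
    return math.exp(-distance / length_scale)

# Hypothetical coordinates in person-space.
me = [0.0, 0.0, 0.0]
exact_copy = [0.0, 0.0, 0.0]
yesterday_me = [0.00001, 0.0, 0.0]   # the "less than 10^-5 difference" above
similar_person = [0.5, 0.3, 0.1]
distant_person = [3.0, 4.0, 1.0]

for label, mind in [("exact copy", exact_copy), ("yesterday's me", yesterday_me),
                    ("similar person", similar_person), ("distant person", distant_person)]:
    print(f"{label}: care = {care(me, mind):.5f}")
```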
Now, I’m not sure if this is just a coincidence, or if my words are deceiving me. But if not, it is evidence that the continuum identity model is correct.
Hope that made sense.
I don’t see what the problem with option 2 is. I am not the same person I was 10 years ago, but I’m still like 80% of that person.
Can you define this operation better? What is the mental state of my closest ancestor as produced by this operation? Does it have a memory of the conversation in which you describe the experiment to me?
The whole experiment is predicated on the idea that you can make a series of valid human brainstates, all differing by one molecule’s placement, starting at one person and ending at another. I doubt this is possible; I suspect that valid human brainstates are few and far between in the space of possible configurations of molecules, and any path between two human brains would have to dip out of valid-human-brainspace.
Hopefully it’s OK to fight the hypothetical if the hypothetical really is impossible 😉
Suppose I have a belief before going to sleep. What belief will the closest copy have? What belief will the Nth copy have? (To find the 10^30th, I just ask the appropriate parent.) But beliefs are not continuous with respect to molecular arrangement, and molecular arrangement is discrete anyway; at some point, changing one electron by one excitation state (or whatever quantum you want to put on the adjustment) will change the belief of the output by more than epsilon.
If you buy into the psychological theory of identity, the answer would probably be that there are partial copies. To be transformed gradually from yourself to your parent, the memories will presumably have to be transformed gradually from yours to your parent’s. There will be a near-continuum of entities that have memories between those in your head and those in your parent’s head. E.g., there will be persons who remember having been someone* who is 99% you, 98% you, 97% you, …, 2% you, 1% you, 0% you.
*Of course, most of these someones never really existed, but that’s not so different from your real memories, which are not usually accurate representations of what really happened.
One of the big unintuitive conclusions of using the psychological theory of identity, then, is that other people and even animals have memories that are more than 0% like yours, and so you might reasonably consider them to be partial copies of yourself.
Alternatively, on the psychological theory, you could weight certain memories more than others, perhaps subject to a behavioral test of identity: bodies count as you to the degree they…
* recognize your image as their own.
* would claim your autobiography as their own if they read it.
* recognize images of your family members and describe their relations in the same way you would describe them.
* would select your personal effects as theirs from among a set of similar artifacts (like the test for finding the new Dalai Lama).
* etc.
I do, by the way, buy into a psychological theory of identity. I do have a test, though it’s a strict yes or kin sort of test. My beliefs and methods constitute my own personal culture, and if someone shares my culture, then they are also me. If they simply share aspects of my culture, then they are cultural kin.
I have not met anyone who sufficiently shares my culture, even leaving out the bit where they would necessarily also identify as being me.
Being also me grants full identity-sharing, while similarity is what I go by for kinship, since I try not to care about genetic kin. This was mostly created to address the possibility of machine emulations of my mind, so that all of me get along. Every version of me that qualifies should be sharing with the others, and if a version chooses not to… well, that’s disqualifying, isn’t it?
The question you’re asking is P(I win the lottery). Usually I can just replace this with P(salamiapproximator wins the lottery). However, if I do that in this case, I find that I am no longer referring to a specific event: the name “salamiapproximator” no longer picks out a single being.
So, for example, it is now possible that “salamiapproximator wins the lottery & ~salamiapproximator wins the lottery”, as long as in the first clause you mean one of the copies and in the second clause you mean another of the copies. I say this to illustrate that the sentence “salamiapproximator wins the lottery” can mean more than one thing.
So… I suspect that the question is not really answerable for this reason. To get a probability, you need some exact statement A, something that actually picks out a specific set of possibilities, something such that P(A&~A)=0.
What’s really been done in this situation by making all these copies is to make it so that P(salamiapproximator wins the lottery), usually well-defined, can no longer be calculated because we’re not sure what exact statement we’re trying to calculate the probability of.
This reminds me of the Sorites paradox: If you’re not rich, and someone gives you a penny, you’re still not rich. Therefore, giving you a billion pennies won’t make you rich. Only this time, we’re left wondering at which point personal identity stops being preserved…
I feel as though there’s a very simple solution to this problem:
-Does the ‘copy’ have an identical genome to the original?
-Does the ‘copy’ have an identical or very nearly identical neural structure, to the extent that it (a) shares and recognizes the original’s personal history, as described by olliepneumon above, and is (b) effectively identical to the original in mental capabilities, knowledge, and maturity?
-Is the ‘copy’ a good physical replica, without flaws, errors, or inappropriate physical augmentation?
If so, it’s an authentic copy. If the genome is different to any extent, if neural networks have become reworked/restructured to a significant extent, or if the copy is somehow flawed, crippled, or enhanced, it is no longer an authentic copy.
So changing even one base pair in an intron makes you a totally different person?
I suppose one _may_ desire to make exceptions in cases such as that — and for insignificant modifications made to ‘junk DNA’ that legitimately doesn’t code for anything. (If it can be ascertained that this is indeed the case; the fact that we can’t comprehend it at the moment doesn’t necessarily make it so.) But one must draw the line somewhere, and I’d argue that this is where it ought to be drawn.
If you make this into a decision problem (which I advocate for all anthropic problems: http://www.fhi.ox.ac.uk/anthropics-why-probability-isnt-enough.pdf), then this really becomes a question about how much you care about your various copies, which is a subjective value problem.
I don’t really see the issue, honestly. So, the difference between me and everyone else is finite. Doesn’t mean that I can’t form a cluster in person-space whose central area is clearly me and whose edges are fuzzy.
For all these problems, pick from the centre of the cluster.
‘Rigorously’: draw a Gaussian in person-space whose mean is me-now and whose standard deviation is given by some idea of me-ness (which I’ll specify once you give me a metric on person-space, so that I can measure differences between clearly different people and calibrate from that), and pick with probability given by that Gaussian.
It’s like with height: there’s a Gaussian centred at 6′ with a standard deviation of a centimetre or two, which outputs ‘values a measurement of Ronak’s height may take.’ If you wanted to do probabilistic thought experiments about my height, I’d happily point you to that Gaussian.
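A minimal sketch of that recipe, assuming a metric on person-space already exists; the candidate distances and the standard deviation here are placeholders awaiting exactly the calibration described above:

```python
import math
import random

def gaussian_weight(distance, sigma):
    """Unnormalised Gaussian density at a given distance from me-now."""
    return math.exp(-0.5 * (distance / sigma) ** 2)

def pick_person(candidates, sigma):
    """candidates: list of (name, distance_from_me_now) pairs.
    Returns one name, sampled with probability proportional to its Gaussian weight."""
    names = [name for name, _ in candidates]
    weights = [gaussian_weight(d, sigma) for _, d in candidates]
    return random.choices(names, weights=weights, k=1)[0]

candidates = [
    ("me-now", 0.0),
    ("me-yesterday", 0.2),
    ("near-copy", 0.5),
    ("clearly a different person", 5.0),
]
print(pick_person(candidates, sigma=1.0))
```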
Personal identity, as commonly understood, is not formally well-defined, but works well enough for practical uses. There is no strong reason we should require a perfect formal definition, as there is no strong practical need to have one (as far as I can see).
Vaguely related movie recommendation: http://www.imdb.com/title/tt2866360/
Also the last season of Continuum. (I am shocked at how incapable some science fiction characters are of cooperating with near-identical copies of themselves, but it seems sadly realistic that they would end up in conflict. Katja’s and Omegail’s ideals on the matter seem psychologically implausible given unfavorable situational conditions.)
How exactly are the mind and memories of the copies supposed to work? Obviously the ones that had been changed relatively little would retain them from the original, but what is going on in the head of someone that is half the original, half his father? Are the memories mixed up together? Is some composite life formed? This seems like pretty important information for the identity of any mixed copies.
David Lewis has created the mathematical formalism to deal with that. Also, Timothy Williamson argues for the sharp cutoff point. So IF the solution is 1, read Lewis; if 2, read Williamson (be prepared: his writing is really, really boring, almost the opposite of Lewis’s prose).
For anthropic stuff, I feel like 2 is probably right.
For moral stuff (i.e., how I should assess the desirability of changes to my mind), I find it’s easier to taboo words like “same” and “identical” and instead ask “is this change desirable or not?” In other words, our personal identity isn’t an information-theoretic description of our mind. Rather, it is the portion of our utility function that assigns value to changes in our mind.
I find support for this theory in the fact that there are some configurations of my brain that are more similar to me from an information-theoretic perspective than others, but less desirable. For instance, a me who has been rewired to hate everything I like and love everything I hate might be more similar to me from an information-theoretic perspective than a me with severe amnesia. But I feel like the amnesiac “me” is closer to being the “same person” than the me who hates everything I like.
I think the answer is “obviously” #2 in theory — but maybe close to #1 in practice, because any differences between you and a near-copy that isn’t *really really really close* will tend to increase exponentially over time as your near-copy interacts with the world around him or her. So outside actual weird experiments like the one described here, the issue probably doesn’t arise, because nothing stays a near-copy for long.
But maybe it really does. Suppose there’s some process that generates an awful lot of near-copies — e.g., ordinary quantum mechanics, if you care about your counterparts in other Everett branches. If the rate of near-copy production is large enough compared with the rate of divergence, then it might (e.g.) happen that you ought to care more about your near-copies in aggregate than about your identical self, or something. (Depending on the details of how your degree of caring varies with nearness of copy.)
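A toy version of that comparison, with everything (branching rate, divergence rate, and the caring-vs-distance function) invented purely to show that the aggregate can come out either way:

```python
import math

def aggregate_care(branching_rate, divergence_rate, care_scale=1.0, horizon=50):
    """Sum of care weights over generations of near-copies.
    Generation t has branching_rate**t copies, each having diverged a distance of
    divergence_rate * t, and care decays exponentially with distance."""
    return sum(
        (branching_rate ** t) * math.exp(-divergence_rate * t / care_scale)
        for t in range(1, horizon + 1)
    )

care_for_exact_self = 1.0  # by convention

# Fast branching, slow divergence: the near-copies dominate in aggregate.
print("many slowly-diverging copies:", aggregate_care(branching_rate=2.0, divergence_rate=0.5))
# Slow branching, fast divergence: the exact self dominates.
print("few quickly-diverging copies:", aggregate_care(branching_rate=1.1, divergence_rate=3.0))
print("care for the identical self: ", care_for_exact_self)
```

Whether the aggregate beats the single exact self then depends entirely on how fast caring falls off relative to how fast near-copies are produced and diverge, which is just the “depending on the details” caveat above made explicit.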