## Asking Good Questions

Aumann’s Agreement Theorem says two perfect rationalists can’t “agree to disagree”. Therefore, when two people disagree, a good question is one that makes either the asker or askee change their minds. Some examples of bad questions are:

– Why don’t you agree that abortion kills innocent babies?
– Why don’t you support welfare programs that help children with cancer?
– Isn’t it obvious that Politician Bob is corrupt?
– Do you deny supporting the poaching of baby seals?

These aren’t really questions. They’re more like attacks, with a question mark tacked on at the end. Instead, a good question tries to roll back a chain of inferences. A person might support E because they support A, and they also believe in the chain of arguments A (therefore) B (therefore) C (therefore) D (therefore) E. A good question tries to bring the debate about E back to a debate about D, and ultimately all the way back to A.
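For reference, the agreement theorem invoked above can be stated formally (a textbook-style statement, added here for context, not part of Aumann’s original wording):

```latex
% Aumann (1976), "Agreeing to Disagree".
% Two agents share a common prior $P$ and have private information
% partitions $\mathcal{I}_1, \mathcal{I}_2$. For an event $E$, write
% $q_i = P(E \mid \mathcal{I}_i)$ for agent $i$'s posterior.
\textbf{Theorem.} If, at some state of the world, the posteriors
$q_1$ and $q_2$ are common knowledge between the two agents, then
$q_1 = q_2$.
```

The practical upshot is the one above: persistent disagreement between honest reasoners signals that some private evidence or inference step hasn’t yet been shared, which is exactly what good questions try to surface.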

Some thoughts on how to ask better questions:

– Try to be quantitative. For example, someone might say “I think idea X is too expensive.” So you might reply, “About how much do you think X will cost?”. Sometimes this is a bit more difficult, like if your co-worker said “I think Bob would be a terrible person to hire”. But you can still be semi-quantitative; eg., “On a scale from 1 to 10, where 1 is the worst candidate and 10 is the best, where do you think Bob is?”.

– Ask for examples. Your friend might say, “Policy X has always been a disaster.” So a good next question might be, “Can you talk about some times when people tried X, and it turned out really badly?”. A lot of words are vague enough that two people will hear them, and imagine totally different things in their heads, often without realizing it. So examples can help clear up semantic misunderstandings.

– Talk about probabilities. Eg., a lot of people will say that X is a serious threat, for various different values of X. But sometimes X is very unlikely to happen; the most famous examples are media sensations like terrorist bombings, shark attacks, and stranger abductions. So a good question might be, “If I did X, about how likely do you think it is that <bad thing Y> would happen?”.

– Investigate where ideas come from. Even when people are wrong, it’s rarely because they made something up out of their heads. Much more often, they’ll hear something from Alice, who heard it from Bob, who heard it from Carol, and so on, and the original (correct) idea got lost in a long game of “telephone”. (The Science News Cycle shows this process in action.) So if you can find the original source for an idea, you might both wind up agreeing with it.

– Ask what a supporter thinks about an idea’s downsides. Sometimes, they might disagree that a downside exists; sometimes, they might think a downside exists, but that it’s very small; and sometimes, they might think the downsides are large, but the benefits are so big that it’s worth it. So if eg. someone supports using a new chemical in agriculture, you might ask “How dangerous do you think chemical X is?”. (Don’t let this become rhetorical, though. A question like “But won’t idea X kill millions of puppies?” is back in attack territory.)

– Find comparisons to other examples. A person who really liked X might say things like “X is the best Y ever”. So you might ask, “what are some other Ys, and what makes X better than them?”. Luke Muehlhauser’s post on textbooks used this technique very successfully – people who liked a book also had to name two other books they thought were worse. Otherwise, people might recommend something just because it was the only book they’d read in the field.

## Why you should focus more on talent gaps, not funding gaps

This website focuses on original content, or content that would otherwise be non-Googleable. However, I am making an exception for Ben Todd’s excellent essay, “Why you should focus more on talent gaps, not funding gaps”, both because of how critically important it is and how thorough and well-written it is. The main focus of Ben’s essay is that solving problems is most often limited by human capital (which Ben calls “talent”, although it’s much broader than the word “talent” might normally imply) and social capital (combining talented people into a team that works well), not by funding. I agree with almost everything he says, and I’ve tried to write up bits and pieces of the arguments Ben makes before, but he really does a much better job. The essay is targeted at effective altruists, but I think it’s a must-read even for people who wouldn’t consider themselves EAs.

## A Debate on Animal Consciousness

This debate is from the comments to a public Facebook post by Eliezer Yudkowsky. Some comments were omitted for brevity. Links were inserted for clarity.

Eliezer Yudkowsky: [This is cross-]posted [with] edits from a comment on [the] Effective Altruism [Facebook group], asking who or what I cared about.

I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience, and I prefer such beings not to have pain-experiences. How highly I value happiness is more complicated.

However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does not have a more-simplified form of tangible experiences. My model says that certain types of reflectivity are critical to there being something it is like to be a thing. The model of a pig as having pain that is like yours, but simpler, is wrong. The pig does have cognitive algorithms similar to the ones that impinge upon your own self-awareness as emotions, but without the reflective self-awareness that creates someone to listen to it.

It takes additional effort of imagination to imagine that what you think of as the qualia of an emotion is actually the impact of the cognitive algorithm upon the complicated person listening to it, and not just the emotion itself. Like it takes additional thought to realize that a desirable mate is desirable-to-you and not inherently-desirable; and without this realization people draw swamp monsters carrying off women in torn dresses.

To spell it out in more detail, though still using naive and wrong language for lack of anything better: my model says that a pig that grunts in satisfaction is not experiencing simplified qualia of pleasure, it’s lacking most of the reflectivity overhead that makes there be someone to experience that pleasure. Intuitively, you don’t expect a simple neural network making an error to feel pain as its weights are adjusted, because you don’t imagine there’s someone inside the network to feel the update as pain. My model says that cognitive reflectivity, a big frontal cortex and so on, is probably critical to create the inner listener that you implicitly imagine being there to ‘watch’ the pig’s pleasure or pain, but which you implicitly imagine not being there to ‘watch’ the neural network having its weights adjusted.

What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person.
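Eliezer’s neural-network analogy can be made concrete with a toy sketch (my illustration, not part of the original thread; the function and data are invented for the example): a single linear neuron has its weights adjusted in response to error, and nothing anywhere in the loop models the network itself.

```python
# A single linear neuron trained by stochastic gradient descent.
# Its weights get "adjusted in response to error," but the update
# rule contains no model of the network itself -- no inner listener.

def train(examples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = w * x + b
            err = pred - target      # the error signal
            w -= lr * err * x        # weight adjusted; nothing "feels" this
            b -= lr * err
    return w, b

# Learn y = 2x + 1 from three noiseless points.
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

The loop corrects its errors exactly as described, yet intuitively no one expects there to be “someone inside” for whom each update registers as pain.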

If one’s mind imagines pigs as having simpler qualia that still come with a field of awareness, what I suspect is that their mind is playing a shell game wherein they imagine the pig having simple emotions and that feels to them like a quale, but actually the imagined inner listener is being created by their own minds doing the listening. Since they have no complicated model of the inner-listener part, since it feels to them like a solid field of awareness that’s just there for mysterious reasons, they don’t postulate complex inner-listening mechanisms that the pig could potentially lack. You’re asking the question “Does it feel like anything to me when I imagine being a pig?” but the power of your imagination is too great; what we really need to ask is “Can (our model of) the pig supply its own inner listener, so that we don’t need to imagine the pig being inhabited by a listener, we’ll see the listener right there explicitly in the model?”

Contrast to a model in which qualia are just there, just hanging around, and you model other minds as being built out of qualia, in which case the simplest hypothesis explaining a pig is that it has simpler qualia but there’s still qualia there. This is the model that I suspect would go away in the limit of better understanding of subjectivity.

So I suspect that vegetarians might be vegetarians because their models of subjective experience have solid things where my models have more moving parts, and indeed, where a wide variety of models with more moving parts would suggest a different answer. To the extent I think my models are truer, which I do or I wouldn’t have them, I think philosophically sophisticated ethical vegetarians are making a moral error; I don’t think there’s actually a coherent entity that would correspond to their model of a pig. Of course I’m not finished with my reductionism and it’s possible, nay, probable that there’s no real thing that corresponds to my model of a human, but I have to go on guessing with my best current model. And my best current model is that until a species is under selection pressure to develop sophisticated social models of conspecifics, it doesn’t develop the empathic brain-modeling architecture that I visualize as being required to actually implement an inner listener. I wouldn’t be surprised to be told that chimpanzees were conscious, but monkeys would be more surprising.

If there were no health reason to eat cows I would not eat them, and in the limit of unlimited funding I would try to cryopreserve chimpanzees once I’d gotten to the humans. In my actual situation, given that diet is a huge difficulty to me with already-conflicting optimization constraints, given that I don’t believe in the alleged dietary science claiming that I suffer zero disadvantage from eliminating meat, and given that society lets me get away with it, I am doing the utilitarian thing to maximize the welfare of much larger future galaxies, and spending all my worry on other things. If I could actually do things all my own way and indulge my aesthetic preferences to the fullest, I wouldn’t eat any other life form, plant or animal, and I wouldn’t enslave all those mitochondria.

Tyrrell McAllister: I agree with everything you say about the inadequacy of the “pigs have qualia, just simpler” model. But I still don’t eat pigs, and it is for “philosophically sophisticated ethical” reasons (if I do say so myself). When I watch pigs interact with their environment, they seem to me, as best as I can tell, to be doing enough reflective cognition to have an “inner listener”.

Eliezer: Tyrrell, what looks to you like a pig’s brain modeling itself?

Tyrrell: “Modeling itself” is probably too weak a criterion. But pigs do seem to me to do problem-solving involving themselves and their environments that is best explained by their working with a mental model of themselves and their environment. (See, e.g., Pigs Prove to Be Smart, if Not Vain.)

I acknowledge that this kind of modeling isn’t enough to imply that there is an inner listener, so there I am being more speculative.

Also, I should have written, “they seem to me, as best as I can tell, with sufficient probability given the utilities involved, to be doing enough reflective cognition to have an ‘inner listener’.”

(I do eat fish, because the benefits of eating fish, and the probability of their being conscious, seem to make eating them the right thing to do in that case.)

Eliezer: Tyrrell: This looks to me like environment-modeling performing a visual transform, and while that implies some degree of cognitive control of the visual cortex it doesn’t imply brains modeling brains.

If my model is correct then the mirror test is actually an ethically reasonable place to put a “do not eat” barrier; passing the mirror test may not be sufficient, but it seems necessary-ish (leaving aside obvious caveats about using the right modality).

Jamie F Duerden: Pigs seem reasonably ‘smart’, insofar as recognising names and processes, solving puzzles and so on. I don’t know whether they recognise themselves in a mirror and are aware of their own awareness of that fact, but I would not be especially surprised to discover it was so. Yet I still eat bacon, because it is filling, very tasty, and a great source of protein. I would not, however, eat a pig which I had been keeping as a pet.

This distinction seems to be consistently applied by whichever part of my brain makes intuitive ‘moral’ judgements, because I experience no psychological backlash when contemplating eating people I don’t know, but am disturbed by the idea of eating someone I was friends with. Comparing those sensations to the equivalent responses for ‘farmed/wild animal’ and ‘pet’ yields negligible difference. I have hunted animals for meat, so this is not a failure to visualise unknown animals correctly. I am forced to conclude that ‘does it have qualia?’ is not an important variable in my default ‘is it food?’ function. (Nor apparently in my ‘would it make a good pet?’ function.) As I routinely catch myself empathising with hypothetical AIs, I suspect this may be a more general complete failure to have separate categories for the various sorts of ‘mind’.

Luke Muehlhauser: I think my probability distribution over theories of consciousness (in animals, humans, and machines) looks something like this:

1. ~30%: Basically Eliezer’s model, described above.
2. ~30%: Apes and maybe some others have an inner listener, but it results in less salient subjective experience than the human inner listener due to its less integrated-y self-modely nature, and this subjective salience drops to 0 pretty sharply below a certain kind/degree of self-modely-ness (ape level? pig level?), rather than trailing off gradually down to fish or whatever.
3. <5%: panpsychism, consciousness is fundamental, consciousness is magical, and similar theories.
4. ~35%: other theories not close to the above, most of which I haven’t thought of.

I try to limit my meat intake due to how much mass I have on theory categories #2 & 4, but I’m not strictly vegetarian or vegan because I’m choosing to devote most of my “be ethical” willpower/skill points elsewhere. But this does mean that I become more vegetarian/vegan as doing so becomes less skill+willpower-requiring, so e.g. I want that all-vegan supermarket in Germany to open a branch in Berkeley please.

David Pearce: Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.

Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world’s worst forms of severe and readily avoidable suffering.

Jai Dhyani: [Eliezer], how confident are you about this? I would like to bet on the proposition: “In 2050 you will not be willing to eat a cow-as-opposed-to-vat grown steak in exchange for $50 inflation-adjusted USD.” I think there is at least a 10% chance of this occurring.

Eliezer: Jai Dhyani, if I was accustomed to vat-grown steak I probably would value that ethical combo continuing more than I valued $50.

Mason Hartman: I think you’ve laid the groundwork for a useful model of consciousness, but it’s not clear why you’ve arrived at the position that pigs probably don’t have it. It seems unlikely to me that only a very small handful of species have been or are “under selection pressure to develop sophisticated social models of conspecifics.” Basically, a little red flag quietly squeaks “Check whether this person actually knows much about animal social behavior!” whenever someone says something that seems to imply that humans are the pinnacle of social-animal-ness.

Also, I think consciousness probably “unlocks” a lot of value aside from better dinner conversation – predation, for example, seems like the sort of thing one might be better able to prevent with the ability to model minds, including one’s own mind.

Brent Dill: Incidentally, baboons have quite a rich set of social scheming instincts. All of this guy’s stuff is amazing.

Eliezer: I wouldn’t eat a BABOON. Eek.

Brent: Well, yeah. Eating primates is an excellent way to get really really sick.

Eliezer: They recognize themselves in mirrors! No eat mirror test passers! Anything which has the cognitive support for that could have an inner listener for all I currently know about inner listeners.

Mason: The mirror test has some other problems with it – the big one being that a lot of non-Western kids don’t pass it. I assume Kenyan 6-year-olds will remain off the menu.

William Eden: I am personally more inclined towards the panpsychism view, for various reasons, not least of which is Eliezer’s post, ironically enough.

I care about subjective experience apart from concepts of personhood. Imagine a superintelligence discovered humans, decided that we lacked some critical component of their cognition, and because of this they felt justified in taking us apart to be used as raw atoms, or experimented on in ways we found distressing.

Eliezer: Or even worse, they might not think that paperclips had ethical value? I think if you’re going to go around caring about conscious beings instead of trade partners, you need to accept that your utility function looks weird to a paperclip maximizer. Saying that you care about subjective experience already relegates you to a pretty weird corner of mindspace relative to all the agents that don’t have subjective experience and had different causal histories leading them to assign terminal value to various agent-like objects.

Brent: I think that all ‘pain’, in the sense of ‘inputs that cause an algorithm to change modes specifically to reduce the likelihood of receiving that input again’, is bad.

I think that ‘suffering’, in the sense of ‘loops that a self-referential algorithm gets into when confronted with pain that it cannot reduce the future likelihood of experiencing’, is far worse.

Social mammals experience much more suffering-per-unit-pain because they have so many layers of modeling built on top of the raw input – they experience the raw input, the model of themselves experiencing the input, the model of their abstracted social entity experiencing the input, the model of their future-self experiencing the input, the models constructed from all their prior linked memories experiencing the input… self-awareness adds extra layers of recursion even on top of this.
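Brent’s two operational definitions can be rendered as a toy sketch (my illustration, not part of his comment; the class names and the rumination counter are invented for the example):

```python
class Agent:
    """'Pain': an input that causes the algorithm to change modes
    specifically to reduce the likelihood of receiving it again."""

    def __init__(self):
        self.avoided = set()  # stimuli the agent now steers away from

    def receive(self, stimulus, aversive):
        if aversive:
            self.avoided.add(stimulus)  # mode change: avoid this input


class ReflectiveAgent(Agent):
    """'Suffering': the loop a self-referential algorithm gets into
    when confronted with pain whose future likelihood it cannot reduce."""

    def __init__(self):
        super().__init__()
        self.rumination = 0  # proxy for the self-model re-running the pain

    def receive(self, stimulus, aversive, avoidable=True):
        super().receive(stimulus, aversive)
        if aversive and not avoidable:
            self.rumination += 1  # the layers of modeling pile on


pig = ReflectiveAgent()
pig.receive("shock", aversive=True, avoidable=False)
```

On this sketch, even a thermostat-like Agent registers ‘pain’ in Brent’s minimal sense, while only the self-modeling subclass accumulates ‘suffering’ – which is exactly the distinction Eliezer presses on with the flywheel-governor example.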

One thought that I should really explore further: I think that a strong indicator of ‘suffering’ as opposed to mere ‘pain’ is whether the entity in question attempts to comfort other entities that experience similar sensations. So if we see an animal that exhibits obvious comforting / grooming behavior in response to another animal’s distress, we should definitely pause before slaughtering it for food. The capacity to do so across species boundaries should give us further pause, as should the famed ‘mirror test’. (Note that ‘will comfort other beings showing distress’ is also a good signal for ‘might plausibly cooperate on moral concerns’, so double-win).

At that point, we have a kind of consciousness scorecard, rather than a pure binary ‘conscious / not conscious’ test.

Eliezer: I think you just said that a flywheel governor on a steam engine, or a bimetallic thermostat valve, can feel pain.

Brent: I did, and I intended to. “Pain”, in and of itself, is worthless. “Suffering” has moral weight. “Pain x consciousness = suffering”, and a flywheel governor just isn’t complex enough to be very conscious.

David Brin: See this volume about how altruism is actually very complicated. (I wrote two of the chapters.)

Mason: I have some other thoughts, assuming my understanding of Eliezer’s theory is fairly accurate. These are the claims I’m assuming he’s making:

(a) Meaningful consciousness (i.e. that which would allow experiences Eliezer would care about) in the animal kingdom is exceedingly rare.

(b) Animals only evolve meaningful consciousness when under selection pressure to develop sophisticated social models.

(c) The mirror test is a decent way to determine whether a species possesses, or has the capacity for, meaningful consciousness.

Assuming these claims were true, I’d make some predictions about the world that don’t seem to have been borne out, including:

(1) Given the rarity of meaningful consciousness, animals who pass the mirror test will likely be closely related to one another – but humans, elephants, dolphins, and magpies have all passed the mirror test at one point.

(2) Animals who pass the mirror test should be exceptionally social or descended from animals who are/were exceptionally social. I admittedly don’t know much about magpies, but a quick Google doesn’t seem to imply that they’re exceptionally more social than other birds.

Eliezer: I hadn’t heard of magpies before. African grey parrots are surprisingly smart and apparently play fairly complex games of adultery. Dolphins are the only known creatures besides humans that form coalitions of coalitions. Not sure what elephants do socially but they have large brains and the balance of selection pressures is plausibly affected thereby (i.e., it is more advantageous to have good brain software if you have a larger brain, and if you have a bigger body and brain improvements buy percentages of body-use then you can afford a bigger brain, etc.)

Mason: I think all of the test-passing species are pretty social (some of them possibly more so than us, depending on how you measure social-ness), but they don’t seem exceptionally so. Many, many animals play very complex social games – in my opinion (and much to my surprise) even chickens have fascinating social lives.

The question isn’t “Do we have evidence these animals are highly social?” but “Do we have evidence these animals tend to be social to an extent/in a way which other animals aren’t?”

Rob Bensinger: I don’t believe in phenomenal consciousness. I think if you try to put quotation marks around a patch of your visual field (e.g., by gesturing at ‘that patch of red in the lower-left quadrant of my visual field’), some of the core things included in your implicit intension will make the gesture non-referring. Asking about ‘but what am I really subjectively experiencing?’ is like going: ‘but what is my deja-vu experience really a repetition of?’ The error is trickier, though, because the erroneous metarepresentation is systematic and perception-like (like a hallucination or optical illusion, but one meta level up) rather than judgment-like (like a delusion or hunch).

Like Luke said, believing I’m a zombie in practice just means I value something functionally very similar to consciousness, ‘z-consciousness’. But ‘z-consciousness’ is first and foremost a (promissory note for a) straw-behaviorist, third-person theoretical concept. Thinking in those terms — starting with one box, some of whose (physical, neuronal, behavioral…) components I sometimes misdescribe as ‘mind’, rather than starting with separate conceptual boxes for ‘mind’ and ‘matter’ and trying to glue them together as tightly as possible — has been a really weird (z-)experience. It’s had some interesting effects on my intuitions over the past few years.

1. I’m much more globally skeptical that I can trust my introspection and metacognition.
2. Since (z-)consciousness isn’t a particularly unique kind of information-processing, I expect there to be an enormous number of ‘alien’ analogs of consciousness, things that are comparable to ‘first-person experience’ but don’t technically qualify as ‘conscious’.
3. I’m more inclined to think there are fuzzy ‘half-conscious’ and ‘quarter-conscious’ states in between z-consciousness and z-unconsciousness.

I entertained limited versions of those ideas in my non-eliminative youth, but they’re a lot more salient and personal to me now. And as a consequence of 1-3, I’m much more skeptical that (z-)consciousness is a normatively unique kind of information-processing. Since I think a completed neuroscience will overturn our model of mind fairly radically, and since humans have strong intuitions in favor of egalitarianism and symmetry, it wouldn’t surprise me if certain ‘unconscious’ states acquired the same moral status as ‘conscious’ ones.

The practical problem of deciding which alien minds morally ‘count’ will become acute as we explore transhuman/posthuman mindspace, but the principled problem is already acute; if we expect our ideal, more informed selves to dispense with locating all value in consciousness (or, equivalently, if we expect to locate all value in a bizarrely expansive conception of ‘consciousness’), we should do our best to already reflect that expectation in our ethics.

So, I’m with Eliezer in thinking pig pain isn’t just ‘human pain but simpler’, or ‘human pain but fainter’. But that doesn’t much reassure me: to the extent human-style consciousness is an extremely rare and conjunctive adaptation dependent on complex social modeling, I become that much less confident that that’s the only kind of information-processing I should be concerned for, on a Coherent-Extrapolated-Volition-style reading of ‘should’. My four big worries are:

(a) Pigs might still be ‘slightly’ conscious, if consciousness (ideally conceived) isn’t anywhere near as rare and black-and-white as our non-denoting folk concept of consciousness.

(b) If consciousness is rare and exclusive, that increases the likelihood that our CEV would terminally value some consciousness-like unconscious states. Perhaps the pig lacks a first-person perspective on reality, but has a shmerspective that is strange and beautiful and rich, and we ought ideally to abandon qualia-chauvinism and assign some value to shmuffering.

(c) Supposing our concept of ‘consciousness’ really does turn out to be incoherent unless you build in a ton of highly specific unconscious machinery ‘behind the scenes’, that further increases the likelihood that our CEV would come to care about something more general than consciousness, something that captures the ‘pain’ aspect of our experience in a way that can also apply to systems that are ‘free-floating pain’, pain sans subject. (Or sans a subject as specific and highly developed as a human subject.)

(d) Once we grant pigs might be moral patients, we also need to recognize that they may be utility monsters. E.g., if they’re conscious or quasi-conscious, they might be capable of much more acute (quasi-)suffering than ordinary humans are. (Perhaps they don’t process time in the way we do, so every second of pain is ‘stretched out’.) This may be unlikely, but it should get a big share of our attention because it would be especially bad.

I bounce back and forth between taking the revisionist posthuman consciousness-as-we-think-of-it-isn’t-such-a-big-deal perspective very seriously, and going ‘that’s CRAZY, it doesn’t add up to moral normality, there’s no God to make me endorse such a ridiculous “extrapolation” of my ideals, I’ll modus tollens any such nonsense till I’m blue in the face!’ But I’m not sure the idea of ‘adding up to moral normality’ makes philosophical sense. It may just be a soothing mantra.

We may just need to understand the principles behind consciousness and what-makes-us-value-consciousness on an especially deep level in order to avoid creating new kinds of moral patient; I don’t know whether avoiding neuromorphism, on its own, would completely eliminate this problem for AI.

David: Rob, you might want to explore (d) further. Children with autism have profound deficits of self-modelling as well as social cognition compared to neurotypical folk. So are profoundly autistic humans less intensely conscious than hyper-social people? In extreme cases, do the severely autistic lack consciousness altogether, as Eliezer’s conjecture would suggest? Perhaps compare the accumulating evidence for Henry Markram’s “Intense World” theory of autism.

Eliezer, I wish I could persuade you to quit eating meat – and urge everyone you influence to do likewise.

Kaj Sotala: Eliezer, besides the mirror test, what concrete functional criteria do you have in mind that would require the kind of processing that you think enables subjective experience? In other words, of what things can you say “behavior of kind X clearly requires the kind of cognitive reflectivity that I’m talking about, and it seems safe to assume that pigs don’t have such cognitive reflectivity because they do not exhibit any behaviors falling into that class”?

Also, it seems to me that this theory would imply that we aren’t actually conscious while dreaming, since we seem to lack self-modeling capability in (non-lucid) dreams. Is that correct?

I would also agree that “pigs have qualia, only simpler” seems wrong, but “pigs have qualia, only of fewer types” would seem more defensible. For example, they might lack the qualia for various kinds of sophisticated emotional pain, but their qualia for physical pain could be basically the same as that of humans. This would seem plausible in light of the fact that qualia probably have some functional role rather than being purely epiphenomenal, and avoiding tissue damage is a task with a long evolutionary history.

It would feel somewhat surprising if almost every human came equipped with the neural machinery that reliably correlated tissue damage with the qualia of physical pain, and the qualia for emotional pain looked like outgrowths of the mechanism that originally developed to communicate physical pain, but most of our evolutionary ancestors still wouldn’t have had the qualia for physical pain despite having the same functional roles for the mechanisms that communicate information about tissue damage.

Michael Vassar: My best guess is that there is moral subjectivity and possibly also moral objectivity, and that moral subjectivity works pretty similarly to Eliezer’s description here. However, moral subjectivity is a property of mirror test passers some of the time, but not very much of the time, and how much depends on the individual and varies over quite a large range. It probably also varies in intensity quite a lot, probably in a manner that isn’t simply correlated with its frequency of occurrence but probably is simply correlated with integrated information. It’s probably present in various types of collective. A FAI would probably have this. Neither a mob nor the members of the mob have it.

There’s also probably moral objectivity. Dreamers, pigs, flow-states, and torture victims have this, but most AGIs probably don’t. Most people, most of the time, do have it, but, e.g. certain meditators may not. It’s harder to characterize its properties. My best guess is that it’s a set of heuristics for pruning computational search trees. “measure” might refer to the same thing.

Luke: On the topic of Rob’s skepticism about introspection, see my post. I should also note that when I mentioned “integrated-y” above, I wasn’t endorsing IIT at all. I have the same reaction to IIT as Aaronson, and encouraged Aaronson to write his post on that subject.

Rob: If you’re skeptical about whether you’re conscious at times you aren’t thinking about consciousness (even though you can in many cases think back and remember those experiences later, and consider what their subjective character was like at the time — as you remember it — and learn new things, things which seem consistent with your subjective experiences at other times), it’s possible you should also be skeptical about whether you’re conscious at times you are thinking about consciousness. Especially when you’re merely remembering such times.

If you can misremember your mental state of five minutes ago as a conscious one, what specifically forbids your misremembering your mental state of five minutes ago as a conscious-of-consciousness one?

My own worries go in a different direction. I’m fine with the idea that I might be confabulating some of my experiences. I’m more concerned that large numbers of other subjects in my brain may be undergoing experiences as a mechanism for my undergoing an experience, or as a side-effect. What goes into the sausages?

Some people fear the possibility that anesthetics prevent memory-of-suffering, instead of preventing suffering. I’m a lot more worried that ordinary human cognition (e.g., ordinary long-term memory formation, or ordinary sleep) could involve large amounts of unrecorded ‘micro-suffering’. Subsystems of my own brain are mysterious to me, and I treat those subsystems with the same kind of moral uncertainty as I treat mysterious non-human brains.

Brent: Note that dogs have evolutionary pressure to express depression, because they have spent the last 40,000 years co-evolving with humans, being selected explicitly for their capacity to emulate human emotional communication and bonding.

Mason: Brent – I’m not convinced that dogs have ever been bred to emulate negative human emotions. Until recently, many breeds were used primarily for work. Many – e.g. herding breeds – don’t actually make very good companions unless made to work or perform simulated work (e.g. obedience/agility/herding trials) to manage their physical/intellectual needs. A dog that would cease to work as effectively during periods of emotional stress (e.g. by displaying symptoms of depression) would probably not be selected for. And yet these breeds are often the most expressive across the board (and the most capable of reacting to emotional expression in humans), as evidenced by their extensive use as therapy/emotional support animals.

It seems very likely to me that we either created actual emotional complexity through selective breeding, or that we just took animals that already had very complex emotional lives and bred for a communication style that was intuitive to us. If we had only been breeding for behavior that simulated emotional expression, we would probably have avoided behaviors that aren’t conducive to the work dogs have done throughout their history with humans. Keeping dogs primarily as pets is a very new thing.

Eric Schmidt: Responding to the initial post:

IMO [Eliezer’s] making a huge, unfounded leap in assuming that all qualia only arise in the presence of an intelligent “inner listener” for some mysterious reason. For all we know, you could engender qualia in a Petri dish. For all we know, there are fonts of qualia all around us. You are restricting your conception of qualia to the kind you are most familiar with and which you hold in highest regard: the feeling of human consciousness and intelligence, the feeling of your integrated sense of self. But if qualia can originate in a Petri dish, they could certainly originate in a pig. I used to think like you do when I was younger, but IMO it’s just an unfounded bias towards the familiar, towards ourselves. AFAIK, if you poke a pig w[ith] a pointy stick, some pain feelings will be engendered in the universe. No intelligent consciousness will be there to stare at them or reflect on them, and it won’t lead to the various sorts of other qualia that it can for humans (e.g. the qualia of noticing the pain, dwelling on it, remembering it, thinking about it, self-pity, whatever), but AFAIK it’s still there in our universe, tarnishing it slightly (assuming pain is in some sense negatively valued and ought to be minimized).

On vegetarianism: Well, demand for meat keeps livestock species’ populations artificially way high. So as long as those livestock are living net-positive-qualia lives, then great. The more the better. (Aside: maybe [the] Fermi paradox is [because] earth is a farm for Gorlax, who eats advanced civilizations in a single bite as an exquisite delicacy. I’m totally okay with that: the TV here is that good.) So I think eating slaughtered animals is fine, so long as the animals aren’t miserable. I’d like to see some data on that. In general, I’d like to see a consumer culture that pressured the meat industry to treat the animals decently and somehow assure us that they’re living net-positive-qualia lives.

EDIT: By “as far as I know”/”as far as we know” I mean that it hasn’t been disproven and there’s no compelling reason to believe it’s false.

Eliezer: Nobody knows what science doesn’t know. The correct form of the statement you just made is “For all I [Eric Schmidt] know, you could engender qualia in a Petri dish.”

My remaining confusion about consciousness does not permit that as an open possibility that could fit into the remaining confusion. I am not thinking about an intelligent being required to stare at the quale, and then experience other qualia about being aware of things. I am saying that it is a confused state of mind, which I am now confident I have dissolved, to think that you could have a “simple quale” there in the first place. Those amazingly mysterious and confusing things you call qualia do not work the way your mind intuitively thinks they do, and you can’t have simple ones in a petri dish. Is it so impossible to think that this is something someone else might know for sure, if they had dissolved some of their confusion about qualia?

Confusion isn’t like a solid estimating procedure that gives you broad credibility intervals and says that they can narrow no further without unobtainable info, like the reason I’m skeptical that Kurzweil can legitimately have narrow credibility intervals about when smarter-than-human AI shows up. Confusion means you’re doing something wrong, that somebody else could just as easily do right, and exhale a gentle puff of understanding that blows away your bewilderment like ashes in the wind. I’m confused about anthropics, which means that, in principle, I could read the correct explanation tomorrow and it could be three paragraphs long. You are confused about qualia; it’s fine if you don’t trust me to be less confused, but don’t tell other people what they’re not allowed to know about it.

To be clear, pigs having qualia does fit into remaining confusion; it requires some mixture of inner listeners being simpler than I thought and pigs having more reflectivity than I think. Improbable but not forbidden. Qualia in a petri dish, no.

In ethical terms, where society does not derogate X as socially unacceptable, and where naive utilitarianism says X is not the most important thing to worry about / that it is not worth spending on ~X, I apply a “plan on the mainline” heuristic to my deontology; it’s okay to do something deontologically correct on the mainline that is not socially forbidden and which is the best path according to naive utilitarianism. Chimps being conscious feels to me like it’s on the mainline; pigs being conscious feels to me like it’s off the mainline.

David: Eliezer, we’d both agree that acknowledged experts in a field can’t always be trusted. Yet each of us should take especial care in discounting expert views in cases where one has a clear conflict of interest. [I’m sure every meat eating reader hopes you’re right. For other reasons, I hope you’re right too.]

Eliezer: What do they think they know and how do they think they know it? If they’re saying “Here is how we think an inner listener functions, here is how we identified the associated brain functions, and here is how we found it in animals and that showed that it carries out the same functions” I would be quite impressed. What I expect to see is, “We found this area lights up when humans are sad. Look, pigs have it too.” Emotions are just plain simpler than inner listeners. I’d expect to see analogous brain areas in birds.

David: Eliezer, I and several other commentators raised what we see as substantive problems with your conjecture. I didn’t intend to rehash them – though I’ll certainly be very interested in your response. Rather, I was just urging you to step back and reassign your credences that you’re correct and the specialists in question are mistaken.

Eliezer: I consider myself a specialist on reflectivity and on the dissolution of certain types of confusion. I have no compunction about disagreeing with other alleged specialists on authority; any reasonable disagreement on the details will be evaluated as an object-level argument. From my perspective, I’m not seeing any, “No, this is a non-mysterious theory of qualia that says pigs are sentient…” and a lot of “How do you know it doesn’t…?” to which the only answer I can give is, “I may not be certain, but I’m not going to update my remaining ignorance on your claim to be even more ignorant, because you haven’t yet named a new possibility I haven’t considered, nor pointed out what I consider to be a new problem with the best interim theory, so you’re not giving me a new reason to further spread probability density.”

Mark P Xu Neyer: Do you think people without developed prefrontal cortices – such as children – have an inner listener?

Eliezer: I don’t know. It would not surprise me very much to learn that average children develop inner listeners at age six, nor that they develop them at age two, and I’m not an expert on developmental psychology, nor a parent, so I have a lot of uncertainty about how average children work and how much they vary. I would certainly be more shocked to discover that a newborn baby was sentient than that a cow was sentient.

Brian Tomasik: I wrote a few paragraphs partially as a response to this discussion. The summary is:

There are many attributes and abilities of a mind that one can consider important, and arguments about whether a given mind is conscious reflect different priorities among those in the discussion about which kinds of mental functions matter most. “Consciousness” is not one single thing; it’s a word used in many ways by many people, and what’s actually at issue is the question of which traits matter more than which other traits.

Also, this discusses why reflectivity may not be ethically essential:

In Scherer’s view, the monitoring process helps coordinate and organize the other systems. But then privileging it seems akin to suggesting that among a team of employees, only the leader who manages the others and watches their work has significance, and the workers themselves are irrelevant.[2] In any event, depending on how we define monitoring and coordination, these processes may happen at many levels, just like a corporate management pyramid has many layers.

Buck Shlegeris: BTW, Eliezer, AFAICT pigs have self-awareness according to the mirror test: they only fail it because pigs don’t care if they have mud on their faces. They are definitely aware that the pig in the mirror is not another pig. Is that enough uncertainty to not eat them?

From Wikipedia: “Pigs can use visual information seen in a mirror to find food, and show evidence of self-recognition when presented with their reflection. In an experiment, 7 of the 8 pigs tested were able to find a bowl of food hidden behind a wall and revealed using a mirror. The eighth pig looked behind the mirror for the food.[28]”

Eliezer: What I want to see is an entity not previously trained on mirrors realize that motions apparent in the mirror are astoundingly correlated with motions that it’s sending to the body, i.e., the aha! of “I can control this figure in the mirror, therefore it is me.” This seems to me to imply a self-model. If you train pigs to use a mirror generically in order to find food, then what you’re training them to do is control their visual imagination so as to take the mirror-info and normalize it into nearby-space info. This tells me that pigs have a visual imagination, which is not very surprising, since IIRC the back-projections from higher areas back to the visual cortex were already a known thing.

But if the pig can then map what it sees in the mirror onto its spatial model of surrounding space, and as a special case can identify things colocated in space with itself, you’ve basically trained the pig to ‘solve’ the mirror test via a different pathway that doesn’t need to go through having a self-model. I’m sorry if it seems like I’m moving the goalpost, but when I say “mirror test” I mean a spontaneous mirror test without previous training. There’s similarly a big difference between an AI program that spontaneously starts talking about consciousness and an AI program that researchers have carefully crafted to talk about consciousness. The whole point of the mirror test is to provoke (and check for) an aha! about how the image of yourself in the mirror is behaving like you control it; training on mirrors in general defeats this test.

Buck: I don’t quite buy your reasoning. Most importantly, the pigs are aware that the pig in the mirror is not a different pig. That seems like strong evidence of self-awareness. One of the researchers said, “We have no conclusive evidence of a sense of self, but you might well conclude that it is likely from our results.”

So pigs haven’t passed or failed the mirror test, but they seem aware that a pig in a mirror is not a different pig, and experts in the field seem to think pigs are likely to have self-awareness.

And again, I think that the burden of proof is on the people who are saying that it’s fine to torture the things! Like, I’m only 70% sure that pigs are conscious. But that’s still enough that it’s insane to eat them.

(Also, when you say “an entity not previously trained on mirrors”: literally no species can immediately figure out what a mirror is. Even humans need a while around mirrors to figure them out, which is as much training as we give to pigs.)

Andres Gomez Emilsson: Harry would say: We must investigate consciousness and phenomenal binding empirically.

Let us take brain tissue in a petri dish, or something like that, and use bioengineered neural bridges between our brains and the petri dish culture to find out whether we can incite phenomenal binding of any sort. Try connecting different parts of your brain to the petri dish. E.g., what happens when you connect your motor areas to it, and what happens when you add synapses that go back to your visual field? If you are able to bind phenomenologies to your conscious experience via this method, try changing the concentrations of various neurotransmitters in the petri dish. Etc.

This way we can create a platform that vastly increases the range of our possible empirical explorations.

Eliezer: Um, qualia are not epiphenomenal auras that contaminate objects in physical contact. If you hook up an electrical stimulator to your visual cortex, it can make you see color qualia (even if you’re blind). This is not because the electrical stimulator is injecting magical qualia in there. The petri dish, I predict with extreme confidence, would seem to you to produce exactly the same qualia as an electrical stimulator with the same input/output behavior. Unless you have discovered a new law of physics. Which you will not.

Andres: I’m thinking about increasing the size of the culture in the petri dish until I can show that phenomenal binding is happening by doing something I would not be able to do otherwise:

If I increase the size of my visual field by adding to my brain a sufficient number of neurons in a petri dish appropriately connected to it, I would be able to represent more information visually than I am capable of with my normal visual cortex.

This is thoroughly testable. And I would predict that you can indeed increase the information content of a given experience by this method.

Eliezer: You’re not addressing my central point that while hooking up something to an agent with an inner listener may create what that person regards as qualia, it doesn’t mean you can rip out the petri dish and it still has qualia in it.

David: A neocortex isn’t needed for consciousness, let alone self-consciousness. Perhaps compare demonstrably self-aware magpies, who (like all birds) lack a neocortex.

Eric, many more poor humans could be fed if corn and soya products were fed directly to people rather than to the factory-farmed nonhuman animals we then butcher. Such is the thermodynamics of a food chain. Qualia and the binding problem? I’ve put a link below; but my views are probably too idiosyncratic to contribute usefully to the debate here.

Eliezer: The PLOS paper seems to be within the standard paradigm for mirror studies. Pending further confirmation or refutation this is a good reason not to eat magpies or other corvids.

David: Eliezer, faced with the non-negligible possibility that one might be catastrophically mistaken, isn’t there a powerful ethical case for playing it safe? If one holds a view that most experts disagree with (e.g., in my case, I’m sceptical that a classical digital computer will ever be conscious), I’d surely do best to defer to consensus wisdom until I’m vindicated / confounded. Or do you regard the possibility that you are mistaken as vanishingly remote?

Francisco Boni Neto: Empathic brain-modeling architecture as a conditio sine qua non for an ‘inner listener’, which in turn is a requirement for context-dependent qualia or what-it-is-likeness, seems like an exaggeration of the “simulation” and “mirroring” theory of affective cognition, overstating top-down processes: “there must be a virtualization process, helped by socially complex networks of interactions that were positively selected in modern humans, so that I can simulate other humans in my brain while sustaining my own self-modeling thing, which adds a varied collection of qualia types to the mammalian phenotype that are lacking in other, less complex phenotypes (e.g. pigs)”.

It seems like a very adaptationist thought, one that overstates the power and prevalence of natural selection in certain recent branches of the phylogenetic tree as against the neural reuse that makes ancient structures so robust and vital in processing vivid what-it-is-likeness, and that makes bottom-up processing so important despite the importance of the prefrontal cortex in deliberative perceptual adaptation and affective processing. That is why I agree when David Pearce points out that many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Super-vivid, hyper-conscious, phenomenally rich and deep experiences like lucid dreaming and ‘out-of-body’ experiences happen when the higher structures responsible for top-down processing are suppressed. They lack a realistic conviction, especially once you wake up, but they do feel intense and raw along the pain-pleasure axis.

Eliezer: It is impossible to understand my position or pass any sort of Ideological Turing Test on it if you insist on translating it into a hypothesis about some property of hominids like “advanced reflectivity” which for mysterious reasons is the cause of some mysterious qualia being present in hominids but not other mammals. As long as qualia are a mysterious substance in the version of my hypothesis you are trying to imagine, of course you will see no motivation but politics for saying the mysterious substance is not in pigs, when for all you know, it could be in lizards, or trapped in the very stones, merely unable to outwardly express it.

This state of confusion is, of course, the whole motivation for trying to think in ways that don’t invoke mysterious qualia as primitive things. It is no use to say what causes them. Anything can be said to cause a mysterious and unreduced property, whether “reflectivity” or “neural emergence” or “God’s blessing”. Progress is only made if you can plausibly disassemble the inner listener. I am claiming to have done this to a significant extent and ended up with a parts list that is actually pretty complicated and involves parts not found in pigs, though very plausibly in chimpanzees, or even dolphins. If you deliberately refuse to imagine this state of mind and insist on imagining it as the hypothesis that these parts merely cause an inner listener by mysteriousness, then you will be unable to understand the position you are arguing with and you will not be able to argue against it effectively to those who do not already disbelieve it.

David: Eliezer, I’ve read Good and Real, agree with you on topics as varied as Everett and Bayesian rationalism; but I still don’t “get” your theory of consciousness. For example, a human undergoing a state of blind uncontrollable panic is no more capable of reflective self-awareness or any other form of meta-cognition than a panicking pig. The same neurotransmitter systems, same neurological pathways and same behavioural responses are involved in the panic response in both pigs and humans. So why is the human in a ghastly state of consciousness but the pig is just an insentient automaton?

Eliezer: One person’s modus ponens is another’s modus tollens: I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious. This is not where most of my probability mass lies, but it’s on the table. I think I would be equally surprised to find monkeys conscious, or people in flow states nonsentient.

Daniel Powell: If there existed an entity which was intermittently conscious, would the ethics of interacting with it depend on whether it was conscious at the moment?

What about an entity that has never been conscious, but might become so in the future – for example, the uncompiled code for a general intelligence, or an unconceived homo sapiens?

I’m having a hard time establishing what general principle I could use that says it is wrong to harm a self-aware entity, but not wrong to harm a creature that isn’t self-aware; wrong to cause an entity which used to be self-aware and currently isn’t, but by the status quo will be again, to never again be self-aware; but not wrong to cause a complex system that has the possibility of creating a self-aware entity never to do so.

I thought I held all four of those beliefs from simple premises, but now I doubt whether my intuitive sense of right and wrong cares about sentience at all.

Eliezer: Daniel, I think the obvious stance would be that having unhappy memories is potentially detrimental / causes suffering, or that traumatic experiences while nonsentient can produce suffering observer-moments later. So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that this might lose utility in future sentient-moments.

Carl Shulman: I find [Eliezer] unconvincing for a few reasons:

1. Behavioral demonstration of self-models along the lines of the mirror test is overkill if you’re looking for the presence of some kind of reflective “thinking about sense inputs/thoughts”; the standard mirror test requires additional processing and motivation, beyond that reflective capacity, to pass the standard presentation. So one should expect that creatures for which there isn’t yet a Wikipedia mirror-test entry could also pass idealized tests for such reflective processing, including somewhat less capable relatives of those who pass. [Edit: see the variation among human cultures on this, even at 6 years old, and the dependence on grooming behavior and exposure to mirrors, as discussed in the SciAm article here.]

2. There is a high degree of continuity in neural architecture and capacities as one moves about the animal kingdom. Piling on additional neural resources to a capacity can increase it, but often with diminishing returns (even honeybees with a million neurons can do some neat tricks). If one allows consciousness for magpies and crows and ravens, you should expect with fairly high probability that some less impressive (or harder to elicit/motivate/test) versions of those capacities are present in other birds, such as chickens.

3. You haven’t offered neuroscience or cognitive science backing for claims about the underlying systems, just behavioral evidence via the mirror test. The claim that other animals don’t have weaker versions of the mechanisms enabling passage of the mirror test, or capable of reflective thought about sense inputs/reinforcement, is one subject to neuroscience methods. You don’t seem to have looked into the relevant neuroscience or behavioral work in any depth.

4. The total identification of moral value with reflected-on processes, or access-conscious (for speech) processes, seems questionable to me. Pleasure which is not reflected on or noticed in any access-conscious way can still condition and reinforce. Say sleeping in a particular place induced strong reinforcement which was not access-conscious, so that I learned a powerful desire to sleep there and came not to want to lose that desire. I would not say that such a desire is automatically mistaken simply because the reward is not access-conscious.

5. Related to 4), I don’t see you presenting great evidence that the information processing reflecting on sense inputs (pattern recognition, causal models, etc) is so different in structure.

“Now, a study published September 9 in The Journal of Cross-Cultural Psychology is reinforcing that idea and taking it further. Not only do non-Western kids fail to pass the mirror self-recognition test by 24 months—in some countries, they still are not succeeding at six years old.

What does it mean? Are kids in places like Fiji and Kenya really unable to figure out a mirror? Do these children lack the ability to psychologically separate themselves from other humans? Not likely. Instead researchers say these results point to long-standing debates about what counts as mirror self-recognition, and how results of the test ought to be interpreted.” (link)

Eliezer: More seriously, the problem from my perspective isn’t that I’m confident of my analysis, it’s that it looks to me like any analysis would probably point in the same direction — it would give you a parts list for an inner listener. Whereas what I keep hearing from the opposing position is “Qualia are primitive, so anything that screams probably has pain.” What I need to hear to be persuaded is, “Here is a different parts list for a non-mysterious inner listener. Look, pigs have these parts.” I don’t put any particular weight on prestigious people saying things if this does not appear to be the form of what they’re saying — I would put significant weight on Gary Drescher (who does know what cognitive reductionist philosophy looks like) private-messaging me with, “Eliezer, I did work out my own parts list for inner listeners and I’ve studied pigs more than you have and I do think they’re conscious.”

Carl: Quote from [Stanford Encyclopedia of Philosophy] summarizing some of my objection to Eliezer’s use of the mirror test and restrictive [higher-order theory] views above:

“In contrast to Carruthers’ higher-order thought account of sentience, other theorists such as Armstrong (1980), and Lycan (1996) have preferred a higher-order experience account, where consciousness is explained in terms of inner perception of mental states, a view that can be traced back to Aristotle, and also to John Locke. Because such models do not require the ability to conceptualize mental states, proponents of higher-order experience theories have been slightly more inclined than higher-order theorists to allow that such abilities may be found in other animals.”

Robert Wiblin: [Eliezer], it’s possible that what you are referring to as an ‘inner listener’ is necessary for subjective experience, and that this happened to be added by evolution just before the human line. It’s also possible that consciousness is primitive and everything is conscious to some extent. But why have the prior that almost all non-human animals are not conscious and lack those parts until someone brings you evidence to the contrary (i.e. “What I need to hear to be persuaded is,”)? That just cannot be rational.

You should simply say that you are a) uncertain what causes consciousness, because really nobody knows yet, and b) you don’t know if e.g. pigs have the things that are proposed as being necessary for consciousness, because you haven’t really looked into it.

Carl: Seems to me that Eliezer is just strongly backing a Hofstadter-esque HOT [higher-order theory] view. HOT views are a major school of thought among physicalist accounts of consciousness. Objections should be along the lines of the philosophical debate about HOT theories, about how much credence to give them and importance under uncertainty about those theories, and about implementation of the HOT-relevant (or Higher-Order-Experience relevant) properties in different animals.

BTW, Eliezer, Hofstadter thinks dogs are conscious in this HOT way, which would presumably also cover chickens and cows and pigs (chickens related to corvids, pigs with their high intelligence, cows still reasonably capable if uninterested in performing and with big brains).

“Doug [Hofstadter], on the other hand, has a theory of the self, and thinks that this is just the same as talking about consciousness. Note that this concern with consciousness is not the same concern as whether there is a “subject” that “has” experiences over and above the public self; you can believe that talk of consciousness is irreducible to talk of the built self without thereby positing some different, higher self that is the one that is conscious. As Sartre puts it, consciousness is a primary feature of our experience and the self is built later. For Doug, we should consider an animal conscious only insofar as it’s built up this kind of self-symbol. So consciousness will be a matter of degree: there is probably little-to-nothing that it is “like to be” a mosquito, yet certainly a dog has a conception of self, and once you get language in there you get the whole deal.” (link)

Robert: Thanks [Carl] for all that useful info. I have no particular objection to HOT of consciousness. I just think people should be really unsure about what is going on with consciousness, because there is no overwhelming case for one view over another and it’s such a hard question to study with current technology. I am not even confident of physicalism, which seems to be popular in our circles as much for sociological as for sound reasons. Perhaps consciousness is another primitive part of the universe? Perhaps it is an epiphenomenon that is generated by a very large number of processes? We should be more humble about what we do and do not know (more).

Appearing to jump to the conclusion that non-humans do not meet the standards required for HOT of mind also looks to me like sloppy motivated reasoning. As you point out, even if HOT of mind are correct, a wide range of animals may well have the required capabilities.

Eliezer: So there’s this very common thing where it looks to me like other people are failing to distinguish (from my perspective) between confidence about P vs. Not-P and confidence within P. Like, we can be extremely confident about the general category of natural selection vs. intelligent design without any of the open questions of evolutionary biology creating uncertainty that slops over outside the whole category of natural selection and lets [intelligent design] advocates say, “Oh, how can you be so certain of things?”

In this case, the reason I feel this way is that, for example, I don’t particularly feel like I’m confident of Higher-Order Thought vs. Higher-Order Perception or rather it seems like a Wrong Question, and the whole presentation of the theory within SEP seems like not at all the way I would present an attempted reduction; like it’s the sort of consciousness theory you would make up if you were confident that reflection had something to do with it, but were rushing a bit too fast to say exactly what, before you’d really started taking apart reflection into parts or explaining exactly which aspects of our beliefs about consciousness were originating in exactly which reflective algorithms as seen from the inside.

On the other hand, when I see someone who seems to be reasoning that pain qualia are the cause of screaming, so things that scream and look distressed are probably doing so for the same reason, namely pain qualia, then all of my uncertainty lives in worlds where this is definitely, no doubt about it false, and my remaining uncertainty does not slop over.

The Slopover Fallacy really needs a more standard name.

This doesn’t make me confident that pigs lack qualia, and I have scheduled to look at Carl’s thing on the mirror test because that is the sort of thing that affects what I think I know about pigs. Gary Drescher understands what does or does not constitute dissolution of a seemingly mysterious thing as a cognitive algorithm as seen from the inside (and has either developed the same theory of consciousness independently or something very close to it), so if he calls me up and says, “I’ve studied pigs, I think they have enough empathic brain-controlling architecture to have a pig-that-is-this-pig with perceived and imaginable properties and this is what my model says is the inner listener”, then I would stop eating pigs until I’d talked it over with him. People who don’t live by the rules of reductionism, regardless of what their job title says or how much other people respect them, don’t get a seat at my table. They live outside my space of uncertainty, and although it feels to me like I’m uncertain about lots of things, none of that uncertainty Slops Over into their mental world and grants it any credibility.

It also seems to me that this is not all that inaccessible to a reasonable third party, though the sort of person who maintains some doubt about physicalism, or the sort of philosophers who think it’s still respectable academic debate rather than sheer foolishness to argue about the A-Theory vs. B-Theory of time, or the sort of person who can’t follow the argument for why all our remaining uncertainty should be within different many-worlds interpretations rather than slopping over outside, will not be able to access it.

Robert: The weaker the case behind our existing theories of consciousness, the more credence we should put on ‘something else’.

Physicalist monism doesn’t lend itself to an easy solution to the hard problem of consciousness. As I am more confident that I have subjective experience than that the external world exists, this is a drawback. Of course idealism, dualism, panpsychism, and ‘something else entirely’ have serious drawbacks too. But until we are closer to a theory of mind that produces widely acceptable results, I don’t want to make belief in any of these theories part of my identity.

Of course, the probability that consciousness has some causal connection to what’s going on in the brain is pretty close to 100% (though even then, we may be in a simulation and in fact that brain is just put there to confuse us).

Eliezer: Robert, is there some part of you that knows that in real life of course the real answer is going to be a physicalist one, even if your System 2 is saying not to be confident of that?

Robert: Of course my gut instinct is that there is only one kind of stuff and it’s all physical. This is the opposite of the normal evolved human intuition. But I am instinctively a physicalist, not least because I’ve spent my adult life being surrounded by people who make fun of anyone who says otherwise. Believing this is a handy identity marker that the skeptics/rationalists use to draw the boundaries on their community. And sure, physicalist science has done a great job explaining things.

But consciousness is the natural counter. I am almost sure consciousness exists, but it doesn’t seem to be a physical thing, at least not as physical stuff is understood now. Yes, physical stuff seems to be involved, and pinning down that involvement would be great, but unless I’m being massively fooled, there seems to be something else there as well. Maybe my subjective experience can be satisfactorily squared with physicalism, but if so, I haven’t seen it done yet.

Eliezer: Do you think there is a relatively straightforward piece of knowledge which might convince you that consciousness is physical? Like, is it plausible to you that such an insight exists and that it is not supernaturally sacred, that afterward you would look back and be surprised you ever were confused?

My point is that it shouldn’t be very weird to you if someone else seems much more confident than you of physicalist consciousness, since you register a high probability that it is true, and also a high probability that given that it is true there’s some simple insight that makes it seem much more obvious. Like this is not a case where you’re entitled to conclude that if you don’t know, someone else who claims to know is probably mistaken. (Example case where that inference is valid: AI timelines.)

Robert: You have that insight? If so, please share. Of course it is never weird to see people overconfident in their theories, that is the nature of being human. But that doesn’t make it any less important to push back.

Eliezer: (points to Reductionism sequence on LW) … nobody said it had to be condensed into a sentence.

Robert: Any particular part where it explains how to reconcile physicalism with my seeming to have subjective experience?

Eliezer: The most direct address might be in Zombies! Zombies? which can plausibly be read out of order. It’s not going to give you the full experience, and it’s more like telling you that physicalism has to be true than giving you a sense that it can be true despite the apparent impossibility. Good and Real might get you further especially if you are allergic to LW posts.

Robert: In large part I’m taking my lead from Chalmers, who is very knowledgeable about all the arguments in this area, isn’t convinced of physicalism as far as I can see, and seems unusually intellectually honest and modest.

Eliezer: If I can make you read more:

How An Algorithm Feels From Inside

Dissolving the Question

Hand vs. Fingers

Heat vs. Motion

Angry Atoms

Reductionism

Excluding the Supernatural

Robert: OK, I’ll read a decent amount and see if I’m convinced.

Of course even if I am convinced of physicalism, that would still leave a lot of uncertainty about what precise physical structures are required. And even assuming something like a HOT of consciousness, it doesn’t seem safe to assume animals are not conscious (based on priors, common sense, and Carl’s links above).

Eliezer: We’re dealing with a relatively long inferential chain here. Trying to get all the way to the end is not in the cards. At best it will seem to you plausible that, extrapolating from earnest-tokens provided, it’s not implausible that I could occupy the epistemic state I claim to occupy.

Robert: The longer the inferential chain, and the fewer other experts who have chosen the same inferential chain, the more likely it is that you’ve made a mistake somewhere! But sure, I appreciate that you’re making your case in good faith.

Rob Bensinger: “Physicalist monism doesn’t lend itself to an easy solution to the hard problem of consciousness. As I am more confident that I have subjective experience than that the external world exists, this is a drawback.”

I used to think this way, but no longer do. I could list a few things that persuaded me otherwise, Robert, if that helps.

(Backstory: I’ve studied Chalmers’ arguments in some detail, and these days I don’t think they refute physicalism. I do find them very interesting, and I think they’re evidentially weak and outweighed-by-stronger-considerations, rather than thoughtless or confused. I also agree Chalmers is one of the best philosophers out there. (He discovered a lot of the best arguments against his view!) But I don’t think there’s any human institution that’s up to the task yet of consistently incentivizing ‘getting the right answer to the hardest philosophical questions in a timely fashion’. Top-tier analytic philosophers like David Lewis, David Armstrong, Donald Davidson, Peter van Inwagen, or Thomas Nagel — or David Chalmers — don’t receive esteem and respect in academic philosophy because they’ve proven that they’re remarkably good at generating true beliefs. They receive esteem and respect because they have a lot of creativity and ingenuity, are very careful close readers of other prominent philosophers, and give meticulous and thought-provoking arguments for exotic theses. In some ways, it matters more that their writing is intellectually fun than that it’s strictly accurate.)

Changes that caused me to break with Chalmers’ basic perspective:

(1) I came to believe that ‘consciousness’ is not a theory-neutral term. When we say ‘it’s impossible I’m not conscious’, we often just mean ‘something’s going on [hand-wave at visual field]’ or ‘there’s some sort of process that produces [whatever all this is]’. But when we then do detailed work in philosophy of mind, we use the word ‘consciousness’ in ways that embed details from our theories, our folk intuitions, our thought experiments, etc.

As we add more and more theoretical/semantic content to the term ‘consciousness’, we don’t do enough to downgrade our confidence in assertions about ‘consciousness’ or disentangle the different things we have in mind. We don’t fully recognize that our initial cogito-style confidence applies to the (relatively?) theory-neutral version of the term, and not to particular conceptions of ‘consciousness’ involving semantic constraints like ‘it’s something I have maximally strong epistemic access to’ or ‘it’s something that can be inverted without eliminating any functional content’.

E.g., if you think the inverted qualia argument against physicalism is as certain as a minimalist cogito, you may be startled to learn that the argument doesn’t work for any known color experience — see this. But we were so certain! We were certain of something, maybe. But how certain are we that we can correctly articulate what it is that we’re certain about?

Possibly that particular argument can be repaired, but it does tie in to a broader point…

(2) As we learn new things, that can feed back into our confidence in our premises. Evidence about our own mental states may have temporal priority, but that doesn’t mean that we retain more confidence in introspective claims than in physics even after we’ve finished examining the psychology research showing how terrible humans are at introspection. (Or even after examining physics.)

How good an agent is at introspecting is a contingent fact about its makeup. You can build an AI that’s much better at introspecting its own mental states than at perceiving sensory objects, a RoboDescartes. Such an agent would be better off relying on its reflective reasoning than on science, and it would be foolish to trust science over phenomenology and introspectionist psychology. Empirically, however, we aren’t that kind of mind; we screw up all the time when we introspect, and our reflective judgments seem a lot more vulnerable to overconfidence and post-facto rationalization than is our ordinary perception of external objects.

(3) On top of that, we should be especially suspicious of our introspection in cases where there’s no cognitive or evolutionary account for why it should track reality. Evolution won’t waste effort adding extra truth-tracking faculties unless doing so actually helped our ancestors survive and reproduce. If Chalmers’ arguments are right, not only is there no such account available, but there can never be one. In fact, our brains are systematically deluded about consciousness, though our acausal minds are somehow systematically correct about our world’s metaphysics. (This is the main point Eliezer focuses on in his objections to Chalmers.)

We can’t provisionally forget the history of science, the available literature from the sciences of brain and mind, etc., when we do philosophy or phenomenology. We have to actually bear in mind all that science when we critically examine cogito-like evidence. It’s all of a kind. That sort of perspective, and a perspective that focuses on ‘what kind of work is this cognitive system doing, and what mechanism could allow it to do that?’ a la A Priori, is I think a lot of what gets under-emphasized in philosophy of mind. Philosophers who call themselves ‘reductionists’ have more practice asserting reductionism as a metaphysical thesis, than they have actually performing reductions on empirical phenomena, or thinking of their intuitions as physical processes with physical explanations. (Even Chalmers can accept this point, since he thinks for every intuition our consciousness has, there’s a corresponding intuition in our brains that phenomenal consciousness has absolutely no explanatory relevance to.)

Eric Bruylant: I’ve been a moral vegetarian since the age of four, and this is by a considerable margin the most relevant argument against my reasons for vegetarianism that I have encountered so far, though I’m not likely to switch over, primarily due to ethical caution and to some extent inertia. From my experience with animals it seems at least plausible that, at minimum, many social mammals possess reflective self-awareness (>5% probability that several branches of mammals other than apes have it), and so long as the opportunity costs of avoiding eating meat remain low and my aversion to encouraging the potential death of sentient life is high, sticking with vegetarianism seems like a reasonable precaution. I would be interested in more details of why you think key parts of the machinery that generates consciousness are missing from most social mammals, since reflective self-awareness and similar mind-modeling tools seem like they would be extremely useful for any creature engaging in politics.

Eliezer: Eric, if an animal has coalitions of coalitions (bottlenose dolphins) or is doing anything that looks like political plotting (chimps) then that is an instant stop eating order. A book called PIG POLITICS: Only Slightly Less Complex Than Chimpanzee Politics would get me to stop eating bacon until I had learned whether the title was an exaggeration. I also want to emphasize that the “why so confident?” is a straw misquestion from people who can’t otherwise understand why I could be unconfident of many details yet still not take into account the conflicting opinion of people who eg endorse P-zombies. I guess that pigs are lacking parts whose presence I see little positive reason to expect. The chimpanzee politics that produced the selection pressure for their renowned cognitive abilities is actually pretty damned impressive. I never heard of pigs having anything like that.

Robert, out of curiosity: Suppose I claimed to be able to access an epistemic state where (rather than being pretty damn sure that physicalism is true) I was pretty damn sure that P-zombies / epiphenomenalism was false. How does the intuitive plausibility of my being able to access such an epistemic state feel to you? Does this equally feel like “Eliezer shouldn’t be so sure, so many experts disagree with him” or is it more plausible that “Hm, it seems probable that epiphenomenalism is false, and if it is false it seems plausible that there’s some legitimate way for Eliezer to know it’s false.”

Robert: Your believing it is false is certainly evidence against, but in a field with lots of epistemic peers who aren’t yet convinced by your line of argument it isn’t overwhelming evidence. Humans just aren’t very good at doing philosophy, so I can’t put too much trust in any one person’s views/intuitions.

Eliezer: Robert: I’m pretty happy to take a stand on it being okay for me to not update on other people believing in P-zombies. If we agree that these issues are of the same kind, then this case is far far more accessible to me — much much easier to communicate. Like “not P-zombies” is 3% of the way to the finish line of the previous issue.

Rob Bensinger: Robert [Wiblin]: ‘Zombies! Zombies?’ probably doesn’t hit hard enough outside the context of the sequences. Inside the sequences, it’s one of dozens of posts hammering on one of Eliezer’s most basic theses: ‘Beliefs are physical states of the brain, useful because we’ve evolved to be able to put those states in physical correlation with configurations of stuff outside the brain. Acquiring veridical beliefs, then, is a special case of thermodynamic work, and requires the emission of waste heat. Any specific reliable belief about any matter of fact must (in principle) admit of a causal explanation that links the brain state to the worldly circumstance it’s asserting.’

Now we’re not just picking on Chalmers. We have a general causal theory of evidence, and as a side-effect it predicts that brains can’t know about epiphenomenalism (even if it’s true!). No event in the brain, including any thought experiment or intuition or act of introspection, can causally depend in any way on epiphenomenalism, the character of (epiphenomenal) phenomenal consciousness, etc. Our brains’ miraculously guessing correctly that they are attached to an epiphenomenal ghost would be a pure lucky coincidence. Like guessing ‘there’s a ghostly wombat 200 feet under my house that has never causally impacted anything’ and getting it right; or like guessing ‘every diamond lattice in the universe is attached to an epiphenomenal feeling of regret’. The greater causal proximity of our brains to our consciousness shouldn’t make any difference, because the causal arrows only go away from the brain, not toward it.

If you assign a low prior to ‘all diamonds are epiphenomenally regretful’, you should also assign a low prior to ‘my brain is currently epiphenomenally attached to a visual experience of seeing this sentence, which has no impact on my brain’s beliefs or assertions to that effect’. And if you have no prior reason to believe there’s a Matrix God who created our world, loves epiphenomenalism, and would build dualistic minds with just the right mind-matter bridging laws (a view called occasionalism), then no argument or experience you have can lend evidential support to epiphenomenalism, for the exact same reason it can’t send support to the diamonds-are-regretful thesis. If you can’t establish a causal link inside the system, then you need to posit a force outside the system to establish that causal link — else the physical process we call ‘evidence’ just won’t work.

Robert Wiblin: Why do you think you should trust your judgement on that question over the judgement of similarly qualified people? I don’t get it.

Rob Bensinger: I think it’s pretty widely accepted by philosophers that traditional epiphenomenalism is one of the more fraught and hard-to-believe views on consciousness. (E.g., it was abandoned a decade ago by its most prominent modern defender, Frank Jackson.) What Chalmers would say if you brought this criticism to him (and has said before, on LW) is that his view isn’t ‘traditional epiphenomenalism’. His view allows for phenomenal consciousness to be ‘causally relevant’ to our phenomenal judgments, because his view identifies consciousness (or its underpinnings) with the fundamental thingies (‘quiddities’) that play the functional roles described in physics. It’s mysterious what electromagnetism is ‘in itself’ (above and beyond the relations it enters into with other patterned empirical thingies), and it’s mysterious what consciousness is, so we combine the two mysteries and sort-of-solve them both. (Which feels elegant and gratifying enough that we’re willing to abide the lingering mysteriousness of the view.) This is also Strawson and David Pearce’s view. See this.

I don’t think this is an adequate response to Eliezer’s objection. Aside from occasionalism, I haven’t yet seen any adequate response to Eliezer’s objection in the literature. The problem is that this view treats ‘causal relevance’ as a primitive, like we can just sprinkle ‘causality’ vaguely over a theory by metaphysically identifying phenomena really really closely, without worrying about exactly how the physical structure of a brain ends up corresponding to the specific features of the phenomena. The technical account of evidence Eliezer is giving doesn’t leave room for that; ‘causal relevance’ is irrelevant unless you have some mechanism explaining how judgments in the brain get systematically correlated to the specific facts they assert.

If the zombie argument works, quiddities can’t do anything to explain why we believe in quiddities, because our quiddities can be swapped out for nonphenomenal ones without changing our brains’ dynamics. If the qualia inversion argument works, quiddities can’t explain why we have accurate beliefs about the particular experiences we’re having (e.g., as William James noted, that we’re experiencing phenomenal calm as opposed to phenomenal agony), because the quiddities can be swapped out for other phenomenal quiddities with a radically different character. The very arguments that seek to refute physicalism also refute all non-interactionist dualisms. I think this has been unclear to professional philosophers because philosophers of mind mostly treat ‘causality’ and ‘evidence’ as black boxes, rather than having a detailed theory of how they work (such that they would need to constrain their philosophy-of-mind views to accord with that theory).

Systematic philosophy is relatively unpopular in modern academic analytic philosophy, so different fields often carry on their debates in isolation from each other. And systematic philosophy is especially unpopular these days among hard-nosed reductionists — the sorts of academics most likely to share Eliezer’s interests, background, and intuitions.

If you want to know how a blind person guessed that you’re holding a diamond, it’s not enough to say that the diamond is ‘causally relevant’ to the blind person (e.g., photons are bouncing off of the diamond to hit the blind person). You need at least a schematic account that allows the blind person’s brain to systematically correlate with the physical structure and location of the diamond. If your assumptions are telling you that a certain systematic correlation between belief and reality is a coincidence, then, vague ‘causal relevance’ or no, at least one of your assumptions must be wrong.

Eliezer: I agree with Rob [Bensinger]’s reply.

## Eliezer Yudkowsky on effective altruism

Posted on Facebook, in reply to “The best person who ever lived is an unknown Ukrainian man”, an article by William MacAskill. Viktor Zhdanov, the ‘Ukrainian man’, was a major figure in the eradication of smallpox.

Eliezer: Wouldn’t the inventors of Science live far enough upstream of this accomplishment, and a vast number of others, that any one of them would far outweigh Viktor Zhdanov? This seems not very defensible as an answer to the stated question, versus “Who did the most good in the 20th century?” or “Who did the most good that is similar to the sort of good GiveWell tries to do?”

Commenter: Was there any one person who was that essential to inventing the scientific method?

Eliezer: Compared to the causal role that Viktor Zhdanov played in wiping out smallpox? Francis Bacon, Isaac Newton, Galileo, Copernicus, Laplace, hell, Lavoisier or Leeuwenhoek probably played a causal role in eradicating smallpox comparable to Zhdanov. This judgment of human worth seems blatantly unjustifiable to the point that I suspect signaling, unless you believe in an incredible-seeming degree of fatalism about the timing of scientific epistemology and the life sciences and the replaceability of the named heroes within it, versus the replaceability or timing-sensitivity of Viktor Zhdanov, versus also the replaceability of everyone involved in the absence of nuclear war like Vasili Arkhipov and Schelling and so on.

Commenter: Those were all great guys, though I wouldn’t say they “invented science”, they just made huge fundamental strides.

Eliezer: Calling someone the literal all-time winner for human good accomplished so far, is a much stronger judgment than “Viktor Zhdanov saved millions of lives and is a great hero”, and implies a comparative judgment of all the other strong candidates.

Whom we valorize is not a value-neutral act, still less whom we valorize above others. You can see how I might question, and indeed call shenanigans on, awarding the literally highest valor to someone who played a major causal role in a long causal chain that ended with completing the eradication of smallpox in the 20th century.

Over all of human history, which is the stated breadth of judgment in the article, Science clearly accomplished much more good as a whole – including, e.g., the eradication of smallpox. Even a relatively more distributed or relatively more replaceable causal role in Science’s development would lead to a clear claim on scoring more net utilitarian points. Consider humanity’s entire development curve over the last 500 years, then consider the total effect over all that time of shifting the development curve 1 year forward, then ask whether any of the scientists on my given list might have accomplished that much with their lives, when Science was young and fragile. Even if you believe the Renaissance was inevitable, I find it hard to believe that the earliest scientists really made *so* little difference to how it developed or how long it took. I’m not an expert historian but I find it easy to imagine that killing Gutenberg (H/T Alyssa) would have counterfactually set back the curve of human progress by at least 1 year.
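The “shift the curve 1 year forward” comparison can be sketched with a toy model. Everything here is an illustrative assumption rather than a historical estimate – a 2% annual growth rate on an arbitrary welfare proxy, a 500-year horizon – but it shows the structure of the argument: advancing an exponentially growing curve by one year yields a gain proportional to the entire accumulated total, not just one year’s worth of welfare.

```python
# Toy model: a welfare proxy that grows exponentially over time.
# Growth rate, baseline, and horizon are illustrative assumptions only.
def total_welfare(years, growth=0.02, baseline=1.0, shift=0.0):
    """Sum of annual welfare over `years`, with the whole curve advanced by `shift` years."""
    return sum(baseline * (1 + growth) ** (t + shift) for t in range(years))

horizon = 500
unshifted = total_welfare(horizon)
gain = total_welfare(horizon, shift=1.0) - unshifted
share = gain / unshifted  # for exponential growth, this equals the growth rate
print(f"Advancing the curve 1 year adds {share:.1%} of all accumulated welfare over {horizon} years")
```

Under pure exponential growth the gain works out to exactly the growth rate times the whole accumulated total, which is why an early, fragile-era contribution to Science’s development curve can dominate a single later intervention.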

In this context, awarding the highest value of all time to a global poverty anti-disease campaign activist (after smallpox had already been significantly fought back, so that we cannot reasonably award the effort in which Zhdanov played one role all of the credit for defeating all of the smallpox in the 20th century) is something that I not only disbelieve, but find hard to believe was a neutral error.

Anything we might say to devalorize the most important scientists of the 16th century, at this remove, is probably an argument that we could level against Zhdanov too. So I say again that it is very suspicious that the most important people involved in by far the biggest story of the last 500 years, along with the people involved in the second-biggest stories of democratic institutions like the democratic revolution in the US colonies, are being thrown under the bus for a 20th-century global poverty disease eradication activist. The existence of vaccines, antibiotics, and modern medicine *as a whole* is something like maybe the 4th most important story of the last 500 years with a significant share of its own credit going to Science. I’d put industrial revolution / capitalism / finance 3rd, where a big share of the credit there goes to democratic institutions (though not all of it, it was gathering speed in Britain well before democracy got started) and then a lot of the technological aspect flows back to Science too.

I worry there may be a kind of myopia about human history that may be developing in the Global Poverty section of EA. Human history is a big story. Lotta stuff happened. Science, democracy, industrial markets, and the existence of modern medicine are not unreal background forces that we quietly assume so that we can focus on real understandable things like delivering a vial of vaccine to another country. Science, democracy, industrial markets, and modern medicine did not always exist. People had to fight for them. They gave their lives for them. The world has not always been the way it has been. (And it won’t always be the way it is, either.)

PS: I expect Holden not to particularly disagree with me about any of this, so I’m not saying all global poverty EAs are making this mistake or that the case for global-poverty EA is conceptually tied to this mistake.

## The Medical Malpractice Song

Song by Texas lawyers Will Hutson and Chris Harris. I haven’t researched this myself, but it seems like an interesting topic you don’t hear that much about. Song lyrics in italics.

Chris: I’m just fucking with you, man. Ready? Hey Will.

Will: Hey Chris.

*Don’t call us with medical malpractice cases*
*Unless they happened outside of Texas*
*Because we voted for tort reform down here*
*So our doctors can kill us with zero fear*

*Don’t call us with your med mal cases*
*Tort reform made them go away*
*We were sold a bill of goods and we bought it*
*And we’re still paying for it every single day.*

Will: Hi, Will Hutson here.

Chris: And I’m Chris Harris. Back in 2003, the Texas insurance industry and four wealthy businessmen promoted tort reform.

Will: Or better known as lawsuit reform.

Chris: Which was so bad that they couldn’t even get the Texas legislature to pass it, because it would have been unconstitutional. So what did they do? They got us, the voters of Texas, to pass it as a constitutional amendment.

Will: After all, if it’s in the Constitution, it can’t be unconstitutional.

Chris: The insurance industry told us that the reason healthcare costs were spiraling out of control and doctors were leaving the state, was because of all the frivolous lawsuits being filed against doctors. This was a lie. Plaintiff’s attorneys have always taken medical malpractice cases on contingency, which means they only get paid if they win. They never have taken frivolous med mal cases because these cases are incredibly expensive, costing the plaintiff’s lawyer between \$50,000 and \$100,000 out of his own pocket. We don’t have that kind of money to waste on a case.

Will: It’s not like we’re doctors or something.

Chris: These cases are heavily defended by some of the best lawyers in the country, and anybody who thinks that an insurance company just writes a check to make a lawsuit go away, has been lied to. It doesn’t happen, and it never did. So…

*Don’t call us with medical malpractice cases*
*Unless they happened outside of Texas*
*Tort reform was based on a pack of lies*
*And you’ll get an education when someone dies*

*Don’t call us with your med mal cases*
*It breaks our hearts every single time*
*We have to tell you that your lawsuit has no merit*
*Even after a loved one has died*

Chris: After tort reform in Texas, it rarely makes sense – financial sense – for a lawyer to take a medical malpractice case, even if there’s clear negligence.

Will: Can you give us an example?

Chris: Sure. Let’s say you take your eight year old daughter to the ER, because she’s complaining of acute pain near where her appendix is. The ER doctor looks at her and he says, well I think she’s got a broken finger, take her home and give her some Advil. Following this advice, you take your child home, her appendix ruptures and she dies. Well in that lawsuit, if everything goes perfectly, the most you could possibly recover after tort reform would be \$250,000. Which may sound like a tidy sum, except your lawyer has to recoup that hundred thousand dollars in expenses that he risked on your behalf.

Will: And that comes right off the top.

Chris: And, since no attorney’s fees are allowed under the law, the only way your lawyer can be paid for his years of work is out of that same \$250,000. That means after a complete and total legal victory, you’ve lost your child and now you have a whopping \$50,000. You’re going to be extremely angry. And contrary to popular belief, most lawyers are human beings and we really have no interest in taking a case where the client is furious even after a win.
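The arithmetic in Chris’s example can be made explicit. The figures below come from the transcript (the \$250,000 cap, the \$100,000 in expenses, the \$50,000 left over); the 40% contingency rate is an assumption, back-calculated so that the numbers reconcile, since the transcript never states the fee percentage:

```python
# Worked arithmetic for the capped-recovery example.
# The cap, expenses, and final client recovery are from the transcript;
# the 40% contingency fee is an assumed rate chosen to match those numbers.
cap = 250_000          # maximum recovery after tort reform
expenses = 100_000     # case costs the lawyer fronted and must recoup
fee = int(cap * 0.40)  # assumed contingency fee: $100,000

client_recovery = cap - expenses - fee
print(client_recovery)  # prints 50000
```

In other words, even a total legal victory leaves the client with a fifth of the headline number, which is why, as Chris says, the client ends up furious after a win.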

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Because we voted for tort reform down here
We’d really love to help you, but we’d starve, so…

We’re not taking your med mal cases
They’re frivolous now and you will lose
And even if we win
(Which we probably won’t)
You’ll just hate us and think we cheated you

Chris: Oh, and I left out the worst part of that example. You can’t even sue an ER doc in Texas for negligence.

Will: It’s literally against the law.

Chris: You can only sue an ER doc for gross negligence. Will, what is negligence?

Will: Negligence would be driving down the road while texting.

Chris: And what is gross negligence?

Will: Gross negligence is driving down the road at 100 miles per hour, going in the opposite direction, into oncoming traffic, with a blindfold, smoking crack, while texting. You may not mean to kill anybody, but, you know, it’s highly likely.

Chris: Right. It’s basically impossible to prove that a doctor was grossly negligent. However, if you can provide us with a blood alcohol test proving that your doctor was drunk when he treated you –

Will: – And texting –

Chris: – And you have over \$500,000 in lost wages due to his negligence, we want to talk to you. Otherwise …

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Because we voted for tort reform down here
But our healthcare costs went up
(We took it in the rear)

Don’t call us with your med mal cases
There’s literally nothing we can do
Except refer you to other Texas lawyers
Who will tell you the same thing too

Will: Hey, but at least healthcare costs went down in Texas, right?

Chris: (Laughter) No, that didn’t happen.

## Goal Kiting

When you use check kiting, you write a check against bank A to pay off bank B, and you write another check against bank B to pay off bank A. Similarly, when you use goal kiting, you address any objections to goal A by saying your plan will also achieve goal B, and you address objections to goal B by saying you will also achieve goal A.

There are many examples, but to pick one in particular, the US education system does this egregiously. If colleges aren’t preparing students for jobs with practical training, that’s OK, because education is a higher ideal that isn’t supposed to be practical. If you don’t feel expensive college tuition is worth it for you, then you should look at statistics about how much better jobs college graduates get. If foreign language classes don’t teach you the language, that’s not a problem, because you’re being exposed to other cultures and that’s valuable. We keep the students bottled up in the same place all day for learning; if they don’t learn, well, the real point was to teach them social skills; if the “social skills” they learn are becoming nasty bullies, that just means they need to be in school more, so we can teach them not to bully people. We teach trigonometry because people will need to use it; if they don’t need to use it, it’s still important that they think quantitatively; if tests prove they can’t think quantitatively, well, the real point is so they can “learn how to learn”. Every child needs to learn how to write in cursive; if no adult actually requires cursive for anything, well, then, it’s still very important to teach it because it makes children develop fine motor skills, which was really the whole point all along. (Yes, people have really said that as a justification for cursive, and they were being serious.)

## How Politics Becomes Toxic

Step 1: There’s a serious problem that a lot of people have. Mostly, people ignore it, or assume that there’s nothing they can do about it.

Step 2: People start complaining about the problem, expressing their frustration, etc.

Step 3: Groups start organizing around the problem. People propose moderate, reasonable solutions and start lobbying for them. Mailing lists are created, signs are made, pamphlets are distributed, and people start identifying as Pro-Xers.

Step 4: A big debate starts, Pro-Xers vs. Anti-Xers. Both groups duke it out, trying to win supporters and fight for their cause.

Step 5: The Pro-Xers win their first major victory, and everybody cheers. A few moderate Pro-Xers see that X is winning, and decide to spend less time on X, becoming passive supporters. Meanwhile, other people see that X is winning, and decide to become Pro-Xers to get status and power.

Step 6: The problem starts to get better (or at least, people think it’s getting better). This makes the Pro-Xers popular and wins them lots of support.

Step 7: The Pro-Xers win more victories. At each step, the leadership becomes more radical, and more of the moderates retire and become passive supporters. The Pro-X leadership starts becoming corrupt, using their high status to do favors for their friends, and so on.

Step 8: Either out of partisanship or ignorance, the media doesn’t notice the steady radicalization of X, and assumes that the current Pro-X positions are similar to the old, moderate Pro-X positions. All of the old moderates continue to support X due to the halo effect.

Step 9: X is now a toxic group. The leaders of the Pro-X movement are a mix of radical extremists and self-interested power brokers, and X is so influential that they’re now effectively dictators, with people bending over backwards to give them what they want.