The Medical Malpractice Song

A song by Texas lawyers Will Hutson and Chris Harris. I haven’t researched this myself, but it seems like an interesting topic you don’t hear that much about. The sung verses are interleaved with the spoken dialogue below.

Chris: I’m just fucking with you, man. Ready? Hey Will.

Will: Hey Chris.

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Because we voted for tort reform down here
So our doctors can kill us with zero fear

Don’t call us with your med mal cases
Tort reform made them go away
We were sold a bill of goods and we bought it
And we’re still paying for it every single day.

Will: Hi, Will Hutson here.

Chris: And I’m Chris Harris. Back in 2003, the Texas insurance industry and four wealthy businessmen promoted tort reform.

Will: Or better known as lawsuit reform.

Chris: Which was so bad that they couldn’t even get the Texas legislature to pass it, because it would have been unconstitutional. So what did they do? They got us, the voters of Texas, to pass it as a constitutional amendment.

Will: After all, if it’s in the Constitution, it can’t be unconstitutional.

Chris: The insurance industry told us that the reason healthcare costs were spiraling out of control and doctors were leaving the state was all the frivolous lawsuits being filed against doctors. This was a lie. Plaintiff’s attorneys have always taken medical malpractice cases on contingency, which means they only get paid if they win. They have never taken frivolous med mal cases, because these cases are incredibly expensive, costing the plaintiff’s lawyer between $50,000 and $100,000 out of his own pocket. We don’t have that kind of money to waste on a case.

Will: It’s not like we’re doctors or something.

Chris: These cases are heavily defended by some of the best lawyers in the country, and anybody who thinks that an insurance company just writes a check to make a lawsuit go away has been lied to. It doesn’t happen, and it never did. So…

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Tort reform was based on a pack of lies
And you’ll get an education when someone dies

Don’t call us with your med mal cases
It breaks our hearts every single time
We have to tell you that your lawsuit has no merit
Even after a loved one has died

Chris: After tort reform in Texas, it rarely makes sense – financial sense – for a lawyer to take a medical malpractice case, even if there’s clear negligence.

Will: Can you give us an example?

Chris: Sure. Let’s say you take your eight-year-old daughter to the ER, because she’s complaining of acute pain near where her appendix is. The ER doctor looks at her and he says, well, I think she’s got a broken finger, take her home and give her some Advil. Following this advice, you take your child home, her appendix ruptures and she dies. Well, in that lawsuit, if everything goes perfectly, the most you could possibly recover after tort reform would be $250,000. Which may sound like a tidy sum, except your lawyer has to recoup that hundred thousand dollars in expenses that he risked on your behalf.

Will: And that comes right off the top.

Chris: And, since no attorney’s fees are allowed under the law, the only way your lawyer can be paid for his years of work is out of that same $250,000. That means after a complete and total legal victory, you’ve lost your child and now you have a whopping $50,000. You’re going to be extremely angry. And contrary to popular belief, most lawyers are human beings and we really have no interest in taking a case where the client is furious even after a win.

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Because we voted for tort reform down here
We’d really love to help you, but we’d starve, so…

We’re not taking your med mal cases
They’re frivolous now and you will lose
And even if we win
(Which we probably won’t)
You’ll just hate us and think we cheated you

Chris: Oh, and I left out the worst part of that example. You can’t even sue an ER doc in Texas for negligence.

Will: It’s literally against the law.

Chris: You can only sue an ER doc for gross negligence. Will, what is negligence?

Will: Negligence would be driving down the road while texting.

Chris: And what is gross negligence?

Will: Gross negligence is driving down the road at 100 miles per hour, going in the opposite direction, into oncoming traffic, with a blindfold, smoking crack, while texting. You may not mean to kill anybody, but, you know, it’s highly likely.

Chris: Right. It’s basically impossible to prove that a doctor was grossly negligent. However, if you can provide us with a blood alcohol test proving that your doctor was drunk when he treated you –

Will: – And texting –

Chris: – And you have over $500,000 in lost wages due to his negligence, we want to talk to you. Otherwise …

Don’t call us with medical malpractice cases
Unless they happened outside of Texas
Because we voted for tort reform down here
But our healthcare costs went up
(We took it in the rear)

Don’t call us with your med mal cases
There’s literally nothing we can do
Except refer you to other Texas lawyers
Who will tell you the same thing too

Will: Hey, but at least healthcare costs went down in Texas, right?

Chris: (Laughter) No, that didn’t happen.

Goal Kiting

When you use check kiting, you write a check against bank A to pay off bank B, and you write another check against bank B to pay off bank A. Similarly, when you use goal kiting, you address any objections to goal A by saying your plan will also achieve goal B, and you address objections to goal B by saying you will also achieve goal A.
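
To make the structure explicit, here’s a toy sketch (in Python, with made-up goal names): goal kiting is just a cycle in the graph of “objections to X are answered by appealing to Y”, so no chain of appeals ever terminates in a goal that is defended on its own merits.

```python
# Toy model of goal kiting: each goal's objections are answered by
# appealing to another goal. A defense is "grounded" only if the
# chain of appeals eventually ends at a goal defended on its own merits.
appeals_to = {"goal A": "goal B", "goal B": "goal A"}  # the kite

def grounded(goal, seen=()):
    if goal in seen:
        return False  # circular appeal: kiting detected
    nxt = appeals_to.get(goal)
    return True if nxt is None else grounded(nxt, seen + (goal,))

print(grounded("goal A"))  # False: A and B only ever point at each other
```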

There are many examples, but to pick one in particular, the US education system does this egregiously. If colleges aren’t preparing students for jobs with practical training, that’s OK, because education is a higher ideal that isn’t supposed to be practical. If you don’t feel expensive college tuition is worth it for you, then you should look at statistics about how much better jobs college graduates get. If foreign language classes don’t teach you the language, that’s not a problem, because you’re being exposed to other cultures and that’s valuable. We keep the students bottled up in the same place all day for learning; if they don’t learn, well, the real point was to teach them social skills; if the “social skills” they learn are becoming nasty bullies, that just means they need to be in school more, so we can teach them not to bully people. We teach trigonometry because people will need to use it; if they don’t need to use it, it’s still important that they think quantitatively; if tests prove they can’t think quantitatively, well, the real point is so they can “learn how to learn”. Every child needs to learn how to write in cursive; if no adult actually requires cursive for anything, well, then, it’s still very important to teach it because it makes children develop fine motor skills, which was really the whole point all along. (Yes, people have really said that as a justification for cursive, and they were being serious.)

How Politics Becomes Toxic

Step 1: There’s a serious problem that a lot of people have. Mostly, people ignore it, or assume that there’s nothing they can do about it.

Step 2: People start complaining about the problem, expressing their frustration, etc.

Step 3: Groups start organizing around the problem. People propose moderate, reasonable solutions and start lobbying for them. Mailing lists are created, signs are made, pamphlets are distributed, and people start identifying as Pro-Xers.

Step 4: A big debate starts, Pro-Xers vs. Anti-Xers. Both groups duke it out, trying to win supporters and fight for their cause.

Step 5: The Pro-Xers win their first major victory, and everybody cheers. A few moderate Pro-Xers see that X is winning, and decide to spend less time on X, becoming passive supporters. Meanwhile, other people see that X is winning, and decide to become Pro-Xers to get status and power.

Step 6: The problem starts to get better (or at least, people think it’s getting better). This makes the Pro-Xers popular and wins them lots of support.

Step 7: The Pro-Xers win more victories. At each step, the leadership becomes more radical, and more of the moderates retire and become passive supporters. The Pro-X leadership starts becoming corrupt, using their high status to do favors for their friends, and so on.

Step 8: Either out of partisanship or ignorance, the media doesn’t notice the steady radicalization of X, and assumes that the current Pro-X positions are similar to the old, moderate Pro-X positions. All of the old moderates continue to support X due to the halo effect.

Step 9: X is now a toxic group. The leaders of the Pro-X movement are a mix of radical extremists and self-interested power brokers, and X is so influential that they’re now effectively dictators, with people bending over backwards to give them what they want.

Don’t Bother Arguing With su3su2u1

su3su2u1 is a pseudonymous Internet author who posts to many places, most notably Tumblr. He has argued, at great length, that MIRI is not a real research organization and that Eliezer Yudkowsky is a crackpot. Many have written responses, including me and Scott. Instead of writing yet more replies to su3su2u1’s claims about MIRI, I’d like to explain why everyone arguing with him should stop wasting their time.

EDIT: I should emphasize that the point of this post isn’t to criticize su3su2u1 just for the sake of it, but to save people from wasting their time arguing with him. Since this is my own advice, I will follow it, and not comment further on su3su2u1’s writings after today.

Although Eliezer is not a crackpot, I think everyone must admit that he (though not MIRI’s other researchers) has several apparent signs of crackpottery. These include not having a formal education; writing mostly for his own websites, instead of peer-reviewed journals; not having had an established reputation within AI when he first started writing about AI safety; and not co-authoring his papers with well-known AI researchers. su3su2u1 frequently criticizes MIRI and Eliezer on these grounds; his stated theory is that he criticizes them because of these signs of crackpottery. Others have argued that these are merely excuses, and that su3su2u1 criticizes because he dislikes Eliezer personally. If the “crackpot signs” all went away, under this alternative theory, su3su2u1 wouldn’t change his mind; he’d just make up new reasons for claiming Eliezer/MIRI are crackpots.

To test these theories, we could, if you will, imagine an alternate-universe Eliezer – an Eliezer-Prime – who has unconventional ideas, but none of the “crackpot signs”. For example, we could say that:

– While Eliezer himself was mostly self-educated, Eliezer-Prime got his degrees from MIT, one of the top technical universities in the world.

– When he first thought of “Friendly AI”, Eliezer wrote about it on his own website. But when Eliezer-Prime got his big idea, he instead first published it in the Proceedings of the National Academy of Sciences, one of the most respected scientific journals in the world.

– su3su2u1 has said that, if he were running MIRI, his first priority would be to make everyone get a PhD. We can say that Eliezer-Prime got a PhD. For extra impressiveness, let’s say that he also got his PhD from MIT.

– Of course, a PhD might not mean that much, if it’s in an unrelated subject. If you’re an expert on X, you might still be a crackpot on Y. So we’ll specify that Eliezer-Prime was actually awarded his MIT PhD for his “crazy ideas”. His PhD thesis, of course, would have been approved by his doctoral committee, all MIT professors. We’ll pick the most renowned professors we can find for his committee, like Marvin Minsky and Gerald Sussman.

– Eliezer’s publications on Friendliness mostly aren’t peer-reviewed. But Eliezer-Prime’s are, of course. Eliezer-Prime has a track record of many relevant, technical, highly-cited and peer-reviewed publications, in respectable scientific journals.

– MIRI, which employs Eliezer, is relatively new and not that prestigious. Instead, we’ll give Eliezer-Prime a job at, say, Oxford, the oldest and arguably most prestigious university in the Anglosphere, with previous positions at other top universities like Stanford.

su3su2u1 might not agree with Eliezer-Prime, but hopefully, he wouldn’t just dismiss him as a crackpot. If he disagreed, he’d treat it like a serious discussion with a well-respected researcher in the field, and back up his points with technical, peer-reviewed sources.

Alas, it is not to be. Eliezer-Prime is real – his name is Dr. Eric Drexler, the founder of the field of nanotechnology – and su3su2u1 commented thus:

“The whole thing [molecular manufacturing] is pseudoscience. The founder, Drexler, is a crackpot himself.” [EDIT: su3su2u1 says that this quote is from a different person with the same username. However, he’s also said that he does in fact endorse this quote, so I am not misrepresenting him. See discussion below – my apologies for any confusion.]

In his dismissal of Drexler, su3su2u1 included no math, no equations, and no technical work. He based his arguments on loose verbal analogies and (unsourced) claims that Drexler was unaware of even the basics of the field he invented, like scaling laws (which Drexler’s MIT PhD thesis spends an entire chapter on). He cited only one source, an article by chemist Richard Smalley, which wasn’t technical and wasn’t even peer-reviewed. Rather, it was a two-page pop-sci magazine piece, centered around a silly analogy comparing molecular chemistry to romance. Via Tumblr, I politely asked su3su2u1 for links to technical, peer-reviewed sources that rebut Drexler’s ideas; he has so far declined to reply. If he ever does, I’d be happy to post them below. But until then, it seems safe to say that the “making up excuses” theory is vindicated, and that su3su2u1’s mind just isn’t going to change, no matter how many of his arguments are proven false.

EDIT: su3su2u1 has written a response, with two major points.

The first is that he continues to dismiss Drexler, and continues not to provide any technical arguments or peer-reviewed sources as a basis for his dismissal. He says, “I believe you can find similar physicists making the same argument by walking into a material science department and asking any physicist about it.” But if that’s true, where are the peer-reviewed sources? Per Google Scholar, Drexler’s book Nanosystems (an edited version of his PhD thesis) has been cited over 1,700 times. His nontechnical book, Engines of Creation, has been cited over 2,200 times. His original 1981 paper in PNAS has been cited over 500 times. If su3su2u1’s opinions are common among physicists, surely there’s a peer-reviewed source which discusses them, somewhere in all those thousands of citations.

The second is that he says su3su2u1 is a common username, and one of the sources I cite is not actually him, but a different person using the same handle. [EDIT: This discussion has been moved to the comments, per Douglas Knight’s recommendation.]

On the Voynich Manuscript

The Voynich Manuscript is one of the most famous mysteries in the world. It’s a book from the 15th century, but no one has been able to identify what language it’s written in, or even what alphabet it uses. So many crazy theories have been proposed that one writer invented the Voynich Bullshit Index to score them. Of course, I haven’t solved the mystery, but I’ve spent a few weeks thinking about it over the last couple of years.

After weighing the evidence, it seems extremely likely that the Voynich is simply written in an unknown natural language, rather than a cipher, a code, or one of the more exotic options listed by Wikipedia. The first major reason is that Voynich writing passes most known statistical tests for natural languages, such as Zipf’s Law. Since Zipf’s Law wasn’t discovered until the 20th century, it would have been impossible to deliberately fake. The second reason is prior probabilities: the number of manuscripts written in languages that now can’t be read (such as Etruscan or Linear A) is pretty large, while the number of manuscripts written entirely in ciphers is very small. The third major reason is one of information asymmetry. In 2015, cryptography is vitally important to the world economy; hence, we know far more about cryptography (and associated disciplines like steganography) than the ancients did. On the other hand, since lost languages are unimportant economically, very little is known about many of them; what information exists is usually locked up in obscure manuscripts, not available online. A native speaker, by contrast, would obviously know their language well. The Voynich is very mysterious to us, but probably not mysterious to its writer, who (from handwriting analysis) is known to have written it quickly and fluidly; hence, the information asymmetry matches a natural language and does not match a code. This paper summarizes some of this evidence, and concludes from machine learning analysis that the Voynich is most likely written in an abjad, an alphabet without vowels (like Arabic or Hebrew).
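
As a rough illustration of the kind of statistical test involved – a minimal sketch, not any published study’s method – here’s how one might check a Zipf fit. The corpus filename is a placeholder; in practice one would use a transliteration of the Voynich glyphs, such as an EVA transcription.

```python
# Minimal sketch: test how closely word frequencies follow Zipf's Law,
# which predicts frequency ~ 1/rank, i.e. a slope near -1 on log-log axes.
from collections import Counter
import math

def zipf_slope(words):
    freqs = sorted(Counter(words).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

if __name__ == "__main__":
    # "voynich_eva.txt" is a hypothetical filename; substitute any
    # transliterated corpus, one word per whitespace-separated token.
    words = open("voynich_eva.txt").read().split()
    print(zipf_slope(words))  # natural-language texts come out near -1
```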

My own best guess is that the Voynich is written in the Cuman language. To the best of my and Nick Pelling’s knowledge, no one has ever proposed this theory, which seems shocking considering the sheer number of Voynich hypotheses (this long page just lists some of the more popular ones). Cuman is, by the standards of lost languages, quite well understood; it has a Wikipedia page, substantial surviving literature, and it’s very clearly related to modern languages like Kazakh. This seems like a strong indicator that this class of theories is under-explored: if nobody’s thought of Cuman before, there are surely many other lesser-known languages that haven’t been looked at either.

Evidence in favor of Cuman:

– Cuman, unlike almost every language spoken in Europe, is non-Indo-European. (It’s a Turkic language, related to Turkish and Kazakh.) This would explain the Voynich’s lack of typical Indo-European language features.

– Cuman (like Turkish and Manchu, another proposed language) employs vowel harmony, which several researchers have observed in the Voynich glyphs.

– The Voynich’s first known owner was Emperor Rudolf II, who was also king of Hungary. Cuman was spoken widely in Hungary in the early 1400s (per Wikipedia, the last known Cuman speakers died in Hungary in the 18th century). The Golden Horde also used Cuman extensively, and it was in wide use across the territory they conquered (eastern Europe through central Asia) during the 14th century. Hence, it makes geographical sense in the relevant time period.

– Like many central Asian languages of the time, Cuman appears to have lacked a written script. We know the 14th-century Church wrote dictionaries for translating it into Latin, to help convert the Cumans to Catholicism. Hence, it makes sense that a script would be invented for it.

– Computer analysis of the letter frequency distribution of the Voynich shows that the best match to Voynichese is Moldavian (Moldavia, next to Hungary, was also home to Cumans), followed by two other languages of the former Golden Horde area (eastern Europe and what is now Kazakhstan). A toy version of this kind of comparison is sketched below.
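
Here’s that toy version – a sketch of the general technique, not the cited analysis’s actual method. It assumes away the genuinely hard step for the Voynich, which is deciding how its glyphs map onto any shared alphabet in the first place.

```python
# Sketch: compare two texts' letter-frequency distributions by cosine
# similarity (1.0 = identical distributions, 0.0 = no overlap).
from collections import Counter
import math

LATIN = "abcdefghijklmnopqrstuvwxyz"

def letter_freqs(text, alphabet=LATIN):
    counts = Counter(c for c in text.lower() if c in alphabet)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in alphabet]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine(letter_freqs("sample text one"),
             letter_freqs("another sample text")))
```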

Budgeting in Effective Altruism

This post is a slightly cleaned-up version of an email conversation I had with the brilliant and friendly Kelsey Piper, after reading her blog post last July on budgeting as an EA. Since it was originally an email discussion, some parts might be vague or unclear, for which I apologize in advance. Published with permission.

In summary: Earning-to-give breaks down into #1, having a large income; #2, giving a large proportion of that income; and #3, choosing an effective and underfunded place to give to. The current connotations of ETG emphasize #2 > #1 > #3, but the order of problem difficulty is #3 > #1 > #2. A large portion of the US, especially wealthy people, already has #2 down (US charitable giving is ~2.5% of GDP). But >99.9% fail at #3 – either only donating to non-weird things that rapidly become overfunded, or donating to scams/cults/pseudoscience and other ripoffs.

Me: Hi Kelsey, I just read your Tumblr post on budgeting. I mostly agree with what you’ve said, but I’m pretty sure it’s counterproductive to talk about going on a budget for purposes of donating – there are already huge piles of money sitting around unused everywhere that nobody really knows what to do with. That may sound weird, but let me give some examples of people in our social network.

As you’ve probably heard, Dustin Moskovitz is worth about $9 billion, and he plans to donate the large majority of that to charity through Good Ventures/Open Philanthropy Project. Private foundations are legally required to donate 5% of their assets each year, so for an $8 billion endowment that would be annual spending of ~$400 million or more. It would take ~20,000 people getting high-paid jobs in Silicon Valley, and donating 10% of their income a year, every year, for the rest of their lives, to equal one Dustin Moskovitz. And there’s every reason to think that Dustin isn’t a one-off special case. At least 137 billionaires have signed Bill Gates’s pledge to donate half their wealth, there are several other billionaires that people within EA are already discussing philanthropy with, and the EA idea is in the middle of a big publicity/coolness boom that shows no signs of slowing down.

But even forgetting about the ultra-wealthy… last week, I was talking to one of my friends at Google, whom I’ve known since he started working there some years ago. Within the company, he isn’t especially important or famous or anything; he just has a normal Google job with a normal Google salary. He mentioned, to my surprise, that since he started working he hadn’t bothered to cash in any of his Google stock, which by now must be worth half a million dollars or more. He has a family, he lives in the Bay Area, and as far as I know he isn’t especially frugal; he just never had a good enough reason to spend any of it. After you reach a certain point, there just isn’t that much to spend more money on.

And even within the realm of spending less to donate more… doing some rough math, there are at least half a million people in the Bay Area who own houses that are worth over $1,000 per square foot. If one of them sold their house, and moved into a new house that was just 100 square feet smaller – a barely noticeable change – they’d have over $100,000 to donate. That’s the same donation size as someone making the US median income of ~$40,000 taking the 10% giving pledge, and then sticking to it every year for the next two and a half decades. (Of course, you wouldn’t literally switch houses over 100 square feet, because of transaction costs – I’m just trying to illustrate the general point.)
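
(For anyone who wants to check the arithmetic in the last few paragraphs, here’s a quick sanity check; the $200,000 salary figure is my own stand-in for a “high-paid Silicon Valley job”.)

```python
# Sanity-checking the back-of-the-envelope numbers above.
foundation_payout = 8e9 * 0.05     # 5% annual payout on an ~$8B endowment
donor_year = 200_000 * 0.10        # assumed $200k salary, 10% donated
print(foundation_payout / donor_year)  # 20000.0 donors to match it

house_savings = 100 * 1_000        # 100 sq ft at $1,000 per sq ft
pledge_year = 40_000 * 0.10        # 10% of ~$40k median income
print(house_savings / pledge_year)     # 25.0 years of pledging
```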

In spite of all that, I do think there are cases where donating makes sense even if you aren’t that wealthy. In particular, if what you donate to is so strange or so new or so unpopular that virtually nobody else would be willing to fund it, then donating is likely a reasonable idea (and I have donated several times on that basis). But overall, it seems likely that given the ginormous overall wealth of the Bay Area, for someone who has any use for marginal dollars beyond buying luxuries that they don’t care much about, budgeting to give more is penny-wise and pound-foolish.

Kelsey: I think that people who make significant lifestyle changes consonant with their identity as an EA are likelier to get the right answers to hard effectiveness questions (giving a painful amount of money away made me value rationality much more and in a much more direct, immediate, pressing way; there’s various evidence that people are less biased when there’s money on the line). I don’t want people who don’t give identifying as EAs. I think it turns the movement into virtue-signaling faster than almost anything else could. If I were in charge I’d set an actual “if you don’t give at least this much you’re not an effective altruist” threshold because I really do think that our most likely failure mode is becoming a movement as meaningless as the “green!” label on food, and an expectation of giving (from everyone middle-class and up) prevents that.

Me: I largely agree, but I think the situation is more complicated than the world-model implied by what you’re saying.

Forgive me for the silly metaphor, but say there’s a big asteroid hurtling towards the Earth. (I’m using this as a metaphor for x-risk, but also for “ordinary” bad things like poverty, disease, aging, and so on.) We need to build lots and lots and lots of nukes to blow up the asteroid before it hits us. There are basically two components to a simple nuclear weapon: there’s the highly enriched uranium (HEU), and then there’s a casing with explosive charges, which collides two pieces of HEU against each other at high speed. Making the casing is far from trivial, but a competent team of electrical and mechanical engineers can probably figure it out, at least well enough. On the other hand, making the HEU is enormously difficult. Even national governments with huge laboratories and thousands of scientists and multi-billion-dollar budgets often fall flat on their faces. Metaphorically, the casing represents raising money for improving the world, and the HEU represents the ability to convert money into utilons with reasonably high efficiency.

In this metaphor, GiveWell’s original model corresponds to a group that hires two teams: one of them starts work on making casings, and the other goes around to all the world’s nuclear research labs, calls them up, and asks if they have any HEU just lying around which they aren’t using for anything. On the one hand, this is certainly a good idea. On the other, it’s not really tackling the hard part of the problem; you’re just piggybacking off other people’s existing solutions. I’m hugely impressed by how Holden recognized this and took OPP in a different direction, and how they’re now tackling the hard part of the problem head-on (at least, as head-on as anyone else has).

In the metaphor, you seem to be saying that the project to stop the asteroid should be composed 100% of experts on casings, who are actively helping to manufacture them. That’s certainly better than a project which does nothing, which (as you said) is the default failure mode. But it puts the emphasis – and therefore things like social-status rewards – on the half of the problem that’s by far the easiest to solve. It’s also (breaking the metaphor) dangerously close in memespace to making the movement about showing off self-sacrifice, which is a failure mode that humans are probably evolutionarily adapted to fall into. I think this is why people keep suggesting things like giving blood, donating kidneys, and so on, despite them not being plausibly effective.

The flip side is that, unfortunately, I think you’re right when you say that not having a donate-to-enter threshold makes it easier for the movement to degenerate into meaninglessness. It’s easy to judge whether someone is donating, and then not award status to people who don’t; no one really knows how to award status to [figuring out how to turn money into utilons in a non-domain-specific way] in a way that’s resistant to cheating. But I also think that, if we’re aiming to solve a significant fraction of the world’s major problems, we should kinda expect to have to tackle murky, difficult problems that nobody really knows how to handle yet.

Kelsey: Hmm. I hadn’t thought about that before (finding ways to convert money into utilons being the Hard Problem). I guess because I’ve always sort of thought of the economy as being a fairly efficient money-to-utilons machine. But I agree that we need more people doing research about what is effective; maybe they should all be people like Holden, who first earned a lot of money he wanted to give away and then pivoted to figuring out how to give it away? This admittedly involves wasting person-years of work doing something that, in your model, is mostly signaling. But I don’t think it’s totally signaling; the research isn’t actually a limiting reagent on the good the money can do, just a multiplier – and it might also involve learning skills that make one a better researcher, too. Maybe we should point people towards careers that involve making high-stakes decisions with tight feedback loops, to hone the skills we eventually want them to use on figuring out multipliers.

And suffering is an attractive failure mode because it’s costly signaling of commitment, and you can’t actually do without costly signaling of commitment, if commitment is important. You can at least demand that the costly signaling not compromise future ability to do good? I hope? If someone donated a kidney I’d trust them more with my money (well, with the lives of currently existing humans). I wonder if that emotion is justified.

Me: I think “the economy” is mostly just a bad category – it takes a huge number of dissimilar things and throws them together in the same box, to the point where measurements of “the economy” (GDP, unemployment, inflation, etc.) are at best rough guesses and at worst outright lies. Economics contains a fair amount of useful knowledge within it, but IMO it really needs an overhaul of about half of its ontology. This isn’t really that surprising for a science at such an early stage – you could think of it like, say, chemistry in the 17th century. There are lots of observations and rules and procedures that basically work, but there are still central concepts like “transmutation” that need to be thrown out, and other ones like “valence electron” that haven’t been discovered yet. (Not that I know how to do that – I have guesses, of course, but this’ll be a major decades-long project, just like the invention of modern chemistry was.)

I think a better metaphor is to see the world as a collection of machines. A “machine” isn’t a literal mechanical device, but a collection of devices, procedures, memes, writings, traditions, institutions, Schelling points, and so on that operate together to reliably produce certain results. Some machines work well; others work surprisingly badly; and a great many simply fail to exist or haven’t been invented yet. You could say that entrepreneurship, in a broad sense, is the creation of a new machine; FDR and Florence Nightingale were entrepreneurs by that definition. Machines can also be destroyed, and of course they constantly evolve in response to the forces around them.

The way you produce happy lives for a large number of people – a larger number than you could help directly with your own muscles – is to build a set of machines that, taken as a whole, reliably give people what they want. (What exactly they do want is a whole other complex topic, and a central question to eg. MIRI’s FAI theory. But for now, we can just say that eg. no one ever wants to get infected with malaria.) In some cases, these machines already exist, and you can freely make use of them when setting up your own stuff. Eg. if your plan is to help people by setting up a gold-mining operation in Kenya, there already exists a very efficient machine to buy, sell, transport, refine, distribute, and price gold that you can take advantage of. You can more-or-less just bring big sacks of gold dust to downtown Nairobi, and hand them off there – you can trust that someone else will take care of utilizing them in the most efficient known way. However, this machine only exists because of a number of background conditions:

– fungibility: one ounce of gold is the same as any other ounce
– perfect information: it’s easy to tell if a bar is made of gold or not
– cheap shipping and distribution: the cost of transporting and distributing an ounce of gold is far less than the gold itself
– practical contract enforcement: there exist organizations which would be meaningfully punished if they just stole all your gold, so they don’t do so
– (a bunch of others I won’t get into)

By contrast, if tomorrow you discovered a cure for cancer, by itself that would be more-or-less useless. There’s no machine for evaluating and pricing and manufacturing and distributing cancer cures. You’d have to build one yourself, and that’s a huge amount of work and requires lots of different skills – dealing with bureaucracies, hiring and managing employees, raising funding, conducting human trials, and on and on and on. If you don’t happen to have those skills, then people will keep dying of cancer. (One example I have personal familiarity with is Dr. Eric Lagasse’s work on liver regeneration – we tried to build a machine for distributing this to patients, and fell flat on our faces, despite being IMO smart and capable in other domains.)

There isn’t any limit on how powerful a machine can be – the easiest historical example is Gutenberg’s printing press, the important part of which wasn’t really a “press” so much as a new set of techniques for making and using metallic type. On the other hand, trying to build an arbitrarily powerful one faces two fundamental constraints. The first is that, to be very powerful, it has to be fundamentally dissimilar from anything that many other people are trying to do. If it were similar to ones that tons of other people were already building, eg. how to make a better lithium-ion battery, odds are someone else would have built it already. The second constraint is that the vast majority of really original ideas are terrible; if you just naively disregard existing constraints, then you’ll probably fail, because reversed stupidity is not intelligence. (Paul Graham and Peter Thiel talk about this at length in Startup Ideas and “Zero to One”, respectively, though it’s a counterintuitive enough idea that you have to sort of see it from many angles to understand it well, kinda like the proverbial elephant with the blind men.) So to succeed, you have to know something that other people don’t; to do that, you have to know how to recognize which things you don’t know; and knowing how to recognize which things you don’t know is just really really hard. Eliezer’s Sequences are the best attempt I’ve seen so far to teach it (Artificial Addition is one particularly good example), and I like to think I’m pretty smart, and even so I don’t think I really understood it until having read them three or four times over about six years.

In keeping with the analogy, any given machine, once built, usually only works within a given set of operating parameters. You can make your car put out 100 kW instead of 50 kW by pressing the gas harder, but you’ll never make it produce 10,000 kW, because it’s designed to top out at 200 kW or thereabouts. Similarly, any given charity or type of charity can only handle so much money before it clunks out. And charities (or any other machine) that can operate productively under a load of even one percent as much money as the developed world has – tens or hundreds of billions a year – are more-or-less nonexistent because of various scaling issues. You’ve probably read that humans are evolutionarily adapted to work in small groups, from a handful up to 100 or so; the further you go beyond that, the more you’re stretching the cognitive abilities of the poor saps who have to run the thing beyond their natural design limits. One of the very few well-understood ways around this is to avoid tackling the scaling problem yourself, by just redistributing the money to others in some simple, well-defined way. But precisely because this is one of a very few well-known ways around a critical bottleneck, it’s one that’s extremely popular, and you’d therefore need a huge amount of resources to substantially add to what’s already being done (IIRC, even ignoring existing aid altogether, there’s already over $300 billion per year in direct remittances to the very poor from friends and family).

Hence, under this framework, the two largest ways to contribute at the margin are:

– to build a new machine where the type of machine is relatively well-understood, and the bottleneck is that the existing machines can’t scale well and the type of labor required to build new ones is scarce; this covers both creating new charities to address tropical diseases, and most “ordinary” software entrepreneurship, as well as many other things
– to build a new machine where the type of machine isn’t well-understood, and the bottleneck is the skill and background knowledge to have the required insights into what blanks need filling in; Eliezer is one example of someone we know who’s AFAICT succeeded at this, but successes here are necessarily much rarer than in the first category

By “build”, what I really mean is “contribute to building in a relatively non-replaceable way”; there are usually many different types of skills required, hence many opportunities to contribute. And it’s certainly true that one opportunity is “provide the initial rounds of funding”. However, in order for your financial contribution to be non-replaceable, you yourself must have the same types of unusual cognitive abilities as the people running the organization – the ones that make them able to succeed when most others couldn’t. If you yourself only have ordinary-programmer cognitive abilities, and not (for example) figure-out-which-organizations-aren’t-likely-to-get-torn-apart-by-internal-conflict abilities, then on average your funding will just go to the same place as the ordinary programmer’s. And so either you won’t fund the organization at all, or lots of ordinary programmers will fund it too and your funding won’t mean much on the margin.

And you can’t outsource your judgement to an organization-evaluator – because if your ability to judge the judgement of organization-evaluators is the same as an ordinary programmer’s, then lots of ordinary programmers will follow the recommendations of the organization-evaluator and you get the same problem. The ability to contribute by offering funding is, to a first approximation, only valuable insofar as the funder personally has unusual abilities, not possessed by any billionaire or by more than a small fraction of Silicon Valley career software developers, to judge which things need more money and which need less. (And if you do have that ability – not meaning Kelsey-Piper-you here, but hypothetical-abstract-you – and don’t already have a good chunk of change to contribute, why not become an accountant? All the important-to-humanity organizations I’ve been closely involved with have been in desperate need of good accountants. Again, it’s not accounting itself that’s valuable here, but accounting combined with highly-unusual-for-accountants-judgment-of-which-organizations-to-contribute-to.)

Citations in Math Academia

Many Internet commenters have criticized MIRI for not producing enough research, relative to their budget – not writing enough papers, or not getting those papers peer-reviewed, or not getting enough citations. However, MIRI’s specialty is math and computer science, which might have lower citation counts than experiment-heavy fields like chemistry or biology. For a quick sanity check, I looked up a few non-MIRI mathematicians as points of comparison.

Grigory Margulis is probably the most accomplished mathematician I’ve personally met. He’s a Fields Medalist, won the Wolf Prize in 2005, and is doing pure math as a Yale professor full-time, so it seems reasonable to assume he’s in the upper quantiles of productivity. A Google Scholar search for the last five years turned up eight papers that he’s co-authored; by my count, those eight papers (combined) have 35 citations, of which nine are self-citations. All of those papers had multiple authors, so it took well over five person-years of total effort to produce them.
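
Making the implied rate explicit, using only the numbers above:

```python
# The rate implied by the numbers above: external (non-self) citations
# per person-year of effort, for the Fields Medalist example.
total_citations, self_citations = 35, 9
person_years = 5  # "well over five person-years", so this is a floor
rate = (total_citations - self_citations) / person_years
print(rate)  # 5.2 external citations per person-year, at most
```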

But of course, a Fields Medalist isn’t a representative math researcher. One friend of mine recently got a math Ph.D. from an elite university; as a grad student, they spent years doing math research full-time, and they also did a lot of part-time research as an undergrad. They wrote several papers while in grad school, plus (of course) a dissertation, but these don’t appear to be on Google Scholar; presumably they’re still awaiting publication, or they weren’t published in a place Google indexes. They also published two papers before grad school, of which only one was peer-reviewed; these two papers have 13 total citations, of which five are self-citations.

Another friend of mine got a math Ph.D. some years ago, from a less elite university. They wrote four papers which appear in a Google Scholar search. Of those four, one wasn’t a math paper, and was published long after they graduated; one was their dissertation; one was posted on arXiv, but doesn’t seem to have been formally published; and one was published as a conference paper. Excluding the non-math paper, the remaining three papers have eight total citations.

Another friend of mine just got a math Ph.D. from an elite school, and is taking an academic job after graduating. They’ve written a number of papers, given talks, etc.; but again, a lot of these don’t appear on Google Scholar. Three of their papers are on Google Scholar, but all three appear to be arXiv papers that haven’t been formally published, and the three papers have three total citations.

But all that might just be selection bias in who I know. To check, I picked two math postdocs – one from Stanford, one from Berkeley. Their CVs list a combined total of eighteen papers, of which seven have been formally published, four are listed as “accepted” but not yet published, and the remaining seven are on arXiv or self-hosted. Of these eighteen papers, the most-cited one had a total of 13 citations, of which four were self-citations.

(Disclaimer: I’m not a math academic; comments and corrections from people who are would be appreciated.)

