This is a response to su3su2u1‘s critique of the Machine Intelligence Research Institute (MIRI).

“MIRI bills itself as a research institute, so I judge them on their produced research. The accountability measure of a research institute is academic citations.”

The author is obviously smart, but there are really two distinct claims here, and he/she confuses the issue by equivocating between them:

Claim 1: Number of academic citations is in fact a perfect or near-perfect indicator of the quality/importance of a body of research. Hence, if a body of work has few citations, we can safely ignore it as low-quality or unimportant.

Claim 2: Number of academic citations is treated by certain institutions as such an indicator. Hence, to obtain status within these institutions, it is instrumentally useful to get more citations.

I think claim #1 can easily be shown to be false. There are many strong arguments against it, but one obvious one is that the absolute number of citations a paper receives equals the fraction of researchers in its field who cite it, multiplied by the total number of researchers working in that field. And the sizes of fields vary wildly.
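To make that arithmetic explicit, here is a minimal sketch; the field sizes are assumptions chosen purely for illustration, not actual headcounts:

\[ C = f \times N \]

where C is a paper’s citation count, f is the fraction of the field that cites it, and N is the number of researchers in the field. If mathematics has on the order of N active researchers while medical research has on the order of 10N, then two papers cited by the same fraction f of their respective fields will differ in raw citation count by a factor of ten, even though they are equally “important” by the within-field measure.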

E.g., consider Perelman’s paper “The entropy formula for the Ricci flow and its geometric applications”. This paper, the first in the series that proved the century-old Poincaré conjecture, was hailed as one of the most important mathematical advances of the 21st century. The proof was the first resolution of a million-dollar Millennium Prize problem, and Science named it its “Breakthrough of the Year”, the only time it has ever done so for a mathematical result (as opposed to a discovery in the physical world). According to Google Scholar, the paper has been cited 1,382 times.

Now, contrast Perelman’s famous paper with the medical research paper “Familial Alzheimer’s disease in kindreds with missense mutations in a gene on chromosome 1 related to the Alzheimer’s disease type 3 gene”. This paper has 1,760 citations. I wouldn’t call it “unimportant”, but I doubt even Rogaev (the lead author) would claim it’s more important than proving the Poincaré conjecture. Medical research is simply a much larger field than mathematics, so medical papers will accumulate many more citations than math papers of equal importance. Even MIRI’s toughest critics would be hard-pressed to argue that MIRI’s research is less important or of lower quality than the paper in which a doctor rediscovered freshman calculus, which has 75 citations.

By definition, the foundational work in any field is done when the field is new and small. And a new, small field will always have fewer citations than an established one, partly because of the issue above (fewer researchers means fewer citations), and partly because there has been less time for citations to accumulate. So, if we are to believe Claim #1, foundational work is of lower quality and less important than work in a mature field where the low-hanging fruit has already been picked. I think everyone can recall prominent counterexamples.

More on Claim #2 in a bit.

“You can measure how much influence they [MIRI] have on researchers by seeing who those researchers cite and what they work on. You could have every famous cosmologist in the world writing op-eds about AI risk, but its worthless if AI researchers don’t pay attention, and judging by citations, they aren’t. (…) This isn’t because I’m amazing, its because no one in academia is paying attention to MIRI.”

This is a separate, third claim: that MIRI’s citation count is a good measure of how many researchers are paying attention to it. This claim is never justified; it’s simply assumed. And if one directly asks the question “how many prominent academics are paying attention to MIRI?” – rather than assuming citations are a good proxy and then measuring the proxy – even the most cursory Googling shows the answer is “quite a lot”. A far-from-complete list:

… and one could go on for a while, but I think the point is made. When data and theory contradict, one must throw out the theory, not keep the theory and throw out the data.

“And yes, I agree this one result looks interesting, but most mathematicians won’t pay attention to it unless they get it reviewed.”

This is an argument from claim #2: that regardless of whether citations to peer-reviewed papers are a good measure of quality, you need them to gain credibility. Claim #2 is, in fact, largely true within American research universities. However, I think it’s much less true for individual scientists, a number of whom have published scathing critiques of the current academic publication system. I’m pretty sure many, possibly most, younger researchers in math and computer science think of publishing in Elsevier and other for-profit journals as a necessary evil for getting ahead within the current system. Since MIRI isn’t part of that system, why should they bother?

The author later suggests that MIRI should post their math papers on arXiv, an alternative to traditional journals. This is a great idea, and I support it 100%. However, the original claim was not that MIRI should post to arXiv, but that (to quote) “Based on their output over the last decade, MIRI is primarily a fanfic and blog-post producing organization. That seems like spending money on personal entertainment.” That claim is simply not supported by the evidence.

“If they are making a “strategic decision” to not submit their self-published findings to peer review, they are making a terrible strategic decision, and they aren’t going to get most academics to pay attention that way.”

This is another argument from claim #2, and it flies in the face of the evidence mentioned above. Moreover, MIRI (unlike labs that depend on government grants) does not primarily aim to maximize academic attention; its main goal is simply to get the math done as quickly as possible. Some attention is probably good, but too much would be actively harmful: being a celebrity is distracting and a huge time sink.

Moreover, any academic will tell you that peer review is not simply “submitting” a research paper, the way one submits an essay as an undergrad. It is typically a months-long process that demands a great deal of time and mental energy. This cost becomes obvious when you consider that MIRI once produced seven writeups from a single week-long workshop. Even if a few of those writeups were combined into larger papers, how many weeks would it take to get them all through peer review? Twenty? Forty?

“I didn’t know Russell was in any way affiliated with MIRI, he is nowhere mentioned on their staff page, and has never published a technical result with them.”

Russell and Norvig on Friendly AI

And while this other interview doesn’t explicitly mention MIRI, it’s pretty obvious that the ideas derive from Yudkowsky, Bostrom, and other MIRI-sphere folks:

“It’s very difficult to say what we would want a super intelligent machine to do so that we can be absolutely sure that the outcome is what we really want as opposed to what we say. That’s the issue. I think we, as a field, are changing, going through a process of realization that more intelligent is not necessarily better. We have to be more intelligent and controlled and safe, just like the nuclear physicist when they figured out chain reaction they suddenly realized, “Oh, if we make too much of a chain reaction, then we have a nuclear explosion.” So we need controlled chain reaction just like we need controlled artificial intelligence.”

“If he [Russell] is interested in helping MIRI, the best thing he could do is publish a well received technical result in a good journal with Yudkowsky. That would help get researchers to pay actual attention.”

I don’t doubt that this would be a good thing, but it’s worth noting that MIRI has a long history of being advised to do various things to gain more academic credibility, and of that advice failing more often than not:

“If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!” And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who disbelieved, saying, “Come back when you have a PhD.” (…)

And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point. They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.” (…)

It has similarly been a general rule with the Singularity Institute [now MIRI] that, whatever it is we’re supposed to do to be more credible, when we actually do it, nothing much changes. “Do you do any sort of code development? I’m not interested in supporting an organization that doesn’t develop code” —> OpenCog —> nothing changes. “Eliezer Yudkowsky lacks academic credentials” —> Professor Ben Goertzel installed as Director of Research —> nothing changes. The one thing that actually has seemed to raise credibility, is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board.”

Moreover, it’s not at all obvious that publishing with Russell or other famous professors (in and of itself) gets people that much attention. Over Russell’s lengthy career, how many Berkeley grad students have co-authored with him? And of those, how many got anywhere near as much academic attention as MIRI already has (as demonstrated by the above links) as a direct result of co-authoring, rather than becoming famous for something else years later? I haven’t counted, but I know which way I’d bet.

(Disclaimer: I am not a MIRI employee, and do not speak for MIRI.)