Quixey Is Shutting Down

The app search startup Quixey is shutting down; it was previously valued at $600 million in a 2015 financing round. Since the only English article covering the shutdown is paywalled, the below is a (bad, sorry) translation from Chinese websites. Wikipedia has more background information on the company. Quixey founder Tomer Kagan stepped down as CEO last year, and is now running a new startup named Sigma.


According to informed sources, Quixey, a mobile search firm backed by Alibaba, has shut down, and a large number of employees were laid off two weeks ago. Quixey was founded in 2009 to provide app search capabilities for large vendors, carriers, search engines, and web applications. Early on, it received $400,000 in seed investment from Innovation Endeavors (Google chairman Eric Schmidt’s investment firm).

The reports have been confirmed. In response to inquiries, Alibaba said: “Alibaba has been the largest financial supporter of Quixey and its founders. Unfortunately, because the company’s development did not meet expectations, the board of directors decided to wind down the company’s business. We will continue to invest in the US market and to support entrepreneurs and innovation.”

The company raised 60 million US dollars in 2015, in a round led by Alibaba and SoftBank. Twitter, Goldman Sachs, GGV Capital, Google chairman Eric Schmidt, George Soros, and other investment institutions have also invested. In total, Quixey has raised $130 million to date.

Unlike keyword-based search engines such as Google and Baidu, Quixey is a functional search engine specifically for mobile applications, spanning a variety of platforms. Quixey’s app search is based not on an application’s title, metadata, or description, but on what the user wants to do and which features they need. For example, if you search for “book a hotel,” Quixey will return a list of all the applications that provide this feature, selected and filtered for the appropriate platform. Search results may include [???], and it supports iOS, Android, and other operating systems, somewhat similar to Baidu’s light application distribution platform.

Quixey also works with major companies, such as the US Q&A website ask.com and StarHub, Singapore’s largest telecom operator, and has reached agreements with many North American businesses to install it in their default toolbars.

Moreover, using deep learning, Quixey’s index is not limited to the major app stores, but aims to cover apps from “the world around.” Its search spans major sites, blog comments, reviews, forums, and so on, drawing on user comments to return results. Its ambition was to achieve full platform coverage.

In the seven years since its founding, Quixey’s fundraising had gone relatively smoothly. Alibaba led the Series C round with a 50 million US dollar stake and continued its support, quickly becoming the largest investor. In November 2016, Quixey went through a period of turbulence: John Foster was appointed as the new CEO, replacing founder Tomer Kagan, and after revenue targets were missed, two executives, the COO and the CTO, left.

Four Layers of Intellectual Conversation

(By Eliezer Yudkowsky; posted here with permission)

Building an intellectual edifice requires ongoing conversation, and that ongoing conversation needs four layers of speech to be successful.

(Yes, four. Not three. Later I’m going to pretend I didn’t say that, but right now I’m serious and this is important.)

There is a widespread traditional notion that the total absence of critique is bad; that it is a bad sign to have a conversation consisting of people saying X and nobody saying “hey maybe not-X”.

Why is this bad? Well, because people could say stupid things about X, and nobody would call them on the stupidity. Yikes!

Okay, so here’s the thing: If the people saying “hey maybe not-X” don’t anticipate losing points from being called out on stupid critiques, that doesn’t create a conversation either. I am speaking here from awful experience and many people reading this will have seen the same.

A conversation that successfully builds an intellectual edifice has four *necessary* layers. I’m not saying “necessary” as an emphasis-word for how nice it is to have more layers. I mean, “If you eliminate the fourth layer, the mechanism falls apart.”

You need:

0: People saying X.
1: Critics saying “hey maybe not-X”.
2: Responses/counter-critiques saying “well maybe X after all”.
3: Counter-responses saying “but maybe not-X after all”.

If you eliminate layer 3, that means the conversational record doesn’t include critics responding to critiques of their criticism.

In other words, the critics saying “not-X” won’t anticipate needing to defend their “not-X” claims.

Layers 0-2 being visible in the record, but not layer 3, is the sort of situation you have when biologists speak of evolution (0, object-level claim), and a priest says something about evolution being true but only God being able to create the first life that started it (1, critique), and biologists reply with a detailed account of the current thought on abiogenesis (2, response/counter-critique); and the priest does not reply with detailed thoughts explaining why the current thinking on abiogenesis is technically flawed (absence of 3, the counter-response).

0-2 is what you have when, say, Eric Drexler is writing detailed technical books about molecular nanotechnology; and a famous chemist says something profoundly stupid indicating they have not read very far into this literature (e.g. “but the sticky-fingers problem!”); and Eric Drexler writes a response dissecting the critique, which doesn’t receive as much media attention; and the chemist doesn’t care or replies with something that is visibly not very detailed or thought-out.

Conversational layers 0-2 being visible in the record, but not much layer 3 or an unimpressive layer 3, generally represents a situation where the critic doesn’t expect that their criticism will come under harsh scrutiny to which the critic will be socially obliged to respond intelligently as part of the widely seen public record. On the critical side, that’s just as bad as there being just layer 0.

When I say layers 0-3 need to be there, I mean that there must be a social incentive to do them well; people must lose status points for saying dumb things at any of these layers. When as an outsider you look at the conversational record, all of these layers should not merely be present as checkmarks on a list. You should be looking for the same standards of impressive technical-sounding words, or of abiding by epistemic norms and discourse norms or whatever, as you would demand of the ground-level statements.

Now here’s the dire part: the current academic journal system, in practice, operates at layers 0-2. You submit your paper, and the reviewers offer a response, and you’re expected to have an intelligent response to the review. But these reviewers (often anonymous) do not expect to lose huge social brownie points if their critique is stupid. Even if the reviewer is supposed to offer some kind of counter-response for the record, it can be a casual and stupid counter-response, and nobody will go “Hey what the hell are you doing” at that.

Absent any incentive to be smart, the reviews are often really, really stupid; especially if the original paper doesn’t look to be authored by a high-status person. Though I’ve heard from more than one person with very high status in their field that even the reviews *they* get are stupid.

Modern academics treat stupid bad reviews as an environmental hazard. It’s not *conversation*. It’s not building the edifice of knowledge.

So where does the real conversation happen, in scientific fields, when there’s a proposition worthy of debate and not just another unquestioned fact to be recorded?

Maybe it happens in the bar at conferences, where people are speaking in realtime in front of their friends, and would actually lose status points if they uttered a dumb critique that was shot down and their counter-response looked stupid. Or maybe it happens on email lists. It could be happening on Facebook, for all I know of any particular field.

But the journals merely record an intellectual edifice that was built elsewhere. The real conversation that creates the intellectual edifice in the first place couldn’t happen with the journals as a medium.

The only time I’ve seen a stream of journal articles that looked like they were seriously *building*, not just *recording*, an intellectual edifice, it’s been in analytic philosophy. Analytic philosophy is about debating, qua debating, in a way that chemistry isn’t. I could be wrong, but I expect that editors of analytic-philosophy journals *do* expect intelligent counter-responses; and that reviewers expect to lose status points if they can’t come up with intelligent counter-responses. (Though this could be confounded by analytic philosophers having the highest IQ of any graduate group in academia, yes that happens to be a thing that is true.)

Unfortunately analytic philosophy lacks the ability to settle on any answer and declare it settled. So be it noted that just because you ought to demand high standards of both counter-critiques and counter-responses, it doesn’t mean that nobody’s right. Or that there isn’t such a thing as one side having an overwhelming weight of argument. Or that so long as both sides are writing long technical arguments they must have equally socially respectable positions which is all there is to epistemics, etcetera.

Mainstream media that pretends to be serious pretends to have layers 0-1, though journalists often just make up their own critiques, or twist the quoted critics to make the criticism look like the cliche they expect the reader to expect. And when the media is not pretending to be serious, it operates at unvarnished layer 0.

It’s sad, it’s really sad, to compare the current academic conversation about AGI alignment–not that the academics know they should be calling it that nowadays–with the informal conversations I saw on email lists in the late nineties. Email lists where you knew that if you said something dumb, even if it was an ohmygosh virtuous “critique”, Robin Hanson might reply with an email pointing out the flaw, and everyone else on the mailing list would see that reply. On those mailing lists there was a real conversation, and that’s what built up the early edifice of thought about AI alignment. There’s been more theory built up since those days, but almost everything the public got to see in Bostrom’s _Superintelligence_ just records the edifice of thought built up on those email lists where a real, actual conversation took place.

By comparison, academic discourse on AGI comes from the void and is cast into the void. It shows little awareness of previous ideas, it is not written as if to anticipate obvious responses, the obvious responses go unpublished in the same public record, and certainly there is no detailed and impressive counter-response.

When you publish a journal article claiming to have shot down the so-called notion of the intelligence explosion once and for all, and your article is about hypercomputation being impossible, then you are clearly not operating in an environment where you expect to be socially obliged to come up with an intelligent response to counter-critique. Perhaps the thought crossed your mind that somebody might say “Hey maybe the intelligence explosion doesn’t require hypercomputation, and you made little or no effort to establish that it did, and if you say that’s true by definition then this is not the definition anyone else in the field uses.” But if so it was a fleeting thought and you didn’t expect to be troubled, to lose reputation, if any of your prey tried to reply that way to your predation. When you published your “critique”, you were done scoring all the points you expected to score off them, and you didn’t expect to lose any points for responding casually or not at all to their counter-critique.

So the academic conversation hasn’t gotten anywhere near as far as the informal conversation on those old email lists in the late 90s, never mind everything built up since then.

Unfortunately, the traditional scientific upbringing speaks only of the importance of criticism.

EAs used to ask: “Has there been critique of MIRI’s ideas? Who are the critics?” If you take this literally, they were asking to see a record of a conversation that included layers 0-1. Implicitly, they were asking to see 0-2; they would have been surprised if I showed them critique but couldn’t point to anything that responded to the critique.

But if you want to know that critics are a part of the conversation, you need to be able to point to serious-looking counter-responses by critics. Back in the old days I’d always reply “Robin Hanson is the serious critic, there isn’t really anyone else worth pointing to”, because nobody else was writing detailed counter-responses to detailed counter-critiques.

Keep that in mind the next time you’re trying to judge the strength and health of an ongoing conversation… or, this is very important, *or* when you’re wondering how seriously to take a critic. Don’t ask, “Is there a forum where both sides of the story can be heard?” Rather ask, “Is there back-and-forth-and-back-*and*-forth?” Don’t ask, “Has somebody performed the duty of critique?” Ask, “How impressed am I by the counter-response to the counter-critique?”

(Facebook discussion)

Radar Detector-Detector-Detector-Detector Almost Certainly a Hoax

The Wikipedia article on radar detector detectors used to say:

“In 1982 the US military funded a project, codenamed R4D (radar detector-detector-detector-detector), in order to develop a device capable of detecting radar detector-detector-detectors.”

This has been widely shared online, by (among others) the rationalist blog Slate Star Codex, and the /r/wikipedia and /r/TIL subreddits. However, it’s almost certainly a hoax. The evidence:

– First, this “fact” was added to Wikipedia by an unregistered, anonymous IP address. No source was provided.

– Second, similar sentences have been added to this article before. These edits are always anonymous, always unsourced, and tell contradictory stories:

“In response, a few people also employ the use of a radar detector detector detector to detect the detection of their radar protector, but that is rare. At this time, the police are developing a radar detector detector detector detector to counter-act this.” (Aug. 2008)

“Furthermore, it is now known that a radar detector detector detector detector detector is being developed by military organizations in many countries.” (Sep. 2008)

“Scientists are currently working on a radar detector detector detector detector detector detector detector detector detector which is in the early stages of prototyping. There were plans to create a radar detector detector detector detector detector detector detector detector detector detector, but these were scrapped due to gross stupidity.” (Sep. 2008)

“Scientists at Cambridge University are currently working on a radar detector detector detector detector detector detector detector detector detector which is in the early stages of prototyping. There were plans to create a radar detector detector detector detector detector detector detector detector detector detector, but these were scrapped due to gross stupidity.” (May 2009)

“This technology was countered with the invention of the radar-detector-detector-detector-detector. However, due to escalation, the development of the radar-detector-detector-detector-detector-detector was deemed necessary.” (Feb. 2011)

– Third, there seem to be no real sources for a radar detector-detector-detector-detector, or for an “R4D” project. All the Google hits for this supposed “military project” trace back to Wikipedia. Google Books, Google News, and Google Scholar turn up nothing, except for some references to the Douglas R4D cargo plane. There are a few posts on forums, but they’re obvious jokes. Eg.:

“More importantly.. Are there radar detector detector detector detectors? That Meanz we Need 2 Stay 1 Step Ahead of the Game. Some1, Quick, Create a Radar Detector Detector Detector Detector Detector *Head Explodes*” (link)

– Fourth, the military doesn’t seem that interested in radar detectors, never mind radar detector-detector-detector-detectors. Military radar towers are generally large, powerful, and obvious (see eg. this Distant Early Warning station); a radar system weak enough to avoid detection wouldn’t be very useful against stealth planes at long ranges, or against jamming devices. The Wikipedia article on radar detectors is exclusively about civilian use. There’s plenty of military interest in radar jamming, or electronic countermeasures, but that’s a different thing. (Radar jammers are illegal for civilians under FCC rules.)

– Fifth, there’s really no reason to build a radar detector-detector-detector-detector. A radar detector-detector-detector is useful if you have a radar detector, because it lets you distinguish between radar (which the detector can ignore) and radar detector-detectors (which mean the detector has to shut down, in places where detectors are illegal). However, the only reason someone would have a radar detector-detector-detector is if they also had a radar detector. Hence, it’s easier and equally useful to simply detect their original radar detector, which an ordinary radar detector-detector already does.

Dark Patterns by the Boston Globe

After years of falling revenue, some newspapers have resorted to deception to boost their subscription numbers. These dishonest tactics are sometimes called “dark patterns” – user interfaces designed to trick people.

For example, this is a Boston Globe story on Bernie Sanders:

[Screenshot: 2016-04-24 18-06-33]

Before you can read the article, there is a pop-up ad asking you to subscribe. By itself, this is annoying, but not deceptive. The real dark pattern is hidden at the top – the ‘Close’ button (circled in red) uses a very low contrast font, making it hard to see. It’s also in the left corner, not the standard right corner. This makes it likely that users won’t notice it, and will subscribe when they didn’t have to.

Once the ‘Close’ link is clicked, the deception continues:

[Screenshot: 2016-04-24 18-06-57]

At the bottom, there’s a non-removable, high-contrast banner ad asking for a paid subscription. Again, this is annoying, but honest. However, the circled text “for only 99 cents per week” is not honest. It’s simply a lie, as later pages will show.

Clicking the “Subscribe now” button brings up this page:

[Screenshot: 2016-04-24 18-07-17]

Here, it becomes obvious that $0.99 per week isn’t the real price. It’s common for companies to have initial discounts, which isn’t itself a dark pattern. The problem on this page is that the real price is never stated. This misleads the consumer.

Clicking the “Sign Up” button reveals yet more dark patterns:

[Screenshot: 2016-04-24 18-08-14]

This is the first signup form. It shows the amount charged, but only for the first month ($3.96). The real price is below that, in a smaller font, and made less obvious by the red highlighting on the previous line. At first glance it looks like the same price, but the number in red is $3.96 for the entire first month, while the real rate below it is $3.99 per week. In addition, in the left column, three of the marketing email signups are checked “yes” by default, so people will subscribe without noticing.

The next page is pretty similar; it’s a standard credit card form:

[Screenshot: 2016-04-24 18-58-51]

And this page is the last one you see before ordering:

[Screenshot: 2016-04-24 19-00-27]

It isn’t obvious, but this page is yet another dark pattern, because even right before the purchase it never shows the real price. To find the real price, one must click the little “FAQs” link on the right:

[Screenshot: 2016-04-24 19-04-15]

Then, hidden among questions about crosswords, obituaries, and horoscopes, the user has to click the circled link to discover:

[Screenshot: 2016-04-24 19-06-43]

Yes, the real price isn’t the $0.99 per week in the banner ad, or even the $3.99 per week in fine print on the purchase page. It’s $6.93 per week, almost twice as much as the purchase page rate, and seven times as much as the banner. Since this price only kicks in after a year, it’s almost impossible for average users to notice, unless they carefully check each and every bank statement.

If they do find out and try to cancel, they’ll discover this catch, which isn’t stated or even implied during signup:

[Screenshot: 2016-04-24 19-09-44]

A Boston Globe reader can subscribe online. If they have a question, they can ask over email, or through a convenient live chat service. But if they want to stop paying, they have to call and ask on the phone, no doubt after a long hold time and mandatory sales pitches. There’s no plausible reason for this, other than forcing people to pay when they’d rather cancel the service.

In the short term, these dishonest tricks raise revenue for newspapers that use them. But in the longer term, they do even more damage, by giving the whole industry a reputation for bad business practices. Cable companies can get away with it because of government-granted monopolies; newspapers won’t be able to.

Vegetarian Advocacy Is Ineffective

Most pro-vegetarian advocacy is not very effective. The problem isn’t the goal of making animals happy – it’s likely that farm animals have moral value, and almost everyone agrees that factory farm conditions are horrible. Instead, the problem is the most common strategy used to achieve that goal: namely, emotionally-charged rhetoric to convince people, either in person or on the Internet, that they should personally not eat meat.[1] This category of solution to animal rights problems is likely ineffective at best, and downright harmful at worst. As GiveWell says, non-profits shouldn’t “point to a problem so large and severe (and the world has many such problems) that donors immediately focus on that problem – feeling compelled to give to the organization working on addressing it – without giving equal attention to the proposed solution, how much it costs, and how likely it is to work.”

This essay doesn’t address whether animals have nonzero moral value, which has been thoroughly discussed elsewhere. Nor does it look at other solutions to factory farming, like government legislation, or scientific research to develop meat substitutes. It simply tries to show that a lot of pro-vegetarian advocacy, as it’s currently practiced, is ineffective or outright counterproductive. Since there’s a wide range of arguments to consider, this writeup has been broken up into chunks, of which this is the first. This chunk looks at the simplest sub-question: is vegetarian advocacy a cost-effective way of reducing the amount of meat eaten?

First, we should check: is any kind of activism ever cost-effective? The answer seems to be yes. Eg., everyone knows about the campaign for gay marriage, which won a full victory in the US in 2015 (though after many decades of work). However, there seems to be a clear pattern in which activism campaigns are successful, and which aren’t. Psychologist Daniel Kahneman divides the brain into two systems: System 1, which is fast and intuitive, and System 2, which is slow and reflective. System 1 evolved before System 2, and is more connected to the physical actions we take, while System 2 is more closely linked with what we say, write, and think. The historical record shows that activism aimed at System 2 is difficult, but can sometimes be effective. On the other hand, activism aimed at System 1 is usually a waste of effort.

[Edit: The pattern of successful vs. unsuccessful activism still seems real, but the distinction being drawn here is not what Kahneman meant by System 1 and 2. Apologies for the mistake, further clarification to follow.]

For example, consider the history of activism against racism. In 1955, Rosa Parks’s arrest sparked the Montgomery Bus Boycott, which triggered a wave of anti-racist advocacy across the US. Though it was a tough battle, after nine years, Congress and President Lyndon Johnson passed the Civil Rights Act, a sweeping bill that outlawed almost all racial discrimination. Over the next fifty years, advocates kept pushing harder and harder for an end to racism. On a System 2 level, this campaign was so successful that virtually no public figure now advocates for segregation, a radical change from 1950. When businessman Donald Sterling was caught being racist, it was such a big deal that it became the focus of his entire Wikipedia page. However, System 1 has been much more stubborn. After sixty years of advocacy, a psychology metric called the Implicit Association Test shows that most white Americans still have negative System 1 associations with black faces.

There have been many other campaigns to persuade people’s System 1s through rhetoric, advertising, peer pressure, graphic images, and so on, but they usually get negligible or marginal results, compared with the effort invested. Consider smoking as another test case. There’s near-universal agreement that smoking is very, very bad for health, in both the short term and long term, and there have been enormous efforts to convince smokers to quit. On one side, most smokers themselves know darn well how bad smoking is, and many make heroic efforts to stop. On the other side, governments, nonprofits, and makers of smoking-cessation aids spend billions researching how to help people stop smoking. Thousands of studies have been done on the effectiveness of anti-smoking programs, so we’ve put a lot of effort into finding the very best strategies.

The results of this enormous, expensive, fifty-year effort have been modest at best. The US smoking rate has fallen from ~40% to ~20%, a decline of ~50%, or a bit over 1% per year. Much of that decline was caused by fewer people taking up smoking in the first place. Much of the remainder was caused by laws that make cigarettes more expensive and difficult to use, such as taxes, restrictions on sales, public smoking bans, restaurant smoking bans, and so on. Hence, all anti-smoking programs, cessation aids, addiction research, PR campaigns, etc. combined have given us a few tenths of a percent decline per year.[2] More generally, Scott has a long essay on how these types of programs are ineffective:

“We figured drug use was “just” a social problem, and it’s obvious how to solve social problems, so we gave kids nice little lessons in school about how you should Just Say No. There were advertisements in sports and video games about how Winners Don’t Do Drugs. And just in case that didn’t work, the cherry on the social engineering sundae was putting all the drug users in jail, where they would have a lot of time to think about what they’d done and be so moved by the prospect of further punishment that they would come clean. And that is why, even to this day, nobody uses drugs. (…)

What about obesity? We put a lot of social effort into fighting obesity: labeling foods, banning soda machines from school, banning large sodas from New York, programs in schools to promote healthy eating, doctors chewing people out when they gain weight, the profusion of gyms and Weight Watchers programs, and let’s not forget a level of stigma against obese people so strong that I am constantly having to deal with their weight-related suicide attempts. As a result, everyone… keeps gaining weight at exactly the same rate they have been for the past couple decades.”

To create a quantitative model, we can look at the test case of online ads, which Animal Charity Evaluators (ACE) have done substantial research on. ACE says that “online vegetarianism and veganism ads are currently our most cost-effective intervention recommendation”.[3] In economic terms, one should naively expect that one dollar of ad purchases causes about one dollar of money moved, where “money moved” equals the change in (retail price of goods purchased – marginal cost of goods sold), summed over all relevant goods. If each dollar of ads caused more than one marginal dollar of money moved, companies would just buy more ads to make more money, until decreasing marginal returns brought gains back down to $1.

Of course, that’s only a rough approximation. Any given ad campaign might be more or less effective, for any number of reasons. However, in this case, the sheer magnitude of the gap is cause for great concern. ACE estimates that the cost-per-click (CPC) of pro-vegetarian ads is about two to twenty cents, and that based on survey data, around 2% of ad clickers become vegetarian or vegan. American adults spend around $5,000 to $10,000 on food per year,[4] so the total money moved by one person becoming vegetarian, summed over a lifetime of food purchases, is on the order of $100,000. Hence, under the naive economic model, the chance of people becoming vegetarian because of an ad click is roughly 0.00002% – 0.0002%, a massive four to five orders of magnitude smaller than ACE’s estimate. A likely explanation for this, as ACE themselves note, is that people only click on the ads if they were thinking about becoming vegetarian anyway. About two thousand people typically see an online ad for each person who clicks, so even a very small number of existing proto-vegetarians in the ad audience would fully account for the survey data.
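To make the gap concrete, here is a minimal Python sketch of the naive model above. It uses only the figures already cited in this paragraph (the two-to-twenty-cent CPC range, roughly $100,000 of money moved per new vegetarian, and ACE’s ~2% survey estimate); the variable names are my own.

```python
# Naive economic model: $1 of ads moves about $1, so the implied chance
# that a single ad click converts someone is CPC / (money moved per convert).
cost_per_click = [0.02, 0.20]       # ACE's cited CPC range, in dollars
money_moved_per_convert = 100_000   # rough lifetime money moved per new vegetarian
ace_survey_estimate = 0.02          # ~2% of clickers report going vegetarian/vegan

for cpc in cost_per_click:
    implied_p = cpc / money_moved_per_convert
    gap = ace_survey_estimate / implied_p
    print(f"CPC ${cpc:.2f}: implied conversion chance {implied_p:.7%}, "
          f"~{gap:,.0f}x smaller than the survey estimate")
# Prints roughly 0.0000200% and 0.0002000%, i.e. four to five orders of
# magnitude below ACE's ~2% figure.
```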

[Edit: 0.00002% is inaccurate, even within this model. Two cents per click is only available in poorer countries, which have much less total money moved, bringing the estimated odds back to around 0.0002%. However, poorer countries also consume much less meat, which largely compensates for this effect in terms of benefit per dollar.]

What empirical data we have backs this model up. After decades of vegetarian advocacy, PETA says that “society is at a turning point” for veganism. But polling data suggests that only 5% of Americans are vegetarian, and that this percentage has gone down since the 90s. Only 2% consider themselves vegan. Further, these numbers are likely overestimates. Polls often show that a few percent will support any idea, no matter how crazy, like “all politicians are secretly alien lizards” (really!); in addition, most people who said “yes” to the vegan question said “no” to the vegetarian question, which suggests lots of confusion. Even among self-described vegetarians, more detailed surveys show that most still eat an average of one serving of meat per day, which nicely confirms the System 1/System 2 model. It’s easier to convince System 2 that vegetarianism is a good idea than System 1, creating a paradox where two-thirds of ‘vegetarians’ ate meat yesterday.

In addition, even if advocacy is successful, the benefits from one person going vegetarian are not very large compared to their cost. Statistics from vegetarian advocacy groups usually cite the large numbers of animals killed. However, each individual animal life is very short, because meat becomes cheaper when farmers breed animals for rapid growth. Consider chickens as an example. The average American eats 27.5 chickens per year; since a broiler chick takes about five weeks to grow, this gives us 2.64 chicken-years of life prevented by one year of vegetarianism. To evaluate the cost of not eating chicken, we must look at not the price of the chicken, but the “consumer surplus” – how much benefit the customer derives from the product. With some rough math, this comes out to around $18.84 per chicken;[5] this is averaged over both people who like chicken a lot, and people who only like it barely enough to buy it. That gives us a total value-from-chicken (after the cost of the chicken) of $518 per person per year, which can be given up to save 2.64 years of chicken suffering.
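As a quick check on the chicken arithmetic, here is a minimal sketch using only the figures cited above (27.5 chickens per year, about five weeks per broiler, and the $18.84 consumer surplus from footnote 5):

```python
# Chicken-years of life prevented, and consumer surplus given up, by one
# person-year of vegetarianism, using the figures cited in the text.
chickens_per_year = 27.5       # average US chicken consumption
weeks_per_chicken = 5          # broiler growing time, in weeks
surplus_per_chicken = 18.84    # consumer surplus per chicken, $ (footnote 5)

chicken_years_prevented = chickens_per_year * weeks_per_chicken / 52
value_given_up = chickens_per_year * surplus_per_chicken

print(round(chicken_years_prevented, 2))  # ~2.64 chicken-years
print(round(value_given_up))              # ~$518 per person per year
```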

Comparing this to human charity, GiveWell estimates a cost-per-child-saved from malaria nets of $2,838. Since GiveWell’s numbers only count children saved, given developing-world life expectancy, each life saved creates about 60 extra person-years. That equals a cost of $47 per person-year, compared to the average cost of $196 per chicken-year from not eating chicken. Therefore, the person-years are a much cheaper buy, even if we assume that chicken lives are so incredibly bad that preventing one chicken-year is as good as saving one person-year.
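And the comparison itself, again a rough sketch using only the numbers cited above (GiveWell’s $2,838 per child saved, roughly 60 extra person-years per life, and the $518 and 2.64 chicken-years from the previous sketch):

```python
# Cost per year of life: malaria nets vs. giving up chicken.
cost_per_child_saved = 2838    # GiveWell estimate for malaria nets, $
person_years_per_life = 60     # rough developing-world life-expectancy gain

cost_per_person_year = cost_per_child_saved / person_years_per_life
cost_per_chicken_year = 518 / 2.64   # from the sketch above

print(round(cost_per_person_year))   # ~$47 per person-year
print(round(cost_per_chicken_year))  # ~$196 per chicken-year
```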

In fact, this estimate is still biased in favor of chickens, for two main reasons. The first is that GiveWell’s estimate doesn’t include the benefits of mosquito nets beyond saving children; these include saving adults from death by malaria (though adults have a much lower fatality rate), preventing many more non-fatal cases of malaria, preventing mosquito-borne disease in general, and of course preventing mosquito bites, which (ignoring everything else) can be done at a cost of hundredths of a penny per bite. The second reason is that the Against Malaria Foundation is selected for being extremely low-risk: given a donation of $X to AMF, one can be extremely confident that at least Y lives will be saved. GiveWell thinks it’s likely that if we take on riskier projects, like scientific research and policy reform, the expected cost per life saved will be even lower. Indeed, one of these riskier projects is actually US policy reform to improve farm animal welfare.

Several people have suggested this is an unfair comparison: most donors have a limited “charity budget”, and $10 that they spend on one charity is $10 they don’t spend on another, but going vegetarian doesn’t draw on that financial budget. What if going vegetarian even increases people’s willingness to do other kinds of good? Unfortunately, psychology research suggests this is unlikely – people often have a “do-gooding budget” in addition to their financial budget, and doing one altruistic act will decrease their willingness to do others. To quote Nick Cooney, founder of animal charity The Humane League:

“The contribution ethic refers to the feeling many people have that “I’ve done my part on issue A, so it’s okay for me to ignore issues B, C, and D.” During a Humane League campaign to get restaurants to stop purchasing products from a particularly cruel farm, owners would often tell us that they already do something to help animals (“We buy our eggs from a local farm,” or “I donate to the ASPCA”) so we shouldn’t be bothering them. This phenomenon worked across issues too, as we often had owners or chefs tell us how they supported some other social cause so we should leave them alone about this one.

In addition to feeling like they’ve done their part and therefore don’t need to do anything more, people often overestimate the amount of good they’ve done. Combined, these phenomena make it hard to move people beyond small actions for their one or two preferred causes (Thogersen and Crompton 2009).”

Footnotes

1. Of course, there are many different animal-rights diets. Other types of ethics-based dietary restrictions include “vegan”, “lacto-ovo vegetarian”, “pescatarian”, “flexitarian”, “reducetarian”, and so on. Since keeping track of all the different labels is unwieldy, for the most part I’ll simply say “vegetarian”, even though the main arguments also apply to many other ethics-based diets. I’ve seen people use “veg*n”, but that’s also unwieldy due to ‘*’ not being a letter.

2. One possible complication is that nicotine is chemically addictive. However, treatments to fight the chemical part of addiction have been available for decades, which I’d expect to largely cancel out this effect. In addition, similar problems (alcoholism, gambling, sugar consumption, etc.) have generally seen similar results.

3. Although I disagree with them on several issues, ACE should be commended for trying to be quantitative.

4. Data from this Gallup poll. Note that this data is based on self-reports, and is per-household rather than per-adult, so it is only approximate.

5. The cost of a whole chicken is roughly $1.50 per pound per this article, or $9 for a six-pound chicken. From this paper, the price elasticity of demand of chicken is around -0.8, so we can naively model the demand-price curve with the ODE dy/dx = -0.8*y/x, y(9) = 1, which has the solution y(x) = 5.8/x^0.8. Integrating from x = 9 to, say, 100 gives 27.84; subtracting the $9 price of the chicken gives a consumer surplus of $18.84. This is, of course, just a rough guess.
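For anyone who wants to verify this footnote’s integral, here is a minimal Python sketch reproducing it. The $9 price, the -0.8 elasticity, and the $100 upper cutoff are all taken from the footnote; nothing else is assumed.

```python
# Numeric check of the consumer-surplus estimate in footnote 5.
price = 9.0          # retail price of a six-pound chicken, $
elasticity = -0.8    # price elasticity of demand for chicken

# Solving dy/dx = elasticity * y / x with y(price) = 1 gives
# y(x) = C * x**elasticity, where C = price**(-elasticity) (about 5.8).
C = price ** (-elasticity)

def demand_integral(x):
    # Antiderivative of C * x**elasticity
    return C * x ** (elasticity + 1) / (elasticity + 1)

area = demand_integral(100) - demand_integral(price)  # ~27.84
surplus = area - price                                # ~18.84 per chicken
print(round(area, 2), round(surplus, 2))
```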

Asking Good Questions

Aumann’s Agreement Theorem says that two perfect rationalists with common priors can’t “agree to disagree”. Therefore, when two people disagree, a good question is one that makes either the asker or askee change their minds. Some examples of bad questions are:

– Why don’t you agree that abortion kills innocent babies?
– Why don’t you support welfare programs that help children with cancer?
– Isn’t it obvious that Politician Bob is corrupt?
– Do you deny supporting the poaching of baby seals?

These aren’t really questions. They’re more like attacks, with a question mark tacked on at the end. Instead, a good question tries to roll back a chain of inferences. A person might support E because they support A, and they also believe in the chain of arguments A (therefore) B (therefore) C (therefore) D (therefore) E. A good question tries to bring the debate about E back to a debate about D, and ultimately all the way back to A.

Some thoughts on how to ask better questions:

– Try to be quantitative. For example, someone might say “I think idea X is too expensive.” So you might reply, “About how much do you think X will cost?”. Sometimes this is a bit more difficult, like if your co-worker said “I think Bob would be a terrible person to hire”. But you can still be semi-quantitative; eg., “On a scale from 1 to 10, where 1 is the worst candidate and 10 is the best, where do you think Bob is?”.

– Ask for examples. Your friend might say, “Policy X has always been a disaster.” So a good next question might be, “Can you talk about some times when people tried X, and it turned out really badly?”. A lot of words are vague enough that two people will hear them, and imagine totally different things in their heads, often without realizing it. So examples can help clear up semantic misunderstandings.

– Talk about probabilities. Eg., a lot of people will say that X is a serious threat, for various different values of X. But sometimes X is very unlikely to happen; the most famous examples are media sensations like terrorist bombings, shark attacks, and stranger abductions. So a good question might be, “If I did X, about how likely do you think it is that <bad thing Y> would happen?”.

– Investigate where ideas come from. Even when people are wrong, it’s rarely because they made something up out of their heads. Much more often, they’ll hear something from Alice, who heard it from Bob, who heard it from Carol, and so on, and the original (correct) idea got lost in a long game of “telephone”. (The Science News Cycle shows this process in action.) So if you can find the original source for an idea, you might both wind up agreeing with it.

– Ask what a supporter thinks about an idea’s downsides. Sometimes, they might disagree that a downside exists; sometimes, they might think a downside exists, but that it’s very small; and sometimes, they might think the downsides are large, but the benefits are so big that it’s worth it. So if eg. someone supports using a new chemical in agriculture, you might ask “How dangerous do you think chemical X is?”. (Don’t let this become rhetorical, though. A question like “But won’t idea X kill millions of puppies?” is back in attack territory.)

– Find comparisons to other examples. A person who really liked X might say things like “X is the best Y ever”. So you might ask, “what are some other Ys, and what makes X better than them?”. Luke Muehlhauser’s post on textbooks used this technique very successfully – people who liked a book also had to name two other books they thought were worse. Otherwise, people might recommend something just because it was the only book they’d read in the field.

Why you should focus more on talent gaps, not funding gaps

This website focuses on original content, or content that would otherwise be non-Googleable. However, I am making an exception for Ben Todd’s excellent essay, “Why you should focus more on talent gaps, not funding gaps”, both because it is critically important and because it is thorough and well-written. The main focus of Ben’s essay is that solving problems is most often limited by human capital (which Ben calls “talent”, although it’s much broader than the word “talent” might normally imply) and social capital (combining talented people into a team that works well), not by funding. I agree with almost everything he says, and I’ve tried to write up bits and pieces of the arguments Ben makes before, but he really does a much better job. The essay is targeted at effective altruists, but I think it’s a must-read even for people who wouldn’t consider themselves EAs.