Moving To The Bay Area

Many Effective Altruists think about moving to the San Francisco Bay Area, or have already done so. There are many good reasons to move; San Francisco is, by a significant margin, the global hub of both the technology industry and the EA community. I personally live in Berkeley. However, since EA has such a large student population, I think many EAs get a skewed impression of the Bay Area. The Stanford and UC Berkeley campuses are paradisiacal, but in many ways, they aren’t representative of “day-to-day life” here. While many people do enjoy living in the Bay Area, I thought it would be good to write up some of the “negative factors” for an average resident that aren’t discussed as much. I encourage everyone thinking about moving to do their own research here; caveat emptor.

1) Cost of living. If you’ve thought about moving to San Francisco, you’ve probably heard rumors about how expensive it is, but the numbers still shock a lot of people. By some metrics, San Francisco is literally the most expensive city in the world.[1] As of February 2017, the average rent for a one-bedroom SF apartment is $3,368 a month.[2] For a single person, if you make $125,000 a year and rent an average one-bedroom apartment, over half your after-tax paycheck will go to rent;[3] expenses go up much more if you have any children. It’s possible to find cheaper apartments, but getting one will almost certainly make the other issues discussed below worse. (On the plus side, most buildings in San Francisco, Oakland and Berkeley are rent-controlled. If you get a rent-controlled unit, the landlord is required to renew your lease every time it ends, it is nearly impossible to evict you, and inflation-adjusted rent will go down every year you live there.[4] This does not apply to most of the suburbs.)
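As a back-of-the-envelope check on that claim, the arithmetic looks like this (the 36% combined effective tax rate is my assumption, not a figure from the cited sources; actual federal, state, and FICA withholding for a single filer will vary):

```python
# Rough rent-burden estimate for a single filer in San Francisco, circa 2017.
salary = 125_000            # annual gross salary
effective_tax_rate = 0.36   # ASSUMED combined federal + CA state + FICA rate

monthly_take_home = salary * (1 - effective_tax_rate) / 12
rent = 3_368                # average 1BR rent in SF, Feb 2017 (see [2])

rent_share = rent / monthly_take_home
print(f"Monthly take-home: ${monthly_take_home:,.0f}")      # about $6,667
print(f"Rent as share of take-home pay: {rent_share:.0%}")
```

Under that assumed rate, rent eats roughly half of take-home pay; a lower effective tax rate pushes the share somewhat under 50%, so the exact figure depends on your individual tax situation.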

2) Traffic. You can find cheaper apartments, for example, in the Outer Sunset neighborhood, which is only eight miles from downtown. However, driving those eight miles can take more than an hour, each way.[5] The San Francisco area has the second-worst traffic in the US, after Los Angeles.[6] A major contributor is that San Francisco and surrounding cities have the worst roads in the US, with 71% of roads in “poor” condition, significantly worse than Detroit.[7] Parking is also difficult to find and expensive; the city of Berkeley gets 5% of its annual budget from parking tickets.[8] If you mess up and your car gets towed in SF, getting it back will cost at least $500, usually more.[9] It’s easier to park in suburbs and outlying areas, but some of them are very far-flung; eg. driving from the nVidia campus in Santa Clara to San Francisco can take over two hours.[5] These may seem like small issues – ones you would ignore on a week-long visit – but they add up when you’re forced to deal with them, multiple times a day, for month after month. By some metrics, the San Francisco area is #1 for “mega-commuters”, workers who spend at least three hours per day commuting.[10] Long commutes have been scientifically shown to damage personal well-being.[11]

3) Public transportation. Unlike many American cities, San Francisco’s public transit is “high coverage” – most places are in walking distance of a bus stop. However, it is also the slowest service in the US, with an average speed of 8 mph.[12] Over 40% of buses are either at least five minutes late, or will leave early (causing missed transfers), and most trips require transfers between multiple bus routes.[13] In addition, 89% of Bay Area residents live outside San Francisco itself;[14] in many of these places, transit service is spotty or non-existent. Bay Area public transit service is run by 33 different government agencies,[15] plus a smattering of private companies like Chariot and Tideline. These agencies do not generally talk to each other or coordinate their schedules. A trip from downtown Novato, a northern suburb about 30 miles from SF, to the main Apple campus in Cupertino would require using three independent bus systems, plus two independent train systems, and would take nearly four hours (assuming every ride was on time, and not including rush-hour traffic).[5]

4) Crime. Although San Francisco’s murder rate is not terrible, it has the highest property crime rate of any major US city.[16] Street harassment, vandalism, car break-ins, open-air drug dealing, and public heroin use are extremely common, especially in the areas near downtown where most offices are. Public defecation is so frequent that subway escalators are routinely shut down to remove large amounts of fecal matter;[17] a convenient heat map shows the hundreds of spots where human waste is cleaned up in any given month.[18] Needless to say, the streets are frequently dirty, smelly, or littered with used needles; this is made worse by San Francisco’s otherwise excellent summers, which often mean three or more months with no significant rain. There are many distant suburban areas which don’t have this problem at all, and are very safe and pleasant to walk around in; however, because of extremely strict zoning laws, offices, stores and factories are usually not allowed there either, raising again the problem of commuting long distances.

It’s important to note that many people won’t be bothered by these. If you’re a student, independently wealthy, retired, a freelancer, or work exclusively remotely, you can get a (relatively) cheap apartment in a good neighborhood, a long way from San Francisco itself. Since you don’t have to commute, the traffic won’t frustrate you. You’ll probably visit downtown SF rarely, so you won’t be screamed at or assaulted on the street. You can probably get groceries and other supplies delivered most of the time, and if you do go out shopping, you can go at lunchtime or late at night when the traffic’s less bad. However, I think that for a lot of people, these are factors that should be seriously considered before choosing to move.


5. for distance and time estimates

Long-Term World Improvement

There’s a lot to like about Effective Altruism, but I fundamentally disagree with one of EA’s core assumptions. I’d like to explain why, what my plan is to improve on EA, and why I think it’ll be more effective at shaping the world than EA.

Effective altruism, like most groups, is based on synergy: a whole greater than the sum of its parts. You form a group because you can do more with teamwork than by yourself. You start a company because if you pool everyone’s money, you can make more products. You put the aluminium smelter next to the dam, because one feeds into the other. You win by taking advantage of positive externalities, division of labor, and splitting large fixed costs. However, no matter how much business schools use the word, not everything has synergy. You don’t put a pizza restaurant and an aircraft warehouse in the same building. You wouldn’t put physics and dance students in the same classroom. There’s no point. They’d just get in each other’s way.

A central belief of EA is that there’s synergy from all different ways of doing good. The EA message is that people donating to fund insecticide nets, campaigning for veganism, and doing research on unfriendly AI can all benefit from each other. They can all be more effective if they share websites, go to the same events, plan strategy together, fundraise together, cooperate in recruiting new members, and so on. And to some extent, I agree with that. Even projects that sound very different can, sometimes, benefit from teaming up.

However, I think there’s very little synergy between doing good on a short timescale (say, one year or less), and a long timescale (five years or more). Doing good in the long term means you must consider how systems evolve over time, and that’s a very different mode of thinking. If a charity buys cancer drugs for patients, they need to know how cancer treatment works. But if a pharma company develops a new cancer drug, they need to analyze not just how things work now, but all the ways they *might* work, five or ten or fifteen years later. A competitor could make a better drug before they release theirs. The FDA might change its rules for drug approval. The economy could crash, and make banks pull their funding. Insurers could get together, and force companies to lower prices. Courts could change how they enforce patents, the AMA might recommend more or less aggressive treatment, the tools scientists use for drug discovery could be replaced, it goes on and on and on. If anything big happens, the charity’s donors can just give elsewhere; the pharma company’s investors are mostly stuck.

And that’s a tame example. When there’s even more change, you have to make plans with no obvious connection between cause and effect. For example, suppose someone said they wanted to replace Boeing in the airplane industry, and their strategy was to sell Boeing fuel pumps. At first glance, that makes little sense. But it’s essentially what Microsoft did with computers. Sell an apparently minor, but critical, component to the industry’s leader, and expand on that component until the original leader is irrelevant. This type of indirect maneuvering has almost no connection to, say, doing a controlled study of vegan diets or polio vaccines; the time horizons, and the uniqueness of each situation, make statistical research impossible.

If short-term and long-term plans have little synergy, that implies each group ought to pick one, and focus on that. I’m picking long term. Partly because it plays to the strengths of me and my friends, who are mostly younger and more intellectual. And partly because I have long time horizons; I don’t put a huge discount on the world in ten or twenty years, compared to the world now. This preference seems to be pretty rare, and supply and demand mean it’s easier to achieve rare goals than common ones. Since many EAs share these traits, I think a lot of them would benefit from picking long-term along with me.

Right now, as I’m writing this, there are also two extra advantages to long-term. The first is that it’s been in a slump recently, so a small number of people can have a bigger impact. (Eg., to my knowledge, the only serious attempt to forecast AI trajectories in the last three years was by AI Impacts, and it’s unfortunately still unpublished.) The second is that, compared to EA, it makes developing new ideas much easier. The defining EA question for any new plan is, “does it do the most good?”. But that’s very hard to answer. Partly because “the most good” really depends on who you are, and what you value. And partly because, to say something is “the most good”, you also have to know how much good everything else is, which is a lot of work. It’s not a coincidence that the three biggest EA causes – global poverty, factory farming, and AI risk – are all ideas that came from elsewhere (and were brought into EA afterwards), not invented from a first principle of “do the most good”. Of course, effectiveness and benefit-per-dollar are still important. But if the *first* question asked is “how long does this plan take?”, it’s much easier to explore the space.

As a first step, I’m creating a new private Facebook group, called “Long Term World Improvement”. Like the plans it will discuss, it’s an experiment; it may fizzle, or transform into something very different. Topics might include, but are not limited to:

– Curing disease, ending aging, and human enhancement
– Long-term government and policy work (not the next election)
– Strategies for long-term income or influence, eg. $10 million in the next decade
– AI capabilities and safety, although I’d like to avoid generic rehashes of Bostrom/Yudkowsky
– How to build a society with non-human general intelligences
– Scientific research, tools, and meta-science
– Space exploration and colonization
– Building new cities, or large-scale relocation to an existing city
– Robotics, neuroscience, nanoscience, and brain-machine interfaces
– Rational forecasting and extrapolation methods
– Biosecurity, nuclear security, and disaster preparedness (both personal and global)
– Any new technology that’s not yet well-known, but might be very impactful (eg. lab-grown meat circa 2006)
– Historical examples and analyses of long-term plans, and what worked or didn’t
– Long-term strategies you’re personally doing or considering (encouraged)

While these projects are very different, they all take place in a single, shared future. And their success all depends on understanding that future. This links them all together, in a way that’s impossible for “doing good” as a pure abstraction.

Off-topic items include, but aren’t limited to:

– Redistribution of existing personal assets, or close equivalents (eg. bed net donation)
– Political news stories, except specific, direct impacts on plans (eg. if X has a coup, that affects political strategy there)
– Marketing products that already exist
– Rare, edge-case ethical scenarios, like the proverbial trolley problem
– Various forms of “ethical consumption” and “ethical divestment”
– Generic, “routine” news that appears very frequently (eg. new “potential cancer cures” are announced daily)
– “Long term” strategies whose core is a pyramid scheme (recruiting people to recruit people to recruit people to…)
– Generic “junk food” news, such as sports, crime, or outrage-bait

Hacker News moderation rules will apply. (For non-Hacker News users, check out; you can also create an account, turn on “showdead”, and browse to get a feel for how it works.) To join, message me on Facebook, email me at, or chat me on Signal (203-850-2427). If you don’t know me, please include a short summary of who you are, and what you’re interested in. As moderator, I reserve the right to not add people, or to remove existing people if they break the rules. Feel free to message me with questions. Good luck, and may tomorrow be brighter than today.

Quixey Is Shutting Down

The app search startup Quixey is shutting down; it was previously valued at $600 million in a 2015 financing round. Since the only English article covering the shutdown is paywalled, the below is a (bad, sorry) translation from Chinese websites. Wikipedia has more background information on the company. Quixey founder Tomer Kagan stepped down as CEO last year, and is now running a new startup named Sigma.

Quixey, a mobile search firm backed by Alibaba, has now shut down, and a large number of employees were laid off two weeks ago, according to informed sources. Quixey was founded in 2009 to provide application search capabilities for large vendors, carriers, search engines, and web applications. Early in its history, it received $400,000 in seed investment from Innovation Endeavors (Google chairman Eric Schmidt’s investment firm).

The reports have been confirmed. In response to inquiries, Alibaba said: “Alibaba has been the largest financial supporter of Quixey and its founders. Unfortunately, because the company’s development did not meet expectations, the board of directors decided to close the company’s business. We will continue to invest in the US market, support entrepreneurs, and innovate.”

The company raised 60 million US dollars in 2015, in a round led by Alibaba and SoftBank. Twitter, Goldman Sachs, GGV Capital, Google chairman Eric Schmidt, George Soros, and other institutional investors also invested. In total, Quixey raised $130 million.

It is understood that, unlike Google, Baidu, and other keyword-based search engines, Quixey was a functional search engine specifically for mobile applications, working across a variety of platforms. Quixey’s application search was based not on an application’s title, metadata, or description, but on what the user wants to do – which features they need. For example, if you search for “book a hotel,” Quixey will return a list of all the applications that provide this feature, filtered for the appropriate platform. Search results may include [???], and it supported iOS, Android and other operating systems, somewhat similar to Baidu’s light application distribution platform.

Quixey also worked with major companies, such as a US Q&A website and Singapore’s largest telecom operator, StarHub, and reached agreements with many North American businesses to install it in their default toolbars.

Moreover, using deep learning, Quixey’s index was not limited to the major app stores, but aimed at the apps of “the world around”: it covered major sites, blog comments, reviews, forums and so on, searching through user comments to return a final result. Its ambition was to achieve full platform coverage.

Since its establishment seven years ago, Quixey’s fundraising had been relatively smooth. Alibaba led the C round with a 50 million US dollar stake and, with continued support, quickly became the largest investor. In November 2016, Quixey also went through a series of upheavals: John Foster was appointed as the new CEO, and after revenue targets were missed, two executives, the COO and CTO, left.

Four Layers of Intellectual Conversation

(By Eliezer Yudkowsky; posted here with permission)

Building an intellectual edifice requires ongoing conversation, and that ongoing conversation needs four layers of speech to be successful.

(Yes, four. Not three. Later I’m going to pretend I didn’t say that, but right now I’m serious and this is important.)

There is a widespread traditional notion that the total absence of critique is bad; that it is a bad sign to have a conversation consisting of people saying X and nobody saying “hey maybe not-X”.

Why is this bad? Well, because people could say stupid things about X, and nobody would call them on the stupidity. Yikes!

Okay, so here’s the thing: If the people saying “hey maybe not-X” don’t anticipate losing points from being called out on stupid critiques, that doesn’t create a conversation either. I am speaking here from awful experience and many people reading this will have seen the same.

A conversation that successfully builds an intellectual edifice has four *necessary* layers. I’m not saying “necessary” as an emphasis-word for how nice it is to have more layers. I mean, “If you eliminate the fourth layer, the mechanism falls apart.”

You need:

0: People saying X.
1: Critics saying “hey maybe not-X”.
2: Responses/counter-critiques saying “well maybe X after all”.
3: Counter-responses saying “but maybe not-X after all”.

If you eliminate layer 3, that means the conversational record doesn’t include critics responding to critiques of their criticism.

In other words, the critics saying “not-X” won’t anticipate needing to defend their “not-X” claims.

Layers 0-2 being visible in the record, but not layer 3, is the sort of situation you have when biologists speak of evolution (0, object-level claim), and a priest says something about evolution being true but only God being able to create the first life that started it (1, critique), and biologists reply with a detailed account of the current thought on abiogenesis (2, response/counter-critique); and the priest does not reply with detailed thoughts explaining why the current thinking on abiogenesis is technically flawed (absence of 3, the counter-response).

0-2 is what you have when, say, Eric Drexler is writing detailed technical books about molecular nanotechnology; and a famous chemist says something profoundly stupid indicating they have not read very far into this literature (e.g. “but the sticky-fingers problem!”); and Eric Drexler writes a response dissecting the critique, which doesn’t receive as much media attention; and the chemist doesn’t care or replies with something that is visibly not very detailed or thought-out.

Conversational layers 0-2 being visible in the record, but not much layer 3 or an unimpressive layer 3, generally represents a situation where the critic doesn’t expect that their criticism will come under harsh scrutiny to which the critic will be socially obliged to respond intelligently as part of the widely seen public record. On the critical side, that’s just as bad as there being just layer 0.

When I say layers 0-3 need to be there, I mean that there must be a social incentive to do them well; people must lose status points for saying dumb things at any of these layers. When as an outsider you look at the conversational record, all of these layers should not merely be present as a checkmark on a list. You should be looking for the same standards of impressive technical-sounding words, or abiding by epistemic norms and discourse norms or whatever, as you would demand of the ground-level statements.

Now here’s the dire part: the current academic journal system, in practice, operates at layers 0-2. You submit your paper, and the reviewers offer a response, and you’re expected to have an intelligent response to the review. But these reviewers (often anonymous) do not expect to lose huge social brownie points if their critique is stupid. Even if the reviewer is supposed to offer some kind of counter-response for the record, it can be a casual and stupid counter-response, and nobody will go “Hey what the hell are you doing” at that.

Absent any incentive to be smart, the reviews are often really, really stupid; especially if the original paper doesn’t look to be authored by a high-status person. Though I’ve heard from more than one person with very high status in their field that even the reviews *they* get are stupid.

Modern academics treat stupid bad reviews as an environmental hazard. It’s not *conversation*. It’s not building the edifice of knowledge.

So where does the real conversation happen, in scientific fields, when there’s a proposition worthy of debate and not just another unquestioned fact to be recorded?

Maybe it happens in the bar at conferences, where people are speaking in realtime in front of their friends, and would actually lose status points if they uttered a dumb critique that was shot down and their counter-response looked stupid. Or maybe it happens on email lists. It could be happening on Facebook, for all I know of any particular field.

But the journals merely record an intellectual edifice that was built elsewhere. The real conversation that creates the intellectual edifice in the first place couldn’t happen with the journals as a medium.

The only time I’ve seen a stream of journal articles that looked like they were seriously *building*, not just *recording*, an intellectual edifice, it’s been in analytic philosophy. Analytic philosophy is about debating, qua debating, in a way that chemistry isn’t. I could be wrong, but I expect that editors of analytic-philosophy journals *do* expect intelligent counter-responses; and that reviewers expect to lose status points if they can’t come up with intelligent counter-responses. (Though this could be confounded by analytic philosophers having the highest IQ of any graduate group in academia, yes that happens to be a thing that is true.)

Unfortunately analytic philosophy lacks the ability to settle on any answer and declare it settled. So be it noted that just because you ought to demand high standards of both counter-critiques and counter-responses, it doesn’t mean that nobody’s right. Or that there isn’t such a thing as one side having an overwhelming weight of argument. Or that so long as both sides are writing long technical arguments they must have equally socially respectable positions which is all there is to epistemics, etcetera.

Mainstream media that pretends to be serious pretends to have layers 0-1, though journalists often just make up their own critiques, or twist the quoted critics to make the criticism look like the cliche they expect the reader to expect. And when the media is not pretending to be serious they operate at unvarnished layer 0.

It’s sad, it’s really sad, to compare the current academic conversation about AGI alignment–not that the academics know they should be calling it that nowadays–with the informal conversations I saw on email lists in the late nineties. Email lists where you knew that if you said something dumb, even if it was an ohmygosh virtuous “critique”, Robin Hanson might reply with an email pointing out the flaw, and everyone else on the mailing list would see that reply. On those mailing lists there was a real conversation, and that’s what built up the early edifice of thought about AI alignment. There’s been more theory built up since those days, but almost everything the public got to see in Bostrom’s _Superintelligence_ just records the edifice of thought built up on those email lists where a real, actual conversation took place.

By comparison, academic discourse on AGI comes from the void and is cast into the void. It shows little awareness of previous ideas, it is not written as if to anticipate obvious responses, the obvious responses go unpublished in the same public record, and certainly there is no detailed and impressive counter-response.

When you publish a journal article claiming to have shot down the so-called notion of the intelligence explosion once and for all, and your article is about hypercomputation being impossible, then you are clearly not operating in an environment where you expect to be socially obliged to come up with an intelligent response to counter-critique. Perhaps the thought crossed your mind that somebody might say “Hey maybe the intelligence explosion doesn’t require hypercomputation, and you made little or no effort to establish that it did, and if you say that’s true by definition then this is not the definition anyone else in the field uses.” But if so it was a fleeting thought and you didn’t expect to be troubled, to lose reputation, if any of your prey tried to reply that way to your predation. When you published your “critique”, you were done scoring all the points you expected to score off them, and you didn’t expect to lose any points for responding casually or not at all to their counter-critique.

So the academic conversation hasn’t gotten anywhere near as far as the informal conversation on those old email lists in the late 90s, never mind everything built up since then.

Unfortunately, the traditional scientific upbringing speaks only of the importance of criticism.

EAs used to ask: “Has there been critique of MIRI’s ideas? Who are the critics?” If you take this literally, they were asking to see a record of a conversation that included layers 0-1. Implicitly, they were asking to see 0-2; they would have been surprised if I showed them critique but couldn’t point to anything that responded to the critique.

But if you want to know that critics are a part of the conversation, you need to be able to point to serious-looking counter-responses by critics. Back in the old days I’d always reply “Robin Hanson is the serious critic, there isn’t really anyone else worth pointing to”, because nobody else was writing detailed counter-responses to detailed counter-critiques.

Keep that in mind the next time you’re trying to judge the strength and health of an ongoing conversation… or, this is very important, *or* when you’re wondering how seriously to take a critic. Don’t ask, “Is there a forum where both sides of the story can be heard?” Rather ask, “Is there back-and-forth-and-back-*and*-forth?” Don’t ask, “Has somebody performed the duty of critique?” Ask, “How impressed am I by the counter-response to the counter-critique?”

(Facebook discussion)

Radar Detector-Detector-Detector-Detector Almost Certainly a Hoax

The Wikipedia article on radar detector detectors used to say:

“In 1982 the US military funded a project, codenamed R4D (radar detector-detector-detector-detector), in order to develop a device capable of detecting radar detector-detector-detectors.”

This has been widely shared online, by (among others) the rationalist blog Slate Star Codex, and the /r/wikipedia and /r/TIL subreddits. However, it’s almost certainly a hoax. The evidence:

– First, this “fact” was added to Wikipedia by an unregistered, anonymous IP address. No source was provided.

– Second, similar sentences have been added to this article before. These edits are always anonymous, always unsourced, and tell contradictory stories:

“In response, a few people also employ the use of a radar detector detector detector to detect the detection of their radar protector, but that is rare. At this time, the police are developing a radar detector detector detector detector to counter-act this.” (Aug. 2008)

“Furthermore, it is now known that a radar detector detector detector detector detector is being developed by military organizations in many countries.” (Sep. 2008)

“Scientists are currently working on a radar detector detector detector detector detector detector detector detector detector which is in the early stages of prototyping. There were plans to create a radar detector detector detector detector detector detector detector detector detector detector, but these were scrapped due to gross stupidity.” (Sep. 2008)

“Scientists at Cambridge University are currently working on a radar detector detector detector detector detector detector detector detector detector which is in the early stages of prototyping. There were plans to create a radar detector detector detector detector detector detector detector detector detector detector, but these were scrapped due to gross stupidity.” (May 2009)

“This technology was countered with the invention of the radar-detector-detector-detector-detector. However, due to escalation, the development of the radar-detector-detector-detector-detector-detector was deemed necessary.” (Feb. 2011)

– Third, there seem to be no real sources for a radar detector-detector-detector-detector, or for an “R4D” project. All the Google hits for this supposed “military project” trace back to Wikipedia. Google Books, Google News, and Google Scholar turn up nothing, except for some references to the Douglas R4D cargo plane. There are a few posts on forums, but they’re obvious jokes. Eg.:

“More importantly.. Are there radar detector detector detector detectors? That Meanz we Need 2 Stay 1 Step Ahead of the Game. Some1, Quick, Create a Radar Detector Detector Detector Detector Detector *Head Explodes*” (link)

– Fourth, the military doesn’t seem that interested in radar detectors, never mind radar detector-detector-detector-detectors. Military radar towers are generally large, powerful, and obvious (see eg. this Distant Early Warning station); a radar system weak enough to avoid detection wouldn’t be very useful against stealth planes at long ranges, or against jamming devices. The Wikipedia article on radar detectors is exclusively about civilian use. There’s plenty of military interest in radar jamming, or electronic countermeasures, but that’s a different thing. (Radar jammers are illegal for civilians under FCC rules.)

– Fifth, there’s really no reason to build a radar detector-detector-detector-detector. A radar detector-detector-detector is useful if you have a radar detector, because it lets you distinguish between radar (which it can ignore) and radar detector-detectors (which mean your radar detector has to shut down, in places where detectors are illegal). However, the only reason someone would have a radar detector-detector-detector is if they also had a radar detector. Hence, it’s easier, and equally useful, to simply detect the original radar detector.

Dark Patterns by the Boston Globe

After years of falling revenue, some newspapers have resorted to deception to boost their subscription numbers. These dishonest tactics are sometimes called “dark patterns” – user interfaces designed to trick people.

For example, this is a Boston Globe story on Bernie Sanders:

Screenshot from 2016-04-24 18-06-33.png

Before you can read the article, there is a pop-up ad asking you to subscribe. By itself, this is annoying, but not deceptive. The real dark pattern is hidden at the top – the ‘Close’ button (circled in red) uses a very low-contrast font, making it hard to see. It’s also in the left corner, not the standard right corner. This makes it likely that users won’t see it, causing them to subscribe when they didn’t have to.

Once the ‘Close’ link is clicked, the deception continues:

Screenshot from 2016-04-24 18-06-57.png

At the bottom, there’s a non-removable, high-contrast banner ad asking for a paid subscription. Again, this is annoying, but honest. However, the circled text “for only 99 cents per week” is not honest. It’s simply a lie, as later pages will show.

Clicking the “Subscribe now” button brings up this page:

Screenshot from 2016-04-24 18-07-17

Here, it becomes obvious that $0.99 per week isn’t the real price. It’s common for companies to have initial discounts, which isn’t itself a dark pattern. The problem on this page is that the real price is never stated. This misleads the consumer.

Clicking the “Sign Up” button reveals yet more dark patterns:

Screenshot from 2016-04-24 18-08-14

This is the first signup form. It shows the amount charged, but only for the first month ($3.96). The real price is below that, in smaller font, and made less obvious by the red highlighting on the previous line. At first glance, it looks like the same price ($3.99), but the real rate is actually $3.99 per week, while the number in red is $3.96 for the entire month. In addition, in the left column, three of the marketing email signups are checked “yes” by default, so people will subscribe without noticing.

The next page is pretty similar; it’s a standard credit card form:

Screenshot from 2016-04-24 18-58-51.png

And this page is the last one you see before ordering:

Screenshot from 2016-04-24 19-00-27.png

The dark pattern on this page is precisely what isn’t visible: even right before the purchase, the real price is never shown. To find the real price, one must click the little “FAQs” link on the right:

Screenshot from 2016-04-24 19-04-15.png

Then, hidden among questions about crosswords, obituaries, and horoscopes, the user has to click the circled link to discover:

Screenshot from 2016-04-24 19-06-43.png

Yes, the real price isn’t the $0.99 per week in the banner ad, or even the $3.99 per week in fine print on the purchase page. It’s $6.93 per week, almost twice as much as the purchase page rate, and seven times as much as the banner. Since this price only kicks in after a year, it’s almost impossible for average users to notice, unless they carefully check each and every bank statement.

If they do find out and try to cancel, they’ll discover this catch, which isn’t stated or even implied during signup:

Screenshot from 2016-04-24 19-09-44.png

A Boston Globe reader can subscribe online. If they have a question, they can ask over email, or through a convenient live chat service. But if they want to stop paying, they have to call and ask on the phone, no doubt after a long hold time and mandatory sales pitches. There’s no plausible reason for this, other than forcing people to pay when they’d rather cancel the service.

In the short term, these dishonest tricks raise revenue for newspapers that use them. But in the longer term, they do even more damage, by giving the whole industry a reputation for bad business practices. Cable companies can get away with it because of government-granted monopolies; newspapers won’t be able to.

Vegetarian Advocacy Is Ineffective

Most pro-vegetarian advocacy is not very effective. The problem isn’t the goal of making animals happy – it’s likely that farm animals have moral value, and almost everyone agrees that factory farm conditions are horrible. Instead, the problem is the most common strategy used to achieve that goal: namely, emotionally-charged rhetoric to convince people, either in person or on the Internet, that they should personally not eat meat.[1] This category of solution to animal rights problems is likely ineffective at best, and downright harmful at worst. As GiveWell says, non-profits shouldn’t “point to a problem so large and severe (and the world has many such problems) that donors immediately focus on that problem – feeling compelled to give to the organization working on addressing it – without giving equal attention to the proposed solution, how much it costs, and how likely it is to work.”

This essay doesn’t address whether animals have nonzero moral value, which has been thoroughly discussed elsewhere. Nor does it look at other solutions to factory farming, like government legislation, or scientific research to develop meat substitutes. It simply tries to show that a lot of pro-vegetarian advocacy, as it’s currently practiced, is ineffective or outright counterproductive. Since there’s a wide range of arguments to consider, this writeup has been broken up into chunks, of which this is the first. This chunk looks at the simplest sub-question: is vegetarian advocacy a cost-effective way of reducing the amount of meat eaten?

First, we should check: is any kind of activism ever cost-effective? The answer seems to be yes. Eg., everyone knows about the campaign for gay marriage, which won a full victory in the US in 2015 (though after many decades of work). However, there seems to be a clear pattern in which activism campaigns are successful, and which aren’t. Psychologist Daniel Kahneman divides the brain into two systems: System 1, which is fast and intuitive, and System 2, which is slow and reflective. System 1 evolved before System 2, and is more connected to the physical actions we take, while System 2 is more closely linked with what we say, write, and think. The historical record shows that activism aimed at System 2 is difficult, but can sometimes be effective. On the other hand, activism aimed at System 1 is usually a waste of effort.

[Edit: The pattern of successful vs. unsuccessful activism still seems real, but the distinction being drawn here is not what Kahneman meant by System 1 and 2. Apologies for the mistake, further clarification to follow.]

For example, consider the history of activism against racism. In 1955, Rosa Parks started the Montgomery Bus Boycott, which triggered a wave of anti-racist advocacy across the US. Though it was a tough battle, after nine years, Congress and President Lyndon Johnson passed the Civil Rights Act, a sweeping bill that outlawed almost all racial discrimination. Over the next fifty years, advocates kept pushing harder and harder for an end to racism. On a System 2 level, this campaign was so successful that virtually no public figure now advocates for segregation, a radical change from 1950. When businessman Donald Sterling was caught being racist, it was such a big deal that it became the focus of his entire Wikipedia page. However, System 1 has been much more stubborn. After sixty years of advocacy, a psychology metric called the Implicit Association Test shows that most white Americans still have negative System 1 associations with black faces.

There have been many other campaigns to persuade people’s System 1s through rhetoric, advertising, peer pressure, graphic images, and so on, but they usually get negligible or marginal results, compared with the effort invested. Consider smoking as another test case. There’s near-universal agreement that smoking is very, very bad for health, in both the short term and long term, and there have been enormous efforts to convince smokers to quit. On one side, most smokers themselves know darn well how bad smoking is, and many make heroic efforts to stop. On the other side, governments, nonprofits, and companies selling smoking-cessation aids spend billions researching how to help people stop smoking. Thousands of studies have been done on the effectiveness of anti-smoking programs, so we’ve put a lot of effort into finding the very best strategies.

The results of this enormous, expensive, fifty-year effort have been modest at best. The US smoking rate has fallen from ~40% to ~20%, a decline of ~50%, or a bit over 1% per year. Of that decline, much of it was caused by fewer people taking up smoking in the first place. Much of the remainder was caused by laws that make cigarettes more expensive and difficult to use, such as taxes, restrictions on sales, public smoking bans, restaurant smoking bans, and so on. Hence, all anti-smoking programs, cessation aids, addiction research, PR campaigns, etc. combined have given us a few tenths of a percent decline per year.[2] More generally, Scott has a long essay on how these types of programs are ineffective:

“We figured drug use was “just” a social problem, and it’s obvious how to solve social problems, so we gave kids nice little lessons in school about how you should Just Say No. There were advertisements in sports and video games about how Winners Don’t Do Drugs. And just in case that didn’t work, the cherry on the social engineering sundae was putting all the drug users in jail, where they would have a lot of time to think about what they’d done and be so moved by the prospect of further punishment that they would come clean. And that is why, even to this day, nobody uses drugs. (…)

What about obesity? We put a lot of social effort into fighting obesity: labeling foods, banning soda machines from school, banning large sodas from New York, programs in schools to promote healthy eating, doctors chewing people out when they gain weight, the profusion of gyms and Weight Watchers programs, and let’s not forget a level of stigma against obese people so strong that I am constantly having to deal with their weight-related suicide attempts. As a result, everyone… keeps gaining weight at exactly the same rate they have been for the past couple decades.”

To create a quantitative model, we can look at the test case of online ads, which Animal Charity Evaluators (ACE) has done substantial research on. ACE says that “online vegetarian and vegan ads are currently our most cost-effective intervention recommendation”.[3] In economic terms, one should naively expect that one dollar of ad purchases causes about one dollar of money moved, where “money moved” equals the change in (retail price of goods purchased – marginal cost of goods sold), summed over all relevant goods. If each dollar of ads caused more than one marginal dollar of money moved, companies would just buy more ads to make more money, until decreasing marginal returns brought gains back down to $1.

Of course, that’s only a rough approximation. Any given ad campaign might be more or less effective, for any number of reasons. However, in this case, the sheer magnitude of the gap is cause for great concern. ACE estimates that the cost-per-click (CPC) of pro-vegetarian ads is about two to twenty cents, and that based on survey data, around 2% of ad clickers become vegetarian or vegan. American adults spend around $5,000 to $10,000 on food per year,[4] so total lifetime money moved through becoming vegetarian is on the order of $100,000. Hence, under the naive economic model, the chance of people becoming vegetarian because of an ad click is roughly 0.00002% – 0.0002%, a massive four to five orders of magnitude smaller than ACE’s estimate. A likely explanation for this, as ACE themselves note, is that people only click on the ads if they were thinking about becoming vegetarian anyway. About two thousand people typically see an online ad for each person who clicks, so even a very small number of existing proto-vegetarians in the ad audience can fully account for the survey data.

[Edit: 0.00002% is inaccurate, even within this model. Two cents per click is only available in poorer countries, which have much less total money moved, bringing the estimated odds back to around 0.0002%. However, poorer countries also consume much less meat, which largely compensates for this effect in terms of benefit per dollar.]
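The naive model above reduces to a few lines of arithmetic, which can be checked directly. This is just a sketch using the essay’s own figures (a CPC of $0.02–$0.20 and roughly $100,000 of lifetime money moved per new vegetarian); the function name is mine.

```python
# Back-of-the-envelope check of the naive ad-economics model.
# Under that model, $1 of ad spend moves ~$1 of money, so the expected
# money moved per click equals the cost per click, and the implied
# conversion rate is cost-per-click / money-moved-per-conversion.

def implied_conversion_rate(cost_per_click, money_moved_per_convert):
    return cost_per_click / money_moved_per_convert

low = implied_conversion_rate(0.02, 100_000)   # cheapest clicks
high = implied_conversion_rate(0.20, 100_000)  # priciest clicks

print(f"{low:.6%} to {high:.6%}")  # ~0.000020% to 0.000200%
# ACE's survey-based ~2% figure is 4-5 orders of magnitude higher.
```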

What empirical data we have backs this model up. After decades of vegetarian advocacy, PETA says that “society is at a turning point” for veganism. But polling data suggests that only 5% of Americans are vegetarian, and that this percentage has gone down since the 90s. Only 2% consider themselves vegan. Further, these numbers are likely overestimates. Polls often show that a few percent will support any idea, no matter how crazy, like “all politicians are secretly alien lizards” (really!); in addition, most people who said “yes” to the vegan question said “no” to the vegetarian question, which suggests lots of confusion. Even among self-described vegetarians, more detailed surveys show that most still eat an average of one serving of meat per day, which nicely confirms the System 1/System 2 model. It’s easier to convince System 2 that vegetarianism is a good idea than System 1, creating a paradox where two-thirds of ‘vegetarians’ ate meat yesterday.

In addition, even if advocacy is successful, the benefits from one person going vegetarian are not very large compared to their cost. Statistics from vegetarian advocacy groups usually cite the large numbers of animals killed. However, each individual animal life is very short, because meat becomes cheaper when farmers breed animals for rapid growth. Consider chickens as an example. The average American eats 27.5 chickens per year; since a broiler chick takes about five weeks to grow, this gives us 2.64 chicken-years of life prevented by one year of vegetarianism. To evaluate the cost of not eating chicken, we must look not at the price of the chicken, but at the “consumer surplus” – how much benefit the customer derives from the product. With some rough math, this comes out to around $18.84 per chicken;[5] this is averaged over both people who like chicken a lot, and people who only like it barely enough to buy it. That gives us a total value-from-chicken (after the cost of the chicken) of $518 per person per year, which must be given up to prevent 2.64 chicken-years of suffering.

Comparing this to human charity, GiveWell estimates a cost-per-child-saved from malaria nets of $2,838. Since GiveWell’s numbers only count children saved, given developing-world life expectancy, each life saved creates about 60 extra person-years. That equals a cost of $47 per person-year, compared to the average cost of $196 per chicken-year from not eating chicken. Therefore, the person-years are a much cheaper buy, even if we assume that chicken lives are so incredibly bad that preventing one chicken-year is as good as saving one person-year.
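The arithmetic in the last two paragraphs can be re-derived in a few lines. All inputs are the essay’s own figures; the variable names are just for illustration.

```python
# Cost per chicken-year of suffering prevented vs. cost per
# person-year from malaria nets, using the essay's figures.

chickens_per_year = 27.5      # chickens eaten per American per year
weeks_per_chicken = 5         # approximate broiler lifespan in weeks
surplus_per_chicken = 18.84   # consumer surplus estimate (footnote 5)

chicken_years = chickens_per_year * weeks_per_chicken / 52
cost_of_veg_year = chickens_per_year * surplus_per_chicken
cost_per_chicken_year = cost_of_veg_year / chicken_years

cost_per_person_year = 2838 / 60  # GiveWell cost per child / extra years

print(round(chicken_years, 2))       # ~2.64 chicken-years prevented
print(round(cost_of_veg_year))       # ~$518 given up per year
print(round(cost_per_chicken_year))  # ~$196 per chicken-year
print(round(cost_per_person_year))   # ~$47 per person-year
```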

In fact, this estimate is still biased in favor of chickens, for two main reasons. The first is that GiveWell’s estimate doesn’t include the benefits of mosquito nets beyond saving children; these include saving adults from death by malaria (though adults have a much lower fatality rate), preventing many more non-fatal cases of malaria, preventing mosquito-borne disease in general, and of course preventing mosquito bites, which (ignoring everything else) can be done at a cost of hundredths of a penny per bite. The second reason is that Against Malaria Foundation is selected for being extremely low-risk; given a donation to AMF of $X, one can be extremely confident that at least Y lives will be saved. GiveWell thinks it’s likely that if we take on riskier projects, like scientific research and policy reform, the expected cost per life saved will be even lower. Indeed, one of these riskier projects is actually US policy reform to improve farm animal welfare.

Several people have suggested this is an unfair comparison. Most donors have a limited “charity budget”, and $10 that they spend on one charity is $10 which they don’t spend on another. However, what if going vegetarian increases people’s willingness to do other kinds of good? Unfortunately, psychology research suggests this is unlikely – people often have a “do-gooding budget” in addition to their financial budget, and doing one altruistic act will decrease their willingness to do others. To quote Nick Cooney, founder of animal charity The Humane League:

“The contribution ethic refers to the feeling many people have that “I’ve done my part on issue A, so it’s okay for me to ignore issues B, C, and D.” During a Humane League campaign to get restaurants to stop purchasing products from a particularly cruel farm, owners would often tell us that they already do something to help animals (“We buy our eggs from a local farm,” or “I donate to the ASPCA”) so we shouldn’t be bothering them. This phenomenon worked across issues too, as we often had owners or chefs tell us how they supported some other social cause so we should leave them alone about this one.

In addition to feeling like they’ve done their part and therefore don’t need to do anything more, people often overestimate the amount of good they’ve done. Combined, these phenomena make it hard to move people beyond small actions for their one or two preferred causes (Thogersen and Crompton 2009).”


1. Of course, there are many different animal-rights diets. Other types of ethics-based dietary restrictions include “vegan”, “lacto-ovo vegetarian”, “pescatarian”, “flexitarian”, “reducetarian”, and so on. Since keeping track of all the different labels is unwieldy, for the most part I’ll simply say “vegetarian”, even though the main arguments also apply to many other ethics-based diets. I’ve seen people use “veg*n”, but that’s also unwieldy due to ‘*’ not being a letter.

2. One possible complication is that nicotine is chemically addictive. However, treatments to fight the chemical part of addiction have been available for decades, which I’d expect to largely cancel out this effect. In addition, similar problems (alcoholism, gambling, sugar consumption, etc.) have generally seen similar results.

3. Although I disagree with them on several issues, ACE should be commended for trying to be quantitative.

4. Data from this Gallup poll. Note that this data is based on self-reports, and is per-household rather than per-adult, so it is only approximate.

5. The cost of a whole chicken is roughly $1.50 per pound per this article, or $9 for a six-pound chicken. From this paper, the price elasticity of demand of chicken is around -0.8, so we can naively model the demand-price curve with the ODE dy/dx = -0.8*y/x, y(9) = 1, which has the solution y(x) = 5.8/x^0.8. Integrating from x = 9 to, say, 100, we get 27.84, and subtracting the $9 price of the chicken gives a consumer surplus of $18.84. This is, of course, just a rough guess.
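As a sanity check on this footnote, the closed-form solution and the integral can be verified numerically; the 5.8 coefficient in the text is 9^0.8 rounded. A minimal sketch:

```python
# Verifying footnote 5: the ODE dy/dx = -0.8*y/x with y(9) = 1 has
# solution y(x) = 9**0.8 * x**-0.8; integrate it from x = 9 to x = 100.

coef = 9 ** 0.8  # ~5.80, the "5.8" in the footnote

def demand(x):
    return coef * x ** -0.8

assert abs(demand(9) - 1.0) < 1e-12  # boundary condition y(9) = 1 holds

# Antiderivative of coef * x**-0.8 is coef * x**0.2 / 0.2
area = coef * (100 ** 0.2 - 9 ** 0.2) / 0.2
surplus = area - 9  # subtract the $9 purchase price

print(round(area, 2), round(surplus, 2))  # 27.84 18.84
```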