There’s a lot to like about Effective Altruism, but I fundamentally disagree with one of its core assumptions. I’d like to explain why, what my plan is to improve on EA, and why I think it’ll be more effective at shaping the world than EA.
Effective altruism, like most groups, is based on synergy: a whole greater than the sum of its parts. You form a group because you can do more with teamwork than by yourself. You start a company because if you pool everyone’s money, you can make more products. You put the aluminium smelter next to the dam, because one feeds into the other. You win by taking advantage of positive externalities, division of labor, and splitting large fixed costs. However, no matter how much business schools use the word, not everything has synergy. You don’t put a pizza restaurant and an aircraft warehouse in the same building. You wouldn’t put physics and dance students in the same classroom. There’s no point. They’d just get in each other’s way.
A central belief of EA is that there’s synergy from all different ways of doing good. The EA message is that people donating to fund insecticide nets, campaigning for veganism, and doing research on unfriendly AI can all benefit from each other. They can all be more effective if they share websites, go to the same events, plan strategy together, fundraise together, cooperate in recruiting new members, and so on. And to some extent, I agree with that. Even projects that sound very different can, sometimes, benefit from teaming up.
However, I think there’s very little synergy between doing good on a short timescale (say, one year or less), and a long timescale (five years or more). Doing good in the long term means you must consider how systems evolve over time, and that’s a very different mode of thinking. If a charity buys cancer drugs for patients, they need to know how cancer treatment works. But if a pharma company develops a new cancer drug, they need to analyze not just how things work now, but all the ways they *might* work, five or ten or fifteen years later. A competitor could make a better drug before they release theirs. The FDA might change its rules for drug approval. The economy could crash, and make banks pull their funding. Insurers could get together, and force companies to lower prices. Courts could change how they enforce patents, the AMA might recommend more or less aggressive treatment, the tools scientists use for drug discovery could be replaced, it goes on and on and on. If anything big happens, the charity’s donors can just give elsewhere; the pharma company’s investors are mostly stuck.
And that’s a tame example. When there’s even more change, you have to make plans with no obvious connection between cause and effect. For example, suppose someone said they wanted to replace Boeing in the airplane industry, and their strategy was to sell Boeing fuel pumps. At first glance, that makes little sense. But it’s essentially what Microsoft did with computers. Sell an apparently minor, but critical, component to the industry’s leader, and expand on that component until the original leader is irrelevant. This type of indirect maneuvering has almost no connection to, say, doing a controlled study of vegan diets or polio vaccines; the time horizons, and the uniqueness of each situation, make statistical research impossible.
If short-term and long-term plans have little synergy, that implies each group ought to pick one, and focus on that. I’m picking long term. Partly because it plays to my strengths, and those of my friends, who are mostly younger and more intellectual. And partly because I have long time horizons; I don’t put a huge discount on the world in ten or twenty years, compared to the world now. This preference seems to be pretty rare, and supply and demand mean it’s easier to achieve rare goals than common ones. Since many EAs share these traits, I think a lot of them would benefit from picking long-term along with me.
Right now, as I’m writing this, there are also two extra advantages to long-term. The first is that it’s been in a slump recently, so a small number of people can have a bigger impact. (Eg., to my knowledge, the only serious attempt to forecast AI trajectories in the last three years was by AI Impacts, and it’s unfortunately still unpublished.) The second is that, compared to EA, it makes developing new ideas much easier. The defining EA question for any new plan is, “does it do the most good?”. But that’s very hard to answer. Partly because “the most good” really depends on who you are, and what you value. And partly because, to say something is “the most good”, you also have to know how much good everything else is, which is a lot of work. It’s not a coincidence that the three biggest EA causes – global poverty, factory farming, and AI risk – are all ideas that came from elsewhere (and were brought into EA afterwards), not invented from a first principle of “do the most good”. Of course, effectiveness and benefit-per-dollar are still important. But if the *first* question asked is “how long does this plan take?”, it’s much easier to explore the space.
As a first step, I’m creating a new private Facebook group, called “Long Term World Improvement”. Like the plans it will discuss, it’s an experiment; it may fizzle, or transform into something very different. Topics might include, but are not limited to:
– Curing disease, ending aging, and human enhancement
– Long-term government and policy work (not the next election)
– Strategies for long-term income or influence, eg. $10 million in the next decade
– AI capabilities and safety, although I’d like to avoid generic rehashes of Bostrom/Yudkowsky
– How to build a society with non-human general intelligences
– Scientific research, tools, and meta-science
– Space exploration and colonization
– Building new cities, or large-scale relocation to an existing city
– Robotics, neuroscience, nanoscience, and brain-machine interfaces
– Rational forecasting and extrapolation methods
– Biosecurity, nuclear security, and disaster preparedness (both personal and global)
– Any new technology that’s not yet well-known, but might be very impactful (eg. lab-grown meat circa 2006)
– Historical examples and analyses of long-term plans, and what worked or didn’t
– Long-term strategies you’re personally doing or considering (encouraged)
While these projects are very different, they all take place in a single, shared future, and their success depends on understanding that future. This links them all together, in a way that’s impossible for “doing good” as a pure abstraction.
Off-topic items include, but aren’t limited to:
– Redistribution of existing personal assets, or close equivalents (eg. bed net donation)
– Political news stories, except specific, direct impacts on plans (eg. if X has a coup, that affects political strategy there)
– Marketing products that already exist
– Rare, edge-case ethical scenarios, like the proverbial trolley problem
– Various forms of “ethical consumption” and “ethical divestment”
– Generic, “routine” news that appears very frequently (eg. new “potential cancer cures” are announced daily)
– “Long term” strategies whose core is a pyramid scheme (recruiting people to recruit people to recruit people to…)
– Generic “junk food” news, such as sports, crime, or outrage-bait
Hacker News moderation rules will apply. (For non-Hacker News users, check out https://news.ycombinator.com/newsguidelines.html; you can also create an account, turn on “showdead”, and browse https://news.ycombinator.com/threads?id=dang to get a feel for how it works.) To join, message me on Facebook, email me at firstname.lastname@example.org, or chat me on Signal (203-850-2427). If you don’t know me, please include a short summary of who you are, and what you’re interested in. As moderator, I reserve the right to not add people, or to remove existing people if they break the rules. Feel free to message me with questions. Good luck, and may tomorrow be brighter than today.
I would maybe like to join; the topic is certainly one that’s close to my heart. But I fear that this kind of thing usually fails. Without a few very invested people (invested in the group, not just the topic) to drive discussion, and very good moderation, I would expect this to go the way of most Facebook groups.
The Hacker News moderation actually seems pretty decent (nice links, by the way; useful to see), so if you’re actually planning on putting in the effort to enforce that kind of thing, that’s encouraging.
I also expect the posts to be mostly things that don’t actually seem that plausible to me for long-term world improvement: related scientific advancements, say, without much discussion of how they affect a plausible view about how to improve the world. Maybe you’re planning on moderating that kind of stuff out.
I also find myself more on the “leveragy” end of thinking about how to improve the world in the long run. Things like
– Scientific research, tools, and meta-science
– Space exploration and colonization
– Building new cities
seem interesting and potentially relevant to improving the world in the long run, but they are very clearly not the main stumbling blocks. Our problem is not that we don’t know *how to do these things*, but that we don’t know how to coordinate doing them.
Because of that, a forum that focuses on that kind of scientific improvement doesn’t feel that attractive.
I say these things not to discourage you if you have a vision — maybe I’m missing something for why this is actually a pretty good idea — but just to state why I don’t personally feel that excited.
> The EA message is that people donating to fund insecticide nets, campaigning for veganism, and doing research on unfriendly AI can all benefit from each other.
I think the basic argument (though surprisingly one that I’ve rarely seen discussed) is that all of these causes benefit from rationality (https://www.lessestwrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes).
Deciding to be vegan, to donate a portion of your income to the third world, or to work on mitigating existential risk all depend on the use of some rare-in-the-world cognitive faculties: consequentialism, a willingness to shut up and multiply, and (most importantly) a syncing up of near-mode and far-mode reasoning.
All of these causes benefit from boosts to people’s skill on those dimensions. So groups working on those causes should get a synergy out of working together to increase rationality, which is upstream of buying any of their particular arguments.
Notably, nothing like this ever happened, as far as I know. CFAR does rationality training, but this is closer to productivity boosts + the specific flavors of rationality that are needed for thinking about AI risk.
The argument I make above suggests a world where CFAR produces rationality-training materials, but so does GiveWell, and so does ACE, and so does Giving What We Can.