There’s a lot to like about Effective Altruism, but I fundamentally disagree with a core assumption it makes. I’d like to explain why, lay out my plan to improve on EA, and argue that this plan will be more effective at shaping the world.

Effective altruism, like most groups, is based on synergy: a whole greater than the sum of its parts. You form a group because you can do more with teamwork than by yourself. You start a company because if you pool everyone’s money, you can make more products. You put the aluminum smelter next to the dam, because one feeds into the other. You win by taking advantage of positive externalities, division of labor, and splitting large fixed costs. However, no matter how much business schools use the word, not everything has synergy. You don’t put a pizza restaurant and an aircraft warehouse in the same building. You wouldn’t put physics and dance students in the same classroom. There’s no point. They’d just get in each other’s way.

A central belief of EA is that there’s synergy among all the different ways of doing good. The EA message is that people donating to fund insecticide nets, campaigning for veganism, and doing research on unfriendly AI can all benefit from each other. They can all be more effective if they share websites, go to the same events, plan strategy together, fundraise together, cooperate in recruiting new members, and so on. And to some extent, I agree with that. Even projects that sound very different can, sometimes, benefit from teaming up.

However, I think there’s very little synergy between doing good on a short timescale (say, one year or less), and a long timescale (five years or more). Doing good in the long term means you must consider how systems evolve over time, and that’s a very different mode of thinking. If a charity buys cancer drugs for patients, they need to know how cancer treatment works. But if a pharma company develops a new cancer drug, they need to analyze not just how things work now, but all the ways they *might* work, five or ten or fifteen years later. A competitor could make a better drug before they release theirs. The FDA might change its rules for drug approval. The economy could crash, and make banks pull their funding. Insurers could get together, and force companies to lower prices. Courts could change how they enforce patents, the AMA might recommend more or less aggressive treatment, the tools scientists use for drug discovery could be replaced; it goes on and on and on. If anything big happens, the charity’s donors can just give elsewhere; the pharma company’s investors are mostly stuck.

And that’s a tame example. When there’s even more change, you have to make plans with no obvious connection between cause and effect. For example, suppose someone said they wanted to replace Boeing in the airplane industry, and their strategy was to sell Boeing fuel pumps. At first glance, that makes little sense. But it’s essentially what Microsoft did with computers: sell an apparently minor, but critical, component (an operating system) to the industry’s leader (IBM), and expand on that component until the original leader is irrelevant. This type of indirect maneuvering has almost no connection to, say, doing a controlled study of vegan diets or polio vaccines; the time horizons, and the uniqueness of each situation, make statistical research impossible.

If short-term and long-term plans have little synergy, that implies each group ought to pick one, and focus on that. I’m picking long term. Partly because it plays to the strengths of me and my friends, who are mostly younger and more intellectual. And partly because I have long time horizons; I don’t put a huge discount on the world in ten or twenty years, compared to the world now. This preference seems to be pretty rare, and supply and demand mean it’s easier to achieve rare goals than common ones. Since many EAs share these traits, I think a lot of them would benefit from picking long-term along with me.
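
(To make “discount” concrete: under standard exponential discounting, the world t years from now gets weight 1/(1+r)^t for some annual rate r. At r = 10%, the world twenty years out counts for about 15% as much as today; at r = 1%, about 82%. “Not a huge discount” means something much closer to the second case.)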

Right now, as I’m writing this, there are also two extra advantages to going long-term. The first is that long-term work has been in a slump recently, so a small number of people can have a bigger impact. (Eg., to my knowledge, the only serious attempt to forecast AI trajectories in the last three years was by AI Impacts, and it’s unfortunately still unpublished.) The second is that, compared to EA, it makes developing new ideas much easier. The defining EA question for any new plan is, “does it do the most good?”. But that’s very hard to answer. Partly because “the most good” really depends on who you are, and what you value. And partly because, to say something is “the most good”, you also have to know how much good everything else does, which is a lot of work. It’s not a coincidence that the three biggest EA causes – global poverty, factory farming, and AI risk – are all ideas that came from elsewhere (and were brought into EA afterwards), not invented from the first principle of “do the most good”. Of course, effectiveness and benefit-per-dollar are still important. But if the *first* question asked is “how long does this plan take?”, it’s much easier to explore the space.

As a first step, I’m creating a new private Facebook group, called “Long Term World Improvement”. Like the plans it will discuss, it’s an experiment; it may fizzle, or transform into something very different. Topics might include, but are not limited to:

– Curing disease, ending aging, and human enhancement
– Long-term government and policy work (not the next election)
– Strategies for long-term income or influence, eg. $10 million in the next decade
– AI capabilities and safety, although I’d like to avoid generic rehashes of Bostrom/Yudkowsky
– How to build a society with non-human general intelligences
– Scientific research, tools, and meta-science
– Space exploration and colonization
– Building new cities, or large-scale relocation to an existing city
– Robotics, neuroscience, nanoscience, and brain-machine interfaces
– Rational forecasting and extrapolation methods
– Biosecurity, nuclear security, and disaster preparedness (both personal and global)
– Any new technology that’s not yet well-known, but might be very impactful (eg. lab-grown meat circa 2006)
– Historical examples and analyses of long-term plans, and what worked or didn’t
– Long-term strategies you’re personally doing or considering (encouraged)

While these projects are very different, they all take place in a single, shared future. And success at each of them depends on understanding that future. This links them all together, in a way that’s impossible for “doing good” as a pure abstraction.

Off-topic items include, but aren’t limited to:

– Redistribution of existing personal assets, or close equivalents (eg. bed net donation)
– Political news stories, except specific, direct impacts on plans (eg. if X has a coup, that affects political strategy there)
– Marketing products that already exist
– Rare, edge-case ethical scenarios, like the proverbial trolley problem
– Various forms of “ethical consumption” and “ethical divestment”
– Generic, “routine” news that appears very frequently (eg. new “potential cancer cures” are announced daily)
– “Long term” strategies whose core is a pyramid scheme (recruiting people to recruit people to recruit people to…)
– Generic “junk food” news, such as sports, crime, or outrage-bait

Hacker News moderation rules will apply. (For non-Hacker News users, check out https://news.ycombinator.com/newsguidelines.html; you can also create an account, turn on “showdead”, and browse https://news.ycombinator.com/threads?id=dang to get a feel for how it works.) To join, message me on Facebook, email me at alyssamvance@gmail.com, or reach me on Signal (203-850-2427). If you don’t know me, please include a short summary of who you are, and what you’re interested in. As moderator, I reserve the right to not add people, or to remove existing people if they break the rules. Feel free to message me with questions. Good luck, and may tomorrow be brighter than today.