On Wednesday, Matt Lutz made the case for effective altruism and a related philosophical movement, longtermism. In today’s article, Brian Lui levels a critique against longtermism from within the effective altruism movement.
Imagine that you have $1. You’re planning to donate that dollar to charity and have narrowed your options down to two causes: your alma mater’s endowment fund, which awards scholarships for academically gifted students, and a non-profit that delivers life-saving vitamin supplements to children in extreme poverty.
To help make your decision, you might rely on a social movement and philosophy called effective altruism—or EA for short. Effective altruism is dedicated to using evidence and reason to do the most good in the world. Effective altruists try to accomplish this maximum good by supporting philanthropic causes that get the biggest return on investment. An effective altruist, then, would probably encourage you to donate your $1 to the organization that provides vitamin supplements over the university.
If you continue down the effective altruism rabbit hole, you’ll eventually come to a branch of EA called “Longtermism.” This philosophy, which has become increasingly popular in recent years, advocates for maximizing the well-being of future generations, reasoning that the number of potential future lives saved is greater than the number of lives that can be saved today. Longtermists believe that we can accomplish this by reducing existential risks and improving the long-term trajectory of humanity. And so a longtermist might tell you to scrap your plans to donate to either of the other two charities and instead give to a more long-term existential cause—like preventing nuclear war or an AI apocalypse.
There are serious moral questions about longtermism, to say the least. For instance, it’s not obvious that people living today have the same moral obligations to future generations as we do to our contemporaries. But even if we accept that longtermism is morally sound, there remains the question of whether longtermism is an effective framework for improving the world.
There are two major reasons to think that it is not.
The first of these concerns is strictly practical: the world is so full of randomness and chaos that it’s impossible to predict how an action taken today will affect people hundreds or thousands of years from now.
There is a long history of futurists making bold predictions about the future that turn out to be incorrect. Lord Kelvin, one of the 19th century’s most influential physicists, once claimed that the airplane would never work; a few decades later, the consensus had swung to the opposite extreme, holding that Mars would be inhabited by the year 2000. One review of the predictions of three notable forecasters shows that their accuracy was nothing special, ranging from 10% to 50% for hard predictions. And while prediction tools and techniques today are more sophisticated than ever, even these improvements are not up to the task of assessing the compounding unpredictability of long time periods.
Moreover, attempts to address problems often backfire. Artificial intelligence, for example, poses risks that effective altruists were early to notice; they created a nonprofit organization, OpenAI, to research and build safe, beneficial AI. Ironically, OpenAI has become one of the main drivers of the rapid AI revolution over the last year.
In short, predictions about the future are often wrong, and wrong predictions can backfire. This is especially true if the people making those predictions are overly confident and demand sweeping, costly interventions—as some people are today—in the name of longtermism. With little evidence that our ability to forecast has dramatically improved, we should approach longtermists who believe they can accurately predict the distant future with a healthy dose of skepticism.
The second big problem with longtermism is that it crowds out other charitable efforts. Many causes associated with effective altruism do undeniably good work. For example, the Against Malaria Foundation saves tens of thousands of lives every year by providing long-lasting insecticide-treated nets to people in sub-Saharan Africa. AMF is so efficient because it facilitates a low-cost intervention that saves lives without wasting money on administrative bloat or marketing and promotion. Charities like these are a big improvement over the average charity, which spends money far less effectively: GiveWell, an EA-aligned charity evaluator that identifies the most impactful places to donate, estimates that the most effective charities can save one life for around $5,000.
Unfortunately, much of the money and many of the people that go into longtermism are diverted from other areas of effective altruism. Before their forays into longtermism, EA mega-donors Sam Bankman-Fried and Dustin Moskovitz were deeply involved in promoting animal welfare and addressing global poverty, respectively. Following their conversion to longtermism, they mostly abandoned those causes in favor of areas like AI safety, space governance, and politics. This happens because effective altruists prize getting the most “bang for their buck,” and the promise of helping trillions of people in the future, rather than billions today, appears to offer the most value per dollar.
This particular concern would not be so much of a problem if longtermism was bringing new actors into the effective altruism fold rather than siphoning off existing EA support. But since most normal people find longtermism too abstract, the movement often pulls its donors and supporters from the pool of effective altruists, who are more open to quirky ideas about how to do the most good. It’s likely that longtermism is making the world a worse place by diverting time and money from better causes, which have the potential to save and improve the lives of billions of people.
Perhaps in the future longtermism will become a viable form of effective altruism. Maybe we’ll eventually develop the predictive tools and intelligence necessary for us to know how actions today will affect the world hundreds and thousands of years from now. But that’s not the case at the moment. For now, longtermism is a fatally flawed movement that often does more harm than good.
Brian Lui is an effective altruist and independent research analyst based in Sydney.
One of the more annoying and least explicable features of life in the 21st century is our hubris in glibly assuming that, because we have better technology, we are smarter, wiser, and more virtuous than our ancestors. This hubris finds expression in longtermism, which, as the author points out, assumes against all evidence that we're suddenly much better at predicting the future.
Better predictive tools are nice, but a little humility would be more useful because, as J. R. R. Tolkien wrote almost a hundred years ago now in The Return of the King, "[I]t is not our part to master all the tides of the world, but to do what is in us for the succour of those years wherein we are set, uprooting the evil in the fields that we know, so that those who live after may have clean earth to till. What weather they shall have is not ours to rule."
In the discipline of software development, new lifecycle methodologies were adopted and became best practices because of the inadequacy of the previous standards, which relied on a paper-based analysis of future outcomes. The new methodologies were essentially an acknowledgement that human analysis alone cannot accurately design solutions that will hit a future target: there are far too many criteria to effectively capture and understand, and the future is always unknown and subject to surprise change.
New methodologies rely on rapid, incremental change and adjustment. You have an existing system, and any change will disrupt that system. Instead of developing a complete new system that changes everything all at once, smaller steps of change are designed. The results are analyzed, and a new change cycle repeats, incorporating any required adjustments from the lessons learned in the previous cycle.
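In rough code terms, that loop looks something like the minimal Python sketch below. The metric, the target value, and the step size are all hypothetical stand-ins for real-world feedback such as defect counts or user outcomes; no particular methodology prescribes them.

    import random

    def measure(system):
        # Stand-in metric: higher is better. In practice this would be
        # real feedback (defect counts, user outcomes), not a formula.
        return -abs(system - 10.0)  # hypothetical target value of 10

    def improve_incrementally(system, cycles=20, step=1.0):
        # Apply one small change per cycle, keep it only if it measurably
        # improves the system, and otherwise roll back and try again.
        baseline = measure(system)
        for _ in range(cycles):
            candidate = system + random.uniform(-step, step)  # small change
            result = measure(candidate)
            if result > baseline:  # lesson learned: keep the change
                system, baseline = candidate, result
        return system

    print(improve_incrementally(0.0))

The point is the shape of the loop, not the arithmetic: small change, measurement, keep-or-revert, repeat.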
These methodologies work, and they are the standard for software development today. The same principles have also been adopted, in a general sense, by the best-run companies as continuous improvement; Six Sigma, for example, strives for zero manufacturing defects.
There are two problems with getting our political system to adopt these modern approaches to change. One is the political process for deciding on and then funding change: the whole change must be debated and voted on, and then the money flows all at once to make the change within a political cycle.
The other problem is that activism has become an industry. Because so many activists depend for a living on the subject they exploit, they have a natural motivation to see that subject perpetuated. A good example is the "war on hunger." The US has spent trillions, and continues to spend billions, to feed those who cannot or do not adequately feed themselves and their families. Although there is much less real hunger in the nation than there once was, significant problems remain unsolved, and many of them are newly developed as circumstances have changed. A good example is the food deserts that have multiplied in rural and urban areas as retail food corporations have consolidated and the cost of running a restaurant has increased to the point that fewer can operate in high-cost, low-population areas. The hunger activists, meanwhile, have shifted their topic to "food insecurity" and the lack of access to healthy food.
This should be a solvable problem, with a final solution built as a multi-pronged, systems-level approach. Models for that system should be piloted in small batches in a few areas of the country; lessons learned should be adopted and the program expanded to other areas. The entire system should run on a constant-improvement loop, with adjustments made as needed and custom exceptions for regions with unique circumstances, preserving local autonomy.
However, solving this problem puts activists out of work, and they will naturally work to undermine a final solution out of self-interest.
So, there are two fundamental problems that sustain this failed reliance on longtermism. One is our political and governmental approach to change, and the other is the large activism industry. The former can be addressed by legislation requiring government agencies to use an incremental-change approach tied to funding tranches. The latter, I think, can be better managed by taxing 501(c)(3) corporations involved in political activism once their operations exceed a certain financial size.