The Case For Effective Altruism
Most of the criticism leveled at the philosophical movement—including its sister philosophy, longtermism—doesn’t tackle its core ideas.
In November of last year, the cryptocurrency exchange FTX blew up. Its founder and CEO, Sam Bankman-Fried, has been charged with perpetrating massive fraud. One of the interesting side-effects of this story is the increased attention that’s been paid to effective altruism and longtermism, two philosophical ideas that have been gaining in popularity in recent years. Adherents of these views tend to be extremely passionate advocates and have (loosely) organized themselves into a “movement” or “community” that promotes those views. Bankman-Fried was one of the most vocal and visible members of the movement. For those who are skeptical of effective altruism and longtermism, Bankman-Fried’s downfall is proof that those ideas were always worthless. Even for those who remain dedicated to the movement, the FTX blowup has prompted a crisis of faith and a moment of reflection.
I’m not a member of the “movement” in any sense. I’m just a philosophy professor who teaches the philosophical ideas that movement is based on. So my perspective on these debates is not as a partisan, but as someone who grades papers. And unfortunately, much of the critical discussion of effective altruism and longtermism is B work at best: it shows some grasp of the issues under debate, but doesn’t demonstrate a full understanding of the core ideas.
Those core ideas come from philosophical work in ethics in the late 20th century, particularly from the writing of Peter Singer and Derek Parfit. Singer and Parfit’s ideas have proved so compelling that many others—first philosophers, and then activists and philanthropists—have accepted their views and developed them further while attempting to put them into practice. Some of the ways that Singer and Parfit’s views have been developed have received criticism, much of it warranted. But the core ideas themselves typically remain untouched by such debate.
In his paper “Famine, Affluence, and Morality,” Peter Singer defends two core moral propositions. The first is that “suffering and death from lack of food, shelter, and medical care is bad.” That should be uncontroversial. Singer doesn’t even argue for it, simply saying that if you disagree with that claim, you’re not his target audience. The second proposition is that “if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.”
Singer illustrates this second proposition with the following thought experiment: suppose you are walking near a shallow pond and see a young child drowning. Unless someone jumps in to rescue the child, he will die. If you jump in, you can easily save him. However, you’re wearing very nice, expensive clothes, and those clothes will be ruined if you jump in. If you save the drowning child, you will prevent something bad from happening, and you will have to sacrifice something to do it. But the thing that you will sacrifice, your nice clothes, is not of comparable moral importance to the life of a child. So you ought, morally, to save the child. If you don’t, you’re doing something profoundly morally wrong.
Singer’s two propositions entail that if you can prevent suffering and death from lack of food, shelter, and medical care without sacrificing anything of comparable moral importance, then you ought, morally, to do so. And, in fact, we can prevent suffering and death from lack of food, shelter, and medical care. Many people, particularly in the developing world, die from these causes. And there are many charities that are set up to provide these people with the food, shelter, and medical care that they need to survive. If you donated a substantial portion of your income to those charities, they would have the resources to save the lives of more people. So you ought to donate a substantial portion of your income to charity unless doing so would sacrifice something of comparable moral importance.
This is not something that it would be nice for you to do. It’s something you’re morally obligated to do. Everyone who doesn’t donate is like the person who walks blithely past the drowning child. Those drowning children happen to be in South Asia and sub-Saharan Africa, but physical distance is not morally relevant. Someone’s being far away is morally relevant only insofar as it means that we can’t help them. But in today’s interconnected world, with international aid organizations working in even the poorest of places, distance is no bar to our ability to help. Giving money to aid organizations increases their power to steer resources to people who will die for lack of them.
How much money must we give to aid organizations? A strict utilitarian would say that we should give until the amount of suffering we would alleviate is equal to the amount of suffering we would incur ourselves. But while many effective altruists are utilitarians, Singer’s argument doesn’t assume strict utilitarianism. He says that we should give until the point where we’d sacrifice something of “comparable moral importance.” We should give until we’d have to give up something that’s in the same ballpark, morally speaking. What is of comparable moral importance to suffering and death from lack of food, shelter, and medical care? That’s a difficult question with no agreed-on answer. But owning a set of nice clothes surely doesn’t qualify, as the case of the drowning child illustrates. So you shouldn’t buy nice clothes and should, instead, give your money to international aid organizations.
I won’t try to defend an exact criterion for how much you are morally obligated to give under Singer’s argument. That’s something you’ll have to work out for yourself. If you can look yourself in the mirror and say, honestly, “This thing that I’m buying for myself is of comparable moral importance to saving the life of a child,” then go ahead and buy that thing for yourself. But I suspect that few of your expenditures will pass that test.
How much money does it take to save the life of a child? This question, inspired by Singer’s work, has been the subject of intense study. GiveWell, an effective altruist organization, has concluded that the most cost-effective way to save a life is by funding medical treatment to prevent malaria infection. Malaria is practically unknown in the developed world, but it remains a scourge in undeveloped areas with tropical climates. By donating about $5,000 to prevent malaria infection, you will save, on average, one child’s life. That might sound like a lot of money. But really, it’s a shockingly small amount. How much money do you spend in a year on things that are less important than the life of a child? How many lives could you save if you were willing to live more frugally?
One of the most disruptive ideas to come from Singer’s work—the idea which puts the “effective” in “effective altruism”—is that we should apply this logic to charitable giving itself. We have a tendency to group all charitable giving together into an undifferentiated lump of “good things to do,” or “good causes to support.” If you’re fortunate enough to have enough money that you’re happy to give altruistically, that’s wonderful. But what are you giving to? A donation to your alma mater might allow them to renovate the student dorms, even get your name on the building if you are rich enough to write a big check. But if you can give $1 million to renovate the dorms, you could instead give that money to malaria prevention and save 200 lives. I’m not saying that nicer dorms aren’t important. But are they of comparable moral importance to saving 200 lives? Look yourself in the mirror, be honest with yourself, and see what you think.
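The arithmetic behind that comparison is simple enough to sketch. Here is a toy calculation using the rough $5,000-per-life figure cited above; the figure is an approximation, and the function is illustrative rather than any official GiveWell tool:

```python
# Toy cost-effectiveness comparison. The $5,000-per-life figure is the
# rough estimate for malaria prevention cited in the text; it is an
# approximation, not a precise or official number.

COST_PER_LIFE_SAVED = 5_000  # approximate cost in USD to save one life


def lives_saved(donation: float, cost_per_life: float = COST_PER_LIFE_SAVED) -> float:
    """Expected number of lives saved by a donation of the given size."""
    return donation / cost_per_life


# The $1 million dorm-renovation gift, redirected to malaria prevention:
print(lives_saved(1_000_000))  # 200.0
```

The point of the exercise is not the precision of the number but the comparison it forces: any competing use of the same money must be weighed against the lives it could have saved.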
The same logic applies to all other areas of charitable giving and at all other levels. GiveWell promotes malaria prevention because it is the most cost-effective way to save a life. If you donate $5,000 to any cause, even life-saving causes, you are probably not saving a life. If you give to malaria prevention, you are. Is the thing that you’re donating to of comparable moral importance to saving the life of a child? That’s between you and your mirror.
As with effective altruism, the basic idea of longtermism is simple and hard to dispute: the interests of people in the future matter just as much as the interests of people living today. In his book Reasons and Persons, Derek Parfit illustrates this idea with environmental concerns. If one environmental policy would have us burning through the earth’s resources in a way that would leave future generations immiserated, while another policy would preserve natural resources in a way that would ensure that future generations would enjoy lives that are at least as good as ours, preservation would be better than depletion. If nuclear power today will generate a huge amount of nuclear waste that we can’t destroy and would eventually create a huge burden for future generations, Parfit argues, we ought not generate that nuclear waste if there is another energy policy that has all of the same benefits without this cost. The fact that the people who would bear these costs are in the future doesn’t make these costs less morally important. In general, it’s immoral to immiserate future generations. They don’t exist yet, but they will, and when they do exist, their interests will matter.
What makes this line of thinking so pressing is that there are far more people in the future than there are today. If we treat the interests of future people as just as important as the interests of current people, there is much more to be gained by focusing on future-oriented interventions than present-oriented interventions. Making the world a better place today will improve the well-being of, at most, about eight billion people. Making the world a better place in the future will improve the well-being of a potentially infinite number of people. This means that we should be giving the future much more consideration than we actually do. (And if you think there won’t be many more than eight billion people in the future, that’s because you’re anticipating some calamity that would eliminate humanity. Surely preventing such a calamity would be a good thing!)
The most important objection to this kind of reasoning is that it’s very hard, perhaps impossible, to know what kinds of actions will be best for future people. Given this shortcoming, isn’t it best to focus on the here and now? But this objection doesn’t actually refute the basic argument for longtermism. To see why not, we need to draw a distinction between two kinds of moral claims: moral principles and practical directives. Moral principles give a general, abstract characterization of what is valuable or what our obligations are. “Lying is wrong” is a good example of a moral principle. Practical directives tell us what to do in particular cases. They are applications of moral principles. “Don’t tell your son that Santa Claus exists” is a good example of a practical directive. Importantly, the practical directive not to tell my son that Santa Claus exists doesn’t follow from the moral principle that lying is wrong by itself. To get that further conclusion, we need to make some further empirical assumptions: specifically, the assumption that Santa Claus doesn’t exist. (If Santa Claus did exist, it wouldn’t be lying if I told my son that he does!) In general, practical directives only follow from moral principles given some empirical assumptions about the way the world is.
The idea that future people matter just as much as present people is a moral principle. To extract practical directives from that moral principle, we need to make empirical assumptions about the way the world will be in the future and what the effects of our current actions will be. Those assumptions are very hard to know with any degree of certainty. But this only means that it is very hard to extract practical directives from longtermist moral principles. It does not mean that longtermist moral principles are false. It would certainly be convenient if it were always easy to derive practical directives from true moral principles, but whoever said morality was convenient? Sometimes it’s just hard to know what to do. Pointing out that it’s hard to know what to do if longtermism is true isn’t a good objection to longtermism.
It’s important to realize that the need to make empirical assumptions about the future in order to extract practical directives is not a unique problem for longtermism. Consider the practical directive not to drive drunk. We think it’s wrong to get behind the wheel while intoxicated not because the act itself is wrong but because it could lead to bad consequences during the ride home. There’s a sense in which someone who gets into a fatal crash while sober has done something worse than someone who drives drunk but makes it home safely. Yet we still say that the drunk driver has done something wrong. This is because, based on our best information at the time about the sorts of things that tend to happen when someone drives drunk, drunk driving imposes unjustifiable risks on the driver and others. That seems like perfectly legitimate moral reasoning. Longtermism simply says we should extend that kind of reasoning forward to the farther future. Neither is it odd to consider the impacts of today’s actions decades or centuries from now. Much climate change activism is premised on the need to prevent bad consequences that will occur in the far future. Longtermism simply says we should engage in this sort of reasoning more systematically.
Of course, making specific predictions about the far future is extraordinarily difficult. But this just means we should engage in long-term moral reasoning with caution and humility. Nonetheless, there are some actions and policies which, based on what we know today, seem like relatively good bets to hedge against very plausible risks. Mitigating climate change is one of these. Developing a new generation of antibiotics to counter the increasing emergence of antibiotic-resistant bacteria is another. Developing the technology to detect and destroy planet-killing asteroids is the sort of thing we could work on today and that, if the history of asteroid impacts on Earth is any indication, some future generation will be very grateful for.
There’s plenty of controversy over how to implement effective altruist and longtermist ideas. Some people sincerely think that you’re morally obligated to work in finance to donate all of your money to research that prevents a rogue AI from destroying humanity. Others think that’s an insane conclusion. But if that is an insane conclusion, the mistake seems to arise in our reasoning about how to implement the core moral principles from Singer and Parfit. The interests of future people matter just as much as the interests of present people; the suffering and death of either present people or future people is bad; and if we can prevent something bad from happening without sacrificing anything of comparable moral importance, we ought to do it. If you agree with that, you’re fundamentally on board with the effective altruist/longtermist project. The rest is just bickering over details.
Matt Lutz is an Associate Professor of Philosophy at Wuhan University and writes the Substack Humean Beings.