The Case For Effective Altruism
Most of the criticism leveled at the philosophical movement—including its sister philosophy, longtermism—doesn’t tackle its core ideas.
In November of last year, the cryptocurrency exchange FTX blew up. Its founder and CEO, Sam Bankman-Fried, has been charged with perpetrating massive fraud. One of the interesting side-effects of this story is the increased attention that’s been paid to effective altruism and longtermism, two philosophical ideas that have been gaining in popularity in recent years. Adherents of these views tend to be extremely passionate advocates and have (loosely) organized themselves into a “movement” or “community” that promotes those views. Bankman-Fried was one of the most vocal and visible members of the movement. For those who are skeptical of effective altruism and longtermism, Bankman-Fried’s downfall is proof that those ideas were always worthless. Even for those who remain dedicated to the movement, the FTX blowup has prompted a crisis of faith and a moment of reflection.
I’m not a member of the “movement” in any sense. I’m just a philosophy professor who teaches the philosophical ideas that movement is based on. So my perspective on these debates is not as a partisan, but as someone who grades papers. And unfortunately, much of the critical discussion of effective altruism and longtermism is B work at best: it shows some grasp of the issues under debate, but doesn’t demonstrate a full understanding of the core ideas.
Those core ideas come from philosophical work in ethics in the late 20th century, particularly from the writing of Peter Singer and Derek Parfit. Singer and Parfit’s ideas have proved so compelling that many others—first philosophers, and then activists and philanthropists—have accepted their views and developed them further while attempting to put them into practice. Some of the ways that Singer and Parfit’s views have been developed have received criticism, much of it warranted. But the core ideas themselves typically remain untouched by such debate.
Effective Altruism
In his paper “Famine, Affluence, and Morality,” Peter Singer defends two core moral propositions. The first is that “suffering and death from lack of food, shelter, and medical care is bad.” That should be uncontroversial. Singer doesn’t even argue for it, simply saying that if you disagree with that claim, you’re not his target audience. The second proposition is that “if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.”
Singer illustrates this second proposition with the following thought experiment: suppose you are walking near a shallow pond and see a young child drowning. Unless someone jumps in to rescue the child, he will die. If you jump in, you can easily save him. However, you’re wearing very nice, expensive clothes, and those clothes will be ruined if you jump in. If you save the drowning child, you will prevent something bad from happening, and you will have to sacrifice something to do it. But the thing that you will sacrifice, your nice clothes, is not of comparable moral importance to the life of a child. So you ought, morally, to save the child. If you don’t, you’re doing something profoundly morally wrong.
Singer’s two propositions entail that if you can prevent suffering and death from lack of food, shelter, and medical care without sacrificing anything of comparable moral importance, then you ought, morally, to do so. And, in fact, we can prevent suffering and death from lack of food, shelter, and medical care. Many people, particularly in the developing world, die from these causes. And there are many charities that are set up to provide these people with the food, shelter, and medical care that they need to survive. If you donated a substantial portion of your income to those charities, they would have the resources to save the lives of more people. So you ought to donate a substantial portion of your income to charity unless doing so would sacrifice something of comparable moral importance.
This is not something that it would be nice for you to do. It’s something you’re morally obligated to do. Everyone who doesn’t donate is like the person who walks blithely past the drowning child. Those drowning children happen to be in South Asia and sub-Saharan Africa, but physical distance is not morally relevant. Someone’s being far away is morally relevant only insofar as it means that we can’t help them. But in today’s interconnected world, with international aid organizations working in even the poorest of places, distance is no bar to our ability to help. Giving money to aid organizations increases their power to steer resources to people who will die for lack of them.
How much money must we give to aid organizations? A strict utilitarian would say that we should give until the amount of suffering we would alleviate is equal to the amount of suffering we would incur ourselves. But while many effective altruists are utilitarians, Singer’s argument doesn’t assume strict utilitarianism. He says that we should give up to the point where we’d sacrifice something of “comparable moral importance.” We should give until we’d have to give up something that’s in the same ballpark, morally speaking. What is of comparable moral importance to suffering and death from lack of food, shelter, and medical care? That’s a difficult question with no agreed-on answer. But owning a set of nice clothes surely doesn’t qualify, as the case of the drowning child illustrates. So you shouldn’t buy nice clothes and should, instead, give your money to international aid organizations.
I won’t try to defend an exact criterion for how much you are morally obligated to give under Singer’s criterion. That’s something you’ll have to work out for yourself. If you can look yourself in the mirror and say, honestly, “This thing that I’m buying for myself is of comparable moral importance to saving the life of a child,” then go ahead and buy that thing for yourself. But I suspect that few of your expenditures will pass that test.
How much money does it take to save the life of a child? Inspired by Singer’s work, this has been the subject of intense study. GiveWell, an effective altruist organization, has concluded that the most cost-effective way to save a life is by funding medical treatment to prevent malaria infection. Malaria is practically unknown in the developed world, but it remains a scourge in developing areas with tropical climates. By donating about $5,000 to prevent malaria infection, you will save, on average, one child’s life. That might sound like a lot of money. But really, it’s a shockingly small amount. How much money do you spend in a year on things that are less important than the life of a child? How many lives could you save if you were willing to live more frugally?
One of the most disruptive ideas to come from Singer’s work—the idea that puts the “effective” in “effective altruism”—is that we should apply this logic to charitable giving itself. We have a tendency to lump all charitable giving together into one undifferentiated category of “good things to do” or “good causes to support.” If you’re fortunate enough to have enough money that you’re happy to give altruistically, that’s wonderful. But what are you giving to? A donation to your alma mater might allow it to renovate the student dorms, even get your name on the building if you are rich enough to write a big check. But if you can give $1 million to renovate the dorms, you could instead give that money to malaria prevention and save about 200 lives. I’m not saying that nicer dorms aren’t important. But are they of comparable moral importance to saving 200 lives? Look yourself in the mirror, be honest with yourself, and see what you think.
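The arithmetic behind this comparison can be made explicit. The sketch below is purely illustrative: the $5,000-per-life figure is the approximate number cited above, the `lives_saved` helper is my own naming, and the numbers are not GiveWell’s actual estimates.

```python
# Illustrative sketch of cost-effectiveness reasoning about charitable giving.
# The cost-per-life figure is the rough number cited in the article;
# it is an assumption for illustration, not official GiveWell data.

MALARIA_COST_PER_LIFE = 5_000  # approximate dollars per life saved

def lives_saved(donation: float, cost_per_life: float = MALARIA_COST_PER_LIFE) -> float:
    """Expected number of lives saved if the whole donation funds the intervention."""
    return donation / cost_per_life

# The article's dorm-renovation comparison: the same $1 million, redirected.
print(lives_saved(1_000_000))  # -> 200.0
```

The point of the calculation is not precision but scale: once interventions are priced in lives per dollar, very different donations become directly comparable.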
The same logic applies to all other areas of charitable giving and at all other levels. GiveWell promotes malaria prevention because it is the most cost-effective way to save a life. If you donate $5,000 to any cause, even life-saving causes, you are probably not saving a life. If you give to malaria prevention, you are. Is the thing that you’re donating to of comparable moral importance to saving the life of a child? That’s between you and your mirror.
Longtermism
As with effective altruism, the basic idea of longtermism is simple and hard to dispute: the interests of people in the future matter just as much as the interests of people living today. In his book Reasons and Persons, Derek Parfit illustrates this idea with environmental concerns. If one environmental policy would have us burning through the earth’s resources in a way that would leave future generations immiserated, while another policy would preserve natural resources in a way that would ensure that future generations would enjoy lives that are at least as good as ours, preservation would be better than depletion. If nuclear power today will generate a huge amount of nuclear waste that we can’t destroy and would eventually create a huge burden for future generations, Parfit argues, we ought not generate that nuclear waste if there is another energy policy that has all of the same benefits without this cost. The fact that the people who would bear these costs are in the future doesn’t make these costs less morally important. In general, it’s immoral to immiserate future generations. They don’t exist yet, but they will, and when they do exist, their interests will matter.
What makes this line of thinking so pressing is that there are far more people in the future than there are today. If we treat the interests of future people as just as important as the interests of current people, there is much more to be gained by focusing on future-oriented interventions than present-oriented interventions. Making the world a better place today will improve the well-being of, at most, about eight billion people. Making the world a better place in the future will improve the well-being of a potentially infinite number of people. This means that we should be giving the future much more consideration than we actually do. (And if you think there won’t be many more than eight billion people in the future, that’s because you’re anticipating some calamity that would eliminate humanity. Surely preventing such a calamity would be a good thing!)
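The scale comparison in this argument can be put in expected-value terms. The sketch below is a toy model: every number in it (the size of the future population, the benefit per person, the success probabilities) is an illustrative assumption of mine, not an empirical estimate from the longtermist literature.

```python
# A toy expected-value comparison behind the longtermist scale argument.
# All numbers are illustrative assumptions, not empirical estimates.

def expected_benefit(people_affected: float, benefit_per_person: float,
                     probability_of_success: float) -> float:
    """Probability-weighted total benefit of an intervention."""
    return people_affected * benefit_per_person * probability_of_success

# Present-oriented: helps some of today's ~8 billion people, with high certainty.
present = expected_benefit(8e9, benefit_per_person=1.0, probability_of_success=0.9)

# Future-oriented: helps a vastly larger future population, with low certainty.
future = expected_benefit(1e15, benefit_per_person=1.0, probability_of_success=1e-4)

print(future > present)  # -> True: at this scale, even tiny probabilities dominate
```

This is why the uncertainty objection discussed next matters so much: the conclusion is extremely sensitive to the assumed probabilities, which is exactly what is hard to know.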
The most important objection to this kind of reasoning is that it’s very hard, perhaps impossible, to know what kinds of actions will be best for future people. Given this shortcoming, isn’t it best to focus on the here and now? But this objection doesn’t actually refute the basic argument for longtermism. To see why not, we need to draw a distinction between two kinds of moral claims: moral principles and practical directives. Moral principles give a general, abstract characterization of what is valuable or what our obligations are. “Lying is wrong” is a good example of a moral principle. Practical directives tell us what to do in particular cases. They are applications of moral principles. “Don’t tell your son that Santa Claus exists” is a good example of a practical directive. Importantly, the practical directive not to tell my son that Santa Claus exists doesn’t follow from the moral principle that lying is wrong by itself. To get that further conclusion, we need to make some further empirical assumptions: specifically, the assumption that Santa Claus doesn’t exist. (If Santa Claus did exist, it wouldn’t be lying if I told my son that he does!) In general, practical directives only follow from moral principles given some empirical assumptions about the way the world is.
The idea that future people matter just as much as present people is a moral principle. To extract practical directives from that moral principle, we need to make empirical assumptions about the way the world will be in the future and what the effects of our current actions will be. Those assumptions are very hard to know with any degree of certainty. But this only means that it is very hard to extract practical directives from longtermist moral principles. It does not mean that longtermist moral principles are false. It would certainly be convenient if it were always easy to derive practical directives from true moral principles, but whoever said morality was convenient? Sometimes it’s just hard to know what to do. Pointing out that it’s hard to know what to do if longtermism is true isn’t a good objection to longtermism.
It’s important to realize that the need to make empirical assumptions about the future in order to extract practical directives is not a unique problem for longtermism. Consider the practical directive not to drive drunk. We think it’s wrong to get behind the wheel while intoxicated not because the act itself is wrong but because it could lead to bad consequences during the ride home. There’s a sense in which someone who gets into a fatal crash while sober has done something worse than someone who drives drunk but makes it home safely. Yet we still say that the drunk driver has done something wrong. This is because, based on our best information at the time about the sorts of things that tend to happen when someone drives drunk, drunk driving imposes unjustifiable risks on the driver and others. That seems like perfectly legitimate moral reasoning. Longtermism simply says we should extend that kind of reasoning forward to the farther future. Neither is it odd to consider the impacts of today’s actions decades or centuries from now. Much climate change activism is premised on the need to prevent bad consequences that will occur in the far future. Longtermism simply says we should engage in this sort of reasoning more systematically.
Of course, making specific predictions about the far future is extraordinarily difficult. But this just means we should engage in long-term moral reasoning with caution and humility. Nonetheless, there are some actions and policies which, based on what we know today, seem like relatively good bets to hedge against very plausible risks. Mitigating climate change is one of these. Developing a new generation of antibiotics to counter the increasing emergence of antibiotic-resistant bacteria is another. And if the history of asteroid impacts on Earth is any indication, developing the technology to detect and destroy planet-killing asteroids is the sort of thing we could work on today that future generations will be very grateful for.
There’s plenty of controversy over how to implement effective altruist and longtermist ideas. Some people sincerely think that you’re morally obligated to work in finance and donate all of your money to research that prevents a rogue AI from destroying humanity. Others think that’s an insane conclusion. But if that is an insane conclusion, the mistake seems to arise in our reasoning about how to implement the core moral principles from Singer and Parfit. The interests of future people matter just as much as the interests of present people; the suffering and death of either present people or future people is bad; and if we can prevent something bad from happening without sacrificing anything of comparable moral importance, we ought to do it. If you agree with that, you’re fundamentally on board with the effective altruist/longtermist project. The rest is just bickering over details.
Matt Lutz is an Associate Professor of Philosophy at Wuhan University and writes the Substack Humean Beings.
I'm persuaded that there's certainly nothing wrong with "effective altruism," which I take to be the broad urging, especially of the well-off -- backed by some perhaps surprising facts -- that you could do a lot more good if you directed your charitable giving differently, and did some more giving too. I likewise accept the broad premise of "longtermism," which is that we ought to care a lot about future people, or, as politicians might put it, "our children and our grandchildren." Indeed, both notions seem like no-brainers.
I have more difficulty, however, with Singer's logic, for the following reasons:
1. It would seem to render immoral everyone leading a comfortable life, even if they make big (and effective!) charitable contributions. Because they could do yet more. Of the stuff that makes their life comfortable -- from their nicer-than-necessary car to their bigger-than-necessary home to their non-Kraft cheese to their non-second-hand wardrobe to their nights out to even their kids' school tuitions to say nothing of their kids' copious toys and such -- none of it is of "comparable moral importance" to a child's life. And that list hardly describes wealthy people alone. It describes middle-class, even lower-middle-class people too. We're not talking about yachts and Rolexes here. And yet even a pretty modest lifestyle would fail the Singer test, because the money spent on even that one steak dinner in years at a fancy restaurant for dad on his birthday could be better spent so long as suffering persists, so long as there is, to put it in terms of Singer's analogy, a drowning child anywhere in the world. I mean, you're eating a $50 ribeye while a kid drowns! The logic here would seem to dictate that we are all morally obligated to live on bare necessities alone so long as only two conditions are present: (1) some people in the world face serious suffering and/or life-threatening problems that (2) we have some power to alleviate through giving money. (Note that this is true even if you or members of your own family are suffering in comparable ways yourselves, because spending to alleviate that would be included in "bare necessities." You just wouldn't be permitted anything more.) Can that be right? It doesn't seem right. It conflicts with just about everyone's moral intuitions and sense of obligation.
2. Speaking of which, do we accept Singer's flattening of obligation to the point that we cannot in good conscience say that we owe any particular duties to anyone that would place a priority claim on our care, concern, and resources by virtue of our relationship to them, our association with them, or our situation in relation to them? I mean, so much for birthday presents! And that's the least of it. To put it more seriously, would we begrudge a mother rushing to save her own drowning child even if she could just as easily save two other strangers drowning yards away instead? I'm not sure how one child is of "comparable moral importance" to two in the grand scheme. Two seems like twice as much importance, not really comparable at all. Let's take another case. Suppose a wealthy person would like to establish a grant program to support burgeoning artists from disadvantaged communities in her home town. She likes the idea of helping her community. She likes to nurture the creativity of young people who have had a rough time. She likes art. Along comes Singer with a bullhorn: "Malaria, malaria, malaria!" Seems churlish. Anyway, imagine she responds, hey, I'm already giving fully half of my wealth to charity, more after I die, and fully half of that to your list of approved "effective" causes, including malaria. But I feel a kinship with people in my home town and I want to aid causes close to my home and close to my heart. I read nothing here to suggest that Singer, hearing this defense, wouldn't put bullhorn to lips a second time: "Distance doesn't matter! Malaria, malaria, malaria!"
3. Speaking of art, isn't there much in this world that is valuable, that we want, but that will always fail a one-on-one matchup with a child's life in any plausible balancing of "moral importance"? Must we dispense with all those important and valuable things until there's no more suffering? Art seems like a perfect example. Singer once wrote an op-ed in the New York Times comparing relieving blindness in the developing world to contributing to a local art museum. His point was that blindness is obviously worse than not having a new wing on the art museum. Fair enough, and yet that standard would seem to prohibit all spending on art museums, indeed all spending on art itself, until more acute problems are solved. One might say, wait, that doesn't follow. I'm just talking about this one artwork, this one museum wing. I'm not saying that "having art" lacks serious moral importance. But it certainly follows. Because if one child's life beats this one painting, that will be true for every painting, and, presto, no more museums, no more art. By this logic, Singer would seem to sap the world of much of what makes life worth living (and seeing)! He would, it seems, save some humans (for a little while) at the expense of humanity.
4. We run into a further big problem when we think about generalizing this logic, which is that most luxury items (and I define luxury here to mean non-necessity) will fail the one-on-one moral matchup, and yet the world would be far worse -- and there would be a lot more suffering over the long term -- if everybody always took Singer seriously and nobody produced them or bought them. The reason any country is rich is because of lots of production of lots of unnecessary products and services, a/k/a, the economy. If the world were to stop buying all that unnecessary stuff, we'd be plunged into a global depression of unimaginable depth, and there'd be a lot fewer resources to alleviate suffering. Indeed, economic development of developing countries is the surest way to alleviate suffering there, and that route has already been spectacularly successful in, say, China and Africa too.
Perhaps Singer has worked through all these objections elsewhere, and I'm just unaware. What am I missing?
Dangerous asteroid impacts on Earth do occur (ask the dinosaurs). However, they are very rare. Malaria is far from rare. The only good news is that a malaria vaccine may have been developed.