Jul 5, 2023 · edited Jul 8, 2023

I'm persuaded that there's certainly nothing wrong with "effective altruism," which I take to be the broad urging, especially of the well-off -- backed by some perhaps surprising facts -- that you could do a lot more good if you directed your charitable giving differently, and did some more giving too. I likewise accept the broad premise of "longtermism," which is that we ought to care a lot about future people, or, as politicians might put it, "our children and our grandchildren." Indeed, both notions seem like no-brainers.

I have more difficulty, however, with Singer's logic, for the following reasons:

1. It would seem to render immoral everyone leading a comfortable life, even if they make big (and effective!) charitable contributions. Because they could do yet more. Of the stuff that makes their life comfortable -- from their nicer-than-necessary car to their bigger-than-necessary home to their non-Kraft cheese to their non-second-hand wardrobe to their nights out to even their kids' school tuitions to say nothing of their kids' copious toys and such -- none of it is of "comparable moral importance" to a child's life. And that list hardly describes wealthy people alone. It describes middle-class, even lower-middle-class people too. We're not talking about yachts and Rolexes here. And yet even a pretty modest lifestyle would fail the Singer test, because the money spent on even dad's one steak dinner in years at a fancy restaurant on his birthday could be better spent so long as suffering persists, so long as there is, to put it in terms of Singer's analogy, a drowning child anywhere in the world. I mean, you're eating a $50 ribeye while a kid drowns! The logic here would seem to dictate that we are all morally obligated to live on bare necessities alone so long as two conditions are present: (1) some people in the world face serious suffering and/or life-threatening problems that (2) we have some power to alleviate through giving money. (Note that this is true even if you or members of your own family are suffering in comparable ways yourselves, because spending to alleviate that would be included in "bare necessities." You just wouldn't be permitted anything more.) Can that be right? It doesn't seem right. It doesn't seem to track with just about anyone's moral intuitions and sense of obligation.

2. Speaking of which, do we accept Singer's flattening of obligation to the point that we cannot in good conscience say that we owe any particular duties to anyone that would place a priority claim on our care, concern, and resources by virtue of our relationship to them, our association with them, or our situation in relation to them? I mean, so much for birthday presents! And that's the least of it. To put it more seriously, would we begrudge a mother rushing to save her own drowning child even if she could just as easily save two other strangers drowning yards away instead? I'm not sure how one child is of "comparable moral importance" to two in the grand scheme. Two seems like twice as much importance, not really comparable at all. Let's take another case. Suppose a wealthy person would like to establish a grant program to support burgeoning artists from disadvantaged communities in her home town. She likes the idea of helping her community. She likes to nurture the creativity of young people who have had a rough time. She likes art. Along comes Singer with a bullhorn: "Malaria, malaria, malaria!" Seems churlish. Anyway, imagine she responds, hey, I'm already giving fully half of my wealth to charity, more after I die, and fully half of that to your list of approved "effective" causes, including malaria. But I feel a kinship with people in my home town and I want to aid causes close to my home and close to my heart. I read nothing here to suggest that Singer, hearing this defense, wouldn't put bullhorn to lips a second time: "Distance doesn't matter! Malaria, malaria, malaria!"

3. Speaking of art, isn't there much in this world that is valuable, that we want, but that will always fail a one-on-one matchup with a child's life in any plausible balancing of "moral importance"? Must we dispense with all those important and valuable things until there's no more suffering? Art seems like a perfect example. Singer once wrote an op-ed in the New York Times comparing relieving blindness in the developing world to contributing to a local art museum. His point was that blindness is obviously worse than not having a new wing on the art museum. Fair enough, and yet that standard would seem to prohibit all spending on art museums, indeed all spending on art itself, until more acute problems are solved. One might say, wait, that doesn't follow. I'm just talking about this one artwork, this one museum wing. I'm not saying that "having art" lacks serious moral importance. But it certainly follows. Because if one child's life beats this one painting, that will be true for every painting, and, presto, no more museums, no more art. By this logic, Singer would seem to sap the world of much of what makes life worth living (and seeing)! He would, it seems, save some humans (for a little while) at the expense of humanity.

4. We run into a further big problem when we think about generalizing this logic, which is that most luxury items (and I define luxury here to mean non-necessity) will fail the one-on-one moral matchup, and yet the world would be far worse -- and there would be a lot more suffering over the long term -- if everybody always took Singer seriously and nobody produced them or bought them. The reason any country is rich is because of lots of production of lots of unnecessary products and services, a/k/a, the economy. If the world were to stop buying all that unnecessary stuff, we'd be plunged into a global depression of unimaginable depth, and there'd be a lot fewer resources to alleviate suffering. Indeed, economic development of developing countries is the surest way to alleviate suffering there, and that route has already been spectacularly successful in, say, China and Africa too.

Perhaps Singer has worked through all these objections elsewhere, and I'm just unaware. What am I missing?

Jul 5, 2023 · Liked by Matt Lutz

Dangerous asteroid impacts on Earth do occur (ask the Dinosaurs). However, they are very rare. Malaria is far from rare. The only good news is that a Malaria vaccine may have been developed.


I generally like the idea of EA in that giving to organizations that do the most good for the cost is wise. However, I fear it suffers from a sort of Weberian rationalization process: we identify the maximum good-for-cost, as we are morally obligated to do. That puts us on a treadmill: once we've identified the highest good-for-cost cause, we are morally obligated to ignore every other cause until we've saturated that single highest-value cause to the point where it's no longer the highest value, then move on to the next. My specific objections:

1) It requires us to measure "good" in some graduated manner. Also, we need to appropriately account for diminishing returns (diminishing good per cost unit) as investment increases, as well as continuously evaluating opportunity costs. If we could do all that, I'd feel a lot better about it. But I'm a relatively pragmatic person -- most humans cannot or will not do that. At best, we fall back on "experts" to do that for us, and then we have to trust them, and I don't. Ultimately, I don't have much faith that the above calculus *can* be done fairly -- though the smaller the specific consideration, the fewer variables involved, the more faith I have.

2) Even if we can do 1 above (and perhaps especially if we can do 1 above), we run into rationalization problems. As with all ongoing rationalized processes, the rationalization eventually (or even initially) distorts the overall goal (doing good effectively) in favor of the more specific goals that are measured. It closes off possibilities for consideration because what is measured eventually becomes the only visible reality. This is a core problem with just about any "competitive" process (a process that always seeks to be more efficient, such as maximizing profitability, winning elections, etc.).

Ultimately, these are not reasons to reject EA altogether; merely reasons to not rely on it entirely, to leaven it with other ideas (common sense, rule-based ethics, etc.).
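The "treadmill" in point 1 is essentially a greedy marginal-allocation loop: every dollar goes to whichever cause currently offers the most good per dollar, with diminishing returns pulling each cause's marginal value down as it gets funded. A minimal sketch, where the causes, their base values, and their decay rates are all invented for illustration:

```python
# Toy model of the EA "treadmill": each chunk of the budget goes to
# whichever cause currently offers the highest marginal good per dollar.
# All numbers here are made up purely to illustrate the dynamic.

# Marginal good per dollar after `spent` dollars: base / (1 + spent * decay)
causes = {
    "malaria nets": {"base": 10.0, "decay": 0.002,  "spent": 0.0},
    "deworming":    {"base": 6.0,  "decay": 0.001,  "spent": 0.0},
    "local arts":   {"base": 1.5,  "decay": 0.0005, "spent": 0.0},
}

def marginal_good(cause):
    """Current good-per-dollar, falling as cumulative spending rises."""
    return cause["base"] / (1 + cause["spent"] * cause["decay"])

def allocate(budget, step=100.0):
    """Greedily spend `budget` in `step`-sized chunks."""
    remaining = budget
    while remaining > 0:
        # Pick the cause with the highest current marginal return.
        best = max(causes.values(), key=marginal_good)
        best["spent"] += min(step, remaining)
        remaining -= step

allocate(10_000.0)
for name, cause in causes.items():
    print(f"{name}: ${cause['spent']:,.0f}")
```

Note the behavior the comment describes: the top cause absorbs everything until its marginal value sinks to the runner-up's level, and only then does money flow elsewhere; how fairly that reflects reality depends entirely on whether the curves can actually be measured, which is the commenter's point 1.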

As for longtermism, I have similarly pragmatic issues with it. The future exists in a sort of cloud of possibility that roughly forms a funnel: our ability to predict the future gets increasingly imprecise the further from the present we look. It doesn't take very long for that cone of imprecision to include completely conflicting predictions, including predictions where responding to the wrong one produces the opposite of the effect we want; worse, the imprecision is such that we can't even tell which practical interventions risk those opposingly bifurcated outcomes. So the further into the future we go, the less certain we can be of the effectiveness of our interventions. Given that, sacrificing the present for an imprecise future should be done very judiciously and with considerable (though by no means unlimited) deference to today's opportunity costs when weighing far-future interventions.


Thank you for a clear, logical exposition of effective altruism and longtermism as moral principles. I think, though, that you weaken your argument as a whole with the final words:

"If you agree with [the combination of moral principles explained here], you’re fundamentally on board with the effective altruist/longtermist project. The rest is just bickering over details."

Surely it's not just bickering over details, nor do I find myself fundamentally on board with the *project* in question; particularly the longtermist part. If I judge that making plans for future generations is not merely extraordinarily difficult but prohibitively fraught with the risk of negative unintended consequences, I don't set to work on the details; I reject the project.

It's one thing to recognize the validity of a moral principle and quite another to take that principle as a guide. It may make a good guide. Then again, it may be eminently disqualified as a guide. Or it may just be too chancy, or too far removed from plausible human behavior and thus too dependent on authoritarian rule. It's not always a question of settling the details.


I think Scott Alexander (or someone on his blog) had a simple, persuasive objection to long-termism: do we really think we would be better off today if the Roman Emperors had optimized their policies for our welfare in 2023, to the (nonexistent) extent that they could foresee what we need? Or Napoleon? Or the Taft administration? Highly doubtful, to say the least.


I find the philosophical arguments for EA and longtermism impeccable. I find the author's attempts to apply them flawed.

1. Re EA: Why should I trust Givewell? Even assuming the people running Givewell are morally upright and want to do the right thing, how do we know the money ends up in the right hands? What happened to the billions from the Gates foundation a few years ago? I thought that was going to be more than enough to wipe out malaria. The answer is that there is corruption all around, a finger in every pie and everyone stealing a piece. And it would be foolish, without more due diligence, to just generously assume that the Givewell people are any less corrupt than BLM or SBF or any other charity eventually shown to be corrupt at the top. For that reason, I often prefer to make my charitable contributions locally, to people and organizations I know, or where I can see the tangible results of my giving.

2. Re: longtermism: the author concedes that it is difficult to know the effectiveness of strategies to deal with the long term, then proceeds to give the most egregious example of ineffectiveness, i.e., attempts to mitigate climate change. The scientific calculations and opinions running against the consensus pipe dream of eliminating fossil fuels are no less factual and cogent for being suppressed by the mainstream media; see Bjorn Lomborg. Other futuristic claims are no less suspect. Want to do your part for the future of humanity? Stop making contributions to these organizations that purport to prognosticate the future, and instead give it toward the economic improvement, and especially the education, of the poorest of the next generation.


“suffering and death from lack of food, shelter, and medical care is bad.” We may simplify that by saying that suffering is a bad thing, the path to that juncture, not the end point. Solving one’s medical problem does not address one’s hunger. Saving some from malaria may exacerbate others’ problems, such as those associated with living on a crowded planet. If one chose survival of the species as the highest goal, eradication of malaria might seem to some counterproductive. And what would be the point unless one could balance the economic equation while at it and suppress egregious exploitation by warlords, mitigate climate change and so forth. Things are connected to the extent that actions have both intended and unintended, or unknown, consequences. And if one were to ‘save’ many lives, in the sense of extending the duration between birth and death, would that necessarily contribute to the quality of the experience, or might that increase the quantity of suffering over that relatively longer period of time?

The point is not to live, but to live well. The solution to the problem of being human is holistic and, prior to longevity, is first one of meaning. The pursuit of the good is the pursuit of excellence. It is our challenge to secure and to grow our persons and those of our dependencies through the pursuit of excellence, and to expand that world of excellence outward from the center of one’s self. To not be greedy but to release the bird you made within you to the larger world. Proximity is important, charity begins at home, and meaning has more currency with neighbors than in service to a distant cause.

“The interests of future people matter just as much as the interests of present people” Not really: they don’t exist. They may have existence in the future, but that really depends on our establishment of meaning in the present, part of which is continuity.

Readers having interest may wish to see my essay, "Liberty and justice: morality and the logic of genetic furtherance"
