Book Club with Steven Pinker: Join the Persuasion community on Tuesday, October 26, at 7 p.m. EDT for a talk with Steven Pinker about his new book, Rationality: What It Is, Why It Seems Scarce, Why It Matters, moderated by Moisés Naím. Details and registration here.
One realm that is sometimes excluded from the rational is the moral. Can we ever deduce what’s right or wrong? Can we confirm it with data? It’s not obvious how you could. Many people believe that “you can’t get an ought from an is.” The conclusion is sometimes attributed to the philosopher David Hume. “’Tis not contrary to reason,” he famously wrote, “to prefer the destruction of the whole world to the scratching of my finger.”
Moral statements indeed must be distinguished from logical and empirical ones. Philosophers in the first half of the twentieth century took Hume’s argument seriously and struggled with what moral statements could possibly mean if they are not about logic or empirical fact. Some concluded that “X is evil” means little more than “X is against the rules” or “I dislike X” or even “X, boo!”
But many people are not ready to reduce morality to convention or taste. When we say “The Holocaust is bad,” do our powers of reason leave us no way to differentiate that conviction from “I don’t like the Holocaust” or “My culture disapproves of the Holocaust”?
Faced with this intolerable implication, some people hope to vest morality in a higher power. That’s what religion is for, they say—even many scientists. But Plato made short work of this argument 2,400 years ago in Euthyphro. Is something moral because God commands it, or does God command some things because they are moral? If the former is true, and God had no reason for his commandments, why should we take his whims seriously? If God commanded you to torture and kill a child, would that make it right? “He would never do that!” you might object. But that flicks us onto the second horn of the dilemma. If God does have good reasons for his commandments, why don’t we appeal to those reasons directly and skip the middleman?
In fact, it is not hard to ground morality in reason.
Hume may have been technically correct when he wrote that it’s not contrary to reason to prefer global genocide to a scratch on one’s pinkie. But his grounds were very, very narrow. As he noted, it is also not contrary to reason to prefer bad things happening to oneself over good things—say, pain, poverty, and loneliness over pleasure, prosperity, and good company. O-kay. But now let’s just say that we prefer good things to happen to ourselves over bad things. Let’s make a second wild and crazy assumption: that we are social animals who live with other people, rather than Robinson Crusoe on a desert island, so our well-being depends on what others do, like helping us when we are in need and not harming us for no good reason.
This changes everything. As soon as we start insisting to others, “You must not hurt me, or let me starve, or let my children drown,” we cannot also maintain, “But I can hurt you, and let you starve, and let your children drown,” and hope they will take us seriously. That is because as soon as I engage you in a rational discussion, I cannot insist that only my interests count just because I’m me and you’re not, any more than I can insist that the spot I am standing on is a special place in the universe because I happen to be standing on it. The pronouns I, me, and mine have no logical heft—they flip with each turn in a conversation. And so any argument that privileges my well-being over yours or his or hers, all else being equal, is irrational.
When you combine self-interest and sociality with impartiality—the interchangeability of perspectives—you get the core of morality. You get the Golden Rule, or the variants that take note of George Bernard Shaw’s advice: “Do not do unto others as you would have others do unto you; they may have different tastes.” This sets up Rabbi Hillel’s version, “What is hateful to you, do not do to your fellow.”
Versions of these rules have been independently discovered in Judaism, Christianity, Hinduism, Zoroastrianism, Islam, and other religions and moral codes. These include Spinoza’s observation, “Those who are governed by reason desire nothing for themselves which they do not also desire for the rest of humankind.” And Kant’s Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.” For that matter the principle may be seen in the most fundamental statement of morality of all, the one we use to teach the concept to small children: “How would you like it if he did that to you?”
None of these statements depends on taste, custom, or religion. And though self-interest and sociality are not, strictly speaking, rational, they’re hardly independent of rationality. How do rational agents come into existence in the first place? Unless you are talking about disembodied rational angels, they are products of evolution, with fragile, energy-hungry bodies and brains. To have remained alive long enough to enter into a rational discussion, they must have staved off injuries and starvation, goaded by pleasure and pain. Evolution, moreover, works on populations, not individuals, so a rational animal must be part of a community, with all the social ties that impel it to cooperate, protect itself, and mate. Reasoners in real life must be corporeal and communal, which means that self-interest and sociality are part of the package of rationality. And with self-interest and sociality comes the implication we call morality.
Impartiality, the main ingredient of morality, is not just a logical nicety. Practically speaking, it also makes everyone, on average, better off. Life presents many opportunities to help someone, or to refrain from hurting them, at a small cost to oneself. So if everyone signs on to helping and not hurting, everyone wins. This does not, of course, mean that people are in fact perfectly moral, just that there’s a rational argument as to why they should be.
Steven Pinker, a member of the Persuasion advisory board, is a cognitive scientist at Harvard University.
[Excerpted from Rationality: What It Is, Why It Seems Scarce, Why It Matters. Reprinted with permission from Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2021 by Steven Pinker.]
I love Steven Pinker and his defenses of rationality. Seriously. I totally agree: We should be rational. I have a degree in math, the most rational subject.
But reading the opening lines of his new book last night I choked.
“Rationality ought to be the lodestar for everything we think and do. (If you disagree, are your objections rational?)”

Well, yes, my objections are at least as rational as possible when objecting to a claim based on an undefined word: “lodestar.”
I would like to suggest to Pinker a more rational opening, which I think leads to a much stronger defense of rationality (and would help clean up his current post).
“We should be as rational as it is rational to be, neither more nor less.”
This admits the obvious: we cannot be totally rational. But it also suggests something quite wonderful. We have an irrational faculty called intuition, which is often even better at getting right answers than rationality.
And because intuition is easier and faster, it is often rational to use it rather than wasting too much time rationally planning the very best direction for escaping the lion we just noticed in the grass.
This is an example Pinker uses in his book's third paragraph but misunderstands. He implies that the caveman’s only alternative to rationality for coping with “the lion in the grass” is to rely on a “suite of biases, blind spots, fallacies, and illusions.”
Intuition is simply off his radar.
John von Neumann, the father of game theory and the most brilliant game theorist bar none, developed a theory that could be used to play poker rationally. Yet he was unable to carry out that rational calculation. Some cowboys were known to be extremely good at poker even without his theory. Yes, they used rationality, and heaps of intuition. Using only pure rationality, they might never have won a single hand.
Interestingly, the folklore of intuition is vast. One famous story is that the arrangement of carbon atoms in benzene (they form a ring) was discovered by the German chemist August Kekulé when he dreamed of a snake biting its tail. When stuck on a complex problem I often study it intensely before I go to bed. Frequently, I wake up with a useful insight in the morning that occurs to me before I even remember I was working on the problem. Intuition!
Now here’s what would clear up Pinker’s post. What he has done is assume three axioms and then use rationality. He finally makes this clear when he says, “When you combine self-interest and sociality with impartiality … you get the core of morality.” By “get” he means you can logically deduce morality from his three axioms: (1) self-interest, (2) sociality and (3) impartiality.
I think he’s right that you can, although it’s a rather limited morality, and to go much further he will likely need more axioms.
What he has done is to mimic math. Mathematicians pick a set of axioms (assumptions) and then prove what things they imply. Science often works the other way round: observing what’s true and figuring out underlying assumptions that predict the observation. But we can’t scientifically observe morality.
My complaint is that Pinker has mystified the process of choosing his axioms. He keeps wanting us to think his assumptions are rational, and he gives subtle “rational” arguments. No, it’s easier than that. Axioms (1) and (3) are just assumptions, starting points. And axiom (2) is just something we observe to be true, basically a scientific observation.
So the proper argument for (1) and (3) is, “I like these, and you probably do too, so let’s give them a try.” That’s much simpler than phony rationality. (That’s what mathematicians do.)
And pretending to be rational when you can’t be (e.g. when picking assumptions) leads to irrational statements, like,
“As soon as we start insisting to others, ‘You must not hurt me, ...’ we cannot also maintain, ‘But I can hurt you.’”
“Maintain” is another non-logical, undefined word, but I’ll guess he means “believe” rather than “proclaim.” In this case, his claim is just wrong. Quite a few people believe “you must not hurt me, but I can hurt you.” They are called sociopaths. And they are the key to understanding much of our politics. I’ll bet you can even think of one (starts with a T).
We can’t be rational all of the time, like when making assumptions or playing poker. But that’s OK. We have very powerful intuitions for the hard problems. Of course, we make lots of mistakes both when trying to be logical and when trying to intuit. But that’s life for us Earthlings. So just keep this in mind.
1. Be rational about when to be rational.
2. Never, ever toss out rationality because you think it’s bad (or white).
You appear to be basing the "rationality of morality" upon a human's relationship with other humans. But what if a person is completely self-sufficient such that they don't need society? One rather unfortunate implication of your argument is that the more independent a person is (i.e., the more power they have), the less moral they need to be to remain rational. In other words, might makes right, and in your system the main variable is power, as it is in all systems of human morality that exist apart from God.
Your appeal to the Euthyphro dilemma is misplaced. The dilemma itself is false: something is not good merely because God wills it, *nor* does God will it because he is subject to a higher authority of goodness. God commands moral behavior because he *is* the highest authority, and goodness or morality is part of his nature and character. It is consistent with who he is, and he cannot act or command in opposition to his nature. We reflect this morality in our own reason because we are made in God's image (whether or not we personally believe it).