🎧 Jonathan Haidt on Why Public Discourse Has Become So Stupid
Yascha Mounk and Jonathan Haidt discuss how the internet has made "the discourse" more polarized, angry, and simplistic (and what to do about it).
One of the world’s most influential social psychologists, a professor of ethical leadership at NYU's Stern School of Business, and a member of Persuasion's Board of Advisors, Jonathan Haidt is the author of The Righteous Mind and, with Greg Lukianoff, co-author of The Coddling of the American Mind. Haidt recently wrote a much-read feature in The Atlantic entitled “After Babel.”
In this week’s conversation, Yascha Mounk and Jonathan Haidt discuss how we can make social media less toxic, what political and technological reforms might help fix the problem, and how we can empower the moderate majority to fight for its values.
The transcript has been condensed and lightly edited for clarity.
Yascha Mounk: You've just written an interesting and—ironically—viral piece about the way in which our digital institutions have made everything in American life uniquely stupid over the last ten years, and why you're not very optimistic about that changing. Tell us the basic premise of the piece. Why is everything uniquely stupid?
Jonathan Haidt: The piece is the culmination of my eight-year struggle to understand what the hell happened. I've been a professor since 1995. I love being a professor, I love universities. I just felt like this is the greatest job on Earth. I got a glimpse, as a philosophy major, of Plato's Academy—sitting under the olive trees talking about ideas. And then all of a sudden, from out of nowhere in 2014, things got weird. They got aggressive and they got frightening. This game has been going on for thousands of years: one person serves an idea, the other person hits it back. But around 2014, intimidation came in. There was a new element, which was that if you say something, people won't argue why you're wrong, they'll slam you as a bad person. On the left, they'll call you a racist; on the right, they'll call you a traitor. Something changed on campus.
And Greg Lukianoff was the first to really diagnose it. So he came to talk to me in 2014. And that became “The Coddling of the American Mind.” That was my first attempt to figure out why universities were getting so weird. And we thought maybe it was just on campus, that colleges were somehow teaching these ways of thinking. But then by 2017, this was spreading more widely, as Gen Z began to graduate and take jobs in journalism and go to law school and medical school. But it wasn't just that Gen Z was carrying these bad ideas out of college, it's that they were spontaneously germinating all over the place in Europe and Canada and Britain. So I wrote another essay in The Atlantic called “The Dark Psychology of Social Networks,” really trying to home in on what it is that’s deeply wrong with the social universe. And I had little glimpses of what it was. The core idea of the piece is social media’s structural stupidity. We humans are smart as individuals, but we're actually not good at figuring out complicated things on our own. We have confirmation bias, we search for evidence that we're right. So you have to have viewpoint diversity, you have to have people pushing against each other. And by design, we always used to have that in universities and journalism—the adversarial legal system is set up that way—and when that fails, when you systematically intimidate dissenters, the institution gets structurally stupid. It cannot do smart things. And that's what happened beginning in 2014. And it spread into journalism, I'd say, in 2018/2019. And after George Floyd, the intensity with which the left attacked dissenters really ramped up. Of course, when Trump came in 2016, the Republican Party got much stupider in its own ways.
Mounk: Why has that gone out of the window? Because a lot of those institutions, at least formally, are still in place, we still have the First Amendment, we still have academic freedom. There are still some journalistic standards at The New York Times which are supposed to encourage that kind of objectivity.
And yet, those things don't seem to work as they used to. So what's the difference?
Haidt: A key part of the story here is that social media changed in very important ways in 2009, which made it much more viralized and much more effective at intimidation. So in its original incarnation, social media is not harmful. In general, connecting people is good—everybody can email everybody, everybody can call anybody on the phone. I mean, the long history of human development is making it easier for people to talk, and that's good. But when social media platforms came out in 2003-2004, you got MySpace, Friendster, and Facebook. At first, it's just about performance: “Hey, look at me, look at my page.” But then you get the news feeds. And the key thing is in 2009, Facebook adds the like button, and Twitter copies that; Twitter adds the retweet button, and Facebook copies that. And so just with those two innovations in 2009, suddenly, it's not just “hey, come to my page and look at what I've posted.” It's constant stuff coming in which I can retweet to everybody, or I can comment on, or I can quote tweet and slam and talk about how terrible this person is.
And so it's as though the fabric of social media changed in that year. Now, the norms don't change right away. When Twitter first came out, it was a generally nice place—a lot of talk about trivial stuff, but it wasn't nasty. It's only as these new norms filter in, as people learn how to manipulate these platforms, as the news media kind of incorporates itself in with Twitter—so much of what's on Fox News is about something that happened on Twitter. Those changes take a few years. 2014 is when I think we have the phase change. That's when a lot of stuff gets nasty. It's also when the Russians really activate their networks—of course, the Russians have been trying to mess with our democracy for a long, long time, but they activate their troll networks in 2014 especially.
So to return to your original question: Don't we still have these epistemic institutions? Well, let's look at what happens when a university puts up a new policy. Throughout my entire time in the academy, if the president of the university or some committee put in some new policy, and there was an expert on the policy in the economics department, he'd write something and say, “Excuse me, but this is completely ridiculous.” Academics, we know our areas, and when they're relevant. But at the beginning of 2015, a kind of fear came over us. Because if you do that, if you challenge anything, especially about diversity, gender, race, trans, Islam—there are a few really hot-button topics—you're not going to face people saying “you're wrong, and here's why.” You're going to face people saying you're a transphobe, you're a bigot, you're a racist. And then you don't know what will happen. There could be an investigation, they can file charges. And the clearest case is Dorian Abbot at the University of Chicago: he's invited to give a talk at MIT and students protest. Why? Because he had criticized DEI practices in an article that he wrote separately in Newsweek. That's a paradigmatic example of how we have blasphemy laws, we have sacrilege, and those do not belong in the academy.
Mounk: Where do these blasphemy laws come from? Because one of the things you described in your article, citing the Hidden Tribes study, is that the devoted conservatives are, I believe, 6% of the population. Progressive activists are something like 8%, and by and large, are not the people leading those institutions. So I have one kind of response to all of this—I feel like you're going to tell me that it’s naïve—which is, excuse me, “Why don't leaders of these institutions just grow a pair and stand up for their values? Why don't they say, ‘We have to have debate within this university. And you might dislike what this person says, and you can write a response—you can even peacefully protest against somebody—but you cannot disrupt their speech, and we're not going to punish or fire them.’”
Haidt: That's the question we've all been asking since 2015, when Peter Salovey was the president at Yale—after the Christakis issue in Silliman College, when they marched to his house and gave him a list of demands. Why didn't he say, “I'm very happy to talk to you. Let's talk about these”? Why didn't he do that rather than trying to meet as many demands as he could and saying, “Here's $50 million for diversity hiring”? I couldn't understand at the time why no presidents did that other than Bob Zimmer at Chicago. It shows that you need some moral resources to stand on. And so what I've seen happening since 2015 is a battle for narrative control. This gets us to the Babel theme of the essay. Humans have shared intentionality; we have this ability, even if we don't speak the same language, to understand what we're trying to do together. We all know what we're doing in a classroom; we're putting on a class and we all share the same script as to what this is. But what began to happen in 2015—and I now realize it's because of social media—is that it's very difficult to have everyone in the same story. My students literally are on their devices; even when they pledge not to, some of them are still on their devices during class. Even if they're not, their thoughts are in other places. Some of them are running their businesses, some of them are doing their activism. Social media fragments the ability to co-create a story together.
And when any common story is lost, what you have is chaos—what you have is Babel. The Babel story is not about breaking humanity into two groups to fight each other; it's about fragmenting everything, dividing everybody into their own separate languages. And so what I saw happening at Yale and elsewhere is that the activists put on a new reading of what Yale is. It's no longer one of the most progressive Ivy League schools, a bastion of scholarship and also of progressive values. It is a white supremacist institution in which the only people who get ahead are those who are crushing marginalized people. Now, Peter Salovey could have stood up to that and said, “No, Yale is not this horrible racist place.” Rather, he said, “You're right. You're right.” He validated this new narrative about what Yale is, and that explains why Yale has continued to shame itself over and over again. So many of the worst stories come out of Yale (Yale Law School, recently) because once you validate that Yale is a white supremacist institution, it's very hard to say, “Our focus is scholarship rather than fighting white supremacy.” And it becomes incoherent.
Mounk: This suggests to me that one of the reasons for this is a real ideological weakness, and that perhaps an ideological strengthening might be part of a response. I'm struck by the extent to which American elites don't seem to hold strong beliefs. And that is different from other countries. I spent a good bit of time in France recently. I think some of the beliefs that the intellectual and political classes hold are complicated. I think the interpretation they give to what the values of the republic are and what they require in public life is, in certain parts, erroneous or overly rigid, but there is a real strong esprit de corps, and there's a conviction that this is the set of values which should structure common life and that that is worth sacrificing for. It’s worth risking career consequences for because it is actually a kind of civic religion that people strongly believe. When I look at American elites, I'm struck by the extent to which many of the people I know have simply flipped what they believe on certain important issues by 180 degrees; and even more people seem to be willing to go along with whatever line is being parroted that day, because what they actually care most about is getting ahead.
There's a second suspicion I have here, which is that for good reasons, liberal-leaning American elites are very sensitive to the charge that they are self serving or that they are insufficiently inclusive of those groups, because America's history is one that has excluded African Americans, Native Americans, and other groups in really bad ways. And so when they see clearly illiberal forces on campus—demanding firings of people, demanding inquisitions—they make a kind of category mistake, which I call “not-too-farism.” They say, “Well, these people are fighting on the side of the angels, they're really fighting for the right thing. They’re going a little bit too far, but that'll work itself out and we'll be in a better place.” There’s an unwillingness to recognize that actually, in many ways, these ideologies are inimical to the most fundamental values of these institutions.
Haidt: I find it very helpful to think about the pre-Babel world and the post-Babel world. The pre-Babel world is everything before, let's say, 2014. And everything you're saying makes sense before 2014. Let's suppose, as I said in this essay with Tobias Rose-Stockwell, God was just really bored one day and decided to double the gravitational constant. He's up there watching these planets circle around each other, and they've done the same thing for billions of years. Let's just double the gravitational constant and see what happens. Planes would fall from the sky, and machines would stop working, and bridges would collapse. And in that new world, our intuitions wouldn't work. What you're saying makes sense before 2014. But I think if there's a whole new world, the dynamics are different. One big thing is the loss of any ability to have any sort of common or shared story or shared understanding of what's going on. I've spoken to many leaders who have faced these various groups protesting and making demands and going to social media and trying to humiliate them and attacking them and their reputations. I've spoken to leaders who have cried in front of me, recounting how painful this was for them. But in all cases, it's like, “But I'm progressive, and I share their values, and yet they keep attacking me.” It's a different world, and the pre-Babel intuitions don't apply anymore.
A big part of it is the loss of any shared story or understanding. And the other is the democratization of intimidation and freeing it from accountability. I use way too many metaphors in my writing, but one of the central ones in this new piece is: imagine everybody was given a little dart gun—not a real gun that could kill you, but a gun that shoots darts which really hurt and which you have to pull out of your arm. And so if you're a university president or an editor at a newspaper, and someone calls you a bigot or transphobic, or whatever it is, and then there's a movement to get you labeled as that, almost nobody can stand up to it. We'd like to say, “Well, why don't you just stand up and have some courage?” But it's not so easy to do in the moment. We're used to dealing with things at a certain speed. And we're used to a situation where, if someone accuses you of something, you can defend yourself. And that might play out over days and days. But when it can move so quickly and accelerate so fast, because of social media, people panic. As soon as there's a sign of trouble, it's just very hard for people to stand up to it, because we're all using our pre-Babel intuitions. I'm not sure if I've agreed or disagreed with you, but that's the central dynamic of what’s going on with the people in these institutions.
Mounk: If my advice feels a little bit naïve in this new environment, because it's drawing on pre-Babel intuitions, how can we make that basic response work in a post-Babel environment? I know you have some ideas for institutional changes, and perhaps it'll take those. But I guess I also wonder what extra-institutional changes we'll need, because my sense is the basic mechanisms of virality, the ability to share content, aren't going to go away. The sense that anybody could throw a dart at any moment, and that there might be 50 more darts coming, will never entirely go away. So we have to make sure that those darts don't kill. We somehow have to make sure that when people start throwing darts, there's a protection mechanism, that you have armor. That, I think, has to come from the realm of collective behavior.
Haidt: There are certainly some structural changes needed to social media. But yes, I agree with you. It's not going away, but we can tinker at the edges.
What can we do to make social media a less powerful tool for intimidation? Twitter actually is doing a few things. One is the ability to downvote comments, because a lot of the nastiness is in the comments. And so if you can block them, or you can downvote them, that will help. But the biggest single thing that I'm arguing for is to think of systemically important, large platforms in the way we think of banks, which are also systemically important. Banks have “know your customer” laws. They can't just take bags of money. They can't do money laundering. They have to know who their customers are. I expected my libertarian friends to freak out when I said this. But I'm not saying you have to use your real name. You can still tweet anonymously, but to get a Twitter account, to get access to the hyper-viralization of a company that has this incredible benefit of Section 230 protection, you have to get authenticated. That is, you have to demonstrate that you are a real human being, not a bot, and that you are in a particular country, and that you are old enough to use the platform. And it's not that Facebook is going to get your driver's license; it's that they would kick you over to a third party or nonprofit that would do the verification.
Now, of course, a lot of people are nasty under their real name. But an awful lot become more trollish because they're totally anonymous. They can make death threats and rape threats. They can say horrible, racist things. Twitter kicks them off, and then they just open ten more accounts. That has to stop. That's a big structural change that would help. As Francis Haugen said, a lot of the changes that we need to make are really the architecture. Content moderation can only get a little bit done, and it's always controversial. I don't even think or talk about content moderation. I'm thinking only about changes to the architecture, the amplification aspects.
Mounk: One of the structural changes you just mentioned, I find perfectly sensible. I guess I worry that it's not going to have a large impact. I believe in freedom of speech. I don't believe in freedom of speech for bots. And I don't believe in freedom of speech for authoritarian governments to undermine our public discourse. But as you're saying in your article, it's actually a small number of people with pretty dark personality traits who are able to radicalize discourse online and to smear more moderate people, and sling arrows at them. They're real human beings that are citizens of the countries in which they engage in public discourse. By and large, they're often perfectly willing to display their real names, many have a picture and have a name out there on Twitter, and there's absolutely no reason to assume that they're false. In fact, many of them have a little blue check. That change, I worry, is going to improve things at the margin, but will not deal with the real underlying mechanisms.
Haidt: It's not just at the margin. I think the change I'm talking about would have a very big effect. You're right that there would still be a lot of people, maybe even a majority, who currently use their real name. But according to research by Chris Bail at Duke, even though most people don't become assholes as soon as they get on, a small number do, and those assholes do an extraordinary amount of damage. A pre-Babel intuition is, “Oh, the trolls are only a few percent of people.” But the post-Babel thinking is, “It's all about dynamics.” And so if you can knock out the worst 2%, that makes it so much better, because those 2% have the equivalent impact of a hundred other people. So it would not be a marginal change, but a very big change.
Mounk: But how do you knock out those 2%? Because it seems to me that the worst 2% that I perceive on Twitter have real names and platforms. Those are not going to be knocked out by a verification.
Haidt: That's right. Here's another idea which I bandied about in Silicon Valley. It's not fully baked, but something along these lines. What I'm after is systemic change so that people are rewarded for nuance, and they're punished, or lose social credit, for a complete lack of nuance. Suppose that every person—you can even have AI do this—gets rated for two things. One is cognitive complexity: that is, the ability to have two conflicting ideas in the same tweet. With 280 characters, you actually can, sometimes, have some cognitive complexity. With other people, you can see that they have zero cognitive complexity in their tweets. And then the other thing is hostility. The AI could figure out what's really hostile. So suppose every person gets a zero-to-five rating on each dimension, based on their feed.
Now, in our public square, we have some incredibly nasty people who are bots, and we have some incredibly nasty people who are actually people. We're talking about the second ones. I could always just block them, but that's not a solution. That doesn't really put any pressure on them. But what if I could set it so that, in the public square, I only want to hear from people—and I only want people to see me—if they are not zero on cognitive complexity and they are below three on aggression. If someone's a five on aggression, I don't want them in my feed, and I don't want them to even be able to see me. And if we all had that, well, you could set it however you want. This is not censoring. This public square is so important to public discourse, but why should we all have to drink your urine? Why should we drink your bile? I don't want bile, period. This is politically and ideologically neutral. It's not censorship. Most of us could then actually express ourselves on Twitter without fearing that we'll be insulted horribly. And most importantly, this would put pressure on people not to be assholes. Because if you're an asshole, more people block you.
Mounk: The way that I thought about the architecture point, which speaks to the first thing you were saying about being able to downvote comments and so on, is the contrast between Reddit and Twitter. I actually find Reddit mostly a much better platform—it has some really vile corners—but in the biggest communities, it is a much more positive, much less hostile space. There, you see first the comment that has the biggest delta between upvotes and downvotes—what people flocked towards as the consensus answer, what people found to be meaningful and interesting. Otherwise, nobody's going to upvote it. It rewards an element of cognitive complexity and doesn't completely divide people. Whereas on Twitter, you're going to see first the one that has a thousand likes and a thousand “You asshole, this is terrible” comments. Unless we want to mandate that mechanism legislatively—which I think would be complicated in terms of the First Amendment—the question becomes: how do users incentivize tech companies to create those environments? What can tech companies do to teach their users to prefer those environments? Why are we still all on Twitter? Why do I hear that some of the most important people in the White House look at Twitter all the time? Can we somehow punish companies that keep us captive in this aggression loop? Because if we don't, then even if Twitter changes its architecture, some upstart social media platform is going to turn up and people will migrate onto that.
Haidt: Yes. It's a social dilemma. Many listeners will know the term “prisoner's dilemma,” where each prisoner is better off defecting no matter what the other does, but if both prisoners defect, they're both worse off. That's a two-person economics game. Each of us is worse off stepping out of the game as an individual, but we'd all be better off if we all stepped out. We're trapped. And this is the case for kids on Instagram. No parent wants their child to be on Instagram. But we all relent because our kids say, “But everyone else is on it!” The way that we deal with such traps is through government intervention. You can't just have individuals decide to break it (that's the point of the trap), so that's a good justification right there for some sort of regulation.
Now, I teach in a business school. I have many libertarian friends. I understand that regulation usually is incredibly inefficient and often backfires. It should not be a first resort. And I'm glad we have the First Amendment. But here's the thing that I think people aren't getting. Section 230—there's a lot of good reasons to have it, but it is a privilege. Way before social media, in the 1990s, Congress said that if AOL has a chat board and people put comments up, then AOL is not the editor, and AOL is not responsible for what people say. And that made a lot of sense back then. Because you can't have a free and open internet with people posting content if you're treating them like The New York Times—”You said this, New York Times? I can sue you for libel because you printed this.” So in its origin, it makes a lot of sense. And it still does, but we've lost sight of the fact that it is a privilege.
For these companies to have freedom from lawsuits is an incredible privilege. There are certain industries that can't be sued. But generally, if you have a harmful product, you should be exposed to liability. Section 230 is not a God-given right. And so I think Congress could say, “Of course, there's freedom of speech. But if you are running a platform that has Section 230 protection, you have certain duties to have certain architectures and avoid other kinds of architectural features.” That's not infringing on freedom of speech. Because of course you can do whatever you want—you can have whatever cesspool platform you want—you just don't get Section 230 protection. So I think it actually could easily be done.
I'll give you three that I think are not controversial. In order to have Section 230 protection, you have to have these three features. One, you authenticate your users. People can still post with a fake name, but you actually have to have some clue who they are, so that you're not just letting on millions of Russian bots and agents. Two, age verification as part of your identity authentication, so that you don't have 10-year-old girls mixing with 50-year-old guys—or you keep people under 18 off your platform entirely, and it's an adult platform. And three, you have content moderation duties around violence and child pornography—period. What would be so bad about that? This is the bare minimum to be a vaguely responsible platform.
Mounk: I think we've gotten through this conversation the pieces of what a positive future would look like, but perhaps you can put the pieces together. Ten years from now you may write an equally viral article saying, “Everything was stupid there for a decade and then things started to improve.” What would that history say?
Haidt: From reading Phil Tetlock and having worked with him, I know that efforts to predict the future are a fool's game. And I do have to inject my note of Tetlockian modesty while I trace out these trends. But if these trends continue, then I think our country will fail catastrophically and become like an unstable Latin American country. However, will all of these trends continue? Probably not. Things will happen that change things. I have no idea really what's going to happen. And even though I'm very pessimistic, listeners should take this with a grain of salt and probably be more optimistic than I am.
What would it be? Well, I think it's going to involve some structural changes. I think there will be legislative changes, not by the US Congress, I don't think that they're going to do anything in the 2020s. I think it's going to come from the UK. The UK Parliament is actually doing great work on a child-friendly design code for the internet. And California is actually right now considering adopting basically wholesale some of the many UK recommendations. Australia is trying to do things. I think some states and other countries are actually going to succeed in passing some regulations.
I just spoke with Beeban Kidron, a member of the House of Lords who’s leading these reforms in Parliament. She thought that once they got things passed in Parliament, for the UK, the platforms are now committing to do it globally, because it's just too difficult for them to have one Twitter for the UK and a different Twitter for France. So I do think there is hope for legislation outside the US. I think the US is totally messed up and deadlocked. But other countries actually can make some progress here.
I think that we have to harden our political institutions. I think that there are reform movements that might work. This is the great Tocquevillian tradition of Americans coming together to create civic organizations to push for change. Alaska passed, by referendum, final-four voting and open primaries. That's incredibly important. Open primaries mean that Congresspeople don't have to fear only the people who vote in the primary, because the primary is open and people from the other party get to vote as well. I'm hopeful that many states will copy Alaska and change their primary system. Take the story about the frog in hot water—I don't think it's actually true that frogs will just sit there, but if we use that metaphor, the water has been heating up in America since the late 90s. The polarization has been rising since the late 90s. And then it got worse in the 2010s. Trump, COVID, George Floyd’s killing and the protests—so much has happened where we couldn't get oriented. We couldn't really have any thoughtful conversations. And I'm hopeful that those things have passed, so that we have a period when we can be more reflective. And I think the middle 80% are coming out of the bunker. I've never written an article like this, where nobody's attacking it. Nobody is calling me names, or has even said that I'm wrong. People say that, of course, it wasn't so rosy before social media. Yes, that's true, and I said that in the article. But I'm getting hundreds of people writing to me, members of the public saying, “Thank you for saying this.” The overall feeling is just exhaustion. I do think that it's going to require a mass citizens’ movement of the 80% to say, “We've had it. We don't want to yell and scream and kill people. 
We want to have a country where we can cooperate, despite our differences and work together with people of different races and gender identities and religions and compromise to live together.” It's going to take changes in which the middle 80% find their voice and use it.
Podcast production by John T. Williams and Brendan Ruberry. Podcast cover image by Joe O’Shea.