Daniel Williams is an Assistant Professor of Philosophy at the University of Sussex. He writes the Conspicuous Cognition newsletter, which brings together philosophical insights and scientific research to examine the forces shaping contemporary society and politics.
In this week’s conversation, Yascha Mounk and Dan Williams explore whether the term “misinformation” is defined too broadly, how to judge if something is fake news, and what is meant by the “everyone is biased” bias.
This transcript has been condensed and lightly edited for clarity.
Yascha Mounk: Perhaps the article of yours that first brought you to my attention was one about misinformation. I’m struck by how mainstream the conversation about misinformation has become. I was recently at the World Expression Forum in Norway, which claims to be a forum fighting for free speech, but so far as I could tell, every second speech was just about how the way to really serve democracy and freedom is to censor anything that anybody might consider misinformation. So this idea that we need to not just be concerned about misinformation, but take very proactive political steps in order to rein in political misinformation—steps that I would call censorship—has become incredibly mainstream. What do you think is the problem with the discourse about misinformation? What do people often get wrong when they talk about this?
Dan Williams: I think it’s sort of difficult to know where to begin. There are so many issues with the way in which this topic of misinformation has been framed. It’s probably worth saying something about the history behind the emergence of this narrative concerning misinformation, much of which dates back to—and this is very well-documented—2016, when you had these two surprising populist revolts. You had Brexit here in the United Kingdom, and then the first election of Donald Trump. And these events surprised and distressed many people, myself included. People wanted to explain what was going on with this populist backlash against what is perceived as the establishment and the elites.
One of the narratives that emerged really quite quickly around that time was that it had something to do with misinformation. The idea was something like this: false or misleading communication propagated by populist politicians, by pundits, and also on social media networks had sort of manipulated large numbers of people into believing false things, into believing conspiracy theories, into supporting demagogues, which was in some way behind this populist backlash. I think the first thing to say about that as a narrative, and this is an annoying thing for a philosopher to say, but I think it’s important, is: what exactly do we even mean by this term ‘misinformation’? The term is omnipresent now in political discourse and it has been for the past 10 years. There are all of these research centers and social scientific papers which are studying misinformation. It’s routinely listed as one of the top global threats. What exactly is it supposed to refer to?
Mounk: And the concern here is that it’s a term that’s used incredibly broadly for all kinds of different things. I sometimes say—perhaps a little bit provocatively—that it falls into two categories. One is that we often now say “misinformation” for what traditionally would have just been called a lie. Somebody just says something that is clearly untrue. We can prove that it is untrue. We should just call it a lie—or perhaps, if there’s some suspicion that the politician may not be aware of speaking the untruth, then it’s just a falsehood. That’s very clear and unambiguous. Then I think there’s this second category of things where people are making arguments we don’t like—they have a narrative that we think perhaps in some ways overstates the importance of something. But that very quickly becomes a way of just saying: I don’t like your point, I disagree with it. And so there’s this huge range of things under the term “misinformation.”
I sometimes wonder whether we should just dispense with the term and either say it’s a lie or an untruth, and we can prove that; or say: hey, I disagree with you on this. I don’t know if this is as important as you’re saying. I think you’re oversimplifying a little bit what the world actually looks like. Some of the utility of the term misinformation seems to come precisely from the fact that it implies something is a lie, but you can sometimes use it to apply just to things where you have a disagreement.
Williams: I agree, and I think you’re right that this term bundles together lots of things that we’ve actually already got a vocabulary for—lies, bullshit, misleading, falsehood, et cetera. The way that I put it in my writings is that, in this misinformation discourse, you confront a kind of dilemma very quickly, which is that you can understand misinformation very narrowly so that it refers to something like very clear-cut falsehoods and fabrications. And actually lots of the focus on misinformation around 2016-2017 focused on a very specific thing, which was fake news in a sort of literal sense of that term—disreputable websites online publishing just made-up news stories, like breaking news: the Pope endorses Donald Trump. That’s fake news in the literal sense of the term. If you understand misinformation very narrowly in that kind of way, it definitely does exist and I think it can be harmful in some ways.
But the social-scientific evidence now overwhelmingly suggests that it’s not as widespread as many people think it is. And it’s not that impactful, because for the most part that kind of really fringe content which gets discredited by fact-checking organizations and so on tends to preach to the choir. You’ve got people with strong pre-existing conspiratorial, anti-establishment views: they don’t trust institutions, so they’re seeking out content that’s in line with what they already believe. So if you define it very narrowly, I think it does exist, but it’s nowhere near as widespread or as impactful as many people think it is.
Mounk: Just to double click for a very brief moment on that. The other question, and this is very plausible to me psychologically, is about who consumes this sort of stuff. If it’s ridiculous stories like Brigitte Macron used to be a man, or there was some young child found in the basement of Hillary Clinton’s house, then I think it’s very hard to imagine the sort of reasonable person who’s a swing voter, who’s undecided which way to vote, and who then hears a story like that and says, well, now that I’ve learned this shocking truth, I’m not going to vote for Hillary Clinton. What’s much, much more likely is that there are some people who are political nuts, who have a consuming hatred of some political figure, and who then come across a story like that, and either as a troll or perhaps because they really do believe it, they share it. But this is not somebody whose mind was open early on, who had a potentially positive view of this politician and then sees a story like that and says, now suddenly I realize this is an evil person. This is somebody who probably had pretty distorted political cognition to begin with, who was pretty consumed by rage and hatred against some part of the political spectrum. And that’s why, in earnest or perhaps in a trolling way, they share that kind of news story.
Williams: Exactly. This is not a cross-section of the population engaging with that kind of content. And actually in many of these cases, as you say, it’s not even clear that they really believe it. It can even be to their sort of perceived political advantage when fact-checking organizations come out and discredit that information, because they’re so hostile to these institutions anyway that it doesn’t make much of a difference. That’s not to say that this kind of content never has any harmful consequences. Like, it’s probably not a great thing that people with highly conspiratorial worldviews and really strong hostility towards establishment institutions are consuming lots of incredibly weird, low-quality content. It’s not a great thing. But the idea that this is behind, for example, Trump’s election victory or Brexit—it’s just not a credible theory of public opinion formation.
Now, there’s a real push in misinformation research to go in the direction of broadening the term to include things like, as you say, people being wrong about things or having biased opinions, or to capture communication that might be true but is nevertheless misleading because it’s cherry-picked or selective or something like that.
The canonical example of that would be accurate reporting on vaccine-related deaths. It could be totally accurate as a report, but nevertheless sort of misleadingly convey the impression that vaccines are much more dangerous than they actually are. So there the idea is: why don’t we take this term “misinformation” and stretch it so that it encompasses all of these different ways in which communication can be misleading? I’d say it’s sort of trivially true that misinformation so defined is not rare and it’s not pure preaching to the choir. I think it’s quite widespread and I think it’s quite impactful. But of course, as you say, that’s firstly so subjective: determining whether or not a true report is missing relevant context, determining whether an argument is biased or not. But also, that kind of content in the broad sense is pervasive within the very institutions that sort of sanctimoniously pontificate about misinformation, that claim that they can detect misinformation with ease and objectivity. So I think the dilemma is that either you define it really narrowly, in which case it does exist, but it’s not that widespread and it’s not that impactful; or you define it really broadly, in which case it plausibly is quite widespread and it is quite impactful, but then it becomes subjective and it’s not so clear why fact-checking organizations or The New York Times or whatever are in a perfectly objective position to detect it.
Mounk: We’ve talked a little bit about the narrow definition, which is sort of coherent and cohesive, but just doesn’t end up carrying the weight that people want to put in this idea of misinformation in our politics. So let’s talk a little bit more about this kind of broader conception of it. One problem with that is that it just becomes nearly a synonym for something like a partisan or partial point of view. If you are arguing that something is a giant problem because you are very motivated by it, because it is some general injustice in the world, then people could say that that’s misinformation because actually this other set of problems is much bigger than that. I mean, plausibly, take the sensitive and somewhat provocative example of police shootings of unarmed people in the United States in general, and of unarmed black people in particular. I think it’s a very serious and shocking injustice when that happens, but you could say, well, this is such a partial focus—what about all the people who die in other kinds of ways? Compared to the number of deaths in the United States, or compared to the number of violent deaths in the United States, this is a very small percentage. Is it sort of misinformation to focus on this to the exclusion of the other kinds of things?
I think we would have a very strong instinct that calling that misinformation is wrong, that there can be reasons, given your set of moral coordinates, to put particular emphasis on something because you feel it’s particularly salient, even if the numbers involved aren’t particularly large. But on a sort of broad enough definition of misinformation, we could then start to dismiss a lot of those concerns as misinformation. It’s perhaps happened less on this particular issue, but it does happen on other issues, where really this charge of misinformation just becomes the charge of: you’re over-emphasizing this. I mean, let’s take something very different, but which in some ways has a similar structure, like crime committed by immigrants.
Immigrants commit crimes, some of them very shocking and violent. On average, at least in the United States, it appears to be the case that immigrants aren’t more likely to commit crime than others. But suppose you say: hey, there’s an injustice in that some people are in the country even though they don’t have a legal right to be here; look at these particular crimes committed by people who shouldn’t even be in the country, and if we’d really applied the law in the right way, there wouldn’t have been these particular avoidable deaths. I think a lot of people on the left would say, well, that’s misinformation because, look, actually immigrants don’t commit more crime than others. And that is true. That is an important way of putting those kinds of concerns into context and of helping somebody understand some important piece of reality. But again, to say that it’s therefore misinformation is not very helpful or clarifying about how public discourse actually works.
Williams: Exactly. It’s not clarifying. As I think those examples suggest, what you’re going to view as misinformation is going to be shaped by a whole range of complex considerations. Your values, your pre-existing beliefs, your broader ideology, and so on. The idea that that’s just going to be a technical judgment that we can delegate to professional fact-checking organizations or misinformation experts is, I think, very, very strange.
Another example would be the reporting on fake news itself. I would argue fake news is not a significant part of the informational environment, and yet how much mainstream media reporting has there been on fake news? I would argue that reporting has been very, very misleading. Does that mean we should classify it as misinformation? There are so many examples like that where it’s clearly going to be highly contentious and highly context-sensitive, and if you start applying these sorts of very expansive definitions of misinformation, it’s just going to strike many people, accurately, as incredibly subjective and biased.
Mounk: What about this idea of elite misinformation? I think it’s a very nice phrase that I believe Matt Yglesias first came up with. At least he wrote one of the first big articles about that. How common is that? Again, obviously you’re skeptical about how useful the term misinformation is in general, but if we take this broader definition of misinformation that perhaps is not very coherent, but which has sort of entered the political lexicon, do you think it’s obvious that political elites, and social elites more broadly, are systematically better at avoiding that kind of misinformation than others? Or is this problem of elite misinformation, which Matt posited, a very serious one?
Williams: Just to say one thing on that terminological issue. I mean, I don’t have any issue with ordinary democratic citizens or journalists or pundits in their capacity as democratic citizens applying a term like misinformation with a kind of expansive meaning like Yglesias does in that article. My issue is when it comes to misinformation experts and policymakers who are applying this classification either to establish objective scientific findings about it or to enforce certain kinds of anti-misinformation policies. That’s where I think it’s very important to have a strict, clear-cut definition. But on that point of elite misinformation, it’s just obviously true that within our mainstream knowledge-producing institutions, whether it’s science, whether it’s academia more broadly, whether it’s elite legacy media outlets, there is a lot of false and misleading communication. If you take a topic like climate change, for example, almost all of the focus on climate misinformation has been on, in the broadest possible sense, climate denial. That’s almost exclusively associated with the political right, where people have called into question the existence of human-driven climate change or the risks that it poses. I completely agree that that phenomenon exists and I think it is dangerous and I think it’s important that people think carefully about why that exists and ways to address it.
But there’s also a lot of what you might call elite progressive misinformation. So the philosopher Joseph Heath had a really fantastic article on his Substack recently about what he calls “highbrow climate misinformation,” looking at the ways in which there are lots of alarmist climate viewpoints on the sort of mainstream progressive side of the aisle which are simply not well-supported by empirical evidence, or which involve forms of communication that are not supported by empirical evidence, and which almost never get called out as misinformation.
Mounk: I remember a case from a number of years back where there was a study which suggested that a significant part of the territory of New York City might become uninhabitable because of climate change by 2100. Now, this is obviously a serious problem, and climate change more broadly, I believe, is a very serious problem. But when you look at the actual study, some of the areas highlighted are not inhabited at all today because they’re so close to the sea, while others are inhabited but with quite low population density. Again, I’m not trying to underplay this problem. This obviously is a serious challenge. But by the time it made it to, I believe, the cover illustration of New York Magazine, what the illustration showed was the Empire State Building underwater. Nothing like that was suggested in the study. So that’s an interesting way in which broadly well-meaning people, through this chain of transmission, take something which is a serious research finding, which I have no particular reason to doubt, and present it to the public in a way that is very clearly misleading.
Williams: That’s exactly right. There are lots of examples like that. I would say even the idea that climate change is likely to pose an existential threat is an example: you get that view all of the time from sort of progressive politicians. My understanding of the empirical literature is that it doesn’t really support that view. I mean, it could be that there are tail risks, sort of low-probability events where it would be catastrophic, but the standard forecasts aren’t that climate change is a significant threat to the continued existence of humanity. And there are just many, many examples like this across lots of different domains, whether it’s climate change, whether it’s the economy, whether it’s issues to do with youth gender medicine, whether it’s reporting around race and immigration. Whenever things sort of align with sacred values or connect to taboos among highly educated, liberal, progressive professionals, you get a lot of false or misleading or biased communication within these institutions.
I think that kind of biased communication is especially legible to people who are very hostile to these institutions, in a way that it often isn’t to the people within them. And I think it makes these institutions seem very hypocritical. So when they say we’re going to police misinformation, what that often means in practice is: we’re going to be highly selective about which examples of misinformation we focus on. It ends up, I think, just exacerbating issues of institutional mistrust, especially on the political right, and those are the very communities you want to reach if you care about dealing with misinformation in that part of the political spectrum.
Having said all of that, I think it’s also an important bit of context that even though there are problems within these sorts of elite knowledge-generating institutions, the problems there, at least in my estimation, really pale in comparison to the problems that you find in the information environment of the populist right, especially in the United States. When you’re dealing with figures like Elon Musk, Tucker Carlson, and Candace Owens, really the scale, the brazenness, the kind of egregious character of the falsehoods and the lies and the conspiracy theorizing is so much more extreme. So I think you need to be able to acknowledge that there are these deep problems within these institutions, whilst also retaining the capacity to see the broader forest, as it were—that these problems are not as severe as what you’re seeing on the populist right, especially in the United States.
Mounk: I strongly agree with you on that point. There’s this meme saying I’m for the current thing, and, you know, it depicts whatever collection of 20 causes, what some people call the omnicause, where all of these different ideas get lumped together, and if you’re on the right side of history you have to believe in all of them uncritically. A lot of them are quite stupid, or they are worthy causes but the action supposedly demanded by them is counterproductive or morally bad, and so on. Now there’s a different meme that I think is also quite apt, which is I’m against the current thing. It captures the fact that because some people have perhaps been mugged by reality, have recognized that some of their unthinking support for these causes turns out to be misplaced, they then flip to the opposite conclusion and say: if The New York Times and the Guardian and NPR want me to believe X, I’m just going to believe non-X. And I’ve always worried that outsourcing your views in this way is really bad. You can call it 180ism: whatever you believe is set by something you mistrust, so you’re just going to believe the opposite. And that unfortunately is going to lead you into at least the same amount of epistemological murkiness and probably into even greater problems. And I think that’s true of how to think about these knowledge-generating institutions.
I think that on some very important issues, my considered judgment is that the consensus opinions of these institutions are wrong. I have started to understand, in a way that perhaps I did not in 2016, that people are sick of experts and, more broadly, sick of the demand to defer to experts blindly. But the solution certainly is not to say that therefore all of the experts are wrong, that all the consensus is wrong, and that the way to get closer to the truth is just A) to listen to whatever online figure has an interesting story to tell, and B) to assume that the opposite of whatever the institutions say must be right. That’s going to get you into even deeper epistemological trouble. What’s the upshot of all of that, do you think? I mean, one upshot presumably is that we should be very skeptical about political institutions that want to claim the right to tell us what information we can share and consume.
But how should we think about these subjects? How should we think about the general problems of falsehoods in our politics, of people who are spreading very clearly untrue narratives for their political or sometimes financial gain without reverting back to this overly loose use of terms like misinformation?
Williams: I think that’s a really difficult and complicated question. As you say, in a complex modern society, there’s simply no alternative to expertise. There’s no way of getting around the fact that we need experts and the sorts of knowledge which can only exist within complex institutions like modern science and modern universities more broadly. But if you’re thinking about knowledge production as a whole, you also need established, trustworthy media outlets. So how should we think about the broader information environment, given the constraint that there’s no alternative to that? This idea that you can just do your own research and pursue knowledge in complete independence of those institutions is sort of silly and I think not a very fruitful idea. The overarching lesson, I think, is that it’s so important to improve these institutions as best we can. And partly that’s an issue of norms. At the moment, my sense is you often get punished more for calling out elite misinformation than for propagating it. That kind of norm is utterly dysfunctional, because if you call it out in the manner that we’ve been doing in our conversation, people will often treat that as: maybe you’re on the other side, or maybe you’re attacking the institution in a particular way.
Mounk: I fully agree with that. And it’s one of my frustrations with journalism that you can be wrong all of the time, but if you’re always wrong in the same way as the prevailing view at that moment, there are never negative professional consequences. You can have said the stupidest thing unthinkingly for 20 years, but as long as you always move with the crowd, there’s no punishment for that. That’s perhaps unsurprising, because you’re not easy to distinguish—you’re just part of a blob of other people who say similar things. What I think is more surprising, and even more depressing, even more concerning, is that if you break with a consensus, if you break with where most people in journalism are, then not only do you sustain extreme attacks which disincentivize having the courage to disagree, but the really striking thing is that you’re not readmitted to the community of the rightful in retrospect. Even if what you argued turns out to be right, even if the mainstream view moves to join what you were saying at the time, you are still going to be marked as the kind of strange, conspiratorial, politically dubious malcontent who deviated from the mainstream view at a moment when that was the wrong thing to do. That, I think, is what concerns me even more, because it really thoroughly disincentivizes the kind of scrutiny that these institutions need to work. And why is it that we believe in science, for example? Because there’s supposed to be an open conversation; there are supposed to be mechanisms that encourage disagreement. If our social dynamics are set up in such a way that those dynamics don’t work, then that actually undermines the reasons why we’re supposed to trust things like science in the first place.
Williams: Exactly. I couldn’t agree more. If you’re in a culture where dissenting from groupthink can be really harmful to your reputation, that’s dysfunctional on so many different levels all at once. I think it hurts the performance of these institutions in all sorts of ways. It undermines their ability to perform their function. I think it also often results in people getting radicalized against these institutions and, as you say, doing this 180 thing, where they just develop an entire kind of anti-establishment worldview, which is much worse, actually, than the thing that they’re rejecting.
But connected to that, it also undermines public trust in these institutions. People looking at bizarre conspiracy theory content online, people engaging with anti-vax content online—there are many causes of that, but I think the overarching explanation is that they don’t trust institutions. They don’t trust science. They don’t trust medical authorities. They don’t trust public health officials. They don’t trust mainstream media, and so on. So the main thing is to try to regain trust, or to increase trust, in those institutions. And if it’s perceived that these institutions are politicized, if it’s perceived—sometimes accurately—that they are susceptible to forms of groupthink, that is devastating for trust. And then a symptom of that mistrust among large segments of the population is that people start seeking out counter-establishment content. They start seeking out people who are in no way constrained by mainstream scientific knowledge or established expertise. And that’s where you get the sort of market for Tucker Carlson or Candace Owens. That’s where you get the situation where somebody like Elon Musk can just post falsehood after falsehood and not get caught out on it, because so many people within that environment have completely lost trust in these mainstream institutions.
Mounk: I think that relates to another interesting article that you wrote, a version of which we published in Persuasion, called The “Everyone is Biased” Bias. The idea here, I guess, is that it’s easy to fall into a form of cynicism about the world when you come to recognize that some of these elite institutions have these very deep forms of groupthink. It’s tempting to say: we can’t trust any of these institutions and we can’t trust any particular person. Any particular person probably has a bias, and so we should mistrust everything. How do we protect ourselves against that kind of cynicism? And what, more specifically, do you mean by the “everyone is biased” bias?
Williams: The “everyone is biased” bias is not the belief that everyone is biased. I actually think that belief is true. And I think it’s very important for how we think about politics and how we think about the challenges of figuring out what’s true in politics. I think there are really two respects in which everyone is biased. There’s a sort of psychological aspect to it, and there’s what philosophers would call an epistemological aspect to it. The psychological aspect is just that human beings are not disinterested truth seekers. So when it comes to thinking about politics, we tend to engage in what psychologists call motivated reasoning. We’ve got many practical motivations, practical goals that are distinct from and come into conflict with the pursuit of truth. In an everyday sense, that can be things like self-interest and self-aggrandizement. But in politics, that often means things like tribalism. It means the ways in which advocating for a particular cause or a political coalition can sort of bias how we seek out and process information, for the most part unconsciously in ways that we’re not aware of.
That’s the psychological component to political bias. But there’s also, I think, a more complex and interesting component to it, which has to do with the fact that even if we were disinterested truth seekers, even if we were perfectly rational, the world that we’re forming beliefs about in politics is so complex and the truth about it is so uncertain. And we access that political universe in ways that are so indirect, and we’re so reliant upon testimony that we get second-hand, third-hand, fourth-hand, fifth-hand from reporters, from journalists, from pundits and so on, that even if we were perfectly rational as individuals, we should still expect to be in error and to have partial perspectives on reality in all sorts of different ways. And of course, if you combine those two things, the fact that reality is complex and the truth is uncertain, with the fact that we’re not disinterested truth seekers, your default assumption should be that everyone is in some sense viewing the political universe in ways that are selective, partial, and subject to various sorts of errors.
At the same time, it’s also true that most people don’t instinctively appreciate that about their own situation and about their own political tribe. It feels to most people like the truth is just kind of self-evident. It’s just obvious what the truth is, such that anyone who disagrees with them must be either delusional or lying or crazy, et cetera. This is what psychologists call naive realism. And I think that’s really destructive at the individual level, because it encourages a kind of arrogance. I think it’s also really terrible when it comes to political polarization, because we end up thinking the other tribe simply won’t recognize the truths which are completely self-evident to us, so they must be nefarious or very stupid. So that’s the sense in which everyone is biased, and I think it’s an important truth which most people don’t instinctively appreciate. I think our political culture would be better if we did appreciate it.
Mounk: So there’s a kind of conventional wisdom and there’s a counter-conventional wisdom, or something like that, right? I mean, a lot of people, the really naive, think: look, I’ve got everything right. Anybody who disagrees with me must be a bad person, or they must be being paid. One really obvious illustration of this on social media is always accusing people of being a grifter. The only reason you might be arguing this position that disagrees with mine must be that you’re doing it for money, because obviously if you were a good and smart person you couldn’t possibly believe something so stupid. Now, it’s sort of obvious there are lots of grifters online, and that’s true of certain kinds of people, but this just doesn’t acknowledge the possibility of good-faith disagreement. Now what you’re saying is: this is an important insight and we should take it seriously. Even though I buy your point 100% intellectually and I’ve thought about it a lot in my life, in certain situations it’s tempting to say, how can you possibly believe this? What’s wrong with you? But you then worry that if we run too far with that assumption, if we somehow become too obsessed with this important insight that you just spent a good few minutes formulating, that itself is a danger as well. Why is that the case? Why is it that this important point can itself become a source of error if we focus on it too uncritically?
Williams: I think the basic reason is that everything I’ve just described applies to everyone, really. What that means is that you can become so fixated on that universality of bias, and on exposing cases where people who present themselves as objective are actually hypocritical or engaged in motivated reasoning, that you end up missing the fact that there are actually profound differences in the degree to which individuals and institutions are committed to truth. And I think you need to be able to keep both of those ideas in your head at the same time. Everyone is biased. Everyone is fallible in how they view the world, and that influences even our most prestigious, elite, knowledge-producing institutions. At the same time, there’s bias and there’s bias. There’s ordinary run-of-the-mill human bias. And then there’s, as I’ve already mentioned, someone like Elon Musk. Honestly, it was viewing Elon Musk’s behavior on X over the past year or two, the sheer scale and brazenness of his lying, that made clear to me how far it falls outside the scope of ordinary bias. There’s a risk that if you’re too focused on the generality, on the universality of bias, you become blind to those really important differences. And that’s the “everyone is biased” bias—focusing so much on this general claim about human psychology and our epistemic situation, as philosophers would put it, that you don’t acknowledge that there actually are really profound differences in the degree to which certain individuals and certain institutions are committed to truth.
In the rest of this week’s conversation, Yascha and Dan discuss the pitfalls of cultural relativism, how journalists can regain the public’s trust, and why standpoint epistemology won’t fix the issues in society. This part of the conversation is reserved for paying subscribers…