Over the past decades, many social science studies have promised simple answers to complex problems. In his latest book, The Quick Fix: Why Fad Psychology Can't Cure Our Social Ills, Jesse Singal describes how many of these solutions fail because the findings they are based on turn out to be wrong or misleading.
In this week's conversation, Yascha Mounk and Jesse Singal sit down to discuss the reproducibility crisis in social science, whether to be skeptical about implicit bias training, and how to distinguish between real solutions and illusory quick fixes.
This transcript has been condensed and lightly edited for clarity.
Yascha Mounk: In your book, you argue that fad psychology often worsens our social ills. Why is it that this field of social science has actually had a real impact on the world, and why should we worry about that?
Jesse Singal: It's really social psychology in particular that I zoom in on. Social psychology is everywhere: in education, the boardroom, the military. We just had this phase, which probably peaked around 2010 or 2012, when ridiculous ideas were just getting dumped into the mainstream and treated as serious.
Mounk: What is an idea that was taken seriously in the mainstream that did actual damage?
Singal: I talked about “power posing,” which is this idea that if you stretch your arms out, you'll feel more powerful. And the claim was that this would help women achieve more in the office and help mitigate gender inequity.
Even this relatively harmless idea brings with it all these assumptions about the nature of the gender gap. We have a pretty sophisticated understanding of the gender gap in the workplace. It is often the woman who has to leave the workplace and interrupt her career, and that is a complicated problem to solve. If instead you're told, the problem is women feel they don't have enough power at work, and you can address this by having them do power poses, that does cause the public to view this problem in a more simplistic way than it deserves.
Mounk: When we talk about the gender gap, a big problem in law, for example, is the partner track. There's a very limited number of years relatively early on in your career which are make-or-break. And if women still provide more of the childcare, and that is right at the decisive moment, that just makes them much less likely to make partner and this has some serious earning consequences down the line. Giving partners training about how to treat younger women with more respect is probably not going to solve this problem. You talk about that in the book as well, particularly the role of Implicit Association Tests.
Singal: Yes, I think this and the military PTSD program we'll discuss are the two most harmful examples. The Implicit Association Test measures your reaction time to different stimuli. And the short version is, if it's easier for you to connect good concepts with images of white faces than with images of black faces, this has been taken as evidence that you have implicit bias. There are so many problems with this. One of them is that these tests don't appear to have any meaningful correlation with people's behavior, even in contrived lab settings.
If you get a high score on the Implicit Association Test, meaning you're coded as biased against black people, that high score doesn't actually tell you anything about your real-world behavior. The correlation is tiny: it accounts for something like 1% of the variance in behavior in lab studies. It has gotten to the point where, in 2015, even the founders of the test acknowledged that it is too noisy to predict individual behavior.
Mounk: In the example of gender and law, I think the right solution is to change the partner track. That's a harder change to make but it might actually contribute to a real solution.
Singal: A similar thing has happened when it comes to race, where it's just assumed all these differences in outcome are due to bias or discrimination. Whenever they do a big investigation of a police department — in Ferguson, or Chicago — they find a lot of explicit bias. There are genuinely racist cops; that's part of it. There are also these résumé studies. My gripe is [the treatment of] differential hiring patterns, especially in elite institutions like law firms or even newsrooms, as evidence of discrimination. You don't need individual actors making racist decisions along the career path to get that outcome.
Mounk: The problem with measuring racist attitudes is that there's obviously social desirability bias. In 1960, you could have asked people, “Do you dislike African Americans?” and you would have actually gotten a shockingly high percentage of the population saying, “Yes.” You could just keep asking that straightforward question in order to see how those attitudes change over time, but of course those answers have declined dramatically over the last fifty years.
So political scientists understandably have tried to say, “Well, look, what we should do instead is ask slightly more roundabout questions in order to get at what they call racial resentment.” For instance: “Irish and Italian immigrants to the United States succeeded without any help; the same should be true for African Americans.” And if you agree with that statement, then you code as racially resentful. Now, there's a really interesting paper from a couple of years ago by political scientists at Harvard, where they used that same batch of questions, this time regarding groups toward whom we have no reason to think there is any strong animus in the United States. It turned out that, by this measure, Americans harbor strong racial resentment against Estonians, and those answers still correlate with all kinds of other things. You're still more likely to vote for Donald Trump if you have racial resentment against Estonians, just as with racial resentment toward African Americans.
Singal: This is such a great example, because no one bothered to ask this very simple question, which tells you that what's driving [these responses] isn't anti-black racism, per se. Researchers did a version of a [résumé] audit study in the Chicago labor market, where they had “white names” and “black names,” and then they made up a third fake ethnicity that sounded vaguely Eastern European. And they found people were equally biased, if you want to call it that, against black sounding names as they were against this fake ethnicity. I think there are these validity questions everywhere.
Mounk: The most viral paper arguing that support for Donald Trump was driven by his voters' racist attitudes actually used metrics that were economically based. It looked at how one feels about trade with China, in a study that was explicitly claiming to disentangle economic and more racially based motivations. If you thought that China is some kind of economic threat to the United States, that was coded as a racial form of animus; and because people who held that view more strongly were more likely to vote for Donald Trump, the conclusion was that Donald Trump's voters were motivated by racism.
Singal: Yes, this was the Diana Mutz study, [published in the] Proceedings of the National Academy of Sciences. She relies on a panel of GOP voters surveyed in 2012 and 2016. This group became more open to a path to citizenship for undocumented immigrants over that span, by a statistically significant amount. That finding is sort of shunted aside, even though I would view it as the most important evidence on whether Trump's racist appeals worked. And there are a lot of reasons, if you're an American whose job has been outsourced, to feel threatened by China.
Mounk: Now, why is there reason to be concerned about the replication crisis?
Singal: I'm compressing a lot of recent history here, but basically, these organizations were set up to try to replicate experiments that were published in the past. They found that the overall replication rate of published psychological science is around 50%. So if I pull a random journal from 1996 and look at all of the experimental psychology studies published in it, there's probably only a coin flip's chance that any given one points to a real effect, which is a profound threat to a body of research.
Mounk: We talked a little bit about the Implicit Association Test, and the power pose. In your book, you discuss how the military has been dealing with PTSD. Walk us through that story.
Singal: Around 2008, it was clear the military had a crisis with regard to PTSD. Many 18-, 19-, and 20-year-olds were coming back with serious problems. The military didn't know what to do, and so [they] hired a guy named Martin Seligman, [who] runs a positive psychology school out of the University of Pennsylvania. He said that he could take one of the Penn Positive Psychology Center's pre-existing offerings, the Penn Resilience Program, and adapt it to the military. There are multiple questionable claims in that causal chain. For one thing, by 2009, right as this military program was ramping up, there was evidence that the Penn Resilience Program didn't work on its target population, which was 10-to-14-year-olds in normal school settings. I shouldn't have to tell you that there's no comparison between a middle schooler and someone who might have to go besiege Fallujah.
Mounk: What did this intervention look like?
Singal: A lot of it was based on cognitive behavioral principles. There's pretty good evidence [supporting] CBT in one-on-one therapy settings. There's this school of thought that's been around since the mid-20th century that you are exacerbating your own problems sometimes by adopting maladaptive responses to the world. So if someone breaks up with me, realistically, does that mean I'm fundamentally unlovable, or does that mean that, you know, love is complicated? There's usually some story we can tell ourselves that's more adaptive than the one we default to, and cognitive behavioral therapy tries to train people to do that. There was never evidence that you could adapt [the Penn Resilience Program] to a military setting, and it would work. And indeed, no evidence has ever emerged.
Mounk: I'm really struck by what you think about the diversity trainings that are now being rolled out, especially in the mode of people like Ibram Kendi and Robin DiAngelo. These start with the idea that we’ll never be equals, because of the deep way in which race determines everything.
Singal: It seems like a lot of present-day diversity trainings, of all stripes, are moving against any concept of universalism. If you make every interaction about race, and if you make these superficial differences incredibly salient, which is what DiAngelo does, to the point of giving different codes of behavior for black and white people, then I think it's deranged. There's a good book called Race Experts, by Elisabeth Lasch-Quinn. She talks about how, over the years, there's been this retreat from the principles that animated the civil rights movement toward etiquette and consciousness building. I find that stuff not just useless, but actively harmful.
Mounk: I'm struck by one little detail in that, which is that DiAngelo believes that every time a white person interrupts a black person, that is bringing the whole of white supremacy to bear on them. [But] in every friendship, you will sometimes interrupt each other. That is part of what friendship is. That does not mean that there aren't patterns, such as men interrupting women more than women interrupt men, or that there may not be a racial dynamic. But if you're saying that every time a white person interrupts a black person, that's white supremacy coming to bear, you're essentially saying you're never going to be able to have an equal friendship between a white and a black person.
Singal: When you have meaningful relationships with people, you compliment them on how they look, you interrupt them, you disagree with them. The other thing going on in a lot of liberal spaces is these very weird deference norms, where you can't really disagree with someone who's a person of color, which is strange, because people of color disagree wildly on issues of social justice. Part of belonging is people treating you the same way they treat everyone else.
Mounk: I believe in the tremendous contribution that science, including social science, can make to the world. And at the same time, through serious studies and the scientific method, we have discovered that a lot of what we took to be science has turned out to be wrong. Where does that leave us in terms of our relationship with science?
Singal: My greatest hope with the book is that it gives readers a basic tool belt of how to ask the right questions when it comes to scientific ideas. And you're right, we should value science, and we should understand the difference between the scientific method and other methods of knowing. And I would argue that you should never view science as infallible, because it is [fundamentally] a social thing that is done by humans, even if the ideals underpinning it are less susceptible to bias and so forth.
Mounk: One of my takeaways is that you always need to be evaluating the things you're reading and, certainly, the things you are acting on. That doesn't mean that you have to pretend that only things that are common sense are actually true; many things that today are common sense certainly didn't seem like common sense when they were first discovered. But it does mean that you should look at how any one scientific claim actually seems to fit into the overall body of scientific knowledge, especially if the claim is tempting to believe.
Singal: The studies of Trump voters are a great example, because any time you get an answer that is so self-satisfying, that to me is a warning sign.
Please do listen and spread the word about The Good Fight.
Podcast production by John T. Williams and Rebecca Rashid