The Danger of Politicizing Science
Truth and justice are best served by more rigor, not less.
This article was originally published on September 14th by the Foundation for Individual Rights and Expression (FIRE).
Should academic journals appoint themselves social justice gatekeepers?
Nature Human Behaviour, a respected member of the Springer stable, thinks so. “Science has for too long been complicit in perpetuating structural inequalities and discrimination in society,” the editors declare in a recent manifesto. “With this guidance, we take a step towards countering this.”
The editors assure us that “advancing knowledge and understanding is a fundamental public good.” Okay. They say that research should avoid harming the individuals it studies; not a controversial proposition. But then, in a move that deserves to be very controversial, they broaden their definition of unacceptable harm to include negative social consequences for studied groups.
Researchers should “minimize as much as possible…risks of harm to the studied groups in the public sphere,” they say (my italics). “Research may—inadvertently—stigmatize individuals or human groups,” they add (again, my italics). “It may be discriminatory, racist, sexist, ableist or homophobic. It may provide justification for undermining the human rights of specific groups, simply because of their social characteristics.”
The phrases I italicized do a lot of work. A researcher might not have a discriminatory bone in her body, and she might take exquisite care to avoid biasing her research. Her evidence may be solid, her methods sound, and her conclusions actually true. Nonetheless, the editors may reject her article, require revisions, or even retract and repudiate it if they believe it “undermines the dignity or rights of specific groups; assumes that a human group is superior or inferior over [sic] another simply because of a social characteristic; includes hate speech or denigrating images; or promotes privileged, exclusionary perspectives.”
The insinuation of political agendas into science is nothing new; I wrote about it in my 1993 book “Kindly Inquisitors: The New Attacks on Free Thought.” Back then, factions like creationists, Afrocentrists, and Marxists were hawking alternatives to mainstream biology, math, and social science. Today, the political right is hard at work scrubbing school libraries and curricula of what they deem to be critical race theory (whatever that is) and LGBT “grooming” (whatever that is).
Meanwhile, on the left, scholars are calling for rethinking academic freedom so that it does not protect “some ideas [that] don’t deserve a hearing.” Just recently, the California state community college system directed its employees, including faculty, to contribute “to DEI and anti-racism research and scholarship,” in violation of academic freedom and possibly the Constitution (as the Foundation for Individual Rights and Expression points out). Anna Krylov, in her important article “The Peril of Politicizing Science,” gives other examples of social-justice activism masquerading as science. “I witness ever-increasing attempts to subject science and education to ideological control and censorship,” she writes, adding that she recalls similar efforts in the Soviet Union of her childhood.
Still, groundbreaking or not, Nature Human Behaviour's (NHB's) manifesto deserves attention, because it represents an explicit endorsement of social-justice gatekeeping by a respected scientific journal. In its specifics, it is riddled with problems.
In Quillette, the social psychologist Bo Winegard does a masterly job dissecting them. He takes note of the guidance’s terminal vagueness. “Ambiguity is piled upon ambiguity to expand the capricious purview of the censor,” he writes. “It does not require clairvoyance to predict that these criteria will not be consistently applied.” He notes the tendentious ideological assumptions embedded in the document. He identifies some of the legitimate research that could be squelched and chilled.
Findings about group differences—sexual, racial, cultural, and so on—would be suspect. Winegard notes that a paper finding homosexual men to be more promiscuous on average than heterosexual men might be deemed unacceptably stigmatizing, even if the findings “might…lead to a reduction in the rate of sexually transmitted infection”—something the editors would have no way to anticipate.
A biologist might feel inhibited about stating that humans are sexually dimorphous, that male and female are biologically distinguishable, or that sex differences exist at all. Some of my own writing could be suspect, for instance on the value to children of two-parent families and the dangers of radical gender ideology. As Winegard points out, the guidelines are so vague and so broad that they are bound to be chilling.
I can’t improve on Winegard’s analysis of the guidance’s shortcomings, so in this essay I will take a different tack by steel-manning NHB’s manifesto. Here are what I think are three plausible arguments in its favor—and why they fail.
“Scientists and journals always consider social impact when they make research decisions. We’re just doing it explicitly.”
This is the strongest point NHB can make, because its premise is true. Researchers and editors are not machines, Vulcans, or sociopaths, and we wouldn’t want them to be. They cannot and should not abstract themselves from the societies of which they are part. They can and should think about the social implications of their work and guard against foreseeably bad consequences.
Every day, researchers, journals, and grant-makers consider the wellbeing of society, including effects on marginalized groups, when they decide what to work on, what to publish, and what to fund; if they did not, science would become sociopathic and reprehensible. I myself once urged a prominent researcher to excise a book chapter that, even if it were empirically sound, would irresponsibly damage race relations and his own reputation.
Here, however, lies a fundamental dilemma: how can science consider social responsibility without politicizing research? The problem is hard. Over the centuries, science has worked out an imperfect but serviceable answer: subsidiarity.
Subsidiarity is the notion of reducing centralized control over decision-making by pushing it down to lower levels, such as individuals, local governments, non-commissioned officers, franchise managers, and community groups. Subsidiarity taps local knowledge; it encourages experimentation and retards bureaucratic ossification; it encourages personal initiative; it deters systemic takeover by special interests.
To a great extent, science works on the same principle. Accrediting bodies, scientific societies, and professional organizations set general guidelines. Yet, in implementing those guidelines, they give broad discretion to universities, which give broad discretion to their faculties, which give broad discretion to individual researchers. For the most part, we trust trained professionals to make socially responsible research decisions. For the most part, they do. Questions about social harm and social justice are hashed out in conversations and debates among members of the research community, not settled peremptorily by a handful of editors.
In this disaggregated, decentralized system, journals play the essential role of middlemen. They assess research’s importance, vet its quality, and, on approval, usher it into the marketplace of ideas. Of course, they can’t be perfectly apolitical, because they’re human; but, traditionally, they aspire to be ideologically neutral so that the political inclinations of editors don’t supersede the scientific expertise of researchers. We want them to act as quality controls, not political checkpoints.
By explicitly making social justice an element of editorial policy, NHB breaks with this tradition. To the extent it does so, the results will be bad. However professional and well intentioned NHB’s editors may be, they are not qualified to decide on society’s behalf whether research is socially harmful or desirable. In fact, they have no idea how a piece of research will ramify.
From the Church’s attempt to suppress heliocentrism to modern efforts by the federal government to stymie research on gun violence and the health benefits of cannabis, authorities have consistently cited social harms as grounds to suppress research, and they have consistently been wrong. NHB’s editors’ crystal ball will be no clearer. In practice, they, too, will merely interpose their own guesses and prejudices between researchers and the larger community of scholars, prejudging and distorting the search for truth.
The editors do suggest an answer to this problem. Here it is, in full: “We commit to using this guidance cautiously and judiciously, consulting with ethics experts and advocacy groups where needed.” In other words, they will recruit political activists and non-specialist kibitzers as scientific advisers. As Winegard points out, this is not reassuring.
“We’re aware of the danger of politicization but we won’t succumb. As our editorial says: ‘Ensuring that ethically conducted research on individual differences and differences among human groups flourishes, and no research is discouraged simply because it may be socially or academically controversial, is as important as preventing harm.’”
Good luck, NHB, with your good intentions. We have 300 years of scientific tradition that helps researchers and editors understand what constitutes scientific merit. We know that Bayesian reasoning is more reliable than cherry-picking; that double-blind controlled trials are better than convenience samples; that equating correlation with causality is an error; and much, much more.
“Preventing harm,” by contrast, is a completely and inherently subjective criterion. The new policy invites activists and interest groups to veto “harmful” research. They will accept the invitation, claiming that whatever research offends them is oppressive, unequal, stigmatizing, traumatic, racist, colonialist, homophobic, transphobic, violent, and—you get the idea.
When they demand the rejection or retraction of whatever research offends them, NHB, having committed to preventing “harm,” will have nothing definite to fall back on. If the editors don’t cave in right away, they will soon.
Moreover, NHB’s guidance patently is political. Consider this criterion for problematic content: “Submissions that embody singular, privileged perspectives, which are exclusionary of a diversity of voices in relation to socially constructed or socially relevant human groupings, and which purport such perspectives to be generalizable and/or assumed.” If you can figure out what this gobbledygook means, you are smarter than I am. What it does unambiguously convey, however, is woke-left identity politics. The editors might as well post a sign that says, “Conservatives Not Welcome.”
According to the Pew Research Center, from 2015 to 2019 the share of Americans saying colleges and universities have a “negative effect” on the country rose from 28% to 38%: a startlingly (and depressingly) dramatic increase. A lot of that hostility is attributable to the perception that the academy is left-dominated and intolerant. Even if scholars and editors do not deliberately inject politics into their work (and most don’t), multiple surveys show that conservative viewpoints are so rare in some disciplines that progressive orthodoxies are simply taken for granted.
Everything about the NHB statement will make this problem worse.
“Well, don’t you agree that science has shown itself to be biased in ways that harm marginalized social groups? Shouldn’t we do something about that?”
Yes, and yes. But I have a better plan: more and better science.
I know a little bit about bigoted science. For decades, the U.S. psychiatric establishment categorized homosexuality as a mental illness. As a direct consequence, homosexual Americans were disqualified from jobs, stigmatized as deviant and dangerous, and subjected to “treatments” that included electroshock and lobotomies. This is not some distant, long-ago world for me; it was my world until I was 13 years old. Psychiatry meant well, but its characterization of me as sick was one reason, as my sexual desires emerged, I struggled desperately and futilely to suppress them.
In 1956, the psychologist Evelyn Hooker tested whether psychiatrists could distinguish homosexuals from heterosexuals based on anonymized personality assessments. The psychiatrists could not tell the difference. Other work confirmed Hooker's findings. In 1973, by a vote of its full membership, the American Psychiatric Association formally corrected its mistake by removing homosexuality from its place in the canonic Diagnostic and Statistical Manual of Mental Disorders. Frank Kameny, the last century's greatest advocate of gay and lesbian equality (and himself a Harvard-trained scientist), called the APA's reversal the biggest mass cure in history.
It was an example of science's greatest strength: its ability to self-correct.
A question I ask myself: In 1956—when it was a given that homosexuals are perverts who pose a danger to themselves and society—would Evelyn Hooker’s research have passed the equivalent of NHB’s guidance? Would the journal’s editors have published it? Or would they have smothered it because of the “social harm” it might cause?
You can conjecture as well as I. I’ll just say that, given history going back to Galileo and my own experience, I am pretty skeptical of offers by self-designated guardians to suppress socially harmful science.
Here is my counteroffer to Nature Human Behaviour.
Understand that it is not your job to stop science from “perpetuating structural inequalities and discrimination in society.” Go back to doing what you know how to do. Understand yourselves not as riding astride the stream of research, judging what does and does not advance justice or harm society, but as humbly serving a community of scholars who collectively have infinitely more knowledge, wisdom, and experience than you do.
Allow your thousands of researchers, reviewers, and readers to make their own various and diverse determinations of how research might ultimately benefit or harm groups, individuals, and the public good. Accept that it is arrogant and self-important for anyone, including yourselves, to set themselves up as visionaries capable of prejudging the scientific process. Apply the non-political standards of scientific merit and editorial excellence which have been honed over centuries.
Above all, remember that by far the greatest engine of social justice, human rights, and equality has been the advancement of knowledge, and the rolling back of ignorance, by a community of truth-seekers empowered to follow evidence wherever it leads. If you care about making society better and fairer, you will serve that community, not appoint yourselves to direct it.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of The Constitution of Knowledge: A Defense of Truth.