What Happens When the Scientists Disagree?
Scientific dissent should be engaged with, not suppressed.
By Zeynep Pamuk
In January 1976, there was a small outbreak of swine flu among army recruits at Fort Dix, New Jersey. One soldier died. The United States had not experienced a swine flu outbreak since 1918-19, and the possibility of another pandemic raised alarm. Amid uncertainty over the severity of the virus, the Center for Disease Control initially recommended the production of a vaccine, but not its administration. The CDC director, however, was persuaded of the need for a mass immunization program. He conducted a telephone poll of committee members to ensure they would not oppose the recommendation that all citizens be immunized within three months. Once he gained their assent, his action memo was quickly accepted by President Gerald Ford. By the end of the year, 40 million Americans had been vaccinated at a cost of $135 million. But the pandemic never materialized. Worse yet, the vaccine turned out to be associated with increased risk of Guillain-Barré syndrome. The incident was widely regarded as a fiasco.
The swine flu affair shook trust in public health initiatives for years to come. A postmortem report concluded that the scientists were overconfident, and dissent within the committee had been suppressed in the name of consensus. The President and Congress had pushed forward with the program without investigating the doubts of scientists within the CDC.
The swine flu debacle should have been a chance to learn from mistakes. But in the age of Covid, we’re still struggling to deal with dissent in scientific advice. Many key facts about the pandemic have been subject to scientific disagreement. We’ve seen disputes over the virus’s mode of transmission, its risk to different demographics, the effectiveness of masks, and the utility of lockdowns. Yet scientific advisers have repeatedly presented their guidance without acknowledging uncertainty within their ranks—only to reverse their position a few months later. Masks were ineffective in February, but mandatory by April. The lab origin hypothesis was dismissed as a crazy conspiracy theory at first, but became difficult to dismiss six months later. School closures were absolutely necessary, then completely unreasonable, then necessary again. Mask mandates in schools have become the latest arena of controversy. The scientific community has been remarkably successful in developing vaccines. But it’s still struggling with communicating disagreement.
If we want to fix this problem, we need to think harder about alternatives to suppressing dissent. Scientific disagreements will surface in public debates sooner or later. In failing to address them, we risk confusion, distrust and conspiracy theories, which can seriously undermine public health initiatives. But this is also a missed opportunity to invigorate the democratic process. Open debate can subject the hidden assumptions in scientific advice to proper scrutiny, lead to better decisions and prevent mistakes. It can help citizens question scientific claims without accepting or rejecting them wholesale. And it can allow opposition parties to present informed alternatives to government policies and hold decision makers accountable.
In a series of articles written in the 1960s and 1970s, physicist Arthur Kantrowitz developed a proposal for a new institution to deal with scientific controversies in policy making. Kantrowitz complained that competing experts made contradictory technical claims, which did not get challenged in the public sphere. This left the public in confusion, weakened the scientific basis of public policy, and heightened mistrust of experts. His solution was to create an adversarial institution where rival experts would defend their case and then cross-examine each other in front of a panel of impartial scientist-judges. The judges would offer a verdict on the disputed scientific points and highlight agreement between the two sides. The proceedings would be open to the public, and the decision would serve an advisory role for Congress and the president.
Though the plan was never realized, Kantrowitz was onto something. To improve how we deal with dissent, we need better institutions to host it in the first place. I’ve proposed that we revive Kantrowitz’s science court today, but in a less elitist, more participatory form. The new science court would address policy questions with a scientific component. Competing experts would make the case for different sides of the issue, followed by a period of cross-examination. A jury made up of randomly selected citizens—instead of Kantrowitz’s scientist judges—would deliberate on the policy question, test the implications of expert views against their own values and priorities, and then deliver a policy recommendation. The court could take up questions such as whether to adopt strict lockdowns, whether to keep schools open and whether to institute (or lift) vaccine and mask mandates.
The goal of the court would be to normalize scientific disagreement in front of a public audience and facilitate critical scrutiny of expertise by nonexperts. The adversarial structure would reveal the assumptions and value judgments of each view, while clarifying their level of uncertainty and the scope of disagreement. Citizen jurors would deliberate amongst themselves and reach a decision. This would avoid the imbalance in expertise which occurs when lay people debate scientists directly, placing citizens in the seat of judgment.
The court would solve a number of problems. It would make up for the democratic deficit involved in deferring to unexamined expert opinion, while overcoming some of the shortcomings of uninformed public opinion. It would stop citizens from feeling left out of decisions that affect their everyday lives, boosting the democratic legitimacy of policies. It would foster public understanding of the science behind those policies, as complex findings would be translated for non-expert jurors. That could prevent conspiracy mongering and enable a higher level of public discourse going forward. Finally, it would force scientists to engage publicly both with non-expert citizens and with their peers within the scientific community. Such a process could improve their reasoning, exposing holes in their logic and hidden moral assumptions in their calculations. At its best, the science court would benefit both scientists and laypeople.
A new institution, however, is only one part of a solution to the challenge of scientific dissent. The science court could enable more careful scrutiny of different points of view, but its decisions would be one input into public debate and decision making. Arguments among scientists will necessarily spill beyond the boundaries of the court, and scientists who differ from the official position may want to be heard in other venues. Thinking more broadly about expressions of scientific dissent in the public sphere is therefore vital, too. Here, the experience of Covid offers valuable lessons on what productive dissent might look like—and how it could go wrong.
The first lesson for dissenters is not to be too hasty. It may be tempting to think that the urgency of a crisis justifies challenging government advice based on quick findings. But the pressure to act quickly is all the more reason to submit findings to careful scrutiny first. Failure to do so risks spreading misinformation and delegitimizing valid disagreement.
In the early months of the pandemic, Stanford professor John Ioannidis and his co-authors gave antibody tests to residents of Santa Clara County, California. They found infection rates were far higher than believed, suggesting that Covid’s death rate must be much lower than previously thought—around the same as influenza. The authors used these findings to criticize strict lockdown policies as an unprecedented fiasco, shining a spotlight on the trade-offs they entailed. In doing so, they gave representation to a significant political view that was held by some citizens but frequently dismissed as anti-science.
There was, however, a hitch. The study was blasted by the scientific community when it was revealed to be sloppily done and probably wrong. Worse yet, Ioannidis had used it to aggressively lobby the Trump administration against lockdowns, despite the fact that the paper had not yet gone through peer review. Ioannidis’s scientific standing gave the study disproportionate attention in the news, disguising his failure to follow the normal processes of quality control. The episode eroded trust and spread dubious findings. Dissenting science must submit to the same scrutiny as the best mainstream science.
The second lesson for dissenters is to be upfront about their value judgments. Scientific advice always involves trade-offs about whose needs to prioritize, and democratic scrutiny is necessary for exposing these. But if scientists themselves dress up their moral judgments as scientific facts, this only muddies the waters.
Last year, three scientists from Harvard, Oxford and Stanford banded together to produce the Great Barrington Declaration. Eventually signed by over 50,000 medical and public health scientists and practitioners, it criticized strict lockdown policies. Among the devastating effects they claimed were lower childhood vaccination rates, worsening cardiovascular disease outcomes, fewer cancer screenings, and deteriorating mental health. They proposed a more targeted strategy of shielding the vulnerable, while allowing others to build up immunity through natural infection.
But the Great Barrington Declaration also had flaws. The declaration offered no proof for its scientific assertions; in place of convincing evidence was an appeal to authority through its highly credentialed signatories. If dissenting opinions claim to be based on science, the evidence must be front and center. And, while Ioannidis violated the internal norms of the scientific community, the Barrington Declaration also blurred the line between norms of science and norms of advocacy. The authors’ message mixed unsubstantiated scientific claims about the negative health impacts of lockdowns with value judgments about how best to distribute harms and benefits across different social groups. But the two issues should not be conflated. A different ethical view of which groups to prioritize, no matter how strongly felt, cannot constitute scientific fact.
The third lesson for dissenters is to enable debate first and score points later. Dissenters themselves can turn out to be wrong; they too must open themselves up to public scrutiny. Their role is not to set themselves up as the true authority but to contribute to an ongoing discussion. The science court would be a key institutional venue for such a process. But we can look to other institutions for guidance too.
In May 2020, a group of prominent UK scientists established an alternative scientific advisory group in response to the failures of the government’s official Scientific Advisory Group for Emergencies (SAGE). Independent SAGE, as it was dubbed, focused less on challenging specific policies, and aimed instead to counteract official SAGE’s lack of accountability and the government’s mishandling of the pandemic response. Their main work involved putting scientific advice in the public domain to ensure that citizens could engage with the reasoning behind the government’s strategy. Some of their meetings were livestreamed on YouTube and all of their advice was shared openly. A key political contribution of Independent SAGE was that it supplied valuable scientific analyses for opposition parties, which used their work to criticize the government’s response and suggest alternatives.
Each of these dissenters took a clear public stance against the dominant scientific advice. They forced other scientists to engage with their challenges and to either rethink their positions or defend them more vigorously. These exchanges were valuable for democratic debate. At the same time, Ioannidis’s study and the Great Barrington Declaration also presented some inadequately supported scientific claims and undisclosed value judgments. Dissent is no less valuable for being wrong—but it works best when careful and institutionalized, rather than sensationalist.
Covid has shown we need to get better at questioning orthodoxy productively. Suppression of scientific debate provokes suspicion, constrains policy, and prevents democratic deliberation. And if open disagreement risks creating confusion too, the damage of a cover-up is worse. Trust takes a long time to build, but can be destroyed instantly. Working out how to improve the relationship between science, democracy and disagreement through new institutions like the science court goes far beyond Covid. Our societies will soon face a vast array of challenges, from the risk of further pandemics and climate change, to the ethical dilemmas presented by powerful new technologies, like gene editing and artificial intelligence. Now more than ever, we need to learn how to live with—and offer—productive dissent.
Zeynep Pamuk is assistant professor in the Department of Political Science, University of California, San Diego, and author of the book Politics and Expertise: How to Use Science in a Democratic Society.
You criticize Dr. Ioannidis and the signatories of the Great Barrington Declaration for their tactics in saying that Covid had a lower IFR than the ridiculous early estimates used to justify lockdowns, and for saying that lockdowns have negative societal consequences, respectively. Yet you give little credit to these claims for being not only correct (as time has shown) but also rather obviously correct even at the time, appearing controversial only due to the intensely political nature of Covid policy. You criticize dissenters for being "hasty" yet the mainstream scientists were given *months* of essentially opposition-free time to impose unprecedented, near-total lockdowns on society. I wish the dissenters had been hastier. You talk about scientific quality, but ignore that the CDC regularly churns out non-peer-reviewed advocacy of the mainstream agenda on its MMWR channel, which gets picked up as "science" by the entire media apparatus. Finally you emphasize the importance of "scientific" debate, which is 100% needed, but most Covid policy is only marginally about science at all. Maybe cloth masks are 0% useful or maybe they are 10% useful, that's a scientific question, but in either case should they be mandated? That's a social and political question.
I am glad you are arguing for more vigorous and structured dissent in science and we do need it. But the dissenters on Covid were brave souls taking on a thankless task against a global hegemon. They were censored and shunned. Nitpicking their tactics is fair but kind of an odd thing to prioritize.
These are good-faith suggestions for dealing with real problems, but in the spirit of the essay itself, allow me to take issue with it:
First, I don't recall any discussion of roles played by academic and professional institutions and by the media. A dissenting scientist who becomes unemployable or suffers other professional consequences will likely censor himself -- and even if he doesn't, how will we know? He'll have no platform. The media have the same sort of gatekeeping power. I don't see how any plan that doesn't account for these factors can have much effect.
Next, it's not clear why having opposing scientists duke an issue out in public is more "democratic" than having them do so before the public's elected representatives. That is the function of those representatives, after all. That said, the idea of organizing adversarial proceedings before those representatives seems a good one.
And attention should be paid to the supporting function of creating materials to help laymen understand the scientific arguments. Furthermore, the organizers should emphasize ideas such as level-of-confidence and effect-size, without which there's not much basis for decision-making. It should, for example, be significant for one side of a debate to demonstrate simply that the other side hasn't made a conclusive case -- even if the first side has no case to make at all.
Along those same lines, the public and its representatives need to be educated about basic statistics and probability, common cognitive biases and other factors that should inform their decisions. It's not that difficult, since we're not talking about calculation but a general sense of proportion, of what it means to know something and of common pitfalls.
Lastly (for now), we should work to make it socially unacceptable for academic and professional organizations to engage in anything outside their narrow purviews. There should be no letters signed by hundreds of doctors/lawyers/chemists/hairdressers on climate change, minority rights or immigration. All the members are encouraged to add their voices to the groups already organized around these topics, or to create new ones if it suits them, *but not as members of their professions*. Not only can no good come of it, it automatically casts their professional opinions into doubt, as well it should. Judges should administer the law, doctors should maintain health, scientists should seek and disseminate the truth. If they're doing anything else at the same time, they're not really doing their jobs.