America’s Pre-Election Polls Were a Hot Mess (Again)
It's past time for pollsters to make a truth-in-polling pledge.
This article is brought to you by American Purpose, the magazine and community founded by Francis Fukuyama in 2020, which is now proudly part of the Persuasion family.
by John J. DiIulio, Jr.
Writing here in late October, I asked whether it was “time for truth in polling.” Pollsters, I suggested, need to become far more transparent about their methods and their limitations. As I explained, they need to admit that pre-election polling was abysmal in 2016 and even worse in 2020; be clear about what constitutes a good poll, and how little even a perfect poll can tell us about the present, let alone the future; acknowledge why good polling has become so very rare; and come clean regarding the novel-to-nutty ways that pollsters now try correcting for faulty samples.
I also suggested that pollsters need to highlight, not hide, polls’ margins of error; disclose all polling results as two ranges, not one spread; confess that there is, strictly speaking, no such thing as a “tied” or “neck-and-neck” polling result; and consider whether “poll aggregators,” like the “polling averages” that Real Clear Politics, The New York Times, and other polling and media outlets publicize incessantly, are too much like market-friendly gimmicks that defy all logic.
Writing now a week after the 2024 national election, I withdraw that suggestion in favor of this outright declaration: it’s time for pollsters to make a truth-in-polling pledge.
The actual 2024 presidential election results are Trump with 312 electoral votes versus Harris with 226, and Trump with about 50% of the national popular vote versus Harris with about 48% of it. While a few pollsters seemed to get close to the actual results, and though a few analysts came close to making spot-on forecasts, overall, the 2024 polling was once again way off, oversimplified, and oversold. For instance, the day before Election Day, the Silver Bulletin gave Kamala Harris a 50% chance of winning the Electoral College and Donald Trump a 49.6% chance of winning it. Likewise, on Election Day eve, The Economist’s polling model gave Harris a 56% chance of victory and Trump a 43% chance of victory.
State-level polling was no better. For instance, a poll released a few days before Election Day by Selzer & Co., a polling organization that is widely considered to be an industry leader, garnered enormous media attention. In 2020, Trump had bested Biden 53% to 45% in Iowa. The poll found that Harris was now leading Trump by 47% to 44% in Iowa. In the end, Trump bested Harris in Iowa by about 56% to 43%; the firm’s principal, Ann Selzer, is “reviewing the data” that figured in the massive miscall.
Here’s the TIPP
To their better-late-than-never credit, in the weeks just before Election Day, many pollsters, including prominent ones, began to come clean about how the actual results might once again deviate more than a little from what the polls would lead one to expect.
For instance, a week or so before Election Day, Nate Silver, whom some consider to be the nation’s top pollster, went so far as to cite his “gut” as telling him Trump would win even though his much-touted model had it as a pure toss-up. And a few days after the election, Silver’s “excuses” for his model’s faulty forecasts were being mocked by another prominent polling “nostradamus.”
But, as I have always believed, most pollsters are well-meaning. In my view, Silver is among the most responsible pollsters in the business; besides, recriminations are not going to cure what ails polling.
I would urge a different tack; namely, a four-point Truth in Polling Pledge (TIPP) that polling firms and media outlets might opt to make and honor.
First, the TIPP would commit the pollster to spotlighting margins of error (MOEs) in ways that make it virtually impossible for media outlets to ignore or obscure. As I explained in my preceding essay, the MOE requires addition and subtraction on both sides of any spread. So, if a poll finds candidate Smith with 51% and candidate Jones with 49%, with a MOE of +/-3.0, the pollster would never report the poll’s single-number spread—in this example, 2 points—without also reporting its two MOE-begotten ranges.
To wit, in this example, the two ranges would be Smith with 54% versus Jones with 46%, and Smith with 48% versus Jones with 52%. Rather than report “It’s Smith up 2 points over Jones,” the pollster would report “It’s a 2-point spread, with a 3.0 MOE, and a range of Smith up by 8 points to Jones up by 4 points.”
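The arithmetic behind those two ranges is mechanical enough to sketch in a few lines. The candidate names and numbers below are the illustrative ones from the example above, not real polling data:

```python
def moe_ranges(smith, jones, moe):
    """Apply the margin of error on both sides of each topline,
    yielding the two extreme readings of the race rather than a
    single spread number."""
    high = (smith + moe, jones - moe)  # Smith's best case
    low = (smith - moe, jones + moe)   # Smith's worst case
    return high, low

high, low = moe_ranges(51, 49, 3.0)
print(high)  # (54.0, 46.0) -> Smith up by 8
print(low)   # (48.0, 52.0) -> Jones up by 4
```

The point of reporting both tuples is that the same 2-point spread is consistent with outcomes from Smith +8 all the way to Jones +4.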
Second, in addition to providing the poll’s dates conducted, sample size, sample type (registered voters, likely voters, etc.), and interview method (telephone, online, etc.), the pollster would also promise to express the sample size as a percentage of all individuals contacted; supply easily accessible information explaining what, if anything, was done to correct for over- or under-representation of given subgroups by this or that weighting method (“raking,” “matching,” “propensity weighting,” etc.); and illustrate how different weighting protocols would have shifted the poll’s results.
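To make the weighting point concrete, here is a minimal sketch of “raking” (iterative proportional fitting), the simplest of the methods named above: respondent weights are repeatedly rescaled until the sample’s weighted marginals match known population targets. The sample, categories, and target shares are all invented for illustration:

```python
def rake(rows, targets, iters=50):
    """rows: list of dicts of respondent attributes.
    targets: {attribute: {category: population share}}.
    Returns one weight per row, summing to 1."""
    w = [1.0 / len(rows)] * len(rows)
    for _ in range(iters):
        for attr, shares in targets.items():
            # current weighted share of each category
            totals = {}
            for wi, r in zip(w, rows):
                totals[r[attr]] = totals.get(r[attr], 0.0) + wi
            # rescale each row's weight toward the target share
            w = [wi * shares[r[attr]] / totals[r[attr]]
                 for wi, r in zip(w, rows)]
    return w

# Toy sample that over-represents college graduates
sample = [
    {"sex": "F", "educ": "college"},
    {"sex": "F", "educ": "no_college"},
    {"sex": "M", "educ": "college"},
    {"sex": "M", "educ": "no_college"},
]
targets = {"sex": {"F": 0.5, "M": 0.5},
           "educ": {"college": 0.4, "no_college": 0.6}}
weights = rake(sample, targets)  # college rows weighted down
```

A TIPP-compliant pollster would show not just that some such procedure was run, but how the toplines move under alternative target choices.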
Third, the TIPP would require a disclaimer: “This poll’s results estimate how people might have voted had the election been held on the day(s) that the poll was conducted. It furnishes no support for inferences regarding what the results might be at any later point(s) in time.”
Fourth and finally, the TIPP would promise to render “poll aggregators” transparent by including with each a “source note” summary specifying the different times, different sample populations, different sample sizes, different weighting protocols, different interview methods, different MOEs, and different question-wordings that were baked into the polling average and its spread number.
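What such a “source note” might look like can be sketched in a few lines. The polls, firms, dates, and numbers below are all invented; the point is only that the metadata travels with the average rather than being averaged away:

```python
# Two hypothetical polls with deliberately mismatched methodologies
polls = [
    {"firm": "Poll A", "dates": "Oct 28-30", "n": 800,
     "sample": "likely voters", "method": "telephone",
     "moe": 3.5, "spread": 2.0},
    {"firm": "Poll B", "dates": "Oct 29-31", "n": 1200,
     "sample": "registered voters", "method": "online",
     "moe": 2.8, "spread": -1.0},
]

# The headline number the aggregator would normally publish alone
average = sum(p["spread"] for p in polls) / len(polls)

print(f"Average spread: {average:+.1f}")
print("Source note:")
for p in polls:
    print(f'  {p["firm"]}: {p["dates"]}, n={p["n"]} {p["sample"]}, '
          f'{p["method"]}, MOE +/-{p["moe"]}, spread {p["spread"]:+.1f}')
```

Printed this way, the reader can see at a glance that the “average” blends different dates, populations, modes, and MOEs, which is exactly what the TIPP asks aggregators to disclose.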
Aristotle Polling
In the first book of The Nicomachean Ethics, Aristotle observes that it is “the mark of the educated person to seek precision in each class of things just so far as the nature of the subject admits.”
Good polling is a blessing in a democracy, but it doesn’t have to be perfect to be so, and it can never be highly precise. Following three straight poor to awful showings on presidential polling (2016, 2020, and now 2024), admitting as much and proceeding accordingly is what any ethically well-meaning and public-spirited pollster should do.
The TIPP, or something like it, can and should become the Good Housekeeping Seal of Approval for pollsters. Embracing and following the TIPP would not guarantee that a poll is well-constructed and well-executed; but it would mean that the pollster has voluntarily tipped us off about how the poll is inherently limited, why its results won’t be precise, and just how imprecise they might be.
If even a few notable polling organizations adopt the TIPP or a TIPP-like commitment, what should one think about pollsters who don’t?
Trust your gut.
John J. DiIulio, Jr. taught American government for 35 years across three different universities and co-authored a leading American government textbook.