The Peculiar Persistence of the AI Denialists
The history of the world will be split into a pre-AI and a post-AI era. Too many people are still in denial.

Some momentous historical events, like the French Revolution or the demise of communism, come with little warning. Few contemporaries were able to predict that they were about to happen, or to foresee how fundamentally they would transform the world.
Other momentous historical events, like the fall of the Roman Empire or the Industrial Revolution, loudly announce their imminent arrival. Once the first factories in the north of England started to appear, the productive capacities of the spinning jenny and the steam engine were so evident that they augured disruption on a mass scale. Any contemporary observer who treated these technological developments as but one among many interesting social, cultural and political developments taking place in early-19th-century Europe was, in a manner of speaking, so busy studying molehills that he failed to notice the sudden appearance of a towering mountain.
What we are going through at the moment is, at a conservative estimate, analogous to the Industrial Revolution. The rapid emergence of sophisticated models of artificial intelligence has enormous implications for the future of the human race. If they are harnessed for good, they could liberate humans from hard toil, end material scarcity, and facilitate enormous breakthroughs in areas from medicine to the arts. If they are harnessed for ill, they could lead to mass immiseration, cause war or pestilence on an unprecedented scale, or even render the human race obsolete.
But while all of this is as obvious as the significance of the Industrial Revolution should have been in the Manchester of the early 19th century, an astonishing number of people are choosing to keep studying their little molehills. Yes, every fashionable conference has some panel on AI. Yes, social media is overrun with hypemen trying to alert their readers to the latest “mind-blowing” improvements of Grok or ChatGPT. But even as the maturation of AI technologies provides the inescapable background hum of our cultural moment, the mainstream outlets that pride themselves on their wisdom and erudition—even, in moments of particular self-regard, on their meaning-making mission—are lamentably failing to grapple with its epochal significance.
A recent viral essay in The New Yorker provides an extreme, but not an altogether atypical, illustration of the problem. “A.I. is frankly gross to me,” its author, Jia Tolentino, avows. “It launders bias into neutrality; it hallucinates; it can become ‘poisoned with its own projection of reality.’ The more frequently people use ChatGPT, the lonelier, and the more dependent on it, they become.” At least Tolentino has the honesty to acknowledge the astonishing fact that “I have never used ChatGPT.” Though the author considers herself a progressive, her basic attitude to new technologies resembles that of a reactionary 19th-century priest who denounces the railways as the devil’s work—before proudly mentioning that he himself has, of course, never engaged in the sin of riding one.
Mainstream outlets from The New York Times to NPR do have some smart assessments of the state, the stakes, and the likely future of artificial intelligence. But a depressingly large share of the AI coverage you are likely to encounter in those storied publications comes in three graduated forms of what I’ve come to think of as “AI denialism.”
There are the articles which dismiss AI as incompetent, portraying chatbots as perennially prone to hallucinations and incapable of delivering on basic tasks like fact-checking. Then, there are the articles which claim that, far from being truly intelligent, AI is merely a pattern-matching machine, a sort of “stochastic parrot.” And finally, there are the articles which argue that the impact of AI on the economy has been vastly overstated, since its promised productivity gains have not yet materialized.
Hear no progress, speak no progress, see no progress.
“AI is incompetent.”
The first of these three genres constitutes the purest form of denialism, in that, at this stage, it has to assert things which are plainly wrong (as anyone who has actually bothered to use ChatGPT or Claude or Grok or Gemini or DeepSeek would well know). It just about remains true that there are certain specific tasks at which AI chatbots remain surprisingly inept. If you are searching for a particular quote you half remember (as I often do), it is usually a mistake to ask them for help. For if they are unable to locate the true quotation, they somehow cannot resist the temptation to please you by making up a perfect—albeit fake—little soundbite.
But in most fields of endeavor, AI engines now rival all but the most gifted humans. They are astonishingly good at translating texts and at playing chess, at writing poetry and at teaching you new skills, at coding and at making illustrations, at diagnosing a medical condition and at summarizing a technical research paper in the form of a podcast. To dismiss this astonishing box of varied wonders on the basis of a few tasks the technology has not yet cracked is reminiscent of the well-worn joke about two old Jews who go to the circus. An acrobat crosses a high wire on a unicycle while juggling seven flaming torches and playing a virtuoso piece on the violin. Dismissively, one Jew turns to the other and laments: “Paganini, he isn’t.”
“AI is just a stochastic parrot.”
The second genre of denialism is at once more sophisticated and more hollow. It invokes a supposedly profound technical insight about the nature of AI—but ultimately amounts to little more than dismissive sloganeering, shrewdly disguised behind the cover of a half-understood incantation.
According to an influential 2021 paper, the problem with large language models is that they don’t truly understand the world; rather, they are merely parroting back human language based on a stochastic model of which words are usually associated with which other words in the large data sets on which they are trained. Far from being “intelligent,” AI chatbots turn out, upon further inspection, to be mere “stochastic parrots.”
The idea that AI chatbots are merely “stochastic parrots” is rooted in an uncontested truth about the nature of these technologies: the algorithms really do draw on vast data sets to predict what the next word in a text, or pixel in a painting, or sound in a piece of music might be. But however evocative the invocation of this fact may sound, it does not magically make the prodigious abilities of artificial intelligence disappear. If chatbots fulfill, in the blink of an eye, tasks over which skilled humans used to labor for weeks, this advance will transform the world—whether for good or ill—irrespective of how the bots manage to do so.
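To see what this next-word prediction amounts to in its most literal form, consider a deliberately crude sketch. The Python below is a toy of my own devising: the corpus is invented, and real models use neural networks trained on billions of tokens rather than raw word-pair counts. But it captures the “stochastic parrot” idea at its barest: record which words follow which in a body of text, then generate new text by sampling from those observed frequencies.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": record which words tend to follow which
# other words in a corpus, then generate text by sampling from those
# observed frequencies. (Illustrative only; real language models use
# neural networks trained on billions of tokens, not word-pair counts.)

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each word, the words that followed it in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no recorded continuation; stop here
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

The gulf between this toy and ChatGPT lies in scale and architecture, not in the basic objective of prediction; which is precisely why the label, while technically apt, settles so little.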
Nor is the observation that chatbots use stochastic reasoning as disqualifying as it first appears. We are about as far from understanding how the human mind works as we are from understanding what exactly makes ChatGPT tick. But there is good reason to believe that our own astonishing ability to comprehend and manipulate the world is itself rooted in our pattern-matching abilities. Indeed, the pattern-matching that supposedly makes artificial intelligence a mere “stochastic parrot” might actually make it more similar to humans than its high-minded critics want to admit.
In May 1997, Garry Kasparov, then the world’s best chess player, lost to Deep Blue, a vast IBM machine spanning refrigerator-sized cabinets. As he later recounted, he was particularly shaken by one of the machine’s moves. Kasparov expected Deep Blue to play a move which offered a big tactical advantage, even though he could sense, based on his vast experience, that doing so would ultimately weaken its position. But Deep Blue, which was but a giant calculating machine playing out as many scenarios as far ahead as possible, did not fall for the trap. Its move shocked Kasparov because he realized that a machine had arrived at the intuitively best option—something that felt quintessentially human—by mere calculation.
Now, what’s fascinating about today’s chatbots, which vastly outperform Deep Blue, is that they work in a completely different manner. Deep Blue “knew” the rules of chess, which allowed it to play out millions of possible scenarios and arrive at the right conclusion through sheer calculative might. Today’s large language models, by contrast, draw on a vast database of past chess games to predict which move feels right. In other words, the fact that, unlike Deep Blue, ChatGPT operates like a “stochastic parrot” makes it more, not less, similar to the way in which astonishingly accomplished humans like Garry Kasparov play the game.
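To make the contrast concrete, here is an equally crude sketch of the two approaches. Everything in it is invented for illustration (the miniature game tree, the “database” of past games), and neither function resembles a real chess engine; the point is only the difference in mechanism: exhaustive lookahead in Deep Blue’s style, versus pattern-frequency prediction in the style of a language model.

```python
# Two ways a machine can choose a move, in miniature. Both the game
# tree and the "database" of past games are invented for illustration.

# A tiny game tree: our candidate moves map to the opponent's
# replies, and each leaf is a score from our point of view.
TREE = {
    "e4": {"e5": 0.5, "c5": 0.1},
    "d4": {"d5": 0.4, "Nf6": 0.3},
}

def search_best_move(tree: dict) -> str:
    """Deep Blue's way: evaluate every continuation, assume the
    opponent picks the reply that is worst for us, and keep the move
    with the best worst-case outcome (minimax)."""
    return max(tree, key=lambda move: min(tree[move].values()))

# Moves that strong players chose in comparable past positions.
PAST_GAMES = ["e4", "e4", "d4", "e4", "d4", "e4"]

def predict_best_move(past_games: list) -> str:
    """The language-model way: no lookahead at all; pick whichever
    move appears most often in the record of past play."""
    return max(set(past_games), key=past_games.count)

print(search_best_move(TREE))         # "d4": best worst-case score
print(predict_best_move(PAST_GAMES))  # "e4": most common in the data
```

The first method needs the rules and the computing power to enumerate the future; the second needs only a record of the past. Yet, as Kasparov discovered, both can land on a move that looks uncannily like intuition.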
“AI won’t have that much impact, anyway.”
The final form of denialism concerns the economic impact of this technology. When OpenAI released ChatGPT, then powered by GPT-3.5, in November 2022, some observers predicted an immediate and devastating effect on white-collar jobs. A few industries have indeed been hard hit. While economists spent the last decade urging career-minded students to learn coding in order to future-proof their careers, computer programmers have rapidly gone from commanding astonishing wages to being more likely to be out of a job than recent graduates of far less “safe” fields like art history or philosophy. But on the whole, the wholesale disruption of white-collar workplaces is so far conspicuous by its absence—as are the promised gains in productivity.
This makes it tempting to predict that the invention of artificial intelligence will, at least in economic terms, turn out to be much less important than it currently appears. Some distinguished economists argue that AI will hardly affect the job market for the foreseeable future. Others argue that the sky-high valuations of companies like OpenAI will prove to be a giant mistake, with the ever-growing costs of training ever more sophisticated AI models never sufficiently offset by future revenues. In the end, they argue, this moment will be remembered for the irrationality of its collective hubris, just as the dot-com bubble of 2000 is.
The obvious way to rebut this argument is to point out that the dot-com crash turned out to be but a temporary downturn. Yes, plenty of useless companies were vastly overvalued before the bubble burst in March 2000. But the hype about the internet has since proven fully justified. A quarter century on, the NASDAQ stands at roughly four times its pre-crash peak, and tech companies make up a huge share of the world’s stock market capitalization. It has become undeniable that digital technology has fundamentally transformed the world economy.
The deeper way to rebut skepticism about the economic impact of AI is to point out that technology-induced improvements in productivity require a combination of two things: new technologies which can augment or substitute for human labor; and the organizational changes which allow firms to harness them. Technologies which produce incremental increases in productivity in particular industries are often easy to implement, in part because they tend to be the result of concerted efforts by incumbent firms to expedite existing production processes. Technologies which produce large increases in productivity across industries are often hard to implement, in part because they—as in the case of artificial intelligence—usually come from outside the existing industrial structure and require much more fundamental organizational changes before they can be implemented.
Take one example: Studies suggest that AI bots are now as effective as the most skilled doctors at many key medical tasks, such as interpreting sophisticated test results or diagnosing a patient’s condition based on a diffuse set of symptoms. But because of the extremely strict regulations which govern the health care system—and the power of medical professionals, who have every incentive to avoid being replaced—the actual practice of medicine has so far changed little. This tells us less about the long-term potential of new technologies than it does about how slow complex systems are to adapt to them, especially when the salaries of well-connected professionals in highly protected industries are on the line. As in many previous instances of technological disruption, these forces are proving capable of containing the rising tide for a surprisingly long period of time; but it would be foolish to predict that the dam can hold forever.
Ten years ago, the conventional wisdom held that technological advances would imperil many blue-collar jobs, like those of truck drivers. Now, the astonishing advances in text-based AI have convinced many commentators that white-collar professionals, from paralegals to HR managers, will be the first to lose their jobs. But it is worth noting that another very large hammer has not yet fallen. While it has turned out to be more difficult to build robots which can maneuver around the physical world with dexterity than to build chatbots which can perform high-level cognitive tasks, the time will come, in the relatively near future, when machines capable of doing both simultaneously are produced in large numbers. At that point, both white-collar and blue-collar jobs will be imperiled en masse.
This makes me skeptical of the argument that even sophisticated economists now fall back on to downplay the likely impact of artificial intelligence. They like to point out that, despite dire predictions by contemporaries, past technological transformations, from the invention of the printing press to the automation of factory work, did not lead to mass unemployment. While certain categories of workers were indeed decimated by these developments, the same developments also gave rise to wholly new categories. There may no longer be scribes who copy books by hand; but (as the state of my inbox can attest) there are now plenty of marketing professionals who earn their living by pitching authors to podcast hosts. Similarly, the number of coal miners may have plummeted over recent decades; but the United States now has a significantly greater number of professional yoga teachers.
That argument has so far proven correct at every historical juncture. But that is because we have never before in the history of humanity been faced with an embodied form of general intelligence that outshines the vast majority of humans at the vast majority of tasks. Whether the pattern of job-category replacement which has held through past technological innovations can persist in the face of this unprecedented technology remains, at best, an open question. Personally, I suspect that the people now claiming that the impact of AI on the job market will resemble that of the steam engine will suffer the same fate as Malthus, whose theory about the dangers of unchecked population growth proved astonishingly informative in describing every historical moment up until the very juncture at which he wrote—but turned out to be badly wrong about everything that happened after.
I have an admission to make.
Intellectually, I have become deeply convinced that the importance of AI is, if anything, underhyped. The sorry attempts to pretend we don’t stand at the precipice of a technological, economic, social and cultural revolution are little more than cope. In theory, I have little patience for the denialism about the impact of artificial intelligence which now pervades much of the public discourse.
But in practice, I too find it hard to act on that knowledge. I am not a computer programmer, so I don’t have all that many useful things to say about the technology. I am not deeply enmeshed in tech circles, so I struggle to identify the best people with whom to talk about these topics. Most articles we publish in Persuasion don’t touch on AI, and the ones that do often get surprisingly little pickup.
But if there is one thing I have learned in my writing career so far, it is that it eventually becomes untenable to bury your head in the sand. For an astonishingly long period of time, you can pretend that democracy in countries like the United States is safe from far-right demagogues or that wokeness is a coherent political philosophy or that financial bubbles are just a figment of pessimists’ imagination; but at some point the edifice comes crashing down. And the sooner we all muster the courage to grapple with the inevitable, the higher our chances of being prepared when the clock strikes midnight.