The Third Humbling of Humanity
What will be left of our self-conception once artificial intelligence becomes better than us at writing poems or making movies?

Recently, an enthusiast of the classics posted four translations of a passage from The Odyssey on X, asking his followers to pick their favorite. Three were by some of the most prominent translators to have undertaken the task of rendering Homer’s beautiful language into accurate and idiomatic English: Robert Fitzgerald, Richmond Lattimore, and Emily Wilson. The fourth was by one of the leading artificial intelligence models: OpenAI’s GPT-4o.
The result of the straw poll was unambiguous: GPT-4o was the clear winner, attracting over twice as many votes as its human runner-up. At least as judged by the taste of the masses, chatbots now outperform the most skilled human translators.
This is no aberration. In a vast array of creative endeavors that were once thought to be quintessentially human, artificial intelligence now rivals the abilities of the most talented artists.
Take poetry. According to a study by Brian Porter and Edouard Machery, two researchers at the University of Pittsburgh, recently published in Scientific Reports, a Nature Portfolio journal, respondents proved incapable of recognizing the provenance of poems they were asked to rate. They failed to distinguish between the works of famous human creators like Sylvia Plath, Walt Whitman, and Lord Byron and imitations generated by a chatbot—ChatGPT’s 3.5 model, which isn’t even at the cutting edge of the technology any longer.
Worse, participants in the study inadvertently betrayed their species. When they were told about the provenance of the poems they were rating, they preferred the human-written originals. When they did not know who wrote which poem, they consistently preferred the AI-written imitations. “People prefer AI-generated poetry to human-authored poetry, consistently rating AI-generated poems more highly than the poems of well-known poets across a variety of qualitative factors,” Porter and Machery report.1
Poetry is closer to being the rule than the exception. Indeed, it is, as the study’s authors note, getting hard to find a human endeavor in which artificial intelligence does not yet outperform humans: “AI-generated images have become indistinguishable from reality. AI-generated paintings are judged to be human-created artworks at higher rates than actual human-created paintings. AI-generated faces are judged to be real human faces at higher rates than actual photos of human faces, and AI-generated humor is just as funny as human-generated jokes.”
One urgent set of questions raised by these astonishingly rapid advances in artificial intelligence concerns the future of white-collar work, especially in artistic fields.
Recently, OpenAI released a model whose ability to generate images is vastly improved. Overnight, the arrival of this pocket illustrator has allowed companies to accomplish many tasks for which they would previously have had to hire designers or illustrators. Even work that once required large teams of models, photographers, and account executives, such as producing a professional ad, may soon be handed over to AI. Will millions of creatives soon be out of a job?
One of the ironies of this moment may be that the unexpectedly rapid advance of AI will invert skills hierarchies we have long taken for granted. A few years ago, conventional wisdom held that new technologies would come for the jobs of truck drivers and construction workers; “low-skilled” members of such menial professions were condescendingly advised to learn how to code. Now, a new reality is dawning: The most “high-skilled” white-collar professionals—including doctors, lawyers, and computer programmers alongside illustrators and copywriters—may prove to be even more replaceable.
These economic transformations are likely to prove deeply disruptive. Depending on how they are handled, they may set humanity free or impoverish us all. And yet, I have over the past weeks been ruminating on an intangible change that seems to me, in its own way, just as important. What will it do to our self-understanding as a species when, one rapidly approaching day, machines are better than us at activities—from writing a poem to composing a song—that we have long thought of as quintessentially human?
Technology has allowed humans to submit much of the planet to their will.
In 1500, the globe contained fewer than five hundred million human beings. The area dominated by human settlements was limited to a small fraction of habitable land. Life expectancy in the richest parts of the world was less than 35 years. About one in two newborns did not get to celebrate their 15th birthday.
Half a millennium later, humans have fundamentally transformed their habitat. There are now over eight billion human beings on Planet Earth. Outside of oceans and rainforests, of deserts and mountaintops, human settlements dominate the vast majority of the earth’s landmass. Even in Africa, which remains by far the world’s poorest continent, the average person can expect to live longer than sixty years. The number of babies who succumb to illness, starvation, or violence is a fraction of what it once was, with fewer than one in twenty newborns dying before their 15th birthday. It is easy to see why some scientists have suggested that we should call this geological age the Anthropocene, the era in which humans have come to be the most important factor in shaping the environment.
And yet, there is a sense in which humanity has, during its five hundred years of triumph, been humbled. For the very same scientific insights that allowed humans to submit the earth to their will also demonstrated to humans that their place in the universe is not nearly as central as they had once assumed. As Sigmund Freud argued in his Introductory Lectures on Psychoanalysis, “Humanity has in the course of time had to endure from the hands of science two great outrages upon its naive self-love.”
The first humbling of humanity came in the 16th century, when Copernicus argued that the universe does not revolve around the earth. This forced humans to grapple, in Freud’s words, with the fact “that our earth was not the center of the universe, but only a tiny speck in a world-system of a magnitude hardly conceivable.”
The second humbling of humanity came in the 19th century, with Charles Darwin’s theories of evolution. As Freud put it, this “robbed man of his peculiar privilege of having been specially created, and relegated him to a descent from the animal world, implying an ineradicable animal nature in him.”
Freud also postulated a third humbling of humanity, one that he considered the most “bitter blow” for our species. His own work on the unconscious, he claimed—not without a healthy dose of ego—sought to prove to “each one of us that he is not even master in his own house, but that he must remain content with the veriest scraps of information about what is going on unconsciously in his own mind.”
The recognition that humans are not fully conscious of all of their thoughts and desires is surely an important one. But it doesn’t bear comparison with the discoveries of Copernicus and Darwin. And in the form in which Freud presented it—with his emphasis on the Id and the Super-Ego, on the unconscious and the interpretation of dreams—it has gone the way of so much other science of its era: discredited by more rigorous research.
The third humbling of humanity is yet to materialize. But rapid advances in artificial intelligence suggest that it may be around the corner. For what should count as the third, and most fundamental, humbling of humanity if not the fact that machines may soon be better than us at the creative endeavors long thought to be most characteristically human?
A lot of smart people are still in denial about the abilities and the significance of artificial intelligence. They point to limitations like its occasional tendency to hallucinate. They claim that, far from being a genuine form of intelligence, it is merely a “stochastic parrot,” a soulless algorithm that is trained to predict the next word in some sentence or the next pixel in some photograph.
It is perhaps inevitable that, threatened with such rapid progress in some of the domains that feel especially intimate to us humans, some people will desperately latch onto any explanation for why these miracles are somehow less than fully real. But sadly, the most commonly adduced arguments ultimately amount to little more than doomed attempts at self-soothing. In a few decades, or perhaps only a few years, they will be widely acknowledged as a form of “cope”—one that stands in the tradition of the priests and moralists who refused to accept that humans might be descended from apes.
Yes, AI models still have some basic limitations, and their tendency to hallucinate is one of them. Especially when you ask ChatGPT for a specific fact or a particular quote, it may make something up to please you. But real though these problems are, they are to be expected at such an early stage of development. For decades, automobiles had motors that refused to start much of the time and broke down every few miles; today, you can drive your car tens of thousands of miles without worrying about its giving up on you. A few short years into their astonishingly rapid development, it is vastly premature to assume that AI chatbots will never solve problems like their tendency to hallucinate.
The insistence that AI chatbots can’t be truly intelligent because they are simply algorithms that are able to recognize and replicate common patterns in the vast troves of data on which they are trained is even less convincing. This objection rests on a fundamental confusion: it conflates how a system functions with what it is capable of doing.
Neuroscientists don’t yet fully understand the human brain. Catchphrases like “neurons that fire together wire together” are, at best, oversimplifications of a still-mysterious process. But it is clear that the miracle of our minds is ultimately produced by an intricate arrangement of physical matter. And yet no one would deny Shakespeare’s intelligence on the grounds that his sonnets were produced by neurons exchanging electrochemical signals in ways we only partially understand. To reduce his creativity to the mechanics of his biology would be to miss the point; it is, in the language of philosophy, a category mistake.
The same holds true for AI. It is true that chatbots are engineered systems and that their operations can, in principle, be described algorithmically. It is also true that the nature of their intelligence is likely different from that of humans; one friend, a distinguished neuroscientist, believes that they are distinct from both mammalian and cephalopod forms of reasoning, making them only the third type of intelligence on earth. But as in the human case, it would be a mistake to think that a description of the physical process involved in their operation somehow diminishes what they are capable of producing.
Words like “intelligence” don’t have a fixed meaning. If it soothes somebody’s anxiety about the rapid progress of AI to insist that an entity that is based on algorithms doesn’t qualify as “intelligent,” that argument is ultimately unfalsifiable. But the challenge to humanity’s self-conception stems from AI’s ability to produce works of art, not from the way in which it does so. Even if we collectively, and arbitrarily, decide to deny intelligence to an entity capable of producing poems as moving as those written by Whitman or Wordsworth, we won’t be able to escape the humbling recognition that algorithms are capable of creative feats we had once believed to be reserved for humans.
This leaves a final refuge.
AI models may soon prove capable of writing poems that even the greatest experts judge to be superior to their human competition. It is likely but a matter of time until some prankster manages to publish a poem written by ChatGPT, but supposedly penned by a human, in The New Yorker. (Or perhaps the culprit will, more prosaically, be an established poet with writer’s block and rent to pay.) And where poems lead the way, short stories and novels and perhaps movies won’t be far behind.
But writing is an act of communication. Activities like poetry are meaningful to humans because they express genuine emotions that AI models, for all of their ability to mimic the verbal form such emotions take, really do seem to lack. When we read the words of Shakespeare or of Whitman, part of the thrill is that they allow us to be on extremely intimate terms with a great mind that lived in a time and place so distant from our own. All of these are reasons why our preference for artistic works we believe to have been created by humans—whether or not they really were—may prove to be permanent.
But will producing such works of art still feel as meaningful then as it did before the rise of AI? Will poets and novelists and screenwriters—and, yes, humble authors of Substack posts—still be as motivated to pour their souls into filling that empty space on their screens if the machines we all carry in our pockets can get the job done better and faster?
Humanity has come to terms with the fact that the earth is not the center of the universe. We have made peace with our animal origins. Surely, we can somehow come to grips with the fact that the machines we created will soon surpass our abilities at some of the artistic endeavors that most make us, well, us. But the blow to our collective self-esteem, when it comes, as it likely soon will, could be the most severe humanity has yet had to endure.
1. One common response is that these results merely reflect the ignorance of most people; those who truly know and care about Homer or about Whitman would surely be able to tell the difference. Perhaps. In Porter and Machery’s study, for example, most respondents freely admitted to having little expertise on the subject. But while more empirical work is needed on the ability of genuine experts to distinguish between AI- and human-authored works of creativity, the existing evidence points in the other direction: respondents with greater knowledge about poetry were no better at recognizing AI-authored content. As the study’s authors note, “none of the effects measuring poetry experience had a significant positive effect on accuracy.”