I
Since the release of OpenAI’s ChatGPT, built on the GPT-3.5 large language model (LLM), in November 2022, AI has been all over the news. Media coverage typically communicates the same message: AI is smart and getting smarter. Sometimes stories hype its growing intelligence as ushering in a future utopia, and sometimes they warn against it as leading to doom, à la Skynet. In fact, the hype and doom are two sides of the same “criti-hype” coin, to borrow Lee Vinsel’s term, both conveying that AI will become smarter than human beings.
As a history professor at a state university, my concern is the opposite. It isn’t that AI is becoming smarter than us. It’s that AI is making us—and particularly students—as dumb as it.
To be clear, I’m not saying that AI is dumb in all ways. AI is very smart at the cognitive tasks of calculating and processing. In fact, machines have been good at those things for longer than many realize. I just wrote a book about early 20th century analog computers that outperformed human beings at the tasks they were designed to perform and were regarded at the time as “intelligent.”
But the tasks that AI excels at—calculating and processing—make up only one part of cognition, and cognition makes up only one part of human intelligence. Cognition also includes reasoning and judging, and intelligence also includes emotions. One does not need a degree in neuroscience or psychology to grasp that mathematical genius differs from the ability to read social cues, and that wisdom is not the same as the ability to parse a sentence. The same person can be stupid in one way and gifted in another.
So it is with AI. It can calculate and process superbly well, but it cannot reason, judge, or empathize at all. If it were human, we might call it autistic—but that would be unfair to autistic people, who possess forms of intelligence that AI lacks. Accordingly, the hype-or-doom discourse about AI’s growing intelligence misrepresents both the limits of artificial intelligence and the extent of human intelligence.
In so doing, I fear, this discourse threatens the welfare of the young. It justifies educational practices that stunt children’s cognitive and emotional development and that will worsen the ongoing collapse in educational standards. If you don’t have a school-age child, you may not be aware of just how widely screens and other digital technologies have proliferated in classrooms, or how much money schools have spent on them. You may have heard about Ivy League students unable to read complete books; you may not realize how many college students can barely write. And they are getting worse.
The spread of digital technology in education bears a hefty amount of the responsibility for the collapse in standards. It has caused a “great rewiring” of childhood inside the classroom like the one that Jonathan Haidt has tracked outside the classroom—though it may be more appropriate to speak of a “great unwiring,” given the relative poverty of the neural connections that form during an education that overuses digital technology compared with one that does not. Not only do purveyors of digital technology refuse to take any responsibility for this unwiring, but they make claims for AI that encourage children to undervalue their own humanity and the humanity of others. Low educational standards coupled with a loss of humanistic intelligence are a recipe for social instability.
To understand why AI in education poses such a threat to students—the future leaders of society—it is necessary to understand what young people should be learning in school, and how what they learn should contribute to individual and collective human flourishing. In particular, it is necessary to understand the crucial role of a liberal arts education, as distinct from vocational education, in advancing human freedom and maturity. Actually intelligent responses—rather than artificially intelligent responses—to the prospect of more powerful AI require an intelligent understanding of the problem, so we must begin at the beginning and go back to first principles.
II
The sort of thinking that humanities teachers like me are trying to help our students learn involves both cognitive and emotional aspects. That is, humanistic thought involves a sensibility as much as it does reasoned analysis.
For instance, my own discipline of history requires, on the one hand, the analysis of information that is simultaneously overwhelming, insufficient, contradictory, and misleading. On the other hand, historical thinking involves imaginative empathy: that is, putting yourself in the shoes of people widely separated from you in time and space and trying to understand how and why they made sense of the world as they did.
Especially (but not only) when you find the people you’re studying repugnant, historical empathy requires emotional maturity. You have to understand both their worldview and the feelings they provoke in you. The same ability, incidentally, is required for mature interpersonal relations. As the historian and philosopher of history R. G. Collingwood put it, “If it is by historical thinking that we re-think and so rediscover the thought of Hammurabi or Solon, it is in the same way that we discover the thought of a friend who writes us a letter, or a stranger who crosses the street.”
While the historian’s analysis of information bears a superficial similarity to the calculating and processing of artificial intelligence, it is in fact very different. For one thing, the act of understanding others’ humanity, for the historian, is also a self-conscious act of understanding one’s own. In other words, historical understanding is not mere cognition but meta-cognition: cognition that is aware of itself. In Collingwood’s words, “it may thus be said that historical inquiry reveals to the historian the powers of his own mind.”
By contrast, when AI calculates and processes, it is not revealing its intelligence to itself or conscious of any self-revelation. It is simply executing an algorithm. Through iteration, AI may improve its ability to deliver results that please the human beings using it, but the AI has no awareness that it is improving. It lacks meta-cognition; indeed—Zuckerberg’s branding to the contrary—it lacks meta-anything.
For another thing, the cognitive aspects of historical analysis that appear most similar to the calculating and processing performed by AI cannot be separated from the emotional aspects—the cultivation of a humane sensibility—which lack even a superficial similarity to AI. This inseparability is most evident in the practice that historians call source criticism. Source criticism is predicated on the understanding that sources do not have self-evident meanings and cannot be taken at face value; they require critical analysis. The text of sources can take on different meanings depending on the context. In analyzing sources, accordingly, historians want (or should want) as much knowledge about the contexts in which sources were produced as they can find. Knowledge of context is necessary both to recognize the shared humanity of the people who produced the sources and to evaluate them. (Please note that word “evaluate.”)
These aspects of a humanities education explain why the humanities belong to the liberal arts—the arts of freedom. They teach students their own freedom, lodged in the capacity for critical judgment and empathy, as well as the freedom of others. In so doing, they engage the richest and most complex parts of students’ humanity—their minds and souls—not the narrower and simpler part of their humanity as workers (or consumers).
This contrasts with a vocational education, which approaches students as workers and tries to teach them the nuts-and-bolts of their chosen trade so that they can have a successful career practicing that trade. In electrician school, for instance, students are not encouraged to re-test the ideas of Michael Faraday, but in a liberal arts education, they are, because the goal is to enable them to recognize their own capacity for cognition. Similarly, students don’t read Plato in electrician school, whereas in a philosophy class, they not only read Plato but are encouraged to test his ideas for themselves. Vocational training locks students into a particular career path, while a liberal arts education gives them the foundation to pursue any career they want.
To be clear, the point here isn’t to knock vocational training. Skilled manual labor typically involves skilled intellectual labor—hence the concept of the “mindful hand”—and can be deeply creative. It is good for the soul in ways that manual or intellectual drudge work is not (as well as an always welcome rebuke to pretentious technofeudalist fantasies of dematerialized digital life in the cloud). Moreover, society needs people who know trades, which can provide dignified, meaningful work.
But it is nevertheless important to insist on a distinction between vocational training and a liberal arts education. The latter encourages students to question the world, not for the sake of nihilistic skepticism, but for the sake of discovering their own human freedom and their responsibility to exercise it with due regard for the humanity of others. This is why the wealthy and powerful in the most stratified societies, like white southerners during Jim Crow, try to deprive the victims of the status quo of liberal arts educations and to restrict them to vocational training: the beneficiaries of injustice want cogs in a machine that works well for them, not people who ask questions about the machine. It is also why powerful political and economic actors seek to transform the liberal arts into the “neoliberal arts,” and why believers in liberal democracy should resist them.
III
To explain the threat posed by AI to humanities education, it is also necessary to explain the importance of language therein, and why it differs from the use of language by AI.
By way of illustration, let me continue to use my own discipline. On the one hand, much of the intellectual and emotional labor performed by historians and history students happens at an intuitive, sub-conscious level and never finds expression in words. On the other hand, intuition can be honed, and language plays a fundamental role in honing it. The act of communicating ideas helps to constitute them. And vice versa—ideas and communication feed each other. One puts words to ideas not merely as an act of self-expression but also as an act of self-discovery and self-invention.
As forms of communication, both speaking and writing about ideas are essential, but writing is especially important, because sloppiness and imprecision are easier to detect on the page than by the ear. One can get away with things in speaking, even with a conscientious and skilled interlocutor, that one can’t get away with in writing, because one can simply re-read the text. Given that ideas and the language expressing them shape each other, writing style is not independent of intellectual substance. Changing words and syntax alters, however subtly, the ideas being expressed. Hence the many variants of the same basic insight expressed by many distinguished writers: that they write in order to know what they think. So intertwined are the two activities that I would go further and say that one cannot think well if one cannot write well.
On the surface, the role of language in historical study is no different than its role in computer programming—but the similarity disappears upon scrutiny. Language, whether it consists of words, mathematical notation, or a binary string of ones and zeroes, is inherently abstract and symbolic. It seeks to represent reality, not to reproduce reality. Accordingly, writing about the past comes with all the same promise and peril as any form of linguistic production, including computer code. In both cases, writing involves a dilemma: one must reduce the complexity of reality to something simpler in order to change it in some way deemed desirable, but the act of reduction necessarily does violence to reality. Humanistic intelligence, as expressed in language, cannot escape this dilemma any more than the calculating and processing aspects of intelligence can.
But humanistic intelligence differs from other forms of intelligence in its awareness that reductionism involves trade-offs and in its commitment to minimizing the downside. The calculating and processing aspects of intelligence engage human beings as inanimate objects, like the rest of reality, to be manipulated and controlled. For that type of engagement, abstraction and symbolism don’t involve trade-offs; they offer only upsides—the violence is the point, so to speak. By contrast, humanistic intelligence engages human beings as fellow animate subjects to be understood and respected—as “molecules with minds of their own,” to borrow a phrase from the historian John Gaddis—just as oneself would want to be understood and respected. From this perspective, abstraction and symbolism involve weighty and even frightening downsides, because they do violence to human subjects in all their complexity. Hence humanistic engagement with other human beings, which is simply the golden rule, checks the calculating and processing type of engagement.
Calculating, abstracting, and the impulse to control, which together give rise to our technological instincts and capabilities, are not bad things in and of themselves. They are necessary for our survival and can be good. Some results of them that I personally enjoy are electricity and not dying of cholera. I don’t want to go back to living in caves; I like having a measure of control over the environment, because that control affords the security to pursue the things that, to me, make life most worth living: love, truth, beauty. I don’t agree with romantics that technology and the civilization it makes possible are inherently illegitimate.
But neither are the most technological aspects of our intelligence sufficient for human flourishing. We need the humanistic side of our intelligence to connect with others. Technology facilitates the pursuit of the things that matter most, but it does not enable their achievement. Nor does it address the most difficult questions in life. It grimly amuses me when STEM types think that engineering is the hard part, and once that’s solved, the politics, social relations, culture and so forth will easily fall into place. It’s like they’ve never read a history book. If the hardest questions in life were technological in nature, we’d have had world peace a long time ago. The reason we haven’t is that the humanities aren’t rocket science: they’re harder. Molecules with minds of their own are more complex than other molecules.
Furthermore, while humanistic intelligence won’t deliver peace on Earth any time soon—and indeed cautions against the dangers of utopianism—it nevertheless plays a crucial role in liberal democracy. Where the part of human cognition that involves calculating and processing tends towards a politics of control and domination, the whole of human cognition—which includes reasoning, criticizing, and judging, as well as the emotions—lays the groundwork for a politics of mutual tolerance, if not respect. Liberal democracy requires us to see our fellow citizens as possessing humanity equal to our own; we should not treat them as we would not wish to be treated, meaning that we should not seek to control and dominate them.
I am aware that the humanities education that I have just described is an ideal, not a reality. For reasons explained below, tech corporations specifically and hostile actors outside academia more generally bear much of the blame for the gap between ideal and reality. But external forces don’t deserve all of the blame, and I would not want this essay to be read as an anti-humanistic dodge to let the humanities off the hook for their own institutional complicity. Too many humanities teachers treat students as objects to be ideologically programmed rather than as subjects to be taught a body of scholarly knowledge and educated to think for themselves. Similarly, too many humanities scholars think algorithmically, approaching the human beings they study as objects to be manipulated rather than as subjects to be understood and existing scholarly knowledge as a database to be plundered rather than as the accumulation of painstaking intellectual labor by human beings. By failing to uphold standards of humanistic excellence within our own ranks, we have compromised our institutional authority to defend the humanities just when liberal democracy most needs them.
IV
AI is antithetical to humanistic intelligence. LLMs like ChatGPT process data in order to calculate, probabilistically, which word should follow the previous one, according to a human-produced algorithm and the human-produced data on which they were trained.
That’s it.
They are not understanding, intuiting, thinking, reasoning, or judging: they are processing and calculating. In the inspired phrase of a well-known article on LLMs, they are “stochastic parrots”: stochastic because they calculate probabilities, parrots because they merely mimic rather than understand. Even so-called “reasoning” models like OpenAI’s o1 and DeepSeek’s R1 do nothing of the sort. They too iterate calculations from underlying human-produced data while “explaining” what they’re doing by executing human-produced algorithms. They are forcibly trained to yield “chain of thought” outputs that appear to resemble human reasoning—with undeniably impressive results—but fundamentally they are still just processing and calculating.
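For readers who want to see what this looks like under the hood, here is a deliberately toy sketch in Python. It is not how a real LLM is built (real models use neural networks trained on vast corpora, not raw word counts, and the training text below is my own invented example), but it illustrates the basic move that the “stochastic parrot” label points to: predicting the next word from the statistics of past text, with no grasp of what any of the words mean.

```python
# A toy "next-word predictor": a bigram model built from raw word counts.
# Real LLMs are vastly more sophisticated, but the core move is the same:
# choose the next word according to probabilities learned from past text.
import random
from collections import Counter, defaultdict

# Invented training text, purely for illustration.
training_text = (
    "the historian reads the sources and the historian questions the sources"
)

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def sample_next(word: str) -> str:
    """Pick a next word in proportion to how often it followed `word` in training."""
    counts = follow_counts[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# "Generate" text by repeatedly sampling a continuation. No meaning is involved,
# only the statistics of which word tended to follow which.
word = "the"
output = [word]
for _ in range(8):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

Scale the word counts up to trillions of tokens and replace the counting with a neural network, and the output becomes far more fluent; but the fluency is still, at bottom, probability rather than understanding.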
Anyone who confuses what AI does with human intelligence does not understand AI or human intelligence. Data is not knowledge; executing an algorithm is not reasoning. If you don’t want to take the word of a historian on this, take the word of OpenAI. It acknowledges—indeed, from the perspective of legal liability, trumpets—the limits on artificial intelligence with the following statement in its terms of use for ChatGPT: “You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.”
The key word here is “evaluate.” AI needs human users to evaluate its output because it is incapable of evaluation itself.
The reason it cannot evaluate its own output is that its calculating is ahistorical and anti-humanistic. Evaluation requires source criticism, but AI performs source destruction. Instead of analyzing sources in their original context, it rips them out of their original contexts in order to average them. In other words, it converts sources into data. That conversion alienates—or, to use another Marxist term, commoditizes—sources produced by particular people at particular times in particular places from the context in which the sources were produced, as though the producers never existed. It flattens sources into data the way atom bombs flatten cities into rubble. It does not minimize the violence of interpretation but maximizes it. It does not care about the damage because it lacks the emotional intelligence and moral commitment to recognize the humanity of the people who produced the sources it consumes as data.
It requires little imagination to understand AI’s commoditization of data as communistic, or to detect a certain hypocrisy in tech titans’ denunciations of communism. My own liberalism includes a belief in the concept of private property. I believe in private property so sincerely, rather than cynically, that I don’t reflexively equate it with what’s best for the bottom lines of large tech firms. So I can see that AI’s corporate creators systematically take the output of intellectual labor—sources—which a classical liberal would understand as property and do not pay for it. If a government were taking the intellectual property of tech corporations without compensation, the corporations would shriek loudly about communism. But when corporate communes take intellectual property and turn it into data, they call it free enterprise, declare a proprietary interest in the AI trained on the pirated property, and privatize the profits—while dumping the costs and risks the way chemical factories dump toxic waste into rivers. Welcome to the public sphere as Superfund site, courtesy of Big Tech.
The conversion of sources into data explains both the power of AI and the limits of its power. AI thrives on a large quantity of data, but it has no sense of the quality of data. Such qualitative judgments as programmers may encode into AI belong to the consciousness of the programmers. AI cannot produce qualitative judgments; all it can do is find quantitative proxies for quality, such as, say, the number of citations an article has received. In other words, it tries to make up for its lack of qualitative intelligence through brute quantitative force. In so doing, it rewards virality, which, to put it mildly, is not a reliable proxy for quality. The average of lots of garbage is still garbage.
The inability to distinguish between high-quality and low-quality information is a sign of stupidity, not of intelligence. Again, if you doubt a historian’s word on this, take the word of Richard Feynman, the Nobel Prize-winning physicist. While serving on a commission overseeing the selection of a new math textbook in California, Feynman found that six members of the commission had rated a book that had only blank pages between its covers without bothering to open it, but the commission treated the average of these six ratings as reliable, because six data points seemed like a high number. Of this idiocy, Feynman wrote:
This question of trying to figure out whether a book is good or bad by looking at it carefully or by taking the reports of a lot of people who looked at it carelessly is like this famous old problem: Nobody was permitted to see the Emperor of China, and the question was, What is the length of the Emperor of China’s nose? To find out, you go all over the country asking people what they think the length of the Emperor of China’s nose is, and you average it. And that would be very “accurate” because you averaged so many people. But it’s no way to find anything out; when you have a very wide range of people who contribute without looking carefully at it, you don’t improve your knowledge of the situation by averaging.
In other words, AI has the intelligence of the California math textbook commission (without Feynman, of course).
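Feynman’s point can even be put in numbers. The following sketch, with every quantity invented purely for illustration, simulates the emperor’s-nose survey: each guess is hearsay resting on the same shared assumption, and averaging more of them makes the answer look more precise without bringing it any closer to the truth.

```python
# A simulation of Feynman's emperor's-nose problem. All numbers are invented
# for illustration: nobody may see the emperor, so every guess is the shared
# folk assumption plus some random noise.
import random

random.seed(0)
TRUE_NOSE_LENGTH_CM = 4.8   # the truth that no guesser has access to
FOLK_ASSUMPTION_CM = 9.0    # what people vaguely imagine an emperor's nose to be

def careless_guess() -> float:
    """A guess made without looking: hearsay plus noise."""
    return random.gauss(FOLK_ASSUMPTION_CM, 2.0)

for n in (10, 1_000, 100_000):
    average = sum(careless_guess() for _ in range(n)) / n
    print(f"average of {n:>7,} careless guesses: {average:.2f} cm")

# The averages settle ever more tightly around 9.0 cm, looking more and more
# "accurate," while remaining nowhere near the true 4.8 cm. More data points
# shrink the noise; they do nothing about the shared ignorance.
```

This is, in miniature, what quantitative proxies for quality do: they reward whatever the crowd already assumes, which is why the average of lots of garbage is still garbage.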
V
Relying on AI for writing has the same problems as relying on it for research. To generate prose in response to an essay prompt, the machine relies on the same probabilistic calculation over the same flattened data as it does to generate prose in response to a research question. Its output is the linguistic equivalent of Soylent Green. If good writing consisted solely of good technique, then it might be said that AI is capable of writing well; its writing can be fluid and obey the formal rules of language. But good writing does not consist only of good technique; it consists of using good technique to express oneself, as informed by human cognition and emotions, as precisely and accurately as possible. AI cannot write well because it has no self—no ideas or emotions of its own—to express.
Unless it becomes sentient, I cannot see how it ever will. No matter how much data it devours in its maw or how rapidly it can iterate its calculations, it will always be limited to calculating and processing. If one imagines human intelligence as a radio spectrum, AI picks up the calculating and processing frequencies with a clarity that human beings cannot match. But human intelligence can pick up, with greater or lesser clarity, a vast array of frequencies that AI cannot pick up at all. Its deafness to those frequencies makes it almost comically literal-minded—one might say stupid.
As the writer Andrew Smith explains, “If I say to a dinner guest, ‘Can you pass the salt, please?’ they are probably moving before the question is finished, with multiple processes happening synchronously and no need for further elucidation.” But if the dinner guest were a computer, the request would have to sound something like this:
[W]ould you please access an image of a salt shaker from memory; scan the table for something similar; henceforth identify that object as ‘salt_shaker’; calculate whether salt_shaker is within reach; if no, then wonder why I asked you rather than Beyoncé to pass it; else, calculate whether it is nearer your left or right hand; assess whether there is any cutlery in that hand; if yes, then lay it on your plate; move your hand in the direction of salt_shaker; stop the hand over it; lower hand; grasp shaker; etc….
A form of intelligence that has to be led by the nose to perform a task so simple that a toddler can do it almost without thinking strikes me as having an awfully long way to go before it can achieve intelligence in the human sense.
Perhaps AI’s ability to mimic, through ever-improving calculating and processing, the other aspects of human intelligence may improve to the point that its mimicry becomes impossible to distinguish from the real thing, even for the most humanistically intelligent. Perhaps one day the bland, flat style of AI-generated prose will no longer be blindingly obvious to me in student papers. But fooling me would require students with sufficient sensitivity to language to prompt AI to write in ways that would fool me. The fact that they appear to regard my detection abilities as a form of witchcraft—even after I explain the “magic” to them—does not augur well for their linguistic sensitivity.
VI
Far more likely than AI gaining sufficient intelligence to fool humanistically intelligent experts is that humanistic intelligence will erode to the point where, first, students cannot prompt AI well enough to fool the experts and, second, humanistic expertise itself disappears. The students’ intelligence will go first, and then their teachers’ intelligence will go. Who then will be left to comply with OpenAI’s charge to “evaluate” the output of ChatGPT? The future I see looks a lot more like Idiocracy than Terminator.
The threat from AI is not that it will gain self-consciousness or a soul. The threat is that we will destroy ours—that, bewitched by the appearance of AI-consciousness and AI-souls, we will lose interest in cultivating and honoring our own. In doing so, we will cease to produce students capable even of cheating intelligently, let alone of writing coherent papers themselves. At the same time, we will cease to produce teachers capable of telling the difference. In short, the threat is not that AI will become more like us. The real threat is that we will become more like it.
In that sense, perhaps, AI is nothing new. We have been becoming more like machines for a long time. One of my favorite pieces on AI in education warned against apocalyptic predictions about it destroying students’ ability to write on the grounds that students long ago lost the ability to write. Which is true. I started teaching in 2011, and the broken English of so many of my native English-speaking students already horrified me. A couple of years before the pandemic, I noticed a sharp further decline; I don’t know what caused it, but it coincided roughly with the arrival in college of the first cohort whose middle school years overlapped with the proliferation of smartphones and tablets and whose high school years overlapped with the metastasizing of social media. Needless to say, since the pandemic, their writing (and reading) has only gotten worse. And now there is AI.
I don’t blame my students for the decline. Sure, there’s the occasional bad apple who was given opportunities and failed to take advantage of them. But if you have formed your impressions of college students today from reading horror stories about coddled Ivy Leaguers, put them out of your mind. I like my students, and you would too—overwhelmingly, they are hard-working (albeit often at jobs that come at the expense of their college classes), earnest, and intellectually open-minded. Kids born in New Jersey between, say, 2002 and 2006 were not born with fewer brain cells or weaker work ethics than kids born a decade earlier.
Rather, the educational system has changed, and it has changed in ways that both reflect and feed changes in society and the economy. My students are growing up in language-poorer households with more screens and fewer books. Even when they are with friends in person, they are often looking at a screen together. They are not being made to write poetry or memorize speeches. They are being assigned excerpts rather than whole books. They are not being taught cursive, which would enable them to write more quickly by hand, and they are not being made to write essays by hand, which requires a different (and more demanding) type of cognition than writing with a word processor, because the greater difficulty of moving words and paragraphs around forces writers to have a better sense of their logic chain before they begin writing. Many teachers cannot write or read cursive themselves, and they cannot teach writing or close reading because they do not know how to write well or read closely themselves. My students simply are not getting the healthy linguistic diet that they need in order to access the riches of their human potential.
I feel very angry on their behalf. Society has failed them, and comprehensively so. It has failed to build guardrails that young people desperately need to feel secure—guardrails like a ban on screens in classrooms—and it has failed to supply rigorous humanistic instruction within the guardrails. In short, it has failed to help them develop their own humanity—and then it blames them for acting like robots.
VII
AI is making everything worse. I can see many unsound pedagogical reasons to permit its use in school work. Specifically, I can see how permitting it would appeal to over-burdened, lazy, or untalented teachers who are happy to outsource their own jobs while still collecting their salaries—like the English teacher at my younger daughter’s previous school who provided only a rubric with point values and no written feedback on papers, or the social studies teacher whose homework assignments consisted of filling in the blanks on PowerPoint slides. I can also see many sound corporate reasons to encourage the use of AI in school work. Capitalists gotta capitalize.
But, with perhaps a few exceptions in the sciences, I cannot see a single sound pedagogical reason to permit the use of AI in school before the advanced graduate level. When students use AI too early, they stunt their own intellectual development by getting the machine to perform tasks that they need to learn how to perform themselves. Among other deleterious consequences, this stunting means that they do not develop the capacity to use AI intelligently. Recall the terms of use for ChatGPT: “You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate.” Students cannot evaluate and review the output of ChatGPT when their use of it prevents the development of their capacity for evaluation and review.
Thus the recent executive order from the White House on “Advancing Artificial Intelligence Education for American Youth” (which appeared after I had drafted this essay) has things exactly backwards. The way to foster “AI competency” is not to encourage “early exposure to AI concepts and technology” but to delay exposure to AI until students have acquired the ability to evaluate its output. Rather than AI competency, early exposure fosters AI incompetency. It makes students tools of AI rather than AI a tool of students.
Cultivating the capacity to evaluate the output of AI is what liberal arts teachers do (or should do), and it is immensely difficult. We need all the time we can get. Even the understandable desire to spend class time on teaching students about the limitations of AI has pedagogically damaging opportunity costs—the time I spend on the negative task of warning my students off AI is time I cannot spend on the positive task of teaching them history. I already have to spend a ridiculous amount of time as a remedial English instructor. What would I need to cut in order to explain AI as well?
A fun fact about the people who invented all the digital technology they are now pushing on schools as “helpful” or “necessary” is that they somehow managed to get the education that enabled them to invent whatever they invented without whatever they invented. (Spare a thought here for poor Steve Jobs, whose deprived childhood without Apple products meant that he never amounted to anything.) The notion that kids need a lot of technology in their education so that they become technologically literate or innovative is nonsense—pure, unadulterated nonsense. On the contrary: the more advanced technology gets, the more kids need to learn the things that technology is least likely to be able to do.
Because these things are precisely the things that are taught in a liberal arts education, and especially in the humanities, the growing power of technology makes humanistic education more valuable, not less.
Part of its value lies in what it does for students’ future employment prospects. To maximize the chance of not having their jobs replaced by machines, students need to learn how not to think like machines. I understand perfectly why the rich and powerful would want humanities education restricted to themselves—but I see no reason why they should have their way. Of course governments serving the interests of the powers that be cut “useless” language departments: mastery of language is crucial to coding, and the winners in the current status quo want the losers to be the coded, the ones who take orders and lose their jobs, not the coders, the ones who give orders and keep their jobs. Again, don’t take my word for it. “Besides a mathematical inclination,” according to the programming pioneer Edsger Dijkstra, “an exceptionally good mastery of one’s native tongue is the most vital asset of a competent programmer.” He won a Turing Award.
The greater part of the value of humanistic education will remain how it helps students to develop as whole human beings, with minds and souls, not as future members of the workforce. Both personal maturity and civic responsibility require the ability to evaluate sources, to understand oneself and others, and to communicate one’s ideas and feelings orally and in writing. When AI, like screens more generally, is introduced into education too early or in the wrong contexts, it prevents students from developing their own human intelligence—including the ability to calculate and process, but also the ability to criticize, reason, intuit, and empathize. Students can build these competencies only by putting in the work. If they try to outsource the work to AI—which, as I’ve described, can’t do most of the work anyway, due to its cognitive and emotional deficiencies—they will not build the competencies. Silicon Valley types know this, which is why they won’t let their own children use their products. What kind of human being knows that a product is terrible for children but markets it to them anyway? Not the kind that deserves deference.
VIII
The argument I’ve laid out here isn’t anti-technological but pro-human. I have zero problem with adults who have achieved intellectual and moral maturity using AI and can readily see how it could be beneficial in such hands, from the mundane context of using it to save time by writing the first draft of a grant application that one has the expertise to check for oneself, to the less mundane context of using it to try to find cures for terrible diseases. Maturity means understanding the trade-offs involved in any use of technology.
Children don’t understand the trade-offs—they can’t. Neither, apparently, does the tech industry, which is filled with adults who could understand the trade-offs but choose not to because understanding would be financially inconvenient. Then they have the gall to complain that the educational system isn’t producing enough skilled potential employees, and to stigmatize teachers who resist their products for sound pedagogical reasons as troglodytes.
The reason I don’t want digital technology in my classroom is that I believe the liberal arts have immense value and deserve excellence. I have higher aspirations than turning out “accredited bullshitters,” to borrow a phrase from Michael Ignatieff. I have done my best to maintain standards of reading and writing appropriate for college-level history classes because I believe it betrays my fiduciary obligations to my students and a wide array of interested stakeholders, culminating in the public, to lower them. I have resisted intense and growing pressure to replace traditional exams and papers with smaller multiple-choice quizzes, podcasts, video essays, and role-playing games not out of knee-jerk conservatism, but because I believe the traditional assignments best test students on how well they have learned what they should have learned in a college history class, while the new assignments test them less well.
Not coincidentally, the new assignments are more entertaining and less intellectually demanding. While pretending not to, they lower standards in order to adjust to students’ ever-declining attention spans, sensitivity to language, and persistence in the face of intellectual challenge; they insult students’ intelligence where traditional assignments respect it. Students’ declining preparation for serious intellectual work, in turn, causes them to mistake the insult for respect and the respect for unfairness. Given that screens and social media—and now AI—have contributed so much to this decline, digital technology belongs in classrooms the way drug paraphernalia belongs in classrooms: it doesn’t. Its presence in them reflects addiction to tech products that diminish human intelligence and social cohesion.
AI is not going to make human intelligence obsolete; on the contrary, it is going to make human intelligence more necessary than ever. But human beings might well let AI make them stupid enough to believe that their own intelligence is obsolete. It might make them stupid enough to believe that tech firms have their best interests at heart, or that teachers who inflate grades are showing respect for them, or that the richest way to understand themselves is as drones rather than as human beings. It might make them stupid enough to believe that the little robot in all of us—the calculating, processing, abstracting aspects of our humanity—which is necessary for survival, is sufficient for flourishing.
Here is the future I would like to see. I’d like a restraining order against tech companies to stop them from endangering children. I have zero doubt that digital technology, including but not limited to AI, causes damaging biochemical changes in young people in much the same way that cigarettes and alcohol do. Still more than a medico-legal definition of and response to the problem—because our tendency to think of problems in medico-legal terms is impoverished and arguably pathological—I’d like a moral definition of the problem, in the form of a powerful stigma against digital technology in education and against tech companies for pushing it on children. I’d like that stigma reflected in a ban on the use of AI in humanities classes through college.
If professors want to make exceptions, so be it—it is not impossible that there are legitimate reasons for incorporating AI into pedagogy—but they should think very carefully about the opportunity costs. I’d like that ban communicated at all levels of educational institutions, including the top, and I’d like institutions to possess the moral authority to enforce the ban. So I’d like the many, many academic administrators who will not understand the reasons for the ban fired, on the grounds that they do not belong in institutions the basic mission of which they fail to comprehend. I’d also like faculty members who cannot understand when and why they are lowering standards for students to find different lines of work (note that I target the lack of understanding as the sign of unfitness, not the lowering, in recognition of the immense pressure that faculty members are under to lower standards).
Is this impossible? Probably. I can see how over-determined the problem is, and at least on this issue, I can calculate probabilities as well as any computer. I know that the type of country and type of world I want to live in isn’t likely to come about. But as a human being, I can do more than calculate probabilities. I can imagine a different future, articulate it, strive for it. And I can understand that not doing these things is a sign of roboticism. To settle, even in the freedom of our own minds, for half-measures and rearguard actions is to do no more than our programmers tell us to do.
I don’t want to be a robot, and neither should you.
Kate Epstein is associate professor of history at Rutgers-Camden. Her latest book is Analog Superpowers: How Twentieth-Century Technology Theft Built the National-Security State (University of Chicago Press, 2024).