Gen Z Is Not as Besotted With AI as You Think
The shortcuts it offers speak more to education’s failures than the technology’s benefits.

If everybody is struggling to figure out what AI really is, or how it will affect our lives, one thing would seem certain: Gen Z is defined by AI the way flappers were defined by the radio or Boomers by the electric guitar. But equating AI with Gen Z misses a great deal of nuance. It misses the more skeptical approach that many young people are quietly taking toward AI, and it misses that AI is exacerbating, not causing, an educational crisis that affects Gen Z more deeply than any particular technology and that demands urgent reform.
Many studies and graphs are flying around depicting how AI is rotting our brains and how students are using it to cheat on assignments, but there's another side to that data, and with it more hope than you might expect for the mental health of Gen Z. Around 50% of teens, Gen Zers, and millennials haven't used AI, and many of them don't trust it. A Substack Post report on 2,000 Substack publishers found that those under age 45 were less likely to use AI: only 38% of them used it, compared with over 50% of those over 45.
I suspect the reasons for this surprising finding are twofold. One is, admittedly, a selection bias: the type of young people who write on Substack are more likely to value independent writing and thinking without AI. The other, though, is that Gen Z has seen the wreckage that social media and technology have wrought on our generation. We feel how our social skills and sense of meaning have atrophied as communities migrated to online spaces, and we can see the way AI could accelerate those harms.
Those over 45 have been around the block. They've sharpened their social and professional skills and, at this stage in their careers, may prioritize the efficiency AI offers. Meanwhile, Gen Z is entering the workforce for the first time, and more of us than you might think recognize that the AI genie will not go back into the bottle. If we take AI shortcuts now, we will likely never go back and learn to write well and think through problems on our own from beginning to end.
So Gen Z tends to divide into three distinct groups, each with its own approach to AI and each, as in a Choose Your Own Adventure story, with a life path that follows from that choice. There are, first, those in Gen Z who don't use AI at all; then there are those who use it prolifically, above all to cheat on assignments, in job interviews, and so on; and there are those who use it only when useful and appropriate, such as to brainstorm or rework a resume. I happen to belong to the group that doesn't use AI at all, but many of my friends use it to help edit resumes and portfolios, and that seems reasonable to me.
The better question, though, is why someone would choose one approach to AI over another, and this, to me, comes down to a deeper issue: how we feel our educational institutions have prepared (or misprepared) us for life in the adult world. If it is a relief to hear that we are not all hopelessly addicted to AI, our attitude toward our education may be a more bitter pill to swallow.
The truth is that, when people complain about the "Gen Z stare," "quiet cracking," and Gen Z being difficult to work with, those issues started long before the workplace. We went through school feeling like we were taught one set of rules in the classroom and another that governed the actual world and workforce. All my life I've surrounded myself with ambitious people, but I noticed that their ambitions often didn't align with the hoops they were expected to jump through. My friends in high school and college were always half-assing assignments and quizzes so they could do something they felt mattered. They were exhausted. They might sleep through math class so they could teach underprivileged children robotics, or skip meetings so they could build their nonprofits.

In that environment, it seemed very natural to look for shortcuts: to, for instance, have AI summarize the assigned reading so they could instead read a book by their upcoming podcast guest. Teachers saw us coming in tired with half-baked work and lowered their expectations accordingly. AI became a marriage of convenience for everybody: students used it in situations, like coursework, where they could skirt the rules easily and felt the work didn't matter, and the AI-generated work was just believable enough to earn them As and open the doors to the elite colleges that could secure their futures.
I went to college to learn, but the same dynamics repeated themselves there. In my classes, I was often left unchallenged. At one point, I worked three part-time jobs and ran three student organizations alongside the maximum number of credit hours. I wouldn't have done all of that if my classes had occupied and challenged me appropriately. I can easily imagine using AI to write a seven-page paper on Aristotle, because no one ever tried to convince me that it mattered. I was bored by it; professors didn't emphasize that the essays were important to our education or that they were excited to read them, and I knew I could easily spend that time elsewhere, building things in the world that I felt mattered.
Frankly, it’s obvious that many teachers and professors don’t believe their own bullshit anymore. It was an open secret that we weren’t getting a good education in college, and the students were not entirely to blame. Everything became about meeting the next deadline, passing the class, and getting the credits. The professors were often buried in deadlines for their latest “publish or perish” project. I don’t think anyone ever asked if I learned anything. While I didn’t use AI, I frequently reused final papers for multiple classes. One professor even had us assign our own grades, proudly telling us that he never rounded down.
The education system hasn’t measured real learning in a long time. In academia, measures have become goals—or in the case of the professor who had us assign our own grades, measures were thrown out entirely. For generations, students have been telling professors what they want to hear, but it’s been getting worse, and AI is hijacking existing flaws in the education system, allowing students to outsource their time-honored ways of bullshitting class to machines that are optimized to generate bullshit.
Many public schools are banning access to AI, but that misses the main point. While it is true that AI may have negative effects on our ability to think clearly and connect with one another, coddling an already-coddled generation, and training us to fear AI rather than use it properly, is just as damaging. I know the impacts of such a culture of fear because I've already seen it happen to Gen Z. I see the way older generations look at us and either feel sympathy for what social media has done to us or shame us for not being able to do things they perceive as simple, like making phone calls. As I look down the barrel of the next technological revolution, I see an opportunity neither to shame nor to coddle the next generation, but to invite them into the broader conversation, to say, in effect: AI is neither a scourge nor a miracle worker; it may help with certain specific tasks as part of a broader workflow, but it is never an excuse to shut off critical reasoning entirely.
AI is disruptive. It's moving much faster than any of us can keep up with. But it's also an invitation to get serious about our measures of success. It's a nudge to create something better. Instead of brainstorming ways to avoid AI use by having students do homework in class or handwrite everything (the new retro trend among teachers), what if we created more meaningful and holistic measures of success? Convince students that their ideas matter; ask them what they think; and listen, not for a correct answer, but for an original one. Teach them how to build research projects and business plans from scratch. Ask them to provide feedback and to revise their work more than once. Take this as the opportunity to see where the education system is failing and to embark on wholehearted reform.
Most of us did not have a say in whether we came of age in a world with social media and AI, but living in that world gives us choices and responsibilities regardless. One such responsibility is determining what ethical AI use looks like for ourselves and our communities of practice. We can never fully predict the ramifications of AI, but if you think Gen Z is difficult to work with because of the culture we were raised in, now is the chance to set standards around AI earlier than we did with social media. We can learn from my generation's experience: not just get an A and move on, but apply what we've discovered and revise our assumptions about what it means to live and work well with AI. We really do feel that the education system hasn't worked for us. AI may not have helped, but it is not the essence of the problem.
Clare Ashcraft writes The Mestiza where she makes observations about identity, psychology, and culture. She is a proud Ohioan.