Thank you for writing this. I don't care how many times people call me a luddite, I'm just not willing to permanently atrophy my hard-won cognitive abilities just because it's easy and fashionable.
Edit: the piece's title was changed from "Why I Still Boycott AI" without notice a moment ago, similar to another recent piece whose title switched after it hit my inbox.
I agree with Mr. Kahn that saying "Write me a good essay for Persuasion" would be a bad use of his time (and the readers'), but this comes off like people bragging they don't use a search engine. It's either extremely incurious & uncreative or perhaps just dishonest.
What you spent 5 minutes researching a few years ago you can now spend 1 minute typing into a textbox, receive sources representing a wide gamut of views, and spend the remaining 4:45 reading them! Any question you thought "Ah, I'd like to know that, but not enough to spend 30 minutes on a deep dive" can now be answered in a minute!
Want certain computer tasks done? Ask the robot to step you through using the command line and to teach you the meaning of every step! Want to know if a book in the store is a good fit for you? Ask it to check the reviews to see if they're similar. Curious about the author's past intellectual accomplishments? Just ask. Find a dead link to a paper? Paste the link and context into Claude and it'll find it 9 times out of 10, even if the paper isn't on Internet Archive.
I'm confused why we should be proud to not be able to think of these things. Obviously the tech can't do anything previously impossible, but neither could the dishwasher and I haven't read 1% as many essays boasting the author can't find a use for one.
It feels like there's meat here. There are a zillion dumb use cases for AI, and many of them are apparently not *obviously* dumb, and we should warn people about that. But the fact that dishwashers are a stupid way to cook eggs shouldn't scare us off of using them to wash dishes.
I share almost every concern in this piece -- the extraction model, the outsourcing of thinking, the students who’ve stopped trying because they believe the bot is better than they are. All real problems.
But boycotting consumer AI is precisely backwards on the agency question. AI is already in your spam filter, your search results, your bank’s fraud detection, your phone’s camera. The embedded AI in systems you can’t see or control is where you have the LEAST agency. The consumer tools -- the ones you’re boycotting -- are the one place where you could actually direct it, shape it, and hold it to your values.
The piece assumes a binary: either you exert your own agency, or you turn it over to the machine. But there’s a third option the boycott makes invisible. I’ve been building AI tools where the AI doesn’t replace your thinking -- it holds you accountable to it. “You said these three things mattered this week. None of them have moved.” That’s not outsourcing agency. That’s amplifying it.
A music composition tool where you say “something bittersweet in 7/8” and the AI knows what that means -- then you shape, direct, accept, reject, refine. The composer isn’t less involved. They’re engaging with musical ideas at the level of intent and feeling. And the person who hears music in their head but never learned theory? Their creative agency is created by the tool, not replaced by it.
The version of AI that addresses every concern in this piece -- human agency strengthened, values made explicit and readable, growth over extraction -- is possible. Some of it is already being built. But you’d have to actually engage with AI to find it. Critiquing AI from a boycott is like writing a detailed review of Hawaii having never gone -- based on the assumption that the whole state is Waikiki.
Fabulous essay. One problem, at least for me, is that AI as many people use it makes the world more fake, even fraudulent. The letter one receives is not from anyone. The person sending it has made no effort to put something of himself or herself into connecting with others. Everything has the depth and intimacy of a greeting card.
All participants are cheated. The “writer” loses out on the cognitive satisfaction of crafting for the benefit of others something that comes exclusively from within. Lost is the intellectual development and clarity that comes from the act of writing. We think better when we make the effort to write better.
AI is a typical hyped product in many ways, but it still has a lot of value for people... It's not for everyone, and I think people should not shortcut learning by using AI as a crutch... but Claude and other coding assistants, alongside tools like Google Colab, are fine for writing software and doing a lot of the grunt work in software development, or even for creating digital art! YMMV, as they say!
We don’t need a technology that automates banality and lies. LLMs have their uses, in medicine for instance, but using them to replace work that should leave room for thought and creativity is stupid.
I think I share similar concerns with handing things off to AI. I never let them write for me, unless the audience is another LLM. I don't ask them for personal advice. I don't ask them to plan my vacation.
But I'm having a hard time squaring your arguments, as I understand them, with the obvious fact that automation has substantial benefits. The problem is that I imagine I could write a very similar essay about washing machines, calculators, or using a computer to search for a library book, and the arguments wouldn't change all that much. But I think my life is better with those things, not worse, and I don't feel the same deep worries about handing tasks off to those machines as I do with AI. I don't mean to suggest there are not important differences between those things and LLMs, but I think there's an additional argument here that's needed to show why the reasons for embracing washing machines and calculators do not (in general) apply to LLMs.
That said, it is hard to pinpoint what the difference is, and I'm pretty confused about it. You talk about turning oneself over to the AI, and you can't really do that with a washing machine, so maybe that's it. But I dunno, it seems like technology in aggregate is fairly general, and we did turn an awful lot of what we used to do all day over to machines over the past few centuries (it's insane how much time people spent just making food and clothing), and I think that's basically good? So maybe the problem is _which_ activities we're handing off to AI. Like, I'm pretty horrified by having machines do all of my deep thinking, life prioritization, and relating to other people for me. But I still don't feel like I have a principled way of drawing a line between "things that are okay to hand off to machines" and "things I need to do myself". For example, I have a hard time explaining why I shouldn't have an LLM do some thinking for me, in addition to all the thinking I normally do.
In response to several comments comparing AI to past technologies: a critical distinction is that past technologies did not put the onus on the user to draw the line in functionality. Your dishwasher does not offer to make your eggs, etc. Gen AI will dutifully attempt any cognitive request - and, more likely, will nudge and shape users to offload more and more cognitive function (attention/clicks/tokens = $). That is a very slippery slope in a very gray area. Who among us could be so confident as to resist the power of the (emerging) almighty algorithm?
"This is in human nature; and we have no reason to think ourselves wiser, or better, than other men." - Alexander Hamilton, The Continentalist No. III, [9 August 1781]
"one by one, sooner or later, according to their native strength and to the good or evil of their wills in the beginning, they fell under the thralldom of ... the One." - J.R.R. Tolkien, The Silmarillion, "Of the Rings of Power and the Third Age"
Sam, I tried my own hand at this (essentially an answer to Matt Shumer's provocative "It's inevitable, so use it, get good at it, build your life around it..." essay): https://whowhatwhy.org/culture/god-damn-ai/.
But I was just too damned angry, so mine reads more like a screed. Thank you for keeping yours, well, persuasive.
I, too, see it as a question of agency, at what is a deceptively existential level. Though I am having no trouble at all, so far, drawing a red line where I want it -- Google Translate, for example, being on this side of it, and Claude on the far side -- it seems clear I'm in a dwindling minority. My daughter's young doctoral program colleagues can't tell her where they ate breakfast without help from ChatGPT.
I'm curious: You write of optimization -- always an unsettling concept in my book -- but not of speed, which I see as a huge part of AI's seductive allure. I ask AI enthusiasts this: Where exactly IS it that we're trying to get, and why are we so hell-bent on getting there faster, even if it means trampling each other along the way? Eyes glazed or bright, *no one* has been able to answer that question cogently and convincingly.
Here are a couple of poems of futile resistance:
"I Asked Alice"
On your mark, get set, go!
Where?
I don’t know!
Does it really matter
As long as we’re the first ones there?
Queried the Mad Hatter.
Racing everywhere
Like the White Rabbit!
I’d look into the habit
But I’ve no time to spare.
"Ad Astra"
So we all headed off to the stars.
Of course, some would falter,
Some would put on bursts of speed,
Only to slip, out of kilter,
Lose their grip, drop their load,
Be buried in need, or trampled
In the scramble up the road.
It depends on many things:
Where you start from, your footwear, your will,
What each day brings, whom you exceed.
None shall arrive, but all,
Having been, having been,
Shall know, when they are old, that they have seen
The stars, painted high, faint, and cold.
PS - Frank Fukuyama was my factotum at Telluride, 1973. Alas, I was a confirmed anti-Straussian.
In the past month, I've found myself using AI a lot more. It has made me uneasy, and reading this column made me uneasier still. It's good food for thought.
Two of the tasks I used it for were work things. I don't regret one of these (writing up something brief in the corporate-speak that I hate trying to write myself anyway). The other was a more complex analysis of two books that I feel a little uneasy about, partly because, as a practical matter, checking it for hallucinations will take up as much time as researching it myself might have. But such analysis/research is also a core function of my job--I really should have done this myself.

Another task, this one personal--figuring out what was actually behind the local car dealership's attempts to get us to sell them our old 2016 car--was really helpful. No uneasiness there at all. It felt like getting help seeing clearly what's basically a legal scam that would have resulted in a car loan we don't need.

Finally, the last task--trying to cut through the noise about diet and get those last 10 pounds off... that's the one where I come closest to feeling maybe that was a little too personal. On the other hand, it was incredibly helpful, and I've lost two pounds using its advice and recipes.
I'm not going to reject it out of hand yet, but I will try to be more thoughtful about how I use it. The comparison with social media--on which I wasted much time and about which I now feel deeply embarrassed--is well taken.