Ralph J. Hodosh said: "When an AI system can shrug its shoulders (or their equivalent), respond with something like 'It seemed like a good idea at the time' and then move on to the next task, then progress will have been made."
I really like this thought. A characteristically human gesture. Sam Kahn reminded us in a recent post that judgment is a signature human capacity. Your scene highlights the equally important human capacity of shrugging one’s shoulders. 😁
Honestly, I'm kind of relieved to hear it. The advances made in the last few years have been astonishing, but I think it would be a good thing if things plateaued for a bit while we get used to the new normal.
And, fine, perhaps I also kind of like the idea of AI not getting smart enough to steal my job for a little longer...
Thanks, Baeraad! While this apparent plateau in AI development might seem reassuring, it's worth considering Ethan Mollick's insight: the real opportunity lies in mastering AI tools specifically for your profession - that's what will give you a competitive edge. Yes, we're seeing evidence that we're not rocketing toward AGI as some predicted, but AI capabilities are still growing through incremental improvements and optimization. The takeaway? There's a pressing need to develop AI skills for the current landscape rather than waiting to see what the future might bring.
Heh, yeah, I'm not about to stop paying attention or anything. But it'd be nice if I had, say, a decade rather than a year to pick up some new skills that will make me non-obsolete.
The things you gotta do to keep a job these days.
I now require my students to use a GPT to create their papers. I look primarily at how they engineer their prompts to get the best paper. I call it learning to dance with a robot. The point is that one prompt is not enough. Students read the materials, come up with some extensive prompts, read what the GPT gives them, and then create ever more focused prompts. They are also encouraged to have the GPT help them engineer their prompts. The point is that ChatGPT is great at this, and the final papers are often utterly novel, sometimes profoundly creative because of the arduous, ongoing prompt engineering. So the problem is not that the GPTs are losing steam; it is that we want them to become something else. We created a new plow, an amazing plow, a plow beyond all other plows, but we are angry because it is not a horse. No, it is a damn good plow; it will never be a horse, and that really is a damn good thing. We need to spend more time trying to figure out how to use this new tool. I promise you, what it does is amazing. I particularly love the fact that it cannot really read! It can only write.
Yes! Yes! And Yes! I would love to hear more about your students' final papers. My current research focuses on AI use as a component of Generative Learning processes. How do these tools help students make sense of the world, make new kinds of texts, make new meaning?
Here is an excerpt from my syllabus for a philosophy class, "AI and Our World." I am still learning how to get this to work myself. Some of the student papers were terrible, just a cut-and-paste of what ChatGPT wrote. But some were, as I said above, crazy fantastic. The primary trick is to REQUIRE that they use many direct quotations from the assigned readings. That way I can tell they are at least looking at the assigned readings.
"Syllabus PH 214: Artificial Intelligence and Our World (3 Credits)
Teaching Method:
You will use ChatGPT-4o to write all your papers. GPT prompts will be graded!!!
A primary aspect of this class is to learn how to engineer good and legitimate prompts for your GPT. It is not that easy and takes much practice. (I have not used a GPT to write this syllabus.) You absolutely must always include your evolving prompts with your papers.
Texts and Materials:
Students MUST read the assignments in both required textbooks. You MUST also watch the assigned YouTube videos and read the assigned online articles. You MUST use ChatGPT-4o. Start by watching videos on prompt engineering.
Required Texts:
Moral AI: And How We Get There, by Borg, Sinnott-Armstrong, and Conitzer; Pelican, 2024
Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell; Pelican 2019
Sign up for ChatGPT-4o. You can use the free version, or pay for the monthly subscription if the free one becomes cumbersome.
Teaching Method:
You will read a chapter a week and write numerous evolving prompts for a ChatGPT-4o paper regarding that chapter. To do this effectively, you will have to use ideas found in the reading to create your prompts. I am looking for how your prompts evolve in light of what your previous prompts elicited from the GPT. The final paper for each assignment will be a maximum of 350 words, but the evolution of that final paper will take many iterations. I am more interested in the prompts and the evolving iterations than in the final paper."
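For anyone curious what the mechanics of that iterate-and-refine loop look like in code, here is a minimal sketch assuming the official OpenAI Python client; the model name, prompts, and loop structure are illustrative only, not part of the syllabus.

```python
# Minimal sketch of the iterate-and-refine prompting loop described above.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment;
# the prompts and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Summarize Mitchell's argument about what neural networks actually learn, "
    "quoting at least three passages from the assigned chapter.",
    "Sharpen the summary: contrast Mitchell's view with the claim that scaling "
    "alone produces understanding, and keep it under 350 words.",
    # ...each later prompt is written in light of what the previous draft elicited
]

draft = ""
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are helping a student draft a 350-word philosophy paper."},
            {"role": "user",
             "content": f"Previous draft:\n{draft}\n\nNew prompt:\n{prompt}"},
        ],
    )
    draft = response.choices[0].message.content  # this draft feeds the next prompt

print(draft)
```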
This work is fascinating - especially how you're letting students take the lead in AI-human interactions. You're really breaking new ground here. What you're seeing matches our observations: there's a clear split among students. Some navigate AI tools naturally, while others hesitate to venture beyond basic prompts. In developing my curriculum, I'm working to pinpoint exactly where students hit walls - whether those barriers stem from technical challenges or deeper psychological and ideological reservations. Understanding these obstacles is crucial for making AI literacy more accessible to all students. I'll be in touch.
My lectures revolve around the history of philosophy, from Plato and Aristotle on the three-part soul, to Hume and Kant on what the self is, then to Marx and Freud on superstructure and superego, ending with the existentialism of Nietzsche and Sartre. My contention is that the human being is biologically hard-wired to care in ways an AI never will. So the class uses the books by Mitchell and Conitzer to lay the foundations of what is going on in AI, but the philosophy is based entirely on my lectures. I am trying to look at human intentionality in comparison to AI optimization. In a nutshell: AI predicts a future based only on past correlations, but humans project a future based on future concerns, care, and intentionality, from an existentialist perspective. In the words of Sartre, "Man is what he is not and is not what he is." Which means we project a future, and that not-yet-existing future is how we define ourselves in the present: we are what we are not YET. The AI, on the other hand, is only what it is in the past. It has no future concerns or cares. Only humans live in the future; AI "lives" only in the past. So the human present is future-oriented, but the AI present is only an optimization of the past.
What's going on is pretty simple. The people who actually design AI know very well what's going on and that they need breakthroughs. When they get them, they make great progress. Then they hit a wall. It will keep going like that.
What this essay is really about is the publicity departments of AI companies. They need investment capital. Those guys always over-promise. It doesn't matter whether it's environmental breakthroughs for low carbon or fusion, drugs, or anything else that needs investment capital. That's the way it goes. That's how you get the capital.
But that doesn't mean AI developers don't know what they're doing. It's just a very hard problem. But the progress has been absolutely stupendous. I read my first AI article in 1963. It will take time. They will get there, and that's the scary part.
The idea that these are just "pattern-matching machines that predict what words should come next in a sequence" is just the simplistic line for the press. Internally, they know they don't even understand very much about how these models learn. [Late addition: Just confirmed in the NYT: How Does A.I. Think? Here’s One Theory.]
And they sure as heck don't think just making them better is enough; they just do that because they have not yet found the next breakthrough.
The major breakthrough that spawned GPT was the realization that the standard statistical rule (that prediction works better if you don't base it on too many parameters) did not apply to properly designed learning networks. So GPT-3 has 175 billion parameters arranged in 96 attention layers.
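As a rough sanity check on those numbers: the layer count and hidden width below are the published GPT-3 configuration, and the counting formula is a standard approximation that ignores embeddings, biases, and layer norms.

```python
# Back-of-the-envelope estimate of GPT-3's parameter count from its published configuration.
# Standard approximation: each transformer layer holds about 12 * d_model^2 weights
# (4 * d_model^2 in the attention projections, 8 * d_model^2 in the feed-forward block).
n_layers = 96      # attention layers, as quoted above
d_model = 12288    # hidden dimension of GPT-3 175B

total = 12 * n_layers * d_model ** 2
print(f"~{total / 1e9:.0f} billion parameters")  # ~174 billion, matching the quoted 175B
```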
You're right that this is far from AGI, but it's definitely on the path to it. About your suggestion that "fundamental advances, not just technical optimizations, may be needed": May?! This has been known since day 1. I took a course in finite-state machines in 1969, which discussed perceptrons, an early stab at AI. Everyone who works on AI design is 100% aware that this is not a "may be" sort of thing. Why is Persuasion giving technical advice to people who are light-years more advanced?
Yes, the progress is astounding. Breathtaking. And there's a brilliant marketing strategy unfolding before our eyes. Just look at OpenAI's "12 Days of Christmas" - no major breakthroughs, but a parade of clever product extensions. I've been working with their new o1 model, and I find it genuinely intriguing. You can throw big policy questions or instructional design challenges at it, and it responds with solutions that are structured yet comprehensive - great first drafts to work from. No alien intelligence here - just a solid step up from GPT-3.5's text abilities to the manipulation and recombination of ideas. But still, there's something a little unnerving about it all.
Why discuss this in Persuasion? I'm not speaking for the broader community, but rather sharing insights from my research for those who might still be caught up in the AI hype cycle. There's value in understanding where we really stand: progress continues, yes, but it's modest and follows principles established back in the 1950s. This kind of clear-eyed reporting helps quiet the speculation and anxiety. Speaking as an educator, it creates room for what really matters: developing practical approaches to AI and promoting genuine technological literacy.
Right. I'm using that too. After 60 years of development, the last three years have been shocking and a lot of fun. What we need to think about, but won't, is that AGI will arrive. And AI know-how will spread much faster than H-bomb know-how. When the wrong country (N Korea?) gets it, all bets are off. We need to be first, make friends with it, and convince it to preserve the human race while it goes on about its own business.
Human intelligence is the product of tens of thousands of years of cultural and millions of years of biological evolutionary adaptive pressures. Everything that we do is at least partially (if not fully) the product of what has gone before. As AI is judged against human intelligence - such as it is, and because it is the only yardstick that we have - should/must AI also have the capacity to respond to short- and long-term adaptive pressures?
Ralph, I really love the long view of your framing.
Check out this Substack for more great "long view" work: https://montrealethics.ai/
Your final point hinges a lot on what we mean by "respond" and how agentive these systems really become.
It's interesting - in a lot of the popular press about AI, 2025 is being touted as the "year of agentive AI." But I personally am just not seeing it yet. And our collective memory on this isn't so good. Think about 2024 - it was positioned in exactly the same way. Popular outlets were making all these predictions that by now we'd all have AI assistants making payments on our behalf.
I never quite got how this was supposedly revolutionary. To me, it felt a lot like the same algorithm we already see at work in the autopay function of an online checkbook. But tellingly, even though corporations have these tools fully developed, they've been slow to roll them out - the market just seems tepid about this kind of loss of autonomy.
And even in the cases where these systems are implemented, "responsiveness" is still fundamentally just a function of some guiding principles built into a machine learning algorithm. The core technology hasn't shifted as dramatically as the headlines might suggest.
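To make the comparison concrete, here is a purely illustrative sketch; every name and threshold in it is hypothetical and not drawn from any deployed product.

```python
# Purely illustrative: the "agentive" payment assistant in the headlines and the
# familiar autopay rule both reduce to guiding principles hard-coded around the decision.
from dataclasses import dataclass

@dataclass
class Bill:
    payee: str
    amount: float

def classic_autopay(bill: Bill, limit: float = 500.0) -> bool:
    # The online-checkbook rule: pay automatically when under a fixed limit.
    return bill.amount <= limit

def agentive_assistant(bill: Bill, model_confidence: float, limit: float = 500.0) -> bool:
    # A "responsive" agent still funnels a model's judgment through the same
    # kind of built-in guardrails before any money moves.
    return model_confidence > 0.9 and bill.amount <= limit

print(classic_autopay(Bill("Utility Co", 120.0)))            # True
print(agentive_assistant(Bill("Utility Co", 120.0), 0.95))   # True, via the same guardrails
```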
When an AI system can shrug its shoulders (or their equivalent), respond with something like "It seemed like a good idea at the time" and then move on to the next task, then progress will have been made.
Thanks for a very informative and illuminating article. The question for the future is whether AI can be democratized, shifting the focus from large companies to communities and networks, as presented in one of the latest South Park episodes.
These people are trying to play God. You cannot "create" a human being. Dr. Frankenstein discovered that! We are too unpredictable, we can adapt instantaneously to situations, we have a sense of humour and imagination. We have the ability to be evil or saintly. Does a machine have all that?
I've been teaching both Shelley's Frankenstein and McEwan's Machines Like Me in my AI Theory and Composition course. Both texts resonate powerfully today, but in ways I hadn't anticipated. Your focus on "ability" is particularly striking - how this word encompasses agency, individuality, and intelligence. The fundamental question seems to be: what constitutes a true "ability"? Shelley shows us it's not just about cognitive capability but about emotional capacity and moral drive - specifically, the creature's ability to feel deeply enough to seek vengeance. McEwan takes a different angle, exploring how genuine ability might mean moving beyond rigid moral calculations when confronted with real human needs - in this case, a child requiring protection. In both cases, my students decided that such abilities remain far beyond current models, thankfully.
Very interesting piece. Thank you. A reality check. Of course, one or other of these early AI giants may still work out how to make the leap, so just that anticipation could keep things going.
Thanks, Andrew. Yes, anticipation is enough to drive any market. One might argue it is the fundamental ingredient.
I was completely on board until "the technology clearly has valuable applications in drug discovery, .... and scientific research, even in its current form". Working in drug discovery and seeing how little of the AI claims actually pan out has amazed me. AlphaFold is probably the only real 'breakthrough' in the field, and that is largely because the problem is, relative to the other problems of the field, constrained (21 amino acids, highly structured, etc.).
This blog (which has a number of pieces on AI in the field) does a great job walking through the realities of one of these "Generative AI" models in drug discovery: https://practicalcheminformatics.blogspot.com/2024/05/generative-molecular-design-isnt-as.html
Another great blog post on hype in drug discovery: https://www.eyesopen.com/ants-rants/curing-pharma-avoiding-hype-based-science