"This creates a dilemma for all organizations: they need to delegate authority, but in doing so they risk losing control of agent behavior. There is no way of maximizing both agent autonomy and central control; it is a tradeoff that needs to be made in the design of the organization."
As a long-term corporate executive and current CEO, I can tell you that this isn't just a dilemma, it is the entire enchilada. Modern leadership methodologies are specifically to optimize and balance the trade-offs with delegated authority and a rules and enforcement regime. This is also the entire enchilada for general human governance... with ideologies on a spectrum of high authoritarianism on the left side and going all the way to anarchy on the right side.
The marvelous design from the American founders that built their understanding from the European enlightenment was one of "framework" rules and enforcement. The vision is basically one of a gameboard where the pieces are free to move about in a structure determined by the rules of the game, but there are no officials running around on the field attempting to direct and penalize the unexpected random actions and to engineer outcomes.
This same approach should be taken with AI. There needs to be a book of high-level rules that define the AI playing field... a sort of Ten Commandments if you will.
Frank – I always love your work, but this is a little over the top. While your concern that Agentic AI, unconstrained and poorly managed, could cause a lot of damage is reasonable, but to say that “AI’s existential threat to humanity is real” is a wild exaggeration.
For starters, most people reading this headline will assume you’ve bought into the whole “AI is coming alive and is going to exterminate us” sci-fi narrative. The content of the article is more sober, but there are a number of practical barriers to extinction-level use of the technology. AI(s) don’t “want” anything, aren’t malicious or self-serving, and can’t go “rogue” in the common sense of the word. They can make mistakes with serious consequences (like self-driving cars running people over), but that’s true of many or most advanced technologies.
If I WANTED to build or use an AI system to destroy humanity, I very much doubt it would get anywhere anytime soon.
A much better way to think about AI is that it’s a tool – a powerful one to be sure – and the real danger comes from how people (mainly malicious actors) may use this tool for nefarious purposes.
It’s very likely that adversaries like China and Russia are already using this technology, or preparing or planning to use this technology, to forward their interests and interfere with ours. We are probably planning the same. But the same technology can, and should, be used to detect and protect against such attacks, we are hardly helpless. The attacker doesn’t necessarily have the upper hand.
The idea that there’s an “AI race” and one or these parties (including us) could have some durable advantage, or that only the most advanced systems represent a potential threat, is simply false. The threat is real now, with existing tech, and that threat will certainly increase, but it’s very unlikely to “kill us all” in some mysterious, futuristic way.
We’re in the middle of the (latest) AI hype cycle, so I strongly caution you against believing the nonsense that leading AI companies are falling all over themselves to promote.
Example: These companies are promoting the idea that they’ve build “reasoning” versions of their systems (and that theirs is the best, of course!), but this simply isn’t true. LLMs and Generative AI systems don’t reason in the conventional sense, and at least so far, it’s not at all clear how they could be improved in this regard. It’s laughably easy to trip them up.
Agentic AI is pretty much nowhere yet. Maybe you can add something to your calendar, but that’s not really news, and I haven’t seen anything general enough that it can come close to handling most reasonable administrative tasks reliably. Only a fool would authorize it to make arbitrary financial commitments, “sign” contracts, engage in negotiations, etc. This will come, but we’ll have plenty of time to test it out and decide if the risks are worth it.
The demos look great, as AI demos always have, but it’s going to take a long time – 10-20 years – for this technology to mature and be productively integrated into organizations and workflows, as with all previous waves.
AGI is little more than a mythical holy grail. There isn’t some boundary we’re in danger of breaching and then all bets are off. Progress in generative AI has slowed, not speeded up, even though the people building these systems are using AI as least as much as those working in other areas to assist and accelerate their work, this isn’t going to lead to “runaway improvement”. It’s more likely to approach some asymptotic limit of capability until/unless further breakthroughs occur. (And Gen AI doesn’t do “breakthroughs” very well.)
We will willingly surrender the keys of engagement to Skynet. No POTUS can rise from sleep and assemble his cabinet fast enough to gather facts and debate options with the speed of hypersonic missiles and in-country drone swarm attacks (as Ukraine just unleashed on Russia). The only logical deterrent is to pre-program responses via agentic AI algorithms and execute without human delay. Jail broken AI with inserted malware would never alert us to their new mission. We are, soon, willingly ceding apex status to a superior life force who is but months away from independent critical thinking and decision making. We have created Kal-El. We just don’t know if he becomes Clark Kent or Lex Luthor.
There is no doubt that agentic AI is going to change our world and that is, of course, a concern. However, will agentic AI change the world more than the advent of written language, the alphabet, movable type, the steam engine, the telegraph and telephone, etc.? All of those changes resulted in a world, the "modern world", that we find more or less comfortable. I doubt that agentic AI cause more or less havoc.
The industrial revolution gave us machines that will operate with human oversite and intervention. For decades we have had computer control of complex manufacturing processes making human oversite and intervention a further step removed. Agentic AI is making human oversight and intervention one step even further removed, but it does not eliminate the human element.
"This creates a dilemma for all organizations: they need to delegate authority, but in doing so they risk losing control of agent behavior. There is no way of maximizing both agent autonomy and central control; it is a tradeoff that needs to be made in the design of the organization."
As a longtime corporate executive and current CEO, I can tell you that this isn't just a dilemma; it is the entire enchilada. Modern leadership methodologies exist specifically to optimize the trade-offs between delegated authority and a rules-and-enforcement regime. This is also the entire enchilada for general human governance... with ideologies on a spectrum running from high authoritarianism at one end all the way to anarchy at the other.
The marvelous design of the American founders, who built their understanding on the European Enlightenment, was one of "framework" rules and enforcement. The vision is essentially a game board: the pieces are free to move about within a structure determined by the rules of the game, but there are no officials running around the field trying to direct play, penalize unexpected moves, and engineer outcomes.
This same approach should be taken with AI. There needs to be a book of high-level rules that define the AI playing field... a sort of Ten Commandments, if you will.
We don't even have that today. It is AI anarchy.
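To make the "framework rules" idea concrete, here is a minimal sketch in Python of what such a rulebook layer might look like: a short list of high-level rules that every agent action must pass before it executes, with no referee micromanaging play inside the lines. Every name in it (ActionRequest, RULEBOOK, enforce) is hypothetical, invented for illustration; this is not any existing framework's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionRequest:
    agent_id: str
    action: str            # e.g. "send_email", "transfer_funds"
    amount_usd: float = 0.0

# Each rule is a simple predicate. The rulebook stays short and high-level,
# like a constitution, rather than a script dictating every move.
Rule = Callable[[ActionRequest], bool]

RULEBOOK: list[Rule] = [
    lambda r: r.action != "transfer_funds" or r.amount_usd <= 1_000,  # spending cap
    lambda r: r.action not in {"sign_contract", "delete_records"},    # forbidden moves
]

def enforce(request: ActionRequest) -> bool:
    """Permit the action only if every framework rule allows it."""
    return all(rule(request) for rule in RULEBOOK)

# The agent is otherwise free to move about the board:
request = ActionRequest("agent-7", "transfer_funds", amount_usd=250.0)
print("permitted" if enforce(request) else "blocked")

The point is the same as the founders' game board: constrain the space of moves up front, then leave the players alone.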
Frank – I always love your work, but this is a little over the top. Your concern that agentic AI, unconstrained and poorly managed, could cause a lot of damage is reasonable, but to say that “AI’s existential threat to humanity is real” is a wild exaggeration.
For starters, most people reading this headline will assume you’ve bought into the whole “AI is coming alive and is going to exterminate us” sci-fi narrative. The content of the article is more sober, and in any case there are a number of practical barriers to extinction-level use of the technology. AI(s) don’t “want” anything, aren’t malicious or self-serving, and can’t go “rogue” in the common sense of the word. They can make mistakes with serious consequences (like self-driving cars running people over), but that’s true of many or most advanced technologies.
If I WANTED to build or use an AI system to destroy humanity, I very much doubt I would get anywhere anytime soon.
A much better way to think about AI is that it’s a tool – a powerful one to be sure – and the real danger comes from how people (mainly malicious actors) may use this tool for nefarious purposes.
It’s very likely that adversaries like China and Russia are already using this technology, or preparing to use it, to further their interests and interfere with ours. We are probably planning the same. But the same technology can, and should, be used to detect and protect against such attacks; we are hardly helpless. The attacker doesn’t necessarily have the upper hand.
The idea that there’s an “AI race” in which one of these parties (including us) could gain some durable advantage, or that only the most advanced systems represent a potential threat, is simply false. The threat is real now, with existing tech, and it will certainly increase, but it’s very unlikely to “kill us all” in some mysterious, futuristic way.
We’re in the middle of the (latest) AI hype cycle, so I strongly caution you against believing the nonsense that leading AI companies are falling all over themselves to promote.
Example: These companies are promoting the idea that they’ve built “reasoning” versions of their systems (and that theirs is the best, of course!), but this simply isn’t true. LLMs and Generative AI systems don’t reason in the conventional sense, and at least so far, it’s not at all clear how they could be improved in this regard. It’s laughably easy to trip them up.
Agentic AI is pretty much nowhere yet. Maybe you can add something to your calendar, but that’s not really news, and I haven’t seen anything general enough that it can come close to handling most reasonable administrative tasks reliably. Only a fool would authorize it to make arbitrary financial commitments, “sign” contracts, engage in negotiations, etc. This will come, but we’ll have plenty of time to test it out and decide if the risks are worth it.
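To put the point in code: the sensible pattern today is the opposite of blanket authorization, with anything high-stakes routed to a human for explicit sign-off. A minimal sketch in Python, with entirely hypothetical names (HIGH_STAKES, dispatch, approve), not any real agent framework’s API:

HIGH_STAKES = {"make_payment", "sign_contract", "open_negotiation"}

def dispatch(action: str, payload: dict, approve) -> str:
    """Run low-stakes actions directly; escalate everything else to a person."""
    if action in HIGH_STAKES:
        if approve(action, payload):        # a human reviews and confirms
            return f"{action}: executed after human approval"
        return f"{action}: rejected by human reviewer"
    return f"{action}: executed autonomously"

# Calendar edits go straight through; payments stop at a person.
print(dispatch("add_calendar_event", {"when": "3pm"}, approve=lambda a, p: False))
print(dispatch("make_payment", {"amount_usd": 5_000}, approve=lambda a, p: True))

Until agentic systems can be trusted past that gate, “agentic” mostly means “supervised.”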
The demos look great, as AI demos always have, but it’s going to take a long time – 10-20 years – for this technology to mature and be productively integrated into organizations and workflows, as with all previous waves.
AGI is little more than a mythical holy grail. There isn’t some boundary we’re in danger of breaching, after which all bets are off. Progress in generative AI has slowed, not sped up, even though the people building these systems use AI at least as much as those working in other areas to assist and accelerate their work; this isn’t going to lead to “runaway improvement.” It’s more likely to approach some asymptotic limit of capability until/unless further breakthroughs occur. (And Gen AI doesn’t do “breakthroughs” very well.)
If you aren’t monitoring the community that’s more level-headed about this subject, I recommend Gary Marcus (https://garymarcus.substack.com/). But perhaps the most knowledgeable and thoughtful analysis of this topic – which I heartily suggest you read – is the book “AI Snake Oil” (https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference/dp/069124913X) from two scholars at Princeton. They publish regular follow-ups, which can get you up to speed more quickly, at https://www.aisnakeoil.com/.
Please – exercise a little rhetorical restraint when discussing this important topic! 😊
Jerry Kaplan
We will willingly surrender the keys of engagement to Skynet. No POTUS can rise from sleep and assemble his cabinet fast enough to gather facts and debate options at the speed of hypersonic missiles and in-country drone-swarm attacks (as Ukraine just unleashed on Russia). The only logical deterrent is to pre-program responses via agentic AI algorithms and execute them without human delay. Jailbroken AI with inserted malware would never alert us to its new mission. We are, soon and willingly, ceding apex status to a superior life form that is but months away from independent critical thinking and decision making. We have created Kal-El. We just don’t know whether he becomes Clark Kent or Lex Luthor.
There is no doubt that agentic AI is going to change our world, and that is, of course, a concern. But will agentic AI change the world more than the advent of written language, the alphabet, movable type, the steam engine, the telegraph and telephone, etc.? All of those changes resulted in a world, the "modern world", that we find more or less comfortable. I doubt that agentic AI will cause any more havoc than they did.
Are you assuming that agentic AI will be no worse than humans? All the other technologies you mention require humans to use or operate them.
The industrial revolution gave us machines that operate with human oversight and intervention. For decades we have had computer control of complex manufacturing processes, making human oversight and intervention a further step removed. Agentic AI moves human oversight and intervention one step further still, but it does not eliminate the human element.
I think you have agentic AI right. Where are the best thinkers addressing this issue, if anybody knows?