Frank, this is a valuable reflection on agentic AI and the dangers of over-delegation. But I think the existential risk is not only about AI agents going rogue. We don’t need AGI to crash the commons—what we need to watch is how discourse itself accelerates, distorts, and exceeds containment.
That’s what my Coverton Bandwidth Theory tries to capture. Instead of treating “agency” as the danger, I argue that the real vulnerability lies in the zone of discursive legitimacy, where power is exerted by modulating the speed, reach, and persistence of ideas. When panic cascades across networks, it doesn’t necessarily take an AGI; sometimes a boisterous 12-year-old with the right meme can destabilize the whole global public square.
The framework has three components:
• Discursive Velocity: how fast ideas move and overwhelm institutional buffers.
• Strategic Legibility: how clarity and obfuscation are calibrated to direct attention or confusion.
• Operative Containment: how discourse is constrained through shadow banning, narrative laundering, or soft suppression to prevent “runaway” effects.
Seen this way, emergent or agentic AI is just another actor in the bandwidth. The real danger isn’t a killer robot but the runaway modulation of narratives that outpaces human or algorithmic containment.
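To make the intuition concrete, here is a toy sketch (mine, not part of the theory itself) of the velocity-versus-containment dynamic: an idea's reach is amplified each step by a discursive velocity factor and damped by a fixed institutional containment capacity. All names and numbers are hypothetical illustrations, not measurements.

```python
# Toy model (illustrative only): reach is amplified by "velocity" and
# damped by a fixed "containment" capacity each step. When amplification
# outruns containment, reach grows without bound -- a runaway cascade.

def simulate_reach(velocity, containment, steps=20, seed_reach=1.0):
    """Return the idea's reach over time under a crude velocity/containment model."""
    reach = seed_reach
    history = [reach]
    for _ in range(steps):
        # Containment removes a fixed amount per step; reach cannot go negative.
        reach = max(0.0, reach * velocity - containment)
        history.append(reach)
    return history

# A well-buffered system damps the idea to nothing...
contained = simulate_reach(velocity=1.2, containment=0.5)
# ...while a slightly faster cascade escapes containment entirely.
runaway = simulate_reach(velocity=1.8, containment=0.5)

print(contained[-1])  # decays to zero
print(runaway[-1])    # grows without bound
```

The point of the sketch is the threshold behavior: a modest change in velocity flips the system from self-damping to runaway, which is why buffers sized for yesterday's discourse speed fail quietly.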
If you or your readers are interested, I’ve written it up in more detail here: https://open.substack.com/pub/alkoch55/p/coverton-bandwidth-a-heuristic-of?r=kmlt&utm_medium=ios
This brings up a great point as we look at the use of drones in the public safety field of state and local government. The timing of decisions in many of these public safety instances is life and death, but the due care and liability involved is something insurance companies haven't fully considered in policy reviews.
Decades ago I read several of Isaac Asimov's I, Robot stories, in which he posits the Three Laws of Robotics and explores how they play out in interactions with humans. Am I happy to be living out those fantasies? Let me delegate that question...