Discussion about this post

Allen Zeesman

What this piece captures well is the scale of the risk. What it only partially captures is the structure of the problem.

The issue is not simply that AI companies are moving too fast or behaving recklessly. It is that decisions with civilizational consequences are now being made by actors who do not—and perhaps cannot—claim authority on behalf of those who will bear the costs.

Anthropic didn’t just discover something dangerous. It decided—on its own—what to build, what to release, what to conceal, and who to include in the response. That is not just a technical governance gap. It is a legitimacy gap.

People are not only worried. They feel increasingly outside the “we” in whose name these decisions are made. And when that happens, the problem is no longer just risk management—it is the breakdown of obligation itself. Why should anyone accept the consequences of decisions made by entities they neither authorize nor meaningfully constrain?

This is why calls for regulation feel both necessary and insufficient. Regulation can allocate responsibility. But it cannot by itself restore the missing condition: a structure in which those who bear the costs recognize the authority making the decisions as acting on their behalf.

Until that problem is addressed, the system will continue to generate not just technological risk, but a deeper and more destabilizing outcome: a growing refusal to accept the burdens imposed by a world no longer experienced as a shared one.

John W Dickerson

A disjointed and raucous governance structure — one with a long record of mismanaging medicine, education, welfare, pandemics, and immigration, just to name a few — is now supposed to take on a new, highly technical responsibility in an arena it barely understands? That is your proposal?

Before we rush to build a new bureaucracy for AI, we might consider curtailing the ones we already have. Expanding the administrative state in the name of “oversight” has rarely produced competence. It has far more often produced delay, confusion, and the diffusion of responsibility. The idea that the same machinery that struggles with its existing duties should now be entrusted with governing a technology it does not understand deserves far more skepticism than it is getting.

History offers a counterexample worth remembering. Underwriters Laboratories, created by the industries themselves rather than by government mandate, built an extraordinary track record of electrical safety in the United States. It worked because the people with the most at stake — and the most knowledge — were the ones setting the standards. Self‑interest, when properly structured, can be a powerful regulator of other self‑interests.

