What this piece captures well is the scale of the risk. What it only partially captures is the structure of the problem.
The issue is not simply that AI companies are moving too fast or behaving recklessly. It is that decisions with civilizational consequences are now being made by actors who do not—and perhaps cannot—claim authority on behalf of those who will bear the costs.
Anthropic didn’t just discover something dangerous. It decided—on its own—what to build, what to release, what to conceal, and who to include in the response. That is not just a technical governance gap. It is a legitimacy gap.
People are not only worried. They feel increasingly outside the “we” in whose name these decisions are made. And when that happens, the problem is no longer just risk management—it is the breakdown of obligation itself. Why should anyone accept the consequences of decisions made by entities they neither authorize nor meaningfully constrain?
This is why calls for regulation feel both necessary and insufficient. Regulation can allocate responsibility. But it cannot by itself restore the missing condition: a structure in which those who bear the costs recognize the authority making the decisions as acting on their behalf.
Until that problem is addressed, the system will continue to generate not just technological risk, but a deeper and more destabilizing outcome: a growing refusal to accept the burdens imposed by a world no longer experienced as a shared one.
This is superbly put. I liked the article a lot, but this comment adds a missing dimension. I think it also points to a broader problem in US politics--especially on the left, I suspect--which is a tendency to conceive of better policy as both necessary and sufficient to solve our problems. Obviously, I prefer better policy to worse policy, and good policy is necessary, but it isn't sufficient. Cultural responses are necessary too, in order for policy responses to have legitimacy, as this comment emphasizes.
A disjointed and raucous governance structure — one with a long record of mismanaging medicine, education, welfare, pandemics, and immigration, just to name a few — is now supposed to take on a new, highly technical responsibility in an arena it barely understands? That is your proposal?
Before we rush to build a new bureaucracy for AI, we might consider curtailing the ones we already have. Expanding the administrative state in the name of “oversight” has rarely produced competence. It has far more often produced delay, confusion, and the diffusion of responsibility. The idea that the same machinery that struggles with its existing duties should now be entrusted with governing a technology it does not understand deserves far more skepticism than it is getting.
History offers a counterexample worth remembering. Underwriters Laboratories, created by the industries themselves rather than by government mandate, built an extraordinary track record of electrical safety in the United States. It worked because the people with the most at stake — and the most knowledge — were the ones setting the standards. Self‑interest, when properly structured, can be a powerful regulator of other self‑interests.
I am not sure the piece argued for slow bureaucracy to solve this; it asked for mandatory oversight to solve this. Could you mandate that certain tasks be performed by an industry consortium and let them handle it? I'm not sure, but I think that is how we've previously handled things that move too fast for government.
The 2008 global financial crash was largely due to private financial market corporations getting way out over their skis... creating products and practices while dismissing mounting risks. Regulators were either missing or participating.
The 2019 global pandemic was largely due to private big pharma corporations getting way out over their skis... creating products and practices while dismissing mounting risks. Government regulators were either missing or participating.
Now AI is repeating this cycle.
Meanwhile our government demonstrates its competency to keep us safe while half of it focuses on trans rights and forcing us to use pronouns.