AI Companies Aren’t Evil. But They Are Reckless.
You can’t build machines that jeopardize civilization without expecting regulators to step in.
Earlier this year, a prominent company with millions of customers announced a major product upgrade—albeit with one little catch.
If this new product was released to the public, the company said, it could be used to disrupt—and perhaps destroy—civilizational infrastructure, from financial markets to transportation systems to power and water utilities.
But fear not! The company hastened to reassure the public that it had the situation under control. The company would decide, on its own terms, what the world needed to know, who should be called in to contain the problem, and how much gratitude the rest of us should feel for being spared a catastrophe we never knew was coming. No public accountability or government intervention required.
This, of course, is the story of Anthropic and its latest AI model.
Anthropic discovered that the model, known as Mythos, could autonomously identify zero-day vulnerabilities—that is, security flaws that software makers don’t know exist—across every major operating system and web browser. Some of the flaws Mythos found were decades old, overlooked by literally millions of human eyes. This was not an intended feature but one the AI seems to have picked up along the way, as Anthropic’s developers rushed to create a more powerful model with better reasoning and coding abilities.
Intentional or not, the capability introduced a substantial new danger to the world. In the wrong hands, Mythos could be a weapon fit for a supervillain—a cheat code for attacking the world’s most critical infrastructure.
And yet, the decision to build such an advanced model was made without any external oversight. No independent body evaluated it. No regulator was notified in advance.
And once the threat was identified, Anthropic decided—alone—what to do about it. After judging Mythos too dangerous for public release, Anthropic created a private consortium made up of handpicked partners like Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia to fix the bugs and ensure Mythos’ safety.
With that all worked out, Anthropic gave policymakers and the public a heads-up on its dangerous new product and its plan to contain it.
This is what passes for AI governance in 2026: a single company accidentally builds an entity powerful enough to pose an existential threat to the digital systems that power modern life, unilaterally decides how to deal with it, and then loops in everyone else.
Except, of course, it’s not at all clear that they’re dealing with it: a few weeks after all this transpired, we learned that Mythos had, in fact, been accessed by unauthorized users. Was catastrophe avoided, or merely delayed? We may yet find out.
Mythos is the clearest evidence yet that our system for developing, assessing, and disseminating powerful AI systems is dangerously dysfunctional.
As tempting as it is to blame this dysfunction on bad actors or rogue tech CEOs, I think it’s something deeper than that: a broken incentive structure. As careless as their actions may sometimes seem, AI developers aren’t being intentionally malevolent—they’re rationally operating within a system that rewards chasing progress now and worrying about consequences later.
The leading AI companies, armed with billions in capital, are all sprinting down the same track, each determined to cross the finish line first. They all have the same motivation: “If I don’t build it, someone else will.”
That logic coexists with a genuine belief that AI may prove to be a transformative force for good, unlocking productivity in ways we have yet to imagine. AI’s potential benefits have been exhaustively documented—whether in addressing climate change, advancing medicine, or simply widening our horizons—but at this stage in the AI era, we all have to acknowledge that AI is accompanied by myriad harms: job loss, manipulative engagement, cognitive offloading, AI psychosis, AI-assisted suicide and murder.
The scale of these challenges demands a response as wide and deep as society itself. One self-interested company, or a hand-picked corporate consortium, can’t be trusted to get it right—the issue is far larger than that. The solution, if we are to reach one, will require public understanding and engagement, and government oversight.
To those who claim AI is too complex, too consequential, or too powerful to govern: you’re wrong. In reality, this argument is—at best—a shoddy defense of the broken incentive structure that produced these dangers.
Because AI is complex, we have a responsibility to comprehend it. And because AI is so consequential, we have a responsibility to govern it. Institutions, policymakers, and regulators have been understandably disoriented by the AI frenzy of the last few years, but they must now rise above the noise and correct the misaligned incentives. That means—yes—establishing a role for government in the AI sphere. Concerns about governmental efficacy are understandable, but government must be meaningfully engaged. There is simply no other manifestation of the will of the public.
We have governed consequential technologies before: automobiles, aviation, pharmaceuticals, nuclear energy, and more. Every one of these industries today operates inside a hard-won system of accountability—a system that took time to build but, crucially, did not kill innovation. It’s time to apply the same rules and accountability structures to AI, and with even more urgency, considering how quickly it is integrating into virtually every aspect of our society.
And the fact is, no meaningful federal regulation of AI currently exists. States have stepped up to fill the void, with 73 AI laws—ranging from protecting kids online to ensuring a human is in the loop when it comes to critical decisions like healthcare—enacted across 27 states in 2025. But states’ reach is increasingly limited, with Trump issuing an executive order in December directed against “excessive state regulation.” The tech industry, meanwhile, has worked to paralyze regulation at every turn, with AI companies pouring money into Super PACs to support tech-friendly candidates and block state regulatory laws.
So what could a meaningful regulatory structure actually look like, assuming the political will for it materialized? Let’s take Mythos as a test case.
Under a more rational governance framework, a tool with society-altering capabilities like Mythos would face mandatory pre-deployment testing by independent evaluators—not by the company selling the product.
There would be standardized public reporting of risks, so that regulators, businesses, and users could make informed decisions rather than relying on what the developer chooses to disclose. There would be real whistleblower protections for employees inside AI labs who see something wrong and want to say so.
And if an AI product caused foreseeable harm after its release, the company that built and deployed it would bear legal responsibility. Liability is what aligns private incentives with public safety. It’s why cars have seatbelts and airbags—not because manufacturers wanted them, but because they knew they would pay the price for cutting corners and because insurers and legislators aggressively pushed safety measures. The same logic applies here.
These two principles—safety and transparency before deployment, and a genuine duty of care to the public—can orient policymakers, companies, and citizens toward what responsible AI actually requires.
None of this is radical. It’s all standard practice for other consequential products. And all of it is overdue.
Mythos is just the latest and most egregious evidence that we cannot keep relying on the judgment of individual companies to stand in for the public accountability structures we’ve so far refused to build around AI. The next threat may not be discovered in time. Or it might come from a company more desperate to succeed in an incentive structure that rewards reckless behavior.
We’ve done this before. We have the tools. It’s time we reclaim our future with principles that will protect us, individually and collectively.
Julie Guirado is the Executive Director of the Center for Humane Technology and oversees its AI Roadmap.

What this piece captures well is the scale of the risk. What it only partially captures is the structure of the problem.
The issue is not simply that AI companies are moving too fast or behaving recklessly. It is that decisions with civilizational consequences are now being made by actors who do not—and perhaps cannot—claim authority on behalf of those who will bear the costs.
Anthropic didn’t just discover something dangerous. It decided—on its own—what to build, what to release, what to conceal, and who to include in the response. That is not just a technical governance gap. It is a legitimacy gap.
People are not only worried. They feel increasingly outside the “we” in whose name these decisions are made. And when that happens, the problem is no longer just risk management—it is the breakdown of obligation itself. Why should anyone accept the consequences of decisions made by entities they neither authorize nor meaningfully constrain?
This is why calls for regulation feel both necessary and insufficient. Regulation can allocate responsibility. But it cannot by itself restore the missing condition: a structure in which those who bear the costs recognize the authority making the decisions as acting on their behalf.
Until that problem is addressed, the system will continue to generate not just technological risk, but a deeper and more destabilizing outcome: a growing refusal to accept the burdens imposed by a world no longer experienced as a shared one.
A disjointed and raucous governance structure — one with a long record of mismanaging medicine, education, welfare, pandemics, and immigration, just to name a few — is now supposed to take on a new, highly technical responsibility in an arena it barely understands? That is your proposal?
Before we rush to build a new bureaucracy for AI, we might consider curtailing the ones we already have. Expanding the administrative state in the name of “oversight” has rarely produced competence. It has far more often produced delay, confusion, and the diffusion of responsibility. The idea that the same machinery that struggles with its existing duties should now be entrusted with governing a technology it does not understand deserves far more skepticism than it is getting.
History offers a counterexample worth remembering. Underwriters Laboratories, created by the industries themselves rather than by government mandate, built an extraordinary track record of electrical safety in the United States. It worked because the people with the most at stake — and the most knowledge — were the ones setting the standards. Self-interest, when properly structured, can be a powerful regulator of other self-interests.