The Meaning of Anthropic vs the Pentagon
On the death rattle of the republic.

I.
A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then he had heart surgery, and he was never the same. His soul had been sucked out of him, the life gone from his eyes. There were moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.
He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel.
I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.
A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither is a discrete event. Both are processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.
II.
At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.
Unfortunately, however, I cannot carry out my job as a writer today without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of future we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.
Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.
This brings me to a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. This particular incident took place last week, and it may even be halfway satisfactorily resolved within a day.
I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.
III.
Here are the facts as I understand them: during the Biden administration, the AI company Anthropic negotiated a deal with the Department of Defense (now known as the Department of War, hereafter referred to as DoW) for the use of the AI system Claude in classified contexts. That deal was expanded by the Trump administration in July 2025 (full disclosure: I worked in the Trump administration at that time, though did not work on this deal). Other language models are available in unclassified settings, but until very recently, only Claude could be used for classified work, which is where the actions that involve intelligence gathering, active combat operations, and the like occur.
The deal, first negotiated between the Biden team and Anthropic, included two usage restrictions. First, Claude could not be used for mass surveillance of Americans. Second, Claude could not be used to control lethal autonomous weapons, which are weapons that can identify, track, and kill targets with no human in the loop at any point in the process. When it negotiated the expanded deal, the Trump administration had the opportunity to review these terms. It did, and it accepted them.
Trump officials claim to have changed their minds not so much because they want to conduct mass surveillance of Americans or use lethal autonomous weapons imminently, but because they object altogether to the notion of privately imposed limitations on the military’s use of technology. The administration’s change of heart on the terms of this deal has led it to a policy decision intended to harm or even destroy Anthropic, one of the fastest-growing firms in the history of capitalism, and arguably the current world leader in AI, an industry the administration claims to believe is crucial to our country’s future. But we’ll get to that.
IV.
The Trump administration has a point: it does not sound right that private corporations can impose limitations on the military’s use of technology. Yet of course, thousands of private corporations do just that. Every transaction of technology between a private firm and the military involves a contract (indeed, the companies that do this are called defense contractors for a reason), and these contracts routinely contain operational use restrictions, technological limitations, and intellectual-property restrictions.
In some ways, Anthropic’s terms resemble these traditional examples of privately imposed contractual limits on the military’s use of technology. The company’s position on autonomous lethal weapons, for example, is not one of outright opposition to the use of such weapons but instead a judgment that today’s frontier AI systems are not capable enough to autonomously make decisions about human life or death. This seems similar to existing cases where contractors negotiate stipulations on the use of their technology. The big difference, however, is that Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military. (Think of the difference between “this fighter jet is not certified for flight above such-and-such an altitude, and if you fly above that altitude, you’ve breached your warranty,” and “you may not fly this jet above such-and-such an altitude”). It is probably the case that the military should not agree to terms like this, and private firms should not try to set them.
But the Biden administration did agree to those terms, and so did the Trump administration, until it changed its mind. That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting. Anyone attempting to convince you otherwise is misinformed or lying.
There is no law that says “contractual terms between the military and the private sector can have technical limitations, but not policy limitations,” in part because the line between those things is awfully hard to draw in timeless and universally applicable words (i.e. in a statute). The contract was not illegal, just perhaps unwise, and even that probably only in retrospect. Note that this is true even if you agree with the underlying substance of the limitations. You can support restrictions on mass domestic surveillance and lethal autonomous weapons, but disagree that a defense contract is the optimal vehicle to achieve that policy outcome. The way you achieve new policy outcomes, under the usual rules of our republic, is to pass a law.
Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well, of course, one day we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” As a result, governance has become more and more informal and ad hoc, power ever more dependent on the executive (whose incentive is to jam through every goal he has in as little time as possible, since only the length of his term is guaranteed to him), and the policy vehicles in question ever less suited to the circumstances of their deployment, or to the objectives they are being deployed to accomplish.
The Trump administration says two concerns caused it to change its mind. First, that Anthropic might impose these policy restrictions on it by, say, pulling Claude from military use during active military operations. Second, that Anthropic would impose these policy restrictions in its capacity as a subcontractor for other DoW contractors. In other words, the DoW could come to rely upon some other company’s technology, which is in turn enabled by Claude and governed by the same terms of use that restrict domestic mass surveillance and autonomous lethal weapons (or, in the DoW’s mind, arbitrary new restrictions Anthropic could add at any time). Add to this the reality that the Trump administration perceives Anthropic to be its political enemy (it is probably right about this), and you have a situation in which the military suddenly realizes it is building reliance upon a firm it does not trust.
The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable. They could also have dealt with the above-mentioned subcontractor problem using a variety of tools, such as issuing guidance to contractors to avoid agreeing to terms with subcontractors that constitute policy/operational constraints. If Anthropic refused to compromise on its red lines for the military’s use of AI, the execution of such a policy would mean that Anthropic would be restricted from business with the DoW or any of its contractors in those contractors’ fulfillment of their classified DoW work.
But this is not what the DoW did. Instead, the DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.
War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.
This development strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message the DoW is sending: there is no such thing as private property. If we need to use it for national security, we simply will. The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” to say nothing of the litany of other policy obstacles the government can throw at you.
This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not explicitly been labeled supply chain risks (yes, really; this government has decided that Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.
No entity with meaningful ties to government business would use DeepSeek, simply because the regulatory risk is too high. Now that the government has applied this regulation to an American company, the regulatory risk simply exists for all software. This action, combined with the broader political risk the government has created, will increase the cost of capital for the AI industry. Put more simply, this will mean less AI infrastructure and associated energy generation capacity.
Stepping back even further, what the DoW has done could end up making AI less viable as a profitable industry. If corporations and foreign governments just cannot trust what the U.S. government might do next with the frontier AI companies, it means they cannot rely on that U.S. AI at all. Abroad, this will only increase the mostly pointless drive to develop home-grown models within Middle Powers, and we can probably declare the American AI Exports Program (which I worked on while in the Trump administration) dead on arrival.
In short, I can see only downsides to the Trump administration’s threat to designate Anthropic a supply chain risk, particularly considering the far less costly policy alternatives it could have employed. One gets the sense that the people making these decisions at the DoW are acting neither with strategic clarity nor with any respect for the basic principles of the American republic—and in stark contrast to President Trump’s own stated vision of letting AI thrive in America.
V.
With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious—a gradual descent into madness. It is hard to know at what point ordered liberty itself simply evaporates and we fall into a purely tribal world.
Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done. Even under the narrowest supply chain risk designation, the government has still said that it will treat you like a foreign adversary—indeed, in some ways worse than a foreign adversary—for refusing to capitulate to its terms of business. Simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not deploy one’s property. Each of these is fundamental to our republic, and each was assaulted—not for the first time, but in novel ways—by the Department of War last week. Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe now reigns.
There is something deeper about the damage done by the government, too. The Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be. Our public institutions behaved erratically, maliciously, and without strategic clarity. Our political leaders conveyed little understanding of their own actions, to say nothing of the technology and its stakes. They got off on an extraordinarily bad footing with leading AI companies, and it is hard to imagine their ever recovering, because they do not seem to care about improvement. The machinery of our current republic seems to be in such disrepair that it is hard to see how it lasts. No one knows what comes next, but I strongly suspect that whatever it is will be deeply intertwined with, and enabled by, advanced AI. It is with this that we will rebuild our world, as Tyler Cowen advised in a recent post. As we do, and as we have future debates about the proper locus of control over frontier AI, I encourage you to avoid the assumption that “democratic” control—control “of the people, by the people, for the people”—is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now.
No matter what world we build, the limitations imposed in the law on what we know today as “the government’s” use of AI will be of paramount importance. We really do want to ensure that mass surveillance and autonomous weapons/systems of control cannot be used to curtail our liberties—at least we want to try. So I applaud the AI labs for caring about these red lines. Over the coming years and decades, I expect that our liberty will be in greater peril than many of us comprehend.
Each of us gets to choose which futures we wish to fight against, which we can live with, and which we will fight for. As you make your choices, I suggest ignoring the din of the death rattle and trying to think with independence. Do not process this with the partisan blinders of 20th century mass politics; one way or another, you are entering a new era of institution-building in living color.
Before you get to all that, though, take a moment to mourn the republic that was.
Dean W. Ball served as Senior Policy Advisor at the White House Office of Science and Technology Policy, where he was the primary staff drafter of America’s AI Action Plan. He writes the AI-focused newsletter Hyperdimensional.
A version of this article was originally published in Hyperdimensional.