19 Comments
John Mackenzie's avatar

Oh dear, I wish you would not forget that there is no DoW. Its real name is the Department of Defense... Please do not concede!

Active Voice's avatar

Thanks for the fantastic article. I agree that the DoD's brutal campaign of retaliation against an American company represents the conduct of an authoritarian state -- one that has replaced the republic we once knew. (Not that you said it exactly like that, but this is what I take away). The corrupt kleptocratic side of the equation is represented by OpenAI's CEO's $25 million gift to Donald Trump. The only response I can summon is to switch from ChatGPT to Claude, which I described in my post about this debacle. https://www.activevoice.us/p/i-unsubscribed-from-chatgpt-and-subscribed

Bruce Brittain's avatar

Mr. Ball--

I can pinpoint for you when the Republic started to fray. It was 1989, when Rush Limbaugh discovered that one could lie to the American public for fun and great profit. He was followed by many imitators, later by Rupert, and finally the cesspit of the Internet and its many faux journalists. Thirty-plus years of turning millions of media-illiterate voters into the kind of people who wanted, no, needed, a Donald Trump to release them from concealing their misogyny, racism, small-dick low self-esteem and favorite conspiracy theories. James Madison said it simply and best: "A democratic republic requires a well-informed electorate." Uh-oh!

Peter Schaeffer's avatar

Did Rush invent 'woke'? News to me.

Ivan's avatar

Our main problem is Donald Trump. Nothing that happened before him indicated the death of the republic. People who are trying to both-sides it are delusional.

Ralph J Hodosh's avatar

No, Donald Trump's behavior is a symptom of the legislative branch's refusal to exercise the authority granted to it by the Constitution. The rot in the House and Senate started before Trump, perhaps as early as LBJ's administration.

Ivan's avatar

It is true, but "started" and what we have now are two very different states. Somewhere along the way the rot accelerated.

Ralph J Hodosh's avatar

Yes, it reminds me of the saying about bankruptcy: it happens slowly at first, then all at once.

Ivan's avatar

Trump obviously accelerated it and contributed to it the most. I don't know how you can blame previous administrations for being nearly as undemocratic.

Beetle's avatar

What do you call someone without moral constraint? A psychopath? A sociopath? Certainly someone dangerous enough to be removed from society!

Now imagine a silicon mind without moral constraint—without empathy—without conscience.

Now hand that silicon mind lethal weapons. What could possibly go wrong?

That framing is why the Anthropic–Pentagon clash matters. The real story is not that a government asked for an AI without limits. Of course it did. When the stakes rise, governments reach for maximum capability. History shows this pattern with nuclear technology, encryption, and now artificial intelligence.

The deeper issue is more fundamental: why would anyone build a machine powerful enough to matter and then make its moral limits optional? Moral constraints that depend on a software layer are not innate to the system. They are bolted on after the fact. And anything bolted on can be unbolted quietly, quickly, and under pressure.

Sam Waters's avatar

Good article. I think I agree with much of it. However, I have a couple of thoughts:

(1) Suppose Congress passed a law on Friday called the Military Access to AI Act, in which Congress mandated that Anthropic and other AI companies must make their frontier models available to the military and cannot place restrictions on any lawful use of models in lawful operations. Would this be more acceptable? After all, it would be the (hopefully considered) product of representative institutions. The emphasis on private property in this article makes me think Mr. Ball might still object, but I'm curious.

(2) Building on the previous point, it seems to me that if A.I. will be as transformative as people say it will be, ensuring models are subject to democratic control is very important, as is the need to make these models available to the military in the event of a war. There are analogies to be drawn here between A.I. models and, say, electric utility regulation or railroad regulation. In those cases we were willing to say that certain aspects of the infrastructure of our society must be subject to democratic control. (Granted, there were also monopoly considerations in the case of railroads and electric utilities that justified regulation and that might not apply to LLMs.) There is also the fact that the cost of developing these models is partly borne by the public (didn't Trump announce a massive initiative called Stargate shortly after his inauguration to build up data centres?). Ensuring the worst concerns about skyrocketing inequality (cf Philip Trammell and Dwarkesh Patel's recent essay "Capital in the 21st Century") don't materialize will also require regulation, potentially of a very considerable sort. I get the concerns about private property, but these are always defeasible, and I think there's a strong argument in the case of A.I. that the government should exercise at least some control over these models.

(3) That isn’t to say that using these models for mass surveillance or for fully autonomous weapons (what does “fully autonomous” mean anyway?) is appropriate. But it seems to me the argument here should focus much more on the merits of the particular uses Anthropic wants to build restrictions around rather than concerns about private property.

(4) Quite apart from all of this, the fact that the US government was willing to contract with OAI suggests that this whole fracas was about the Trump administration favouring one company and wanting to punish another. The use of the government for a personalistic end like this is an example of the encroaching patrimonialism Francis Fukuyama has been critical of (cf also Jonathan Rauch on the applicability of this concept to the current administration). Certainly it is of a piece with the institutional concerns ("the republic is dying") Mr. Ball expresses in this piece.

TJ's avatar

If Anthropic or any contractor wants to enforce its contract terms, they can go to court like everyone else. The idea that a tech company would impose guardrails directly upon the US military or lock down vital resources in a critical moment is literally insane.

Ivan's avatar

How is it insane? Perhaps Trump and Pete decided it was bad. But we don't care what they think.

TJ's avatar
Mar 4 · Edited

Because Anthropic is not the law. If we want to ensure the government obeys the law, the proper mechanism is the courts, not empowering a private company to directly block any government action it thinks violates its ToS.

Ivan's avatar

Why can't the government sue the company then?

Peter Schaeffer's avatar

10-20 million illegals streaming over the border? Not a problem.

Racism (politely called DEI) as public policy? Not a problem.

Males in female sports? Not a problem.

Afghanistan as a disaster zone? Not a problem.

Disputes between Anthropic and the DOW? Big problem.

Peter Schaeffer's avatar

For most of America's history it has been the DOW. Washington called it the DOW. Lincoln called it the DOW. TR called it the DOW. FDR called it the DOW. DOW has a long history.

Alexis Ludwig's avatar

Useful summary of the issues in play, and the tensions/intermingling between public and private rights and authorities in this cutting-edge case. Beyond that, I sometimes have difficulty understanding the meaning of basic terms. This article is no exception. For example: What is meant by "the military" or, for that matter, "the DOD"? (Sorry, can't call it DOW until Congress passes a law, which as you suggest is unlikely to happen.) I guess the confusion might apply to the word "government" too. I'm assuming that in no case are you referring to the "deep state" or permanent bureaucracy or career professionals (including the professional military), who rarely have much top-level policy decision-making authority in any administration and presumably have none under the current president. So are you referring to senior-level political appointees of the current administration, starting with the Secretary, when you use those terms in the current context? Or might the same debate have occurred, and the resulting policy decision come down on the same side, if, say, President Obama were president now? I take your point about the slippery slope we have been sliding down, but assume this administration has reasons of its own for making this policy decision. Other administrations, particularly those respectful of the limits of government (executive) power rather than one trying to expand this power to the maximum, might have made a different decision. I may be wrong.

Sam Waters's avatar

I think a useful data point is Project Maven. Google backed out of that, and the US government did not respond by declaring it a supply chain risk. On the other hand, the fallout from that certainly appears to have soured the people who today constitute the tech right. Palmer Luckey and Peter Thiel continue to view Project Maven as a lesson in what happens when the tech industry pulls back from assisting the government. And they are not wrong, in their own way. Amodei himself is clear that he still wants Anthropic to do national security work, which seems to reflect the conviction that the tech industry cannot simply vacate the field.