Dean Ball served as Senior Policy Advisor at the White House Office of Science and Technology Policy, where he was the primary staff drafter of America’s AI Action Plan. He writes the AI-focused newsletter Hyperdimensional.
In this week’s conversation, Yascha Mounk and Dean Ball discuss the clash between Anthropic and the Department of War over AI usage restrictions, why mass domestic surveillance capabilities make AI governance so challenging, and how to regulate transformative technologies under conditions of radical uncertainty.
This transcript has been condensed and lightly edited for clarity.
Yascha Mounk: So we recorded a really interesting in-depth conversation about the broader philosophical issues on how governments should or shouldn’t regulate AI. And then, you know, this amazing news story broke over the course of the last week with this head-on clash between Anthropic and the Department of War. For listeners who are not in on the details of this, give us a brief summary of what happened. What are the stakes of this fight?
Dean Ball: So the brief summary is that in June 2024, during the Biden administration, Anthropic and the Department of Defense signed a contract for the use of Claude in classified settings—in intelligence gathering and analysis, active combat operations, things like that. That contract had usage restrictions that applied to a number of different things, but the two that are now the subject of discussion have to do with mass domestic surveillance and autonomous lethal weapons. Autonomous lethal weapons are weapons that can autonomously close the kill chain, which is to say they can autonomously identify, track, and kill human targets, with no human intervention whatsoever. That contract was expanded by the Trump Department of Defense in July 2025.
When I worked for the administration—though I will say I had no role in negotiating that deal—it had basically the same restrictions. The terms changed very slightly, but those restrictions did not change. Then in fall 2025, Emil Michael, the Under Secretary of War for Research and Engineering, looked at these contracts for frontier AI. As far as I understand it, he determined that these contracts had usage restrictions that were too onerous. This is not what the previous political officials in the Trump DoD before him had thought; nor is it what the many lawyers who I’m sure signed off on this agreement—under both Biden and Trump—thought. But he decided that these terms were overly onerous, and so, starting several months ago, he sought to renegotiate them. Anthropic, I think, did give ground on some of its red lines, but not on the red lines related to mass domestic surveillance or autonomous lethal weapons.
So rather than just canceling the contract, they have canceled Anthropic as a contractor. They have now designated Claude a supply chain risk. What this means is that no DoD contractor can use Claude—but we don’t know the exact details of what that means. Does it mean they can’t use Claude at all? Does it mean just when they’re doing DoD business, or only particular subsets of DoD business? We don’t exactly know. Those details will probably be available in the relatively near future, but they are not available at the moment I speak to you. That’s kind of where we are.
Mounk: Just to jump in with a few points here—that is a really extraordinary step, right? To say, look, we don’t agree over these usage restrictions, so we’re going to go search for a different commercial partner: that may be wise or unwise, but it would be, you know, a very normal step. To designate Anthropic, a U.S. company, as a supply chain risk—which is a designation usually reserved for companies like Huawei that are effectively controlled by the Chinese state—is really quite extraordinary. It matters a lot whether that just relates to dealings with the Department of War or whether it applies in general, as that would be an even broader escalation. I suspect that we can certainly expect Anthropic to challenge this designation in court, and quite possibly to prevail, given how unprecedented the use of this designation is in this context.
Ball: Yes.
Mounk: I kind of want to get to a couple of the broad implications here. The first is the nature of these use restrictions. We talked a little bit about autonomous weapons and the complete kill chain. What about the fears about domestic political surveillance? Why is that one of the major sticking points here? It’s one that’s been talked about less in the debate over this. How is it that these new AI models facilitate domestic political surveillance, and why is that such a concern?
Ball: Well, there are two things that are worth mentioning here. The first is that it is illegal for the government to directly collect private data about American citizens. So the government can’t, without a warrant or some extraordinary circumstances, wiretap my phone. It can’t put cameras in my house and record what’s going on inside my house. But it can often acquire data from commercial vendors who might have private, sensitive information about Americans, or might have data sets from which private facts can be inferred using analysis. And it can also conduct that analysis.
In other words, doing the analysis on the data is different from actually directly collecting it yourself as the government, and doing the analysis doesn’t count as surveillance for purposes of the relevant national security laws here. So this is something that’s existed for a while—privacy advocates and civil libertarians have talked about these issues for a long time. How does AI change this dynamic? Well, frontier AI means that all of a sudden, you don’t need some expert. I don’t need to pay some data scientist or some intelligence analyst.
Let’s say the government wants to track my movements. Well, in the past, it might’ve been awfully expensive to do that, because why would you pay a human to specifically track the movements of me and my family? It would be quite expensive relative to the intelligence value of my life. But all of a sudden, you get to a world where the government, instead of having a few thousand intelligence analysts, has a few million or tens of millions, because they’re autonomous agents. Well, then all of a sudden the cost is effectively zero. Nothing changed about existing law. Nothing really even particularly changed about existing intelligence practices. All that changed is that AI essentially made expert attention much cheaper than it used to be. That in and of itself enables quite significant potential for surveillance.
Mounk: One way of putting this is that the laws haven’t changed. The laws were put in place for a reason, which is to make it impossible for the government to surveil innocent Americans at scale unless there’s probable cause of some very serious crime, some kind of judicial sign-off, and so on. Now, suddenly, technology just means that the laws, even though they haven’t changed, are really no longer effective for the purpose that they were trying to secure. Obviously that is a very large concern.
Now, that gets to a larger set of things I’ve worried about for the last week. So in this particular fight, I think it’s very easy to be sympathetic to Anthropic. You’ve written quite strikingly about your disquiet—and that’s probably too soft a word—with how the administration has acted. You yourself served in the Trump administration and helped to formulate the administration’s AI policy. And you have said and written that you find the way the administration is treating Anthropic to be very counterproductive, irresponsible, and dangerous to our republican institutions. But we also need to go beyond this current situation and the particular way in which this fight has shaken out to this broader question: Who do we actually want to control these technologies?
My fear is that this is a dilemma that really doesn’t have a very good answer. As we’re seeing at the moment, you don’t want the government to have control over technologies that would, for example, allow it to engage in mass surveillance of ordinary citizens. But of course, contrariwise, you also don’t want a bunch of private individuals to have control over technologies that have huge national security implications—technologies that could, for example, potentially hack into nuclear weapons programs or in other ways impede how we compete with competitor nations and keep our country safe. And another element of this, it seems to me, is that our analogies don’t really work very well. The president of the United States has a nuclear football, and he can wipe out a big part of the world at his whim within 20 minutes. That is a very, very scary and dangerous fact.
But even though the president is able to bring about the destruction of much of the world in that way, nuclear weapons actually aren’t terribly useful for political manipulation domestically, for mass surveillance, for manipulation of public opinion. These new AI tools are potentially very dangerous in that they might help inspire a war if they fire on the wrong target, or if a rocket launch is misinterpreted as an incoming nuclear weapon, or through other kinds of problems. But they’re also really powerful in these domestic ways that previous technologies were not. So the dilemma of AI control is really sharpened in this context.
Ball: Absolutely. And I would go even further. When I think of my current uses of AI chatbots, for lack of a better term, and of coding agents as well—and my current use is probably scratching a very small amount of the surface of what my uses of this technology will be in the fullness of time—even today, I feel that these technologies are a profound part of how I express myself, and thus of how I use my First Amendment rights. A big part of that has to do with the ways in which the models are aligned. So the idea that the government is going to decide how to align the models and is going to be able to monitor my usage of these things directly just seems crazy to me—that such a thing could ever be possible.
It’s also worth noting that this administration doesn’t even believe in that version of AI capabilities improvement. This administration has said, no, we think AI is like normal technology. We don’t think it’s going to progress rapidly in these ways and become so profoundly important that it’s a huge, nuclear weapon-like event. They dispute that characterization of AI, and in some ways I actually agree with them. It’s funny, because that makes this decision even more confusing and erratic: it doesn’t even fit into the broader administration strategy.
Mounk: If you thought that we are on the precipice of superintelligence and that it’s a completely unprecedented technology, then being absolutely unwilling to have any private control of it would make a lot more sense. But if you’re saying, look, this is an important technology with important implications, but it is actually kind of analogous to previous inventions and is not completely abnormal in those ways, then that stance is much harder to justify.
Ball: I think that we are pretty close to something like superintelligence, though the term is always very complicated and means a lot of different things to different people. But whatever we mean by it, we’re on the verge of extremely powerful, world-changing systems. I think we kind of already have them, and I think they should remain in private hands, though of course with the government providing regulation of various kinds. But ultimately I hope it’s pretty light-touch regulation. I think many people will come around to my view if they really think about what this technology will mean and what government ownership of it would mean. Imagine if the government owned your bank, every bank, and you could only ever spend money in ways that the government had complete and total visibility over, and could reach into and control. I think many Americans would agree that you would not practically have liberty in that scenario. And I think that something similar is true here.
Mounk: You write what I think is one of the most interesting newsletters about the intersection of artificial intelligence and public policy. One of the problems of trying to grapple with AI, whether that is from a perspective of political science, public policy, or economics, is how you govern something. How do you think about influencing the development of something whose effects are still so radically uncertain?
That is true both because it is still so hard to figure out, for example, whether AI is currently leading to huge job loss among entry-level professionals or whether it is actually a bust and not really having an impact on the economy. It is all the harder to know ten years from now whether AI is going to look roughly like what it looks like today or whether it is going to be some form of superintelligence that is incredibly able to do things of its own accord, whether good or bad.
Before we get into some of these questions about how to actually govern AI, how do you think about trying to answer some of those questions under these conditions of radical uncertainty?
Ball: One of the things I really like about AI is that it is maybe the most general-purpose technology that has ever been conceived. I think its competition is electricity. It has a very uncertain capabilities trajectory. It is very hard to know exactly what the world with it will look like.
The “positive vision” of a world with AI diffused throughout it is very hard to articulate. AI presents such a wide variety of things that implicate the public interest and public policy that you end up going back to political-theoretic first principles much more than when we are debating the details of telecoms policy or health care subsidies. In those debates, the works of Edmund Burke, of Friedrich Hayek, of philosophers and political theorists generally, do not come up in the way that they do with AI. AI is almost, in some sense, a literary device for thinking about the future, in addition to being a very real technology. So what should the disposition of government be, overall?
My view on this is that, first of all, there are high-level incentives, very deep incentives, in our society. One of them is the market system and the price system. That structures the incentives of everyone in various ways. Another that is really important, on the more protective side, is liability. We have this system of common law liability that has existed for centuries and that allows people harmed by the actions of someone else to seek redress in court, decided by a jury of their peers, if they so choose.
Those two things in and of themselves structure the incentives of every actor in our society in ways that are quite deep and useful. I like to look at those things first and think about what they are going to get me and what they are not going to get me. The price system and capitalism get you aggressive development of the technology. The liability system gets you some harm reduction. It causes big companies to incorporate the idea that if this hurts someone, they will get sued, and so they have to build in safeguards.
We have those two big things. What are the gaps that come in between? There definitely are going to be gaps. How do we identify what those gaps will be? We can try to sit around a table in Washington or in Brussels or, for that matter, in Beijing and speculate about what the harms will be that we need laws to address. We will do that with varying degrees of success. My personal disposition is that most of the time we will not be very successful at that.
Or we can have a fundamentally reactive public policy that looks at harms as they emerge and decides ways to deal with them after they have happened and passes narrowly tailored laws to avert specific kinds of harms that, for whatever reason, are not adequately addressed by existing law or the liability system.
That is my very broad impression. AI is already regulated in many ways. We should rely on the huge base of existing laws that we have before passing a bunch of new, speculative rules. We are usually wrong in our speculations, even the smartest of us. Maybe especially the smartest of us have a tendency to be quite wrong about things. The EU AI Act does not talk about a lot of the things that people are actually worried about with AI today, at least in the American electorate. None of the regulations from a year or two ago addressed those things at all.
I think it is very much a combination of a Burkean view and a classical liberal view: a traditional conservative view, but also a classical liberal openness to change, which maybe Burke did not share.
Mounk: There are a lot of things that are interesting in there, and I want to drill down on some of them in a moment, but let’s stay for a moment at this philosophical level because there is something surprising about invoking, whether it is the classical liberal thinkers or particularly someone like Edmund Burke, in a discussion of this twenty-first-century, cutting-edge technology that is going to transform the world. You have written in a number of ways about how conservative thinkers, whether it is Burke or Oakeshott and others, can help give us the right kind of orientation toward this.
The point that you have just made, which is obviously right, is that your general conception of politics and of the good governance of society is going to give you instincts about how to react to artificial intelligence. There are going to be some people in a more European statist tradition who think: there is a new thing happening. It has lots of potential benefits, perhaps, but also lots of potential risks. Let’s pass a bunch of regulations that go heavy on reining in the risks. The role of the state is to protect its citizens from harm. It is to govern the economy and to make sure that there is a kind of primacy of our collective will over the private economy. So let’s lean toward trying to figure out the ways that we are going to minimize the potential harm and master this.
There is a very different set of attitudes that listeners to this podcast will also instinctively understand, whether they agree with it or not, that is more libertarian. People should have the freedom to do whatever they want unless or until there is very clear proof of harm. Until we actually see AI doing really terrible things, we should be on the light side of regulation.
It seems to me that conservative thinkers have more of an attitude toward continuity and change. They think that there are lots of elements of our society that work for reasons we do not fully understand, and we want to make sure that we do not impose a huge rational design on the world in a way that might stop those things from working. That includes forms of regulation that something like the EU might be tempted to pass.
At the same time, we also want to ensure that change does not engulf everything. We do not want a libertarian instinct that simply says, let’s just see what happens. If there is a potential risk of AI transforming society so completely and so rapidly that we cannot keep track of it, perhaps there is a reason to slow that down a little bit.
So tell us, when you invoke Burke and Oakeshott, what kind of sensibility do you think that should give us about how to respond to this potentially revolutionary technology that is spreading at incredible speed throughout society and has the potential of leading to a degree of change and upheaval not experienced since the first large-scale factories in Manchester in the early nineteenth century, and perhaps not even then?
Ball: I think this is the fundamental tension. It is a tension that exists in my own writing because it is a tension in how I think. There is one part of me that is intrinsically skeptical of change. I like things about my life the way they are. I have what Michael Oakeshott would have called a disposition to be conservative, particularly with respect to what government does. And the reason I am conservative about what government does is that conservatism, done right, is this kind of beautiful and ultimately tragic philosophy. I say tragic in the sense that conservatism is not about resisting change; it is about a disposition toward change, a way of reconciling yourself to the dynamism of reality.
Of course, there is a more libertarian or classical liberal impulse that says dynamism is often good, which is also true, though not always. One thing I ultimately come down on is this: what is worse than creative destruction? Creative destruction has its problems; the second word, after all, is destruction. It is not fun to be destroyed, even if the destruction is creative. But stagnation is worse. Stagnation is much closer to death. From creative destruction, at least, new things can be born. Birth is the thing that we should want. We should want things to be born, and we should want growth. Those things are healthier and better from every disposition; they are morally better, I would say. Stagnation is the worst possible thing, and an inevitable outcome of stagnation is gradual death.
What regulation can do is freeze things in place. It can create red tape. Inside every law that is written about AI is a complex of implicit and explicit assumptions about what AI is, what kind of thing it is, what kind of role it will play in our society, and what is even technologically possible. There are all these assumptions, stated and unstated, that I think AI itself will upend. What concerns me very much is that we will pass laws that freeze in place a status quo with which, if we are honest with ourselves, many of us are already dissatisfied.
Everyone is of the opinion that something has to change, and we have different intuitions about what. To use a specific example, five years ago, in 2020 during the pandemic, a lot of Americans, especially conservatives, got outraged when their kids came home and were on Zoom class and they heard what schools were teaching. They said we urgently need reform in the education system. This is horrible.
A couple of years later, a technology comes around that has obvious, profound implications for the institution of the schoolhouse and the institution of education, and I think creates huge new opportunities in education. All of a sudden, the mentality, especially among conservatives but also others, is that this technology is such an imposition. Why do we have to accommodate all this change just for this technology? I thought change was what you wanted.
There is this tension all the time in a lot of popular discourse about AI, where AI puts us in a posture of defending institutions that, if we are honest with ourselves, have been showing their age for most of my lifetime, and I am 33.
Mounk: There is a broader issue that I think is confusing a lot of politics at the moment, which is that it probably never was true that progressives wanted history to speed up and conservatives wanted to stand at the top of history and yell stop. But there was something to that. Today, in many respects, the roles have reversed.
A big part of the Republican coalition today is accelerationist and wants to go all out on innovation and change. I know that is not the dominant faction, but it is a significant faction. A significant faction of the left today is people who instinctively are against any form of change. I am really struck by the fact that one of the most viral essays about AI on the left was this piece by Jia Tolentino in The New Yorker, which I mentioned in a piece I wrote about this.
To my ears, it sounds like a nineteenth-century priest denouncing the evils of trains: “Trains are evil, but of course I would never ride on one.” The essay’s equivalent is, “I have never used AI.” I know that is not the dominant faction of the left, but there is this deeply entrenched, small-c conservative instinct on the left that if it is change, then it must be bad, and we should somehow stop it, even as the left often criticizes the status quo as being very negative.
Let’s make this a little bit more concrete, because I still struggle to get my head around the different options for how to regulate things and what the actual proposals are on the table. You know this in quite a lot of detail. It sounds like you are quite critical of what the European Union laws and regulations about this are going to be. My understanding is that some of these laws have been suspended for at least a year, but they are going to be implemented at some point, at least supposedly. What assumptions about AI and about the right way to govern it do you think are implicit in the set of rules that the EU has passed? What actual rules are at stake here, and why do you think they are a mistake?
Ball: The AI Act has two halves. Conceptually, you can divide it into two parts. One is regulations put upon developers of AI systems. Another is regulations placed principally upon deployers of AI systems in “high-risk contexts,” which would be governments, banks, financial institutions, health care, education, and things like this, some of the most vital sectors in our society, and maybe some of the ones most in need of institutional dynamism.
The AI Act perfectly exemplifies the assumptions problem, because most of its text was written in the early 2020s, prior to ChatGPT. The assumption at that time was that the AI systems that predominated were principally narrow, machine-learning-based systems. For example, computer vision used for facial recognition—it does that, but that is all it can do. Or I am going to have a camera on the bottom of my tractor that looks at images of crops for defects in the crops, but that is all it can do. It cannot do anything else.
Most famously, there were algorithms where you might take a loan application and process it through a machine-learning system, basically a statistical model based upon the previous loan data that the financial institution had, and then make a prediction about whether this person is going to pay back their loan based on the historical data, and then decide whether or not to issue the loan based on that, often with human review.
The interest was in regulating systems like this. There are a couple of things about that. First, it is mostly regulated institutions that are doing that kind of thing. In that world, where that is what AI is, I as a consumer am not going to use AI. AI is going to be used on me. It is an already regulated institution that is working with contractors, assembling data sets, and that sounds like it probably costs a lot of money and takes time. So maybe the marginal increase in paperwork is not that significant compared to what they were already going to do anyway in terms of cost. That is debatable, but you could at least have that intuition.
Finally, it may also be the case that the bank’s historical loan data has real biases in it. There may have been a period in the U.S. context where it was de facto or de jure illegal for certain demographic groups to even get a loan, and that might bias the data. If you purely take the historical data and put it into a machine-learning system, that system may well be unjustly biased against certain demographic groups. There is a fair argument to be made there.
You can also get into disparate impact arguments where it turns into race Bolshevism, where everyone has to be treated equally about everything, and that is also a problem. But there is a core of legitimate critique there.
Then, right at the end of the AI Act, in the final months of the legislative process, come generalist language models: ChatGPT, Claude, Gemini, and so on. These are totally different. Is bias an issue in these things? Yes. But anyone can use them. They are adoptable extremely quickly by both enterprises and consumers.
There is also the issue that one of the fundamental constructs in the AI Act is the notion of the consequential decision. When you make a consequential decision, that is a highly regulated moment. But with a language model, which structures your information environment in subtle ways, does research for you, and writes software and code, it is much harder to say what counts as a consequential decision. This is general intelligence.
Mounk: The idea of a consequential decision is that you are an insurer and you are deciding whether to accept this person for insurance. That moment is the consequential decision. If you want to outsource this so that human beings do not review the files, you have an automated system that you have programmed specifically for this, then it is very clear what the consequential decision is.
If, instead, you incorporate ChatGPT or whichever AI model into all kinds of business processes, where does the consequential decision start and stop? If this regulation suddenly applies to all of those business practices, then you are basically making it impossible to use AI.
Ball: To a first approximation, the insurance company has one use of that narrow system, which is to review the application and provide a recommended decision. In the case of a language model, an insurance company might have ten thousand uses of the same neural network. This is exactly what I am referring to, where there is this complex of assumptions that just ended up being wrong, and now we have to live with that. We have to live with the consequences of that.
That is a great example of what I worry about. We have witnessed that play out in well under half a decade. It took three years for us to go from the framework that the Europeans had to the generalist models completely obviating it. I bet you a lot of the laws we think of today will be obviated by something that comes down the road for which we do not yet have a word.
Mounk: When you say that we have to live with those consequences, I have a question, somewhat narrowly, about the European Union. I have been in a number of debates about this in Europe and would love your view on it.
Europe now really believes in the idea of a Brussels effect, that because Europe is a very significant market, when they pass certain rules and regulations, for example on the fuel efficiency of cars, that ends up constraining even what Ford does in the United States because they cannot afford not to produce for the European market. It is not really worth producing two completely different sets of cars for the U.S. and Europe. Therefore, a rule passed in Brussels can transform the car industry even outside of Europe.
They have now applied this idea to AI and really think that Brussels is able to steer the development of these frontier AI models, which, to a close approximation, do not happen in Europe at all. They happen mostly in Silicon Valley and to some extent in China, and perhaps a few other places. There are not many significant AI players in Europe at the moment. But because companies are going to want to sell their products in the EU, perhaps that somehow really constrains OpenAI and how it produces those frontier AI models.
I have a few things to say about this. As somebody who has both a U.S. and a German passport, the first is that it just seems to me like an incredibly low level of ambition for Europe to say, fine, we are not going to be players in the development of the most consequential technology of the twenty-first century, but we are going to be able to regulate it. It seems like a somewhat sad ambition.
The second question is about what the real risks of AI actually are: terrorists being able to design a biological weapon that kills tens of millions or hundreds of millions of people; a complete change in military technology, in which autonomous drones transform modern warfare and perhaps shift the balance between offense and defense in a way that makes starting wars more enticing; a huge share of the middle class losing their jobs because AI becomes better than most human beings at white-collar work and perhaps eventually blue-collar work; or even the existential risk of some misaligned AI model becoming so superintelligent that it runs over humanity. None of these things are going to stop at the border of Europe.
It seems absolutely naive to think that any of those developments will somehow not happen in Europe because, at the European border, the European AI Act comes into play.
On the other hand, it sounded from what you were saying as though, from an American perspective, we now have to live with the consequences of the European Union AI Act. How relevant or irrelevant are those EU rules to the CEOs you talk to in Silicon Valley? Are they a headache that might make it harder to monetize some products in the short run, or does the EU actually have some power to influence the trajectory of the technology?
Ball: I think we are at an interesting turning point right now because the Europeans have really backed down from the posture they had toward regulation even just two or three years ago. As I speak to you right now, I am at the office of the delegation of German Industry and Commerce in DC. I was speaking at an event with a bunch of US–German business interests.
I think the posture really has changed, and the Europeans have realized that they need to soften some things. In particular, with respect to the AI Act, there are rules on AI development. But the text of the AI Act actually left regulators enough leeway that the implementation of most of those rules has been considerably softer than I might have predicted two or three years ago, when the law was first drafted.
That being said, it is a real concern for industry, because we do not know how those rules are going to develop. It is also a concern for both Europe and America that most of the rules I mentioned are rules on deployers. Those regulations fall on any company with operations in Europe: any European company, but also any multinational.
That is mostly bad for Europe's own economy. If there is some multinational American firm with a big research presence in Europe and a big research presence in the United States, and that is coupled with tariffs and build-in-America incentives, and then the Europeans also have laws that make a lot of fundamental work harder and make the adoption of AI much harder in Europe, then on the margin you start relocating facilities and people.
In the short term, that is great for America. In the long term, it is not so great for either Europe or America, because I think America will be weaker without a strong Europe.
To your point about catastrophic risk, that is one thing I did not mention when it comes to my disposition toward how we govern AI. Government is a big risk-management enterprise. Tail risks, catastrophic tail events, unlikely events with very high cost, are the kinds of things that market-based systems tend to handle poorly.
From a technocratic perspective, this is an area where proactive regulation, or at least proactive steps taken by government, can make sense. That is why I have been supportive of things like SB 53 in California, which is light-touch. It requires frontier AI companies only to disclose how they measure, evaluate, and mitigate catastrophic risks posed by their models, cyber, autonomy, bio, things like that.
I support that. If there is a federal standard, I hope it includes something like that. It is worth noting that the EU code of practice includes some of this as well. That seems like a good thing to do and a potential area of U.S.–EU collaboration.
Most of what we should be doing is non-regulatory, proactive, defensive, and focused on resiliency. But transparency regulation in this case allows us to get a glimpse of what catastrophic risks are emerging at the frontier and what resources and steps we should be taking, often far afield of AI itself, to mitigate those risks. That is quite prudent.
Mounk: That is promising, that the U.S. might still be able to collaborate on this or anything else. Let me push you a little bit on how we should think about regulating AI in a better way.
You are saying that the problem with a lot of the approach to regulation is that we see what the technology is right now and make a set of rigid rules around this technology in order to minimize risks and reassert our control over it. That is really going to start being unhelpful very quickly if, two or three years from now, the nature of the technology has changed so much that those rules end up being far too specific to a particular state of the technology. They either end up being overly constraining or insufficiently constraining, or they misdescribe what the state of the world is a few years later. Since it takes a lot of effort and time to change and reform legislation, you are then stuck with a regulatory regime that really does not make sense.
I have two questions. The first is, what is the alternative? What kind of spirit or approach to regulating AI makes more sense? Inherently trying to regulate how AI works when we do not know what this technology is going to look like in three years is going to be really hard. So what is the alternative? In keeping with that, when we think about some of those bigger risks, is any alternative going to work to actually constrain that? Unless we are saying that a certain kind of capacity is not allowed to be developed, or needs to be developed under very strong security regulations, how can we avoid something like a superintelligent AI that we cannot control? Is a set of dispositions ever going to be enough to facilitate that?
Ball: I think you need to have a system that, first of all, begins with transparency. Having information about these things matters. A lot of things about the world are going to change in ten years. But one of the things that probably will not change is that we will not want AI-engineered pandemics.
We will probably still want to avoid those in ten years, even if many other things are quite different. Jeff Bezos always says the best way to predict the future is to think about what will not change. That is also the best way to write a law, to try to write for the evergreen thing.
Mounk: I assume that in ten years, in one hundred years, in a thousand years, we are not going to want to put technologies in the hands of everybody that make it very easy to cause a pandemic. The obvious follow-up question is what we have to do in order to make sure that the ability of AI to teach people to do things in general, and to give them knowledge about biology in particular, and even more specifically to figure out new DNA sequences or other things that would allow someone with basic skills in laboratory technology to create a really potent bioweapon, cannot be used for that purpose. That truly depends so much on the state of the technology that knowing we share the goal of avoiding people being able to engineer these bioweapons is great, but knowing what kind of regulation is necessary to carry out that task is much harder.
Imagine that AI were so good at inventing really simple lab techniques, so good at inventing technologies that allow someone to sequence DNA and create a virus, and so good at inventing incredibly deadly viruses, that anybody with access to an all-purpose AI-powered chatbot would be able to do that. Imagine that we are not very good at even basic alignment, so that reliably programming these chatbots to stop people from giving that advice is not possible.
In that kind of world, we would have to make sure these models basically do not exist, because the moment they exist, somebody is going to be able to exploit them in a way that causes hundreds of millions of deaths. I do not think that is likely to be the state of the technology, but the policy approach we should take, including whether it should be permissive or incredibly restrictive, fully depends in part on what the state of the technology is. So how can we pass regulation that is completely agnostic to predicting the future in that way?
Ball: It is not entirely agnostic. Something like SB 53 does specifically talk about bio risk. This is a risk that the state is acknowledging. We are saying that if you are developing a model of this size, you have to measure it and you have to tell us how you are mitigating it.
Of course, how a company does that, given that it is now public, and how a company does that compared to how other companies do it, will factor in if there is ever liability or a legal situation involving something like that. How well they mitigated those risks, what they did, how they measured them, and the thoroughness with which they did so will be compared to their peers. That is how we will assess a duty of care. The common law allows that to be dynamic.
It is also very possible that the things you need to do to mitigate this kind of risk at the AI model developer level will change. In fact, I think it is likely that they will change quite a bit over time. You want companies to be incentivized to do a good job mitigating these risks while not being super prescriptive, because your prescriptions might become outdated quickly.
For example, there are safeguards you can put on models to prevent jailbreaking and to monitor usage. You can have specific features of a model that, when activated, immediately trigger human review to decide whether this is a virologist at a university doing legitimate scientific work or a potential bioterrorist. Those are the kinds of things you would want to do.
You could also imagine a world where there are technical standards for how reliable those safeguards need to be. Various organizations could be involved in creating those standards, private organizations, government organizations, or the Center for AI Standards and Innovation. It could be a hybrid of all of those things.
Eventually, if the risks become severe enough, we might want supervision or auditing so it is not just AI companies grading their own homework, but some government entity or government-blessed private actor independently testing safeguards against a pre-existing specification not created by the model developers. That is one direction you could go if downside risks become much higher.
The other important thing to say is that a lot of other things have to happen downstream for this to go well. One example is something I worked on in the Trump administration called the Nucleic Acid Synthesis Screening Framework on biosecurity. This is a mandatory regulation on companies that provide nucleic acid synthesis services.
If I want to make a virus, I might have its genome and send it to a company. This is not exactly how it works today, but bear with me. I would send a protein sequence or a genome, and they would synthesize it. You can require those companies to screen sequences for pathogenicity and toxicity. We already have an early version of these regulations in place.
You can also require know-your-customer checks to ensure that the requester is a legitimate lab doing legitimate work. In parallel, you might have biosurveillance programs that constantly measure wastewater and other environmental indicators for novel threats.
In the longer term, in AI terms anything beyond five years, if models are that good at synthesis, they are probably also that good at synthesizing treatments. You could imagine a future where people have devices on their bodies connected to the internet, and when a new pathogen emerges, those devices automatically administer a treatment developed by AI and delivered immediately. That is the kind of future you can imagine.
Mounk: Given how universal the acceptance of mRNA vaccines was, I am sure nobody would object to a machine like that. That is beside the point. I want to ask you about one element of this, because you have touched on liability a little bit. I think liability is actually something really important that we do not talk about that much in the broader discourse.
I can see how a liability framework can often be more useful than straight-up regulation. But I think it depends on a few boundary constraints. When you have a car company, the risk is that some design flaw in a car they produced is going to kill some number of people. It is not going to be millions of people, but it is what we consider an unacceptable number of people who die as a direct result of this kind of design flaw.
When that happens, we have liability rules in place, which mean that those customers are going to be able to sue the car companies. Most likely, together with bad publicity and other factors, the incentives make it such that the car company would rather spend more money on research and development and safety tests than pay out those financial fines or take that kind of publicity hit.
Part of this is that there is no catastrophic risk involved here. There is obviously catastrophic risk to individual customers, but not at scale. That makes it kind of work. The problem comes when the likelihood of a bad impact for any one company is very low. It is very unlikely that any one particular AI lab is going to create the frontier model that causes the big global pandemic of 2030 or creates a superintelligent AI that kills all of humanity.
Each company might think the risk of doing this is relatively low, well below one percent. They might not worry about it too much because it is unlikely to impact the long-term future of their company. If it happens, perhaps liability means that the company goes bankrupt. But the pandemic has already happened, or the superintelligent machine has already killed all humans. The fact that one company ends up going bankrupt does not matter in that grand scheme.
It may be that if you have three hundred companies building AI models, the risk of any one of them bringing about this really bad outcome is so low that liability is not front of mind for any single company. Cumulatively, though, the risk from those three hundred companies is actually very high, or at least significant. In that kind of world, the liability framework does not protect us in the ways we need. Is the liability framework just not well suited to this kind of problem, or do we need to complement it with other kinds of things?
Ball: Yes, it is. In fact, I alluded earlier to the notion that liability is traditionally not good at catastrophic tail risk. That is an area where liability fails, and it is why I specifically concentrate my public policy efforts on those things: it is where I have less trust in liability.
A couple of things I would say. First, I want to complicate it slightly and say that the liability system does incentivize companies to have a wide variety of safeguards and jailbreak-protection methods for normal consumer harms that would be dealt with well by the liability system. That work is probably general-purpose work that also has benefits for catastrophic risk. But it is generally true that this is why you need public policy to step in. That is why I am supportive of things like SB 53 in California. It is why I am supportive of downstream work on biosecurity policy in the biotech industry. Mostly, those are regulations.
SB 53 is a law passed by the California government that was signed by Governor Newsom in September of this year. It is a brand-new law. Basically, it says that if you are a very large AI model developer, essentially the top five or six companies by design, you have to publish what is called a safety and security framework. Every major company already does this. They go by different names; Anthropic's, for example, is called its Responsible Scaling Policy. But for the most part, the companies already do this. This is really codifying an existing industry practice.
That document must contain details about how you measure certain areas of catastrophic risk, including bio risk, what the results of those measurements were, and what you do to mitigate those risks. That is the idea. There is also the notion that capability moves up in levels: from models that are not much more useful than a Google search in terms of bio risk, to models where, if you know the right questions to ask, experts can get useful uplift, and eventually to models that could help a complete novice design a bioweapon. We are probably somewhere between the first and second stages right now. Models provide uplift to experts, but it is not as though I could design a bioweapon right now. It is not clear when or if we will get to that point.
As we go up these different scales of capability, those capabilities are also very useful, because the same intelligence allows you to design novel treatments and drugs. They are hugely useful, and you do not just want to say you cannot do it. That probably does not work. Saying that a model may not pose bio risk is not going to be an effective law for many reasons.
The notion contemplated in SB 53 is that as you reach qualitative levels of capability, different qualitative levels of safeguards are required, which seems reasonable to me. You do have to take this seriously. My view is that these are tractable issues on which we can make a lot of progress. It is not that they are not threats at all.
I also think that early on in AI safety, shortly after ChatGPT, some advocates exaggerated how overwhelming these problems are. My view, especially having sat in government, is that a new catastrophic tail risk comes across your desk every day. There are no 100 percent solutions. There are compounding 95 percent solutions. As I described with layered mitigations, AI model safeguards, biosecurity measures, wastewater monitoring, biosurveillance, and rapid development of cures, doing all of those together is how we deal with these issues.
I think these are tractable problems. We do not have a 100 percent solution, and it was never realistic to expect one. That does not mean we should relax or do nothing. There is quite a bit of work to be done, and a lot of it needs to be done with urgency.
Mounk: I feel like you have given us a sense of how you want to think about regulating AI. Perhaps you can now pull those threads together for listeners. If you had to answer in about sixty seconds, what forms of regulation can help mitigate some of those very serious risks without slowing down innovation in AI that could have very positive impacts for the world? Very briefly, give us a list of some of the things we should do and some of the things we should not do.
Ball: I think that transparency and facilitating insight into these problems is step number one. We are kind of in the process of taking that step right now. Another step is, for the threat models we believe are real, to examine what lies downstream. What would we need to do? The way I would put this is: what is the victory condition here?
If we are worried about autonomous cyberattacks, we know that autonomous cyberdefense is also possible. So what does the world look like? What is the set of policies, institutions, and technologies that we need in order to feel that not only are we not just treading water, not only are we mitigating this risk, but we are actually going to get better at cyberdefense than we were before?
Bio is a very similar thing. We know that biosecurity is a problem. We just lived through a multi-year pandemic. You think about what the victory condition looks like. Then you build toward it, in parallel, across different societal actors. You channel philanthropic resources, government resources, and corporate resources toward building those institutions.
What I view as my role is helping to channel those resources and trying to articulate what that institutional arrangement looks like. What is the win condition? Or at least trying to amplify the voices of those who have already articulated such things.
The last thing I will say, and the reason regulation is a double-edged sword, is that all of the things I have mentioned, bio, cyber, and even the challenge of AI governance itself, involve AI as a general-purpose technology. It is useful for the defensive side and the resiliency side of all of those things. We will not govern AI without AI.
That is a strange fact, but it is also obvious if you think about other general-purpose technologies. Imagine trying to govern computers without computers. Governing computers with pen and paper would not work very well. It is a similar thing. If you regulate AI too much, you not only stifle capabilities, growth, and innovation, but you also stifle the ability of people in critical infrastructure to adopt AI in defensive ways that we want them to. We regulate critical infrastructure a lot, but if we regulate AI use in critical infrastructure because we are worried about AI risks, we might accidentally regulate away the ability of those systems to become more resilient. That is the challenge. That is the balance you have to strike. I do not have a way to summarize all of this in one or two sentences, but that is basically how I think.
Mounk: This starts to get into another point you have made that I find interesting, which is about how AI is going to transform our institutions. So far, I think this conversation has been quite focused on what regulation is helpful or harmful and what kind of framework to use for regulation. But there is a more fundamental question. Our institutions were built in the eighteenth century, and already there are all kinds of technologies, including the internet, that create a felt mismatch between our democratic institutions and how people are now used to having an impact on the world. Some of the system's basic assumptions, such as the assumption that we cannot all come together and deliberate, are not really true anymore, even with Web 1.0 or 2.0 technologies.
There are other constraints. One of the reasons you do not want direct democracy is that most people are not that interested in politics, and that would end up with the people most obsessed with politics, who are often the most ideological, having the biggest voices, along with all kinds of other problems. But there is a fundamental question about whether, if we are really serious about the project of self-government, we should be governing ourselves with eighteenth-century technology rather than using late twentieth-century and early twenty-first-century technology. You could imagine ways in which artificial intelligence poses the question of how we should radically transform our institutions in quite fundamental ways as well. Part of that may be about the institutions of self-government. How is it that we can use AI to translate popular views into public policy, which I think is one of the core points of democracy? But also, how can we radically change our idea of what the DMV looks like?
Beyond government, how are universities going to look different, how are research labs going to look different, and how are corporations going to produce goods in a different way? There are obviously no firm answers to any of those questions, but how should we think about keeping what is valid in our institutions, not becoming a weird kind of AI utopian who thinks all these institutions no longer have value because they are going to be replaced by something like Gemini 3.0, and yet remaining open to intelligent and smart ways to rethink our institutions such that they can seize upon these technologies?
Ball: This is, in some sense, the fundamental question to me. I wrote a piece a couple of months ago. It was called “The Building Company,” and it was a speculative short story about a company that has developed the capability to robotically construct buildings, warehouses, and similar structures. You can think about a world where that technology exists and is widely diffused.
In that world, there are building codes in local government that are enforced by departments of buildings. They send inspectors out periodically to look at sites, and it is often a very cumbersome and expensive process. When you have end-to-end autonomy in the construction of a building, the robots collect telemetry data, visual data, and other information about what they are doing. That information can be streamed in real time to a regulator who can monitor constantly, using their own AI systems, for potential risks, bad practices, or violations of the code.
You can imagine that this would change how banks and insurers think about writing policies for buildings if they knew every step that was taken in construction. You can imagine many downstream changes, and you can imagine most of them being better. You can also imagine the same idea being used to constantly surveil everyone and send everything to surveillance systems, which gets you into something like Skynet.
So we need a set of principles by which we decide when we want heavily AI-enabled governance and when we do not. Those principles probably need to be much more sophisticated than current ideas about data or privacy. I think the answer will often involve binary decisions where we say no, we do not want AI-enabled governance here, and others where we say yes, we want to aggressively adopt it.
You mentioned democracy. The institution of democracy itself is an example. The notion of going to a polling place and casting a vote on a specific day is a nice notion, but it is also technologically contingent. There may be better ways, more profound than simply voting online, to use digital technologies to bring public perspectives to bear on public policy and ensure that the public retains sovereignty over decisions of public interest. That is possible, but you have to be extremely careful.
You need very firm principles. This is another area where AI becomes deeply political and philosophical. You need a robust set of principles to decide when we do this kind of thing and when we do not, and what we want to preserve. Governments will be able to do much more than they currently can. They will be able to surveil and incorporate much more information. In some ways that will be good, and in some ways it will be very bad. If we go into this with weak principles, we will end up with poor outcomes. That is why I think classical liberalism is so important, and I hope it has a serious resurgence, not just in America but across the world, because it offers an individual-liberty-preserving and privacy-respecting way of thinking about these issues.
Mounk: I feel really torn on the subject because the first thought I have is, of course we should try and figure out ways to do this. It is kind of strange to run our institutions along a model that was available in the eighteenth century and that sidesteps all of these technologies that we can use now. It is not like we are doing very well at delivering on the basic promise of our political system.
One of the key premises of the system is that it is government by the people, that we are translating popular views into public policy. Most citizens do not feel like that is happening. That is something that citizens on the left, the right, and the center probably have in common. Most of them feel like those people in Washington are not really listening to me. So, of course, we should be open to this.
The second point is that any actual set of ideas about how to do this is very easy to pick holes in, because they usually do have holes. I used to teach a class on democracy in the digital age at Harvard ten years ago now. Even then, there were utopian ideas about liquid democracy and different ways to have smart setups. They were kind of cool and fun to talk about. They were a good teaching tool in that class. But it was obvious that none of them would work.
The third point is that, as you are pointing out, one of the reasons why they do not work has quite fundamental causes that apply at any stage of technology. One beautiful text from a very different political tradition than yours, but I think relevant, was written in the 1960s, when there was a huge fashion for basic participatory democracy in a somewhat socialist vein, by Michael Walzer in Dissent magazine. He called it “A Day in the Life of a Socialist Citizen.”
The idea of the piece was that if we manage to achieve socialism through basic participatory democracy, what would the day of the socialist citizen actually look like? The joke of the piece was that where Marx promised that people would be fishing in the morning, hunting in the afternoon, and being critics in the evening, the day of a socialist citizen under participatory democracy would be sitting on the fishing license committee in the morning, debating laws about hunting in the afternoon, and sitting on a literary prize committee in the evening. It would completely alienate you from the things that this utopian system is supposed to deliver.
The debate today is very different and has different contours. But the basic question remains. Yes, we have the technology to involve everybody in the design of the details of regulation about AI. We could have public comment in a much more fluid way. We could have every US citizen voting on this stuff if we wanted. This is something we could have done before AI. We could have done it ten years ago.
But most Americans do not have the interest or the expertise to participate in this process. If we open it to everybody, the only people who would really participate would be special interests, people with extreme ideological views, and weirdos. It would not make the system more representative of ordinary citizens. Some of the constraints on why our political system is not responsive to popular views look technological, but are ultimately rooted in basic facts about human psychology and politics that are unlikely to change.
Ball: I completely agree. By the way, when it comes to things as fundamental as democracy, any kind of institutional evolution in these areas, my inner Burkean comes out very strongly. I am quite skeptical. I would like to see the technology diffuse and develop in many more areas before we even begin to think about things like that.
One of the other things here is that I really have a biological conception of institutions. I often use the language of biology in analogical ways to think about institutional evolution. I think it is true that institutions that are insufficiently supple will, in the long run, struggle. We are probably already living through that right now. The insufficient suppleness of Western governance institutions is probably a big part of why many citizens of Western societies do not feel as though their government represents their interests.
So what are the institutions that are more capable of evolving and that are functioning competently in our society today? I would argue that big technology companies are a fairly good example. What went right during COVID? We did produce mRNA vaccines that worked. That was controversial, but we did in fact do that. We were all able to go to remote work almost overnight, massively increasing our usage of cloud computing, and that largely worked. We owe AWS, Microsoft, Google, and other companies thanks for allowing us to keep our economy afloat during a very fraught time.
What I am saying is that the risk you run of being highly Burkean with respect to everything involving government adoption of AI is that those institutions simply do not evolve, while other institutions do. There are things about current big tech companies that are already quasi-governmental in nature. This has long been a feature of America. Financial institutions have quasi-governance aspects. Banks do. Many private organizations have quasi-governmental functions. That is not inherently bad.
But you can imagine a world in which AI fulfills some of its promises. One of the questions I think about all the time is what kind of institution the frontier AI lab is. What kind of a thing is OpenAI or Anthropic or Google DeepMind? To what extent and in what ways will it be governmental in nature? How will that challenge the existing structures of the nation-state?
To put an exclamation point on it, I do not think the nation-state is necessarily the end of history. The evolution of something beyond it, even if AI moves very quickly, would still take decades if not centuries. But I think we need to be prepared for the possibility that the tectonic plates are moving beneath our feet in a way that challenges the nation-state over time. I do not have high confidence in that, but I think it is a plausible possibility.
In the rest of this conversation, Yascha and Dean discuss policy solutions to the risks posed by AI, what technology shows us about ethics, and whether superintelligence will kill us all. This part of the conversation is reserved for paying subscribers…