Persuasion
The Good Fight

Kelly Born on All the Ways AI Is Changing Politics

Yascha Mounk and Kelly Born discuss the socioeconomic impact of artificial intelligence.

Kelly Born is the founding director of the Packard Foundation’s Democracy, Rights, and Governance initiative.

In this week’s conversation, Yascha Mounk and Kelly Born explore how AI could impact democracy and society—and how to respond.

This transcript has been condensed and lightly edited for clarity.


Yascha Mounk: We’ve known each other for a long time. We run into each other at all kinds of conferences. But you gave a really interesting presentation at a conference I was at recently—helping us think through the different kinds of impacts that AI is going to have on the political world.

Before we dive into some of those in more detail, how should we think about this—from the obvious, straightforward ways in which AI is already influencing the world, to the more remote but potentially far more impactful ways in which it might transform the world 5, 10, or 20 years from now?

Kelly Born: My sense is that AI is going to impact just about everything. This is a general-purpose technology. As with any kind of major technological change in the past—whether it’s the printing press, radio, TV, or the internet—you see huge social, political, and economic changes as a result.

But I think AI is a little bit different for a few reasons. First, it’s a true general-purpose technology. It’s not like social media, which primarily affects communications. AI is going to affect health, science, national security, banking, transportation—basically every sector.

Not only will it have this incredibly broad impact, but it’s being deployed faster than pretty much any technology we’ve ever seen. So we have far less time as a society—and as a democracy—to adapt.

Mounk: Explain that claim to me, because I think it’s something that people who think about AI have heard a lot about, but which may not be as obvious to others. There’s this weird residual skepticism about AI among a lot of the population, who say it’s kind of useless, that it hallucinates a lot, and that it doesn’t actually work. When you say it’s being adopted faster than any other technology, what’s the evidence for that?

Born: Granted, it’s based on a lot of company disclosures about uptake, but you’re seeing usage in the hundreds of millions. I think you’re also seeing that everyone is talking about it and using it. So I think you don’t just need to take the companies’ word for it—it’s very much in the zeitgeist and clearly being adopted quickly.

Mounk: When you look at how many millions of users are now paying for OpenAI every month, as well as for other AI platforms, there’s a question about whether the huge investments these companies are putting into the technology are going to be sustained by that revenue.

But when you’re just looking at the share of consumers in the United States who now actually proactively pay for some AI platform, it’s a very large number of people. So for a technology that’s only really been out there in a commercialized way for three years now, that is actually very rapid take-up, which seems to indicate that those people think, rightly or wrongly, that it’s delivering some value.

Born: I think that’s right. It is being adopted quickly, and it’s a technology that’s being adopted on top of a society that is really unstable, with pretty significant challenges around trust in institutions, polarization, and economic inequality. Introducing a general-purpose technology this quickly, I think, is cause for concern.

Mounk: One thing that I find really interesting is that when people talk about AI, I sometimes feel it’s very polarized between the people who think it’s completely useless and just hallucinates, and the people who think that in three years it’s going to be so intelligent that we’re all going to be unemployed. It’s also polarized between people who worry about very small, concrete things and these vast, intangible things.

On the one hand, a lot of the literature on ethics and AI is about algorithmic discrimination. When some AI system is helping to determine what the price of your insurance is, it might end up discriminating against vulnerable groups in all kinds of ways, which of course is a bad outcome, but feels like a relatively small element in how it might influence the world. On the other side of this, you have people like Nate Soares, who was on the podcast recently and co-authored with Eliezer Yudkowsky the book If Anyone Builds It, Everyone Dies. That’s the idea that the end of humanity is around the corner if we don’t put a moratorium on AI tomorrow. Tell us a little bit about what’s in between those two things. What are all the different kinds of impacts that you’re trying to think through that are a little bit beyond these very straightforward, immediate impacts, but perhaps not quite as far out as the idea that a super-intelligent AI is going to enslave you and all of your children?

Born: I’ve been working in democracy for a while now, so I think about it in terms of the implications for democracy. The way I tend to hold it in my head, because there are so many different ways that AI is going to impact democracy, is to think about it as a series of concentric circles. You start with the most obvious bullseye that everyone thinks about when it comes to AI and elections, and you think about the pros and cons in each of these domains.

For example, in the elections context, on the positive side, we know that election administrators in the U.S. are really under-resourced, so they’re starting to use it to locate polling places, verify mailing addresses, and this kind of thing. Of course, there are all kinds of risks. The disinformation side of things is talked about a lot, but also the phishing attacks that it enables. There’s the really obvious stuff of the machinery of democracy, elections, and then you move one circle out to government use of AI across the executive, legislative, judicial, and military domains. You’re seeing cities deploying it to improve bus routes, or the State Department saying they’ve reduced time spent on FOIA requests by about 60%.


There are implications for the machinery of democracy, elections and governance, and then you get out to what I think of as the prerequisites for democracy: political engagement and culture. One circle out from that is the information ecosystem. Beyond that are the socioeconomic conditions, concentrations of wealth and power, and labor market implications. One level further out are the geopolitical conditions, including the relative balance of power between authoritarian and democratic nations and how we’re engaging vis-à-vis China. The furthest circle out is the systemic or existential risk circle, where a lot of the bio risk and cyber risk lives. When people talk about AI killing us all, that’s the circle they’re referring to.

What’s confusing in this field right now, particularly when people talk about AI and democracy, is that discussions jump across all of these levels, from the very nitty-gritty of the machinery to the preconditions for democratic governments, or even the world itself, to survive.

Mounk: I think that’s a really helpful way of organizing the different kinds of spheres in which AI is going to have an impact. Obviously, those spheres are interrelated. The point of this is not to say that these are somehow completely separate spheres from each other, but it helps with conceptual clarity to think about them in those terms. Why don’t we start with a narrower one of those concentric circles and then move out a little bit? When it comes to government, that is one of the areas where you might think the potential for positive impact is relatively higher than in some of those other areas. If AI systems really do make us more productive and are able to improve the delivery of government services, that would obviously be a good thing.

The skeptic in me wants to say, first, that whether or not it’s able to do that depends, in general, on whether it’s possible to adopt this technology as seamlessly as people predict. There are studies which claim that 95% of companies that have tried to implement AI into their processes end up not using it that much after a little while. That might be a question of the stage of the technology, but it raises a broader question.

If the MTA uses AI to analyze which routes are most sensible and whether it should change where it runs some buses, is the new route map actually going to be better than the old route map, or is it going to fail at that? There are also ethical concerns about whether some government decisions are being made by an algorithm that we don’t fully understand and where we don’t really know what goes into that decision. Should we be comfortable with that? Does that potentially lead to certain forms of discrimination, or does it belie the idea of government by the people and for the people? If a machine is making those decisions for us, does that erode democratic agency?

How are you thinking about the pros and cons of how AI is going to transform the way in which our government actually does its job on an average Wednesday?

Born: It’s a complicated space. I think that we are clearly seeing benefits already. There was a great study by a group at Stanford that worked with the city of San Francisco to review thousands of pages of reporting requirements imposed by the federal government and found that something like 35% of them just weren’t used or reviewed. The city used that data to go back and advocate for no longer being required to submit those reports.

There are clearly places where people are already finding value with the technology. I’m seeing ideas for things like policy sandboxes in the legislative branch. Could you create an AI system that would allow policymakers to think through an environmental policy they want to consider, run that through four different potential future scenarios, and theorize how that policy might play out in the real world under each of those scenarios and how they might want to tweak it or future-proof it? You’re seeing all kinds of interesting ideas, some of which are already showing actual results, and others that are at much earlier stages. I do think that this is an area where there’s probably more potential benefit than cost, particularly at a time when we really need to improve trust in the ability of democratic institutions to deliver.

There are significant risks. There are horror stories, like the Michigan MIDAS one that’s often brought up, where it was deployed to screen unemployment benefits for thousands of people and had an unbelievably high inaccuracy rate, and people were bankrupted as a result. That was in 2013, and we’ve moved a long way since then. I’m not saying there aren’t problems here. There are very real risks. A lot of people are clear-eyed now about trying to build retrieval-augmented generation systems where you have a very narrowly defined corpus that you’re pulling from, which reduces the opportunity for bias relative to what we’ve seen before.

People are also working through questions about how to keep humans in the loop. To your point about government by and for the people, I often think about how The Economist talks about democracy ratings and the threat of autocracy and technocracy as parallel risks. The question of whether you keep humans in the loop is a really important one, but there are ways to design systems that are at least less biased than what we’ve seen so far.

Mounk: I think there’s a risk in assuming that if there’s an AI system involved and some form of bias might result, that is completely unacceptable, while tacitly assuming that the status quo or the hypothetical alternative involves no bias at all. One obvious example of this in a different context is the debate about Waymo, which, according to the best studies we have so far, has a much better road safety record than human drivers. It seems likely that if we adopted many more self-driving cars at the current level of technology, let alone the level we might get to in five or ten years, we would save thousands of lives across the United States every year and tens or even hundreds of thousands of lives across the world. But when there’s an instance of a Waymo running over a cat in San Francisco, the technology suddenly becomes unacceptable.

We would obviously want to be very careful not to increase discrimination in our society through the incorporation of AI systems. A good goal is zero discrimination, but that can’t be the precondition for adopting AI systems. If an AI system reduces the amount of bias and discrimination in decision-making about whatever state benefits we want to give out, that seems like an improvement over the status quo. It seems strange to require that the standard be zero bias from day one.

Born: I think the idea that people shouldn’t compare these systems to the Almighty but compare them to the alternative continues to have legs. I do want to caveat, though, that no one is pretending or saying that these systems don’t have bias. Everyone at this point clearly agrees that these systems are being trained on the corpus of human knowledge, which itself has bias built into it.

Your point about the alternative is important. The alternative is not a perfect human decision-making system. It’s years and years of wait times on decisions that themselves might also be biased.

Mounk: That brings me to the other point. I thought it was very interesting what you said about how FOIA, Freedom of Information Act requests, can now be dealt with more easily. The idea, presumably, is that part of that process is going through a large number of documents and figuring out which are relevant and which are not, and some of that can be automated. You’re then able to respond to these important requests in a more timely manner, which is obviously what the laws aimed at making government transparent were supposed to achieve.

Of course, it’s an arms race. I’m struck by the fact that I spoke to someone who works with a large department store, and they have a real problem in their legal department. In the past, many customers wrote letters that clearly came from people without much legal expertise and made demands that didn’t seem very plausible, so it was easy to say that these were not reasonable requests. Now they’re getting a glut of letters that sound much more sophisticated because they were written by ChatGPT or some other AI system, and the legal department is having huge trouble keeping up with all of those requests.

Presumably, the same thing is going to happen in the case of government. On the one hand, the government is going to be better able to respond to citizen requests, which is a good thing. On the other hand, it’s also going to be much easier to bring spurious lawsuits to gum up the bureaucracy for any building project, with plausible-sounding objections to some environmental review.

It’s hard to know where this arms race is going to end up. Is the improved ability to deliver certain services going to be the dominant factor, or is the improved ability of some well-meaning citizens, and some less well-meaning special interest groups or people who simply want to gum up the system, going to be more powerful? Which side of this arms race is going to prevail is very hard to predict.

Born: I think what you’re pointing to is that notice and comment is already broken, and we’re already seeing that. There’s one world in which you imagine that AI is part of the problem and breaking it, and another in which it’s part of the solution and helping to screen it. You hear a lot of people in conversations talking about a future where AI agents become more dominant on the scene, and you end up with a bunch of AI agents in dialogue with each other. Government agents are screening inbound information, and citizen agents are flooding them with information.

There’s another area of work that I do that we may not have talked about before. In addition to working on AI and democracy, we also work on improving the effectiveness of government. This question about who wins the arms race is often predicated on the idea that the system is going to stay more or less stable as it is right now, with notice-and-comment sessions or similar mechanisms. We’re also in conversations with groups about how to reimagine the future of governing institutions in ways that might offer completely different alternatives to notice and comment altogether.

There’s an interesting question about where we’re going to see an arms race within existing systems and where we’re going to see entirely new systems invented, which will then have arms races of their own to contend with.

Mounk: That’s a really good point. The arms race dimension is going to be present in any system, but the question is how the system is going to change to try to accommodate this technology. If we move up a little bit on this ladder of concerns to points around political culture and engagement, and more broadly the information ecosystem, this is where we start to worry about whether we’re still going to be able to have real conversations as citizens.

So much content produced on the internet, including op-ed articles published in newspapers and certainly much of what we see on social media, may increasingly be produced by AI systems, and that could degrade our ability to speak to each other more broadly. What I worry about is the emergence of an age of cynicism. What happens when there are fake videos everywhere? It’s not necessarily that people will believe those fake videos, but that you never quite know for sure what is authentic and what is not, which can breed a generally cynical attitude toward the world.

People start to feel that they can’t really know what’s true or false, or what’s real, and that may be quite corrosive of the basic factfulness we need for a democratic system to work in a meaningful way.

Born: I think that’s right. I tend to think of the information-ecosystem impacts, at least the negative ones, in three buckets. The one that people worry about the most, but that I’m actually not quite as concerned about, is the persuasion piece, the idea that someone’s going to flip their vote or that this content is going to change people’s minds. My sense is that, unfortunately, we’re at a place where we’re so polarized that, at least within the existing political system, persuasion is harder. Sure, if a new kind of Pizzagate comes up, you might convince someone of something, but I’m a little bit less concerned, like you, about the persuasion piece.

I’m more concerned about learned nihilism, the liar’s dividend, the sense that nothing is true and anything is possible. This resembles the Russian model of propaganda, where there’s so much noise that no one can make sense of anything. I think that’s the second bucket. The third bucket, which we talk about less, involves questions around quality erosion, the corpus of human knowledge, and what’s going to happen there. There’s the copyright conversation and what incentives are going to look like to create new content in the future, but there are other pieces I worry about as well.

Model collapse comes up a lot, the idea of when models start ingesting their own outputs and no longer have new content to build on. There are also questions about how we interact with knowledge itself. When you use Google, you ask a question, get ten responses, and can think through them. When you ask voice assistants or similar systems a question, they have to produce a single answer, which can vastly oversimplify the world and the truth. So it’s not just persuasion and not just the liar’s dividend or nihilism. It’s also about how this changes the overall corpus of information we’re working with and how we interact with knowledge.

Mounk: I wonder whether the concerns about echo chambers and filter bubbles, which are quite old at this point and around which the empirical evidence is quite mixed, may finally come true. The idea that we end up in an echo chamber online is intuitively appealing. There is some evidence that this may be true to a certain extent. When we encounter content from people who disagree with us, it’s often not representative but instead the most extreme and enraging content on the other side. But it’s been less clear than people think that this is actually happening in a systematic way.

The pre-existing media landscape was already one in which, if you were progressive, you might have read The Nation, and if you were conservative, you might have read National Review. It’s not as though everyone was consuming politically neutral content all the time. But you could imagine that there’s a very obvious tendency, at least for now, in these chatbots to want to please you. They’re literally programmed to get positive feedback and trained to produce text that the user is likely to enjoy. As these models get more mature and competition between model companies intensifies, each will have a strong incentive to figure out ways for these systems to speak directly to your tastes.

I already find that when friends of mine show me questions they’ve put to ChatGPT, it often has a different tone than what I get because it has adapted over time to what they seem to like hearing and how they speak to it. You could imagine that if you have a very progressive worldview, these chatbots are going to say, of course you’re right, this is objectively how things are, and then draw on the smartest writing that supports your point of view while denigrating opposing views. On the other hand, if you’re very conservative, it might say that progressives are completely wrong and agree with you instead.

Perhaps we could figure out a way not to do that. The designers of these systems might recognize the danger and try to impose some kind of common reality based on what they think is best. But that seems just as problematic. That would mean someone in Silicon Valley is deciding which political values these AI systems represent as the obvious true state of the world. They might hold values I agree with, or values I find deeply troubling, or they might simply get important factual questions wrong. It’s not even clear what the right state of affairs would be here. Either way, it’s something that seems deeply fraught with danger.

Born: I would add to the concerns about sycophancy the question of who determines truth. Related to sycophancy is the ability to handle conflict. A successful democracy requires the ability to engage in spirited debate, and the sycophancy we’re seeing in these models is a challenge to that.

There are also studies, some of which have been questioned or debunked, that look at the idea of cognitive decline associated with model use. I’m still trying to get my arms around this, but at least theoretically it makes sense. These studies involve brain mapping, comparing what happens in conversation with another person, with using Google, and with interacting with GPT-style systems. Some of these studies raise methodological concerns, but people are still probing these questions. How does a person’s ability to engage in democracy develop if they grow up interacting primarily with a sycophantic, agentic AI model, experience very little conflict, and don’t engage in their own research? What kind of capacity does that produce for meaningful democratic debate?

Mounk: There’s also a question about how the technology itself will develop. I’m a little skeptical of some of these claims for now, and perhaps that’s just based on how I use these systems, but I find that I’m able to dig down in a back-and-forth with ChatGPT in a way that feels different from going down Wikipedia rabbit holes, where you often stay at a certain level of generality.

I realized recently that I hadn’t kept up as much as I would have liked with the impact that Giorgia Meloni’s government has had on Italy itself. There’s been a lot of writing on her role in the international system, where she’s turned out to be more Atlanticist than expected, but I wasn’t really up to speed on the details of a constitutional reform she’s trying to put forward. For that kind of issue, it can be helpful to say, I don’t quite understand the nature of this proposal, give me more detail, what are the pros and cons, and it feels to me that my brain is just as active as if I were trying to find relevant articles in Italian newspapers or on Wikipedia.

It’s still a process of building literacy. One striking thing is that the main mode of interaction so far involves typing and reading, which is a form of literacy. But just as social media started in a relatively text-based form and eventually evolved into something like TikTok, which is mostly video, and then influenced every other platform, we’re likely to see something similar with AI. It’s already possible to speak to AI systems by voice, and you see major AI companies investing heavily in feeds where video content is delivered continuously and users are much more passive.

I’m skeptical of some of the studies so far about how we currently engage with AI in this respect, but they may end up being directionally right about where the technology is heading and what the dominant mode of engagement with AI is likely to be for most people, for much of the day, in the future.

Born: I think that’s right. It really depends on how you’re using it and what you’re using it to do. I had dinner last week with one of the guys who was among the first twenty people at Anthropic, and we were talking about how we were using AI. He said that he takes one week a month and tries to do everything using it, meaning he doesn’t do anything without first trying to use AI.

I find that sometimes I’m thinking much more clearly. Like you said, when I’m doing research, I’m going back and forth as if I’m talking to the world’s expert on some topic, going deep into understanding the nuance. But when I try to get AI to write something for me, it’s often a disaster, and I feel like I’m getting dumber the more I read it. So it really depends. I don’t think there’s a single answer about how it’s going to impact cognitive competence.

Mounk: One of the concerns I have, motivated in part by teaching and by the way universities are currently dealing with AI, is that many academics, and probably high school teachers as well, are deeply naïve about what AI is able to do. I’ve talked to a number of people who say that while AI might be able to get a B-minus on standard assignments, it wouldn’t be able to handle their more creative ones.

On standard essay-based assignments in the humanities and social sciences, AI today is able to get an A-minus or an A in just about any class at the universities I’ve taught at, including Harvard and Johns Hopkins. Those creative assignments would also be well within its capabilities. As a result, there’s a whole generation of students being educated right now who may never have written an essay in high school because it’s too tempting to have ChatGPT do it, and who may never write an essay in college.

In my view, the act of thinking often requires writing. When people say they’re bad at writing, harsh as that sounds, it’s often because they haven’t learned how to think. Once you try to commit words to paper, you realize where you’ve made logical leaps and what you haven’t fully thought through. In that sense, I’m very worried about cognitive loss unless academics get smarter about how to address this.

For the first time in my life, at Johns Hopkins, I’m going to give students a pen-and-paper exam. At the same time, I’ll allow them to use AI as much as they want when writing the final paper. On the one hand, I want them to explore AI to produce the best product they can, which still requires their own thinking and some of their own writing. On the other hand, I’m going to sit them down in a classroom and have them answer general questions about the themes of the course to make sure they can still think through the material and articulate clear, coherent thoughts in their own voice.

At the moment, it’s probably possible to get through college while doing very little of that. To a remarkable degree, the system seems to be in denial about this so far. Tell me about the socioeconomic conditions.

Born: Can I take a step back quickly? There’s a piece we missed, which is political engagement and political culture. We talked about this a little in terms of broken mechanisms for engagement, how people voice opinions in a democracy. We talked about information, which is often a one-way street where people collect facts and make decisions based on that. But the engagement piece is interesting as well.

Here, you’re seeing at least four challenges. One is broken systems, including the flooded notice-and-comment and feedback loops we discussed. There’s also concern about active silencing, including doxxing, trolling, and state surveillance, and what that does to democratic conversation. Then there are passive silencing and thin engagement, where people participate in online deliberative democracy processes in a very shallow way. They check a box, engage through AI rather than with other people, opt out altogether, or begin deferring civic engagement to AI agents as those come online.

I tend to think about these as four buckets: broken systems, active silencing, passive silencing, and thin engagement. On the challenges side, those are the issues people are grappling with. At the same time, there are some genuinely interesting developments. A lot of work at Google’s Jigsaw has pivoted toward AI-powered citizen assemblies. You’re seeing new forms of polling and sentiment analysis. In the movement-building space, people are using these tools to better understand authoritarian regimes, including identifying pillars of power or financial backing behind events like protests in Georgia or ICE deployments in Los Angeles.

These tools are also being used to go deeper in understanding the power and funding structures behind many movements. There are likely to be significant changes in how people engage with democratic institutions and with each other as these technologies play out. I didn’t want to miss that, because there’s a whole ecosystem of nonprofits and civil society actors actively trying to figure this out.

Mounk: That’s really fascinating, and I agree it’s important. What’s more important is if 80% of us lose our jobs. I’m joking, but to move up one further step in this ladder, to this bucket of socioeconomic conditions, that seems to be one where it’s hardest to predict what’s going to happen. As you’ve established, it’s hard to predict what’s going to happen in any of those buckets, but this is one where the uncertainty feels especially radical.

On the one hand, there are people who say that past technological disruptions have always led to significant job loss in some categories, but people just retooled. There’s the example of radiologists. A bunch of things that radiologists used to do have been automated. People thought that as a result radiologists’ wages would go down and that a lot of radiologists would get fired. Instead, what’s happened is that we just use a lot more radiology, and radiologists now actually spend a lot more of their time on high-value tasks rather than relatively simple, repetitive tasks that they had to do before. On the whole, this seems like a happy story.

On the other hand, we’ve never had a system that has a general intelligence that matches that of most, and perhaps soon all, human beings. In the past, when you could print books and the people who had painstakingly copied books line by line were no longer needed, those were skilled people, and the economy still had a need for skilled people in all kinds of areas. They went and did something else. Perhaps not the people who were fifty years old, for whom it was tragic, but the next generation was fine. Is that still true when the machine can do anything that a human can do intellectually at the same level? That’s really unclear to me.

When thinking through these impacts on socioeconomic conditions, are we going to live in a world of plenty where we still need skilled human beings for all kinds of things, or are we going to have an ability to do a lot of things through machines while nobody is making wages and the entire socioeconomic basis of our economic and political systems gives way? How are you thinking about this?

Born: I agree. I think this is the biggest question, and it’s the one where there’s a huge range of uncertainty. I talked to some people at tech companies, and they say we’re going to see 20% unemployment in the next couple of years as a result of these technologies. The last stats I heard from the IMF were that 60% of jobs in advanced economies could be affected. McKinsey, I think one of their stats was 14% of the global workforce. The estimates are all over the map, and as a result it’s hard to plan.

What makes it even harder is that no one seems to have a plan. I recently had a conversation with the former head of the future of work for the Newsom administration, and the conversation was very much, so what’s the plan? There wasn’t one. There are a lot of scholars working on this, and there are some interesting ideas on the table. Most have moved away from universal basic income for a couple of reasons. It seems hard to pull off financially, and there are questions about the dignity of work and whether people want a universal basic income, at least politically across the board.

Then people started moving to the idea of universal basic capital. The distinction is that with capital, you are the owner of an asset: if you have universal basic compute, for example, you reap dividends from ownership of that asset. You can make the argument that because all of humanity, over many hundreds of years, has contributed to the corpus of human knowledge on which these models are trained, we should all have a stake in that. There are precedents for this. Alaska’s oil dividend model pays out a few thousand dollars a year to citizens, and no one seems particularly angry about it. It appears to work reasonably well.

I’ve also seen people move to the idea that maybe redistribution isn’t the right approach and that the focus should be on predistribution. The argument there is that we need to increase worker rights and worker participation in decisions about how technologies are deployed, so they have greater bargaining power. Other arguments focus on guaranteed employment, which seems to have some bipartisan support. In communities that are hardest hit, the community could come together to identify the jobs most needed, with paid reskilling rather than unpaid reskilling.

There are a lot of ideas being discussed, but the basic point is that no one has a good plan, and the potential impacts are enormous. Every major economic disruption in history has been followed by massive political upheaval. It’s concerning that no one has a plan here.

Mounk: Two thoughts on this. The first is that I agree with you that there are a lot of ideas in this space, and none of them seem particularly convincing for a host of reasons. We could go through each of them, but that might be overdoing it. There are very significant logistical and financial problems with all of them, and also a problem of meaning. If people don’t have jobs that give them some kind of meaning in the world, that is a big personal challenge, and it can become a big political challenge.

Inventing jobs that an AI system might do just as well, but that humans still do because of rules and regulations, is, at best, a very short-term solution. One way or another, all of these solutions seem to fall into one of those buckets. Another thing I think is really helpful is the concept of an AI resource curse. In political science, there’s a long literature on why Saudi Arabia and other countries like it are not democracies.

One obvious answer is that democracies tend to emerge when monarchs and other people in charge really need an educated middle class as a revenue base. That gives them an incentive to invest in education, and it makes it easier for citizens to make demands because they can say, you’re living off our taxes, and in return we want political representation and a say in what happens with that money. If a monarch or dictator has access to a lot of resources simply from selling oil, you never get the socioeconomic mechanisms that empower a middle class to make those political demands. That tends to be very bad for long-term economic development.

One way of thinking about AI, if it ends up replacing a lot of middle-class jobs and leading to a much more polarized income distribution, is as a form of resource curse. It would mean that people at the top of society are less needful of ordinary citizens for tax income, less needful of an educated citizenry, and less needful of a military recruited from citizens.

You could imagine that, at some point, even though this still feels a bit science fiction, it may not be far off historically, many security needs are outsourced to drones, robots, or other systems. In that world, leaders no longer need a loyal citizenry staffing the core elements of security forces. All of that would be a structural boon to people at the top of society.

Born: I think you’re touching on two different pieces here, and I’d love to unpack them. The first is the question about dignity. Even if we had universal basic capital, compute, income, whatever you want to call it, that question feels more solvable to me. I used to do scenario planning years ago, mostly for the intelligence community. I was in the private sector before coming into philanthropy, and we would do future and forecasting work to think about what the world might look like in 2050 and what the national security implications could be if Sino-Russian relations evolved in a particular way.

We did something similar recently for the democracy field and had to work hard to think about scenarios in 2050 that could actually go well, given all the different trends, with AI being one among many, alongside declining birth rates, climate change, and growing gender gaps. We explored a lot of terrain, and the way we were able to arrive at dignity in an AI-fueled world was by getting back to service in one’s community. That was essentially the only path we could identify that involved both heavy reliance on AI and some level of dignity. That felt like something you could at least theoretically imagine.

The challenge on the power side you’re talking about, the resource curse and the fact that we might not need a middle class anymore, is much harder. At a time when we have figures like Bezos earning whatever it is, a million dollars an hour, we don’t quite know what to do about the floor and how to ensure everyone is taken care of. We do have ideas about how to handle the ceiling and how to prevent people from accruing so much wealth and power that it becomes impossible for average citizens to have economic, and therefore political, agency. But that again comes down to political will.

I sometimes like to unpack these questions by asking where we genuinely don’t know what to do, and where we actually do know what would need to be done but haven’t yet found the political will to do it.

Mounk: That’s a really helpful distinction. We could spend a lot more time talking about this subject, but I want to make sure we cover one more area you teased earlier, which is the geopolitical context. Once again, there are so many facets to this. One is great power competition. We already see that the prospect of increasingly powerful AI, and the kind of power this would give states, is leading to competition between China and the U.S., which might make cooperation on other dimensions more difficult.

There’s also the element of military technology. How is the nature of the international system transformed if you can mass-produce drones that can attack another country and civilians in a very significant way? What happens to internal security? Does it become much easier to assassinate people if you can send a tiny killer drone to kill a politician while they’re giving a speech or something like that? Those are questions about the geopolitical implications of AI itself.

One thing that’s really striking when you read some of the more optimistic or pessimistic accounts, from people who think AI is going to grow incredibly powerful very quickly but are deeply worried about alignment, is the assumption that the current era is temporary. They imagine that the period in which OpenAI, Anthropic, and Google have cutting-edge AI models developed inside private companies with relatively minimal security apparatus is going to be over very soon.

At some point, the national security state may step in and say that the implications for national security are so significant that this will start to look more like the Manhattan Project at Los Alamos, behind barbed wire and security clearances, rather than a set of private labs tinkering independently. How are you thinking about these different elements of the geopolitical context of AI?

Born: It’s a complicated space, particularly given that this is the first really major technology that has been developed in the private sector rather than by government. Because there’s such a strong narrative around an AI arms race with China, and concerns about losing economic or military dominance, that drives a story many people find very compelling: that we can’t regulate these technologies lest we end up at a disadvantage to an authoritarian state, leaving democracies permanently behind.

Here, you’ve seen a lot of Track II dialogues with China, mostly in the safety category. Nobody wants anyone to be able to develop a bioweapon or something like that. But there’s much less collaboration beyond that, as countries try to figure out who’s going to come out dominant.

What troubles me most about this conversation is that the risk of being outcompeted by China is both real and a very convenient narrative that serves company interests, because it can be used to justify no regulation at all. I’m not naïve about this. We don’t want bad regulation. But we do need some guardrails in place to ensure that these technologies benefit self-government and democratic rule.

I believe it’s possible to balance those goals. There’s good data suggesting this. I remember a lot of discussion about GDPR and claims that it would completely crush tech companies in Europe. From what I’ve seen, European profits, at least for Google, basically doubled between the introduction of GDPR and today. There has to be a way to put sensible regulations in place that protect democratic values while still competing in this arms race with China.

Mounk: This is a great segue to the next section of the conversation. You have all of these concerns we’ve talked about. We’ve also discussed concerns about existential risk, which I’ve covered extensively on the podcast with other people like Nate Soares, so we can skip that for the moment.

There’s a huge panoply of opportunities and a huge panoply of risks and concerns. How do you think about the public policy response to all of this? Is this really ten separate questions that require ten completely separate answers, or is there some emerging set of schools of thought about the general approach we should take to channeling and regulating this overall space?

Born: I do think we will need some distinct or bespoke interventions in each of those concentric circles. More importantly, there are some foundational, cross-cutting interventions that would be really helpful. I would say there are probably fifteen or twenty that are being legitimately discussed among informed populations, including universal basic income, trade restrictions, and issues around data center placement.

In our work in philanthropy, we don’t take policy positions. It would be ironic to be very supportive of democracy and then come in heavy-handed with the idea that we have all the answers. I believe in a democratic conversation around these questions. That said, there does seem to be a real consensus, at least around three cross-cutting needs: transparency, privacy, and restrictions on government use of these technologies.

There’s a lot of conversation about the kinds of transparency we need to enable democratic accountability and visibility. I think about transparency along the policy stack. If you think about data and infrastructure and how that’s being built out, then compute, and then move to data collection, questions arise about what data models are being trained on and what kinds of bias and copyright protections are in place. Then you move to model development and finally to deployment, looking at how models are actually being used and how they’re impacting people. There’s a need for transparency at every layer of the stack.

A good example comes from infrastructure placement. In Oregon, people were trying to understand how much water Google was using. If I recall correctly, they were a couple of years into a drought and had to file suit against the government to get access to data showing that Google was using something like 30% of the water in that county. That points to a real need for basic transparency laws. We need that information to hold companies accountable.

Privacy law is another area we could spend much more time on. I sometimes tell friends that in the United States there are roughly a thousand licensed data brokers, each holding about 1,500 data points on any given person. One striking figure is that every second, something like 1.7 megabytes of data, enough to fill an 800-page book, are collected about us. I had to do a lot of research to understand what that even looks like. The collection itself isn’t necessarily the problem. The issue is how that data is used to make decisions and inferences, and the risks of surveillance, as we’ve seen in China, are significant.

Then there are restrictions on government use. There’s a fair amount of consensus that while there may be areas where we want to be careful about bias, there are others where these technologies simply shouldn’t be allowed in a democracy. Mass surveillance, predictive policing, and social scoring fall into that category. These are areas where there appears to be broad agreement among the public, even if that doesn’t necessarily translate into policy right away.

In the rest of this conversation, Yascha and Kelly discuss the existential risks of AI, how governments should respond, and the role partisan politics plays in responses to new technology. This part of the conversation is reserved for paying subscribers…
