Persuasion
The Good Fight
Luis Garicano on the Economics of Artificial Intelligence

Yascha Mounk and Luis Garicano discuss how AI will reshape labor markets, productivity, and economic growth.

Luis Garicano is Professor of Public Policy at the London School of Economics.

In this week’s conversation, Yascha Mounk and Luis Garicano discuss the economic magnitude of AI’s transformative potential, whether artificial intelligence complements or replaces human workers, and why Silicon Valley predictions about automation consistently miss the mark.

This transcript has been condensed and lightly edited for clarity.


Yascha Mounk: There are many things I would love to talk to you about, but the topic I have been thinking about a lot is artificial intelligence. I have had conversations on this podcast about the technology itself with people like Geoffrey Hinton. I have discussed the dimension of existential risk with people like one of the co-authors of If Anybody Builds It, Everyone Dies. I have also thought about some of the broader public policy angles.

However, I have not yet had a conversation specifically about the economics of artificial intelligence. It would be really interesting to try to get a handle on those questions. We will focus particularly on the labor market, but before we get there: what, in general, do you expect the impact of AI to be? Is it going to be major, middling, or minor? Is it going to lead to the vast economic growth some are predicting, or is it going to really decimate the number of jobs out there for humans? Is this going to be an economically revolutionary time, or is it just one of many developments that are interesting but ultimately not that consequential?

Luis Garicano: I don’t have a crystal ball—anticipating things is always hard. But let me give you my best take based on what we see. It is clear that a lot of knowledge work could be automated, even if the technology stopped tomorrow. It is already very clear that routine tasks, and tasks that have to do with diagnosis, writing, crafting documents, and doing research, are tasks AI is already doing very well. Its coding work is really spectacular. In terms of whether it is going to be big, I think it is going to be huge. It is very likely to be as big a revolution as the industrial revolution, except that instead of automating physical work, it automates cognitive work. Everything points to a large impact, and also an accelerating one.

There were people who were doubting, people who were wondering whether AI would be a big deal or not. I don’t think any of those people could still be doubting, given what we have observed in the last six or eight weeks. The explosion of new models, the way they work—Claude Code is really taking the world by storm. Everybody has noticed that software firms’ valuations are plummeting in the stock market, showing that people believe many functions, many verticals, and many software products built around one particular use case can be replaced by AI.

So yes, a big deal, and in many segments. On the question of growth: yes, if this is as big a deal as it appears, we will see big productivity growth and an acceleration—though not the kind of growth that many people in Silicon Valley predict, because most economists think in terms of O-rings and bottlenecks and weak links. Meaning: you can invent as many compounds to solve cancers as you want, but if you need to go through years of clinical trials and regulatory approvals, that is not going to suddenly accelerate massively. Those weak links will constrain growth everywhere.

On the question of labor: the evidence so far is that AI is complementing more than replacing. Consider the three areas where we expected the largest impacts. Translators: everybody thought translators were going to be decimated—translation seems like a solved problem—and yet the amount of translation work hasn’t dropped according to world labor statistics. Customer service agents: some people get let go, some get rehired to do different jobs; again, the BLS doesn’t see much. Even computer programmers: we are not seeing big drops. There were a couple of papers earlier in the year. Erik Brynjolfsson has a paper with co-authors called Canaries in the Coal Mine which was starting to see drops in more exposed segments for more junior employees, and we do see a bit of that. But there is a lot of discussion about whether that has to do with COVID and so on. For employment at the moment, it looks like AI assists more than it replaces.

It is clear that AI can do many tasks. My main quarrel with the Silicon Valley interpretation of things is the belief that if a machine can do a job’s most automatable tasks, then the job is gone. Jobs are more than their most automatable tasks—a radiologist, for instance, spends only 30% of his time looking at scans. The job of a radiologist is much more than just diagnosing scans.

Mounk: I was in a meeting with Sam Altman in, I believe, 2018—I barely knew who he was at that time. I remember him pointing outside the window of a hotel in Silicon Valley, saying that in three or five years there were going to be robots building homes there. None of that has materialized. There is a very real tendency of people in Silicon Valley not just to overpromise on the technology, but to underestimate the obstacles to real-world adoption of technology. Those obstacles are particularly evident, as you recently pointed out, in something like house construction, where the constraints are not actually the inability to build homes—we know how to build homes. It is regulatory approval, zoning laws, concerns about whether the nature of a neighborhood is going to change, and all of those kinds of things.

Garicano: Two points about the comments you made. One is about the Silicon Valley position. I am very surprised that they are not just hyping the technology—which I understand, because you want to sell enterprise subscriptions—but they are also hyping the risks of the technology, and all the time threatening people with extinction, saying AI will take all their jobs. I don’t see the point of this tactic. I can see that if you want to justify the valuations, they need to say that all these things are incredibly transformative—and they are transformative.

The other day, Mustafa Suleyman—the ex-DeepMind co-founder and Microsoft AI head—was saying to the FT that they are going to automate all white-collar work in 18 months. I was joking: does anyone really believe that Microsoft will actually get Outlook or Word to work properly in 18 months? I don’t think they can fix their two pieces of terrible software in 18 months. We all hate Outlook—we’ve hated it for 15 years, and I’d bet we’ll hate it in 18 months. So they’re talking about automating complete, complex jobs, and they cannot fix their own software. That’s just completely ridiculous.

Mounk: Before we dive into the substance of this, I would love for you to help us establish the premise you’re operating on. A lot of my listeners are tech-forward, and a lot are not. I still find many people in conversation who experimented with ChatGPT when it came out three or so years ago and go back to it every now and again—perhaps using it instead of Google to search for certain things, or for a translation need, or for very specific tasks. They are still convinced that it hallucinates a lot, and they feel the limitations of what it can do are very strong.

Part of that, I think, is that the most commercially used ChatGPT products are not very good compared to some of the competitors now—in part because they route your requests sometimes to a really powerful model and sometimes to a not very powerful model at all. Part of it is that a lot of people use free versions of these AI tools, which are much less powerful than the ones for which you need to pay at least $20 a month. Part of it is that probably only a fraction of people who listen to this podcast have used tools like Claude Code.




So, just to motivate what we’re talking about: when you say there has been tremendous progress over the last few months, and more broadly over the last few years, what are these tools able to do today? How are people using them in ways that are so different from what you might expect if you’re just using the free tier of ChatGPT?

Garicano: Let me give you a Claude Code example and a deep research example.

The Claude Code example is the following. What is interesting is that the machine can talk to you and it can deploy tools—it can put Python tools to work. Let me explain this in a very clear way. I did a paper: I was a member of the European Parliament, and after returning to academia, I wanted to do some research on how narratives work in the European Parliament—I wanted to show there are no trade-offs in the narratives. What I did was collect 46,000 speeches, all the speeches, downloaded them, put them in a spreadsheet. Each speech goes to ChatGPT through an API—which means it goes through a special pipe—gets processed, comes back into a spreadsheet, gets classified in certain ways, and then we analyze that classification with statistical tools. That took six months. It is a lot of work: getting each speech, sending it, bringing it back, and so on. I had done this for climate.

I then decided to use Claude Code to do all of that work—six months of work—for the topic of AI. How is the discourse in the Parliament evolving on AI? I told Claude Code—in text, no programming—here is my directory, where I had all these files, and I told it to write the entire same pipeline: get the speech, send it, classify it, analyze it, but instead of for climate, as in the original files, do it for AI. There are many Python programs involved, multiple programs I had to run. Six months of work. Six to ten hours later, there was a complete analysis by Claude Code—all the directories, all the tables, every single figure, from start to end. The difference is that you talk to it, but it can deploy all these tools, do all these things, go over the web, run code.
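The pipeline Garicano describes—send each speech to a language model, collect the label it returns, then aggregate the labels for statistical analysis—can be sketched in a few lines of Python. This is a hypothetical illustration, not his actual code: `classify_with_llm` is a stub standing in for the real API call (which in practice would go through something like the OpenAI or Anthropic SDK), and the keyword rules and labels are invented for the example.

```python
# Minimal sketch of a speech-classification pipeline: classify each
# speech, then tally the narrative labels for downstream analysis.
# classify_with_llm is a stand-in for a real model API call; here it
# is stubbed with simple keyword rules so the sketch is runnable.

from collections import Counter

def classify_with_llm(speech: str) -> str:
    """Stub for the model call: label a speech's stance on AI."""
    text = speech.lower()
    if "opportunity" in text or "growth" in text:
        return "pro-innovation"
    if "risk" in text or "danger" in text:
        return "pro-regulation"
    return "neutral"

def run_pipeline(speeches: list[str]) -> Counter:
    """Classify every speech and tally the labels."""
    return Counter(classify_with_llm(s) for s in speeches)

speeches = [
    "AI is a great opportunity for European growth.",
    "We must address the risks AI poses to citizens.",
    "The committee met on Tuesday.",
]
print(run_pipeline(speeches))
```

In the real workflow each call goes over the network, results are cached back into a spreadsheet, and the tallies feed statistical tools; the structure, though, is exactly this loop of classify-then-aggregate.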

The second thing I would tell your listeners about, which they would very much enjoy using if they are not already, is the deep research tools. On the highest-end research frontier of these models, you ask it to research a question—for example, you might say: populism has been growing, and there are two explanations, cultural and economic. I want an in-depth literature review of all the evidence comparing those theories. I want you to spend a lot of time collecting hundreds of papers, classifying them, telling me the prevalence of the evidence, and writing a thorough research report. This is now better than what a research assistant could do over several months.

Those are two examples of what is possible at the higher end. Why are they useful for the world? Think of a lawyer. A transactions lawyer is essentially comparing a situation to existing precedents and existing case law—drafting, for example, an intellectual property agreement or a contract to buy a house. They go and find similar contracts, upload the relevant knowledge, and convert it into a new contract. If a law firm incorporates all its contracts into Claude Code and asks the system to use that knowledge to automate contract drafting, compliance, and verification, it is definitely able, right now and without any question, to handle that complete task.

Mounk: I started to use Claude Code about a month and a half ago. I have basically no coding background—I had a couple of group lessons of C++ in middle school, did a little bit of programming in statistical software in graduate school, but very limited. I was mostly a political theorist. I then took a few weeks of CS50, the famous online computer science course on edX—a very good course, and that was ten years ago. If you had set me an entry-level coding task, like programming a number guessing game, I would not have been able to do it. Now, with this tool, I have been able to program five different things that are of concrete use to me. It is just astonishing what it can do.

More broadly, some of the pitfalls that AI used to have until a few years ago are not really there anymore. When ChatGPT launched on GPT-3.5, it didn’t have an extended thinking mode. It’s as though I asked you a challenging question as part of a game show where, if you don’t start answering within one second, or if you hesitate more than one second between any two words, you lose—that answer is not going to be very coherent. Now the systems, at the higher tier, talk to themselves and walk you through the process by which they attempt an answer. They try one answer, check whether it makes sense, and then say, no, actually I made a mistake, I should do this. By the time they give you output, they have thought it through in a much, much deeper way.

On the problem of hallucinations: I wrote a post on Substack about asking Claude to write a publishable paper of political theory. A number of senior colleagues in the field wrote to me after I published it saying it would absolutely have been published in a top journal if it had been submitted. I looked through some of the references—not every single one—and it was not hallucinating. It now knows, by and large, how to ensure that something actually exists, and it flags when it is uncertain. It told me: I have put in the page numbers for the canonical translations of Tocqueville—I’m not sure about those, please go and double-check them, I don’t have access to that full text. If I upload the PDF of that book, it will do it for me. So it knows what it knows and it knows what it doesn’t know. A lot of those problems have been fixed.

Now we go into the realm of economics. I don’t know whether we have reached superintelligence as defined by Dario Amodei—where we suddenly have the equivalent of a whole country of geniuses. But we certainly have the equivalent of a whole country of middle-class professionals. Suddenly the number of people who can competently draft a legal contract, and do so in ten seconds for very little money, is vastly larger than it used to be. So what does that do—first for growth? If economic growth was in some ways constrained by human capital, constrained by the number of well-trained people with access to a lot of knowledge able to carry out that work, that should mean we are going to see a real increase in economic growth. Or is it more complicated than that?

Garicano: The first-order approximation is that you have an increase in productivity and an increase in growth—that is a reasonable place to start. There are two or three caveats that I think are important in trying to figure out how big that increase is.

The first is organizations. The organization of work is intensely human. As you were hinting, drawing on my recent post on London housing: 23 out of 25 boroughs of London recorded zero housing starts in 2025, and that has nothing to do with technology. Giving them better technology is not going to solve the problems with the neighbors, with the NIMBYs, with the Greens, with the land regulation, the lawyers, and all the other things that stop construction that we already know about. So the first caveat is organizations and all-too-human obstacles, which mean that even when the technology is there, many other factors have to collaborate.

There are entire sectors which have Baumol characteristics. Baumol had an observation in the 1960s—and maybe you have discussed this with your listeners—that a string quartet would still take one hour to play a Mozart piece, the same exact hour as it would have taken 200 years ago: four people, one hour, no productivity gain.

Mounk: You can tell this is a very old point, because nowadays no economist would talk about string quartets.

Garicano: This observation holds for a very large share of the economy. For hairdressers, cooks—technology doesn’t play any role. It’s not just that there are bottlenecks, but that productivity growth is very small because there is really no actual technology and no actual AI involved.

What is interesting is that in the sector of the economy that enjoys technological change, as prices drop, it is perfectly possible—and we will talk about demand elasticity in a moment—that people reach satiation and that sector becomes smaller. Think of agriculture: it became technologically fantastic, but it became smaller and smaller as it grew more productive, because people’s stomachs didn’t grow. The number of workers employed went down. What that means is that the sector with the technological expansion reduces its size, and the other sector—the one with the violinists—expands its share. As a result, the weighted average of growth depends not just on how much the productive sector is growing, but on the fact that the sector that is growing may itself be getting smaller.

Mounk: One way of thinking about this is that everything that can be automated suddenly becomes plentiful. To that extent, it might not fully show up in GDP figures, but it does fundamentally remake the world. When I think about the agriculture case: as a result of the successful mechanization of agriculture, that has become a much smaller part of the economy and we are paying vastly less for food than we used to. You will understand the technical details better than I do, but that sort of underplays the degree of that change in the way we track GDP.

What it does mean is that whereas for most of human history, even people in affluent countries—if you weren’t at the very top of the hierarchy—were deeply constrained in how much food they could consume, were malnourished as a result, and died earlier as a result, nowadays, if you are anywhere outside the bottom 20% of a medium-to-affluent country, food is not your primary expense. It is a significant expense if you like nice food and shop for nice things, but if all you want is to feed yourself on ramen and a few supplements in such a way that you avoid malnutrition, that is going to be a tiny part of your budget. That is a fundamental positive transformation of human life, even if it doesn’t fully show up in GDP figures.

Garicano: Economists like to talk about welfare as the sum of consumer and producer surplus. In this case, the consumer is enjoying the biggest gain. A lot of what happens with AI is that the gains are going to consumers and not showing up in GDP figures.

Let me give you an example. We have a dishwasher that is broken. We take a picture, upload it to ChatGPT and ask what is going on. It says this part is stuck and you should just remove it. We remove it. Our welfare has gone up—we are happier, we solved the problem. Now, there is a transaction that would have taken place—some person coming to our house to fix the dishwasher—that didn’t happen. The GDP would have been higher if that person had come and we had paid him. But our welfare increased. If we can diagnose our own illnesses, if we can assess whether our diet is good or bad without going to a dietitian, if we can do our own contracts—all of those things increase our welfare but do not show up in GDP. In fact, some of them could reduce GDP.

I was talking to a CEO from China who told me he thought a lot of the gains were being “smoked in the corridor.” I asked what he meant. He said he observed all his IT people becoming more productive—solving problems faster—but that it wasn’t showing up in better numbers at the end of the month. Each person in IT was more productive, but they were going home earlier or playing video games. Those are real gains that need not increase GDP.

The other thing I would mention is the difference between the short and the long run. Imagine there are two sectors, and sector A gets fully automated—let’s say lawyers, even if that example is imperfect because lawyers have a lot of regulatory protection and there are many contexts where you are required to use one. Imagine we no longer need any lawyers and we solve our legal problems ourselves. All the people in sector A that gets automated need to move to sector B. All the demand that is now consumer surplus—money we no longer need to spend on legal problems—can go and be spent on the other sector. But that reallocation doesn’t happen instantly. The capital has to be moved, the labor has to be relocated, the demand has to be redirected. There is a moment when GDP could be dropping, because we are not consuming legal services or dishwasher repairs, and the transition to new consumption patterns hasn’t yet occurred. In the meantime, capital is being written down, labor is being relocated, and there may not be sufficient demand either. All of that transition could definitely look nothing like smooth, continuous growth.

Mounk: I’m trying to figure out what the aggregate effect of these changes might be. On one hand, agriculture—historically a huge part of human activity—mostly gets automated, the number of people working in agriculture is now astonishingly low, output goes up a lot, and as a result prices go down a lot. Most of the consumer surplus is captured by consumers, and so it is a very good thing.

What I don’t fully understand is what actually provides the basis of ordinary people’s bargaining power. In the agricultural world, the answer is that the production of agricultural goods becomes very cheap, but it turns out that humans are necessary for running all kinds of other elements of the economy. There is a strong demand for human labor, and that is what allows people to continue to consume.

Now, if we get to a world—and this still sounds a little like science fiction, but I am trying to imagine the scenario—where AI can fully run agriculture without any humans, and can fully run the systems needed to manage agriculture, and can fully run the law firms needed to efficiently allocate capital to agriculture and ensure the most efficient firms are tilling the most land, it may be that there are still elements of a human economy where human work is needed. It may be that humans still prefer human teachers, or that humans continue to be required in medical decisions—perhaps because we don’t trust AI systems to make them, or perhaps because of regulatory obstacles to fully automating those decisions.

But if all of the underlying productive processes that actually generate material wealth no longer require humans, is there a kind of perpetual motion in the circular economy of humans that is enough to sustain affluence on its own? Or does there need to be some relation back to material production for the whole construct to sustain itself? If all of the demand for human labor is produced by the fact that it is extremely expensive to look after old people, by regulations that prevent us from building houses, by the willingness of capital owners to pay a lot for housing because they need somewhere to live, and by some people continuing to be employed in human-facing roles because of regulation—is that actually enough to sustain affluence for human workers if all of the genuinely productive processes can be done by non-human workers?

Garicano: Let me break this down into a few parts. First, the satiation case we are discussing—where the sector gets smaller as it becomes more efficient—doesn’t necessarily have to be the case. In fact, in many sectors, as technology gets better and things become more efficient, the sector actually grows in size. This is called the Jevons effect, after William Stanley Jevons, an English economist who observed that machines using coal were getting more and more efficient and yet were consuming more coal rather than less. Why? Because as they got better, they were being used for so many more things that total coal consumption was going up. In many sectors—think about health, think about energy—as things get more and more efficient, it is unlikely that the sector as a whole will shrink. In fact, it is more likely that it could grow in size and demand more humans. The sectors most likely to grow when prices go down—those with the most elastic demand—would be things like health and energy, just to give two simple examples.

The second important point is the idea of complements, which you were clearly hinting at in your question. There are many situations where a human is needed at a bottleneck. Even if the first 99 tasks can be automated, if the 100th task requires a human, the 99 automated tasks are abundant but the scarcity is still the human—and the human is going to capture the rent and the labor income.

Mounk: That depends on the human being scarce. If that task requires a very high level of qualification and you need millions of humans to do it because they are so productive, then a lot of people are going to be in relatively decent employment. But if you only need seven people to do it and they have to be excellent, then those seven people are going to capture huge rents—some of that economic gain is going to go to them, but only to them.

Consider that something like 5% of the male workforce globally is employed as drivers. The rent from the need for human drivers is very broadly distributed—each of those drivers is probably not very affluent, but the rent for that activity is widely shared. Now say that ten people have to supervise all of the self-driving cars, and they have to be so qualified that very few people are able to do it. Perhaps they capture a lot of that rent, but that is only going to be ten people who get that money. Or say it needs a thousand people, but a million people are able to do that job—in that case, the wage for those thousand people is going to be really low, because any one of them can be fired and there are 999,000 waiting outside the door willing to take their position. So it depends a lot on those kinds of details.

Garicano: I am writing a book on this point—it is called Messy Jobs. The argument of Messy Jobs is that there is a big difference between a task and a job.

Geoffrey Hinton, whom you have had on your podcast, is famous for having said in 2016 that nobody should study radiology, because reading scans was just pattern recognition—and of course an expert system trained on hundreds of millions of breast cancer scans was going to be better at detecting those cancers. The truth of the matter is that demand for radiologists has never been higher. Their salaries are growing, their numbers are growing, and it is the third highest-paying medical profession in the United States. Why? Because the task is very different from the job. The technologist imagines a radiologist just looking at scans. But only 30% of a radiologist’s time is spent looking at scans—they have to develop the diagnosis plan, talk to colleagues, talk to patients, and do many other things.

The first crucial obstacle to your dystopia is that automating parts of jobs—tasks—is not automating the job. I invite all your listeners to think of what they did today and consider which of those things could be replaced by a machine. I went to a workshop, had a job market seminar, had a meeting with colleagues, had students walking in, worked on a paper—and if you think through how many of those tasks you could replace with a machine, you will discover that many of them can’t be. The task we are doing right now—having a human conversation about something—can’t be. A job and a task are really very different things. Many aspects of a job can change without the whole bundle disappearing. It will get re-bundled, it will look different, but it will not go away.

There are specific reasons for that. One is the need to direct the AI. You cannot just let it do its thing. The AI is sycophantic—it tends to agree with what you say. If you want to direct it to the left, it says yes, left is great. If you want to direct it to the right, it says yes, right is the best, you’re the smartest. What you tell it is going to matter, and that means somebody is going to have to exercise judgment. Crucially, this is not a problem that is solved by AI getting smarter and smarter. Think of managing a family—everything you do in the morning with the kids, moving around, deciding. A lot of that is not automatable because a lot of the knowledge of what is going on is tacit. It is in your head. No machine can tell you whether the kid has to wear these boots or whether today is the day they need this or that particular thing. Authority is inherently human. Making difficult decisions is inherently human.

Being the consultant who does PowerPoint presentations—yes, that can be automated. But does the consultant only do PowerPoints, or does he or she go to the company, listen to the workers, figure out where the problems are, and determine how to improve things? A lot of that is tacit. So I would push back against the idea that entire jobs are going to be done autonomously.

You are right that self-driving cars passed the autonomy threshold—the cars can essentially drive themselves, which means the supply of drivers suddenly becomes infinite and the wage floor collapses. That is a good example, but it is an example where the task is very clearly defined and very repetitive. Is that the majority of jobs? The claim of Messy Jobs is that if you think about demand elasticity, many sectors will grow; if you think about complementarities, there are going to be crucial scarcities that humans can exploit—and there will be many such scarcities. This is without even getting to the question of demand for human services, about which I am actually not sure. I am not sure that people, when they are old, will necessarily want a person bossing them around asking “are we well today?”—I might prefer a robot taking care of me. So it is not obvious that human-ness in itself is always the preferred option.

Mounk: Including a lot of the more intimate tasks involved in elder care—would you rather have another human wipe your ass, or would you rather have a machine do it? You certainly want some human company. Once your ass is wiped, you would love to have a conversation with a human.

I have a middle position in these debates, and I want to push back on a couple of the things you said—though I am not coming from a maximalist position. I agree that a lot of the predictions that all jobs are going to be gone in two years are testimony to people who haven’t thought carefully about politics or the real world. But some of the examples you gave leave me a little less convinced.

To give one example: can you outsource the management of a family to AI? Part of family life is that you are negotiating between human beings, trying to come up with a plan together. Even if the AI can make a plan that is Pareto superior to whatever plan you would have arrived at, part of what it is to be a family is to make those plans together—to decide what you are doing today, and so on. On an emotional level, you might not be able to outsource those things. On a purely planning level, though, I think AI absolutely could handle the tasks you described—and in fact, many feminists would say that this is precisely what they have long been asking for, because it is often women who do the emotional labor and the second shift: keeping track of the fact that Timmy has to go to the dentist tomorrow and Tammy has to go to ballet the day after, and whether the dress she needs for ballet has already been washed. It would require an invasion of privacy—an AI that is party to all of these conversations and immediately notes down when Tammy says, don’t forget I need X or Y for my ballet practice next week. But can AI do all of those things? Absolutely. Could it, in fact, save some marriages in the process? Probably yes.

Garicano: Here is why I disagree. There are information processing tasks—and you are right that a lot of information processing tasks can be automated. We synthesize information, put it in a form that can be processed, and make a decision. But there are other tasks that have nothing to do with information processing. Your wife or your kid is upset—someone needs to talk to the kid, someone needs to understand why he is upset, and someone needs to decide: yes, the optimal plan from the perspective of the family was that you couldn’t stay home, but I listened to you and I decided that you are staying.

A lot of it is not information processing. You understand your kids. You understand what a look means—from your wife, from somebody else. When a look means yes, I will do it. When they say yes but in fact they mean no. There is a lot of tacit, local knowledge that goes into management, into family life, and into business. We are not just talking about politics or emotions—we are talking about interpersonal knowledge. You have known your wife for many years and you know when you can push and when she knows she can push. You might say the machine could know those things—I honestly don’t think it could.

Take the contractor: you know which electrician is reliable and which one played tricks on you last time. Can the AI know whether you can use some piece of leverage to get that electrician to show up on time? We are talking about a level of interpersonal and tacit knowledge that is extraordinary—and also, think about this: a lot of the tacit knowledge within jobs is knowledge that employees have that gives them power. They are not going to be happily sharing it with AI. You should know that my colleague so-and-so has this problem with the boss and never wants to work with him—that kind of thing is going to remain in the heads of humans.

So yes, information processing tasks can and will be automated. But a lot of what remains has to do not just with emotional and social skills, but with tacit knowledge and personal knowledge that the machine will probably never fully gain, because it cannot capture it.

Mounk: I have two different lines of questioning about this. The first is: if we move away from the extreme predictions, and if we recognize that advanced AI tools are clearly capable of doing a lot of the tasks involved in knowledge production, that presumably means some jobs are going to go away. The idea that AI is incompetent, that it can’t do any of those things, that it’s all hype and a bubble—we agree that is wrong. But I think we also agree, at the other end, that real-world frictions are very real. Jobs are messy because the world is messy, and therefore the idea that the moment Claude beats doctors on a bunch of stylized medical questions—which it more or less does now—we should expect there to be no doctors tomorrow, is naive and doesn’t understand the real world.

But what happens in the middle space? What happens if the demand for white-collar work is suddenly reduced by 25% or perhaps 30%? It doesn’t have to happen between today and tomorrow—it happens over the course of 10 or 20 years. You just see a continuing, gradual reduction in the demand for that kind of high-skilled work: as existing firms automate work away, as firms that are too stubborn or unable to do that are outcompeted by new entrants that are AI-native—in the same way that in many areas of the economy, it took internet-native companies to outcompete old ones before you really saw productivity gains come online.

That is going to be a significant process, and it is not going to happen all at once. In a way, that raises an equally troubling possibility: that the job market is going to slowly slump for an extended period, and that we face the famous, somewhat apocryphal, boiling frog scenario. If everybody lost their job over the course of two months, perhaps we would all organize and demand some way of being made whole. But if this shows up as decades in which the bargaining power of ordinary people diminishes gradually—because the demand for human labor just continues to fall in a messy, haphazard way—that could still be an incredibly painful period ahead for ordinary people.

Garicano: You are more or less describing my scenario of the transition between sector A and sector B. We know this was happening during the Industrial Revolution, in what is called Engels’ Pause—roughly between 1790 and 1840—when wages were stagnating or dropping and workers were in trouble. Then GDP roughly doubled over the following decades to 1900. So yes, it could happen that over a period of time the transition is hard.

I would think instead about the combination of factors working in the other direction. First, there are sectors where nothing is going to happen because they are outside the reach of this technology entirely. Second, in the sectors with elastic demand, there will be enormous growth—think of medical scanning. If AI handles all the routine scans, perhaps we would all be getting whole-body scans every year or every six months. The demand is extremely elastic and the sector could grow much, much larger, with radiologists needed to oversee far more machines. I think this is true for many sectors. Third, within the sectors that are getting automated, there are still messy jobs—humans directing, judging, making decisions, setting direction. If you count all of this together, you don’t have a catastrophe. You have a transition that is more significant in some subsectors and less significant in others.

We will also discover entirely new sectors—who would have predicted TikTokers and Instagrammers? Add up the sectors where nothing is happening, from public sector jobs to arts, music, barbers, hairdressers, cooks, and pet care, which alone accounts for roughly 1% of the U.S. population and is of course entirely unaffected by AI. Then add the sectors with very elastic demand that are going to grow, like health and energy. Then add the messy jobs where, even though some tasks are being automated, the jobs continue—from managers to entrepreneurs. Then add the complementarities. There is also David Autor’s idea of the new middle class: think of a nurse who is empowered with a genius in a box, who can now diagnose really complicated illnesses, hold the patient’s hand, do all the other parts of the job, and solve more problems than ever before. Of course, as you said, then maybe everybody wants to be a nurse, and we have to think about the supply of nursing and other skilled trades. But when you add all of this up, you move away from the feeling that there is a cataclysmic change ahead, and toward the view that yes, this is automation, and yes, it is going to be a bigger revolution than what we have seen in the last 50 or 60 years, perhaps more similar to the Industrial Revolution—but no, it is not going to cause widespread, long-term unemployment. We are going to see new jobs we wouldn’t even think of today, from TikTokers and Instagrammers to dog walkers. Who would have told you that you were going to be a podcaster?

Mounk: I don’t know if the vision of the future is that humans are going to be fine because we’re still going to be TikTokers and Instagrammers and dog walkers.

Garicano: No, I was not saying that. I was saying that the pet care sector is 1% of the population. It’s nurses and people who take care of the pets and all these other things.

Mounk: Let me ask you about the dog walkers. One interesting thing that has happened over the last ten years—which just shows how epistemically modest we should be about all of this—is that I remember all of the conversations about drivers losing their jobs, and how that was somehow linked in the conversation about populism to why the Midwest went for Trump. The proposed solution was that they should all learn to code. Now it turns out that AI is really good at coding. Meanwhile, because of a set of technical issues that ended up being harder to solve for a while—though mostly solved now—Waymo is very efficient and much safer than human drivers, yet there are still significant regulatory obstacles. The number of rides Waymo is offering is going up exponentially, but it is still a very small share of the market and most human drivers are still fine. This is going to take longer to play out than many people think.

But we are now in a world in which coders appear to be losing their jobs—though I understand the economic data on that is mixed—and in which knowledge workers are seemingly about to lose their jobs, while all of the manual trades are assumed to be safe. The plumbers are going to be fine. The dog walkers are going to be fine. Well, I watched, as many others did, the quite remarkable display by Chinese robots at the annual Chinese state television gala. The progress in their dexterity from a year ago to today is just astonishing. The ability to combine the manual dexterity of these machines with visual processing and understanding of the world is advancing very quickly as well.

I am personally waiting for the ChatGPT 3.5 moment in robotics. I don’t think it will take very long for there to be some consumer product that is actually usable—we are getting close to that. The applications in the industrial sector are likely to increase as well. Again, I don’t think that is going to happen tomorrow, and it will take time to be fully implemented in the economy.

But when we are talking about a timescale of decades—when you say that in 20 or 30 years, more and more knowledge work tasks are going to be automated because those skills can already be performed by AI, and perhaps it will take a long time for firms to reorganize and for new entrants to come in, but that is okay because perhaps we will all be in the pet care sector—well, that assumes that in 20 or 30 years we will still not have figured out household assistance robots. That if you are out at the office or doing whatever you do during the day, you cannot have a little robot walking your dog in your stead. Given the rate of progress of this technology, that seems to me like a pretty significant background assumption to be making.

Garicano: Physical AI—robotics, let’s say—is not that far off. What we have seen in the past is that capital is in what we call elastic supply: you can always invest more in capital, which means the rents on capital eventually get competed away and the robot gets sold at a competitive price. That means people can use robots for care. Remember, we have significant fertility problems and population growth problems when it comes to paying our pensions, and having robots could be a solution to all of that—it is like having more population growth.

In a world where those returns are competed away, we are back to consumer gains. The capital doesn’t earn extraordinary returns because there is an infinitely elastic supply of capital—more people can invest in making more robots. What is the scarce resource? It is going to be land, it is going to be energy, and it is going to be whatever human labor is still needed. That human labor could mean we work fewer hours, that we are able to enjoy more leisure, or that human labor is employed in a whole range of jobs which, as you rightly say, we cannot anticipate.

What we should not imagine is an economy that works without humans, because all value is generated for humans. What does the economy generate value for if nobody is buying the products? Value, by definition, is something that is worth more to humans than it costs to make. If there is no human who can buy things because they are all poor, there is no value. The way the economy works is that the return to capital gets pushed back down to the competitive return, and the rents get captured by the scarce resources—in this case, the complementary human labor that is still needed.

Nobody can anticipate what happens in 30 years. Both physical robotics and cognitive AI are going to represent a major revolution. I don’t think we should be thinking of this as an apocalypse. There are a lot of complementarities, a lot of scarcities that still favor human labor, and a lot of areas where this doesn’t really bind at all.

Mounk: Tell me a little bit about the state of the empirical literature. I understand that there is a real distinction between micro and macro studies—between studies that look at the extent to which particular tasks can be automated and studies that look at how the overall job picture has changed.

When I look at the fields I know a little bit, I worry that the absence of change so far is an indication of what is yet to come, rather than an indication that AI won’t have a big impact. You mentioned translation earlier. Another thing I have been thinking about is index-making in the publishing industry. In all of those fields, there has been basically no change in the economic flows—so far as I can tell, my next book is going to be translated by human translators. Well, perhaps they don’t actually do it and privately send it to Claude and capture the consumer surplus by going on a nice vacation while they pretend to be working on the book. But in terms of the actual economic flows, nothing has really changed, and I don’t know how long that is going to continue to be the case.

It is very sticky and very complicated to change those processes. Somebody needs to be willing to fire all the translators and deal with the backlash—the agent saying the author doesn’t like the idea of AI doing the work, the risk of a newspaper story, the possibility that customers will be upset. There are all kinds of reasons to be risk-averse about being the first mover to make that change.

What I will tell you is that one of the things I have built for myself with Claude Code is a personalized translation tool, because I publish my articles—including some podcast transcripts—not just in English but also in German and French. It is not just better than the off-the-shelf tools; at this point it is better than all but the very best translators I have worked with. The very best translators—particularly in France, for whom I am deeply grateful—I think are still better. But 90-plus percent of the professional translators I have dealt with, people who have translated famous books by famous authors, are now significantly outperformed by it.

For now, if economists tell me that translators haven’t lost their jobs and none of this has changed that much, I believe it—I can see that. But given that AI has existed in its current form for only about three years, and that for two of those three years it really wasn’t yet at the level it is reaching now, and that people have not yet integrated these tools sufficiently into their processes—I would say: come back to me in 15 years and let’s see whether those translators still have jobs in the way they do today.

Garicano: Nobody is predicting that translators will still exist in their current form indefinitely. I said “so far, so good”—but perhaps like the person falling past the window. Jobs do go away. Newspapers went digital, and there were lots of people working in printing presses, paper, and all the associated industries, and all of that was automated away.

Mounk: Including my grandfather, whose job it was, as a young man, to lay out the newspaper letter by letter. He helped to manage the printing side.

Garicano: This is human history all the time. On the question of empirical evidence: the evidence up to now is positive. When randomized controlled trials have been conducted—giving AI to workers in a controlled setting—the results are consistent. In customer support, the most junior agents achieved performance similar to more senior ones. In writing tasks, the worst writers achieved performance similar to the better writers. When it was given to software programmers across three different tasks, the less skilled programmers were brought closer to the level of the better ones. Micro studies seem to be finding complementarities rather than substitution, consistently.

At the aggregate level, there is much more confusion and much less clarity. We don’t see big drops in demand. There are some canaries—as I mentioned from that paper, Canaries in the Coal Mine—some preliminary evidence that there may be drops in junior roles. When we think about the research task, the PowerPoint task, the Excel task—those are the obvious things to automate—we have to imagine that junior lawyers, junior consultants, and junior investment bankers will not be recruited as much, because you can do a research task without a junior person. Yet it turns out the McKinsey class this year is bigger than before. They keep hiring. So far, so good.

I agree with you that this is not a forecast of the future—I don’t mean to say that because we haven’t seen much yet, we won’t. That is not the point. The point is that there are indications that complementarities are important, that people who use AI produce better work, and that substitution is still limited. It is hard not to think that tasks involving basic PowerPoint work and research are going to be fully automated at some point. But I agree—we should not try to make this a 15-year forecast.

In the rest of this conversation, Yascha and Luis share advice for young people at the start of their careers, why AI won’t kill off bullshit jobs, and whether companies run by AI would be more successful than those run by messy, emotional humans. This part of the conversation is reserved for paying subscribers…
