Richard Thaler is the Charles R. Walgreen Distinguished Service Professor of Behavioral Science and Economics at the University of Chicago Booth School of Business. He is the co-author, with Cass Sunstein, of Nudge: Improving Decisions about Health, Wealth, and Happiness, and is the 2017 recipient of the Nobel Memorial Prize in Economic Sciences.
In this week’s conversation, Yascha Mounk and Richard Thaler explore to what extent humans behave rationally, how nudge theory works, and whether we should outsource questions about life to ChatGPT.
This transcript has been condensed and lightly edited for clarity.
Yascha Mounk: Your big contribution is in the field of behavioral economics, and it comes against a background that is often invoked but is perhaps a lot less true today as a result of your work, of economics making a bunch of assumptions about people acting in rational ways. To those of my listeners who have not taken Econ 101 or who took it too long ago to have this present in their mind, what kind of assumptions are we talking about when we are saying that economics, at one point in time, assumed that people would behave in these rational ways?
Richard Thaler: I do not even think we have to add “at one point in time.” Economics, as traditionally done and more or less as still done, is a theory of maximizing agents. What distinguishes economics from other social science disciplines is precisely the assumption that the agents in the economy, consumers and workers and employers and government officials, all solve any problem by choosing the best possible outcome. There are some additional assumptions, made more or less for convenience, such as that people are selfish and do not care at all about others. That one is often made, but it is not at the heart of economics. And all of this, of course, is counterfactual.
Mounk: I think the point of some of these assumptions and the point also of more formal rational choice models, which often hinge on those assumptions, is sometimes insufficiently understood. This is not necessarily that economists say that people act in these ways all of the time, or that political scientists who have increasingly started to incorporate rational choice models into their work think that under all circumstances political leaders or individual voters or citizens act in a rational way. It is the hope, the idea, that by making some very simple assumptions about the world, you can explain a large percentage of it. With a few simple assumptions you do not get 100% of the way to the truth, but you might get 90–95% of the truth. It is a really good first approximation as to what could happen, how to understand the world.
Do you still find those assumptions useful in that kind of way? To what extent, under what kind of circumstances, should we deviate from that assumption in trying to understand the social world?
Thaler: Well, Milton Friedman in the 1950s famously argued that it did not matter if people were incapable of solving these problems as long as they behaved as if they were doing that. Those two words, “as if,” justified what economists were doing for a long time. The first paper I wrote in what is now called behavioral economics, in 1980, took on those two words and said, no, people do not behave as if they were maximizing. Friedman had a famous analogy. He said, suppose we look at a professional billiards player. The billiards player does not know trigonometry and physics and all the things that would be necessary to actually solve the problem, but acts as if he or she did.
My response to that is, well, in economics we are not limiting ourselves to professionals. Suppose we go into a pub and there are two guys playing pool. What are they going to do? They are probably going to aim at a ball that is closest to a pocket with an idea that they are taking the shot that is most likely to work. They often miss. They are not thinking more than one step ahead, whereas a professional would be thinking three shots ahead. So the model would just be wrong.
If we think about something like the model for people saving for retirement, that is enormously complicated. Somebody in their 40s is assumed to be calculating more or less accurately their future lifetime income, how much they would need to save to smooth over their lifetime, invest properly, and so forth and so on. This is a preposterous assumption.
But those models are still useful. I think they are useful as normative models, that is, models of the way you should behave. If you go to a financial advisor, that person’s job is to help you do that.
How much do you want to give to your heirs? How well do you want to live in retirement? There is software to do that. But the idea that people are doing that in their head is ridiculous. It is also leaving out self-control problems. In order to implement that plan, I cannot get tempted by some new sports car or a posh trip to Italy to eat some good food. So no, the models are useful for saying how agents that do not exist would solve the problem, but then it is an empirical question whether those models describe actual behavior.
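To see what that normative model demands, here is a minimal sketch of the lifetime-smoothing calculation, deliberately simplified: zero interest, a known lifespan, a constant salary, no bequests. All numbers are invented for illustration.

```python
# A toy version of the lifecycle savings problem that the normative model
# assumes people solve. Simplifying assumptions: zero interest and inflation,
# known retirement age and lifespan, constant salary, nothing left to heirs.

def smooth_consumption(age, death_age, retire_age, salary, wealth):
    """Constant annual consumption that exactly exhausts resources at death."""
    years_left = death_age - age
    working_years = max(retire_age - age, 0)
    lifetime_resources = wealth + salary * working_years
    return lifetime_resources / years_left

# A hypothetical 45-year-old with $100,000 saved, earning $80,000,
# retiring at 65 and planning to age 85:
c = smooth_consumption(age=45, death_age=85, retire_age=65,
                       salary=80_000, wealth=100_000)
print(f"Consume ${c:,.0f} a year; save ${80_000 - c:,.0f} while working.")
```

Even this toy version leaves out investment returns, uncertain lifespans, and the self-control problem Thaler mentions; the real problem is dramatically harder.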
Mounk: Some of this now seems obvious thanks to your work and the work of others in the field of behavioral economics, but it must not have felt at all obvious when you set out on doing this work. What first set you off on that? How did you look at this line saying people act as if they were rational agents in this kind of way and think that smells wrong? Was that a particular observation about the empirical world in a particular field of economics? Did you come in with a critical mindset about standard economics, or were you at first taken in by it? How did you get towards that avenue of research?
Thaler: I think it started when I was in graduate school, and I like to say I was the first economist who bothered to look out the window, possibly because I was bored with the lectures. It started with a list of funny behavior, dumb stuff people do, that I thought illustrated important points. There is a now well-known story about a dinner party I had, and there was some roast in the oven. Over some adult beverages I brought out a large bowl of cashew nuts, and people started nibbling at them with their cocktails. The nuts started to disappear, and I was worried we would ruin our appetites.
So I grabbed the bowl and hid it in the kitchen. This was a group of economists, and when I came back, everyone thanked me. Thank God you got rid of those cashews. We were going to eat them. It being a dinner of economists, the conversation became very boring and technical.
Mounk: We are supposed to be rational agents. How can taking away a choice make us better off? I think perhaps revealed preference here is truer than stated preference.
Thaler: Yeah, well, but it is an asset allocation problem. Maybe you would like to consume both. I think the consensus at the table was that our cashew-to-roast ratio was going to be incorrect. We have all had the experience of deciding after the fact that the alcohol consumption was greater than optimum. Another one of these early examples was about what economists call the sunk cost fallacy. The idea is that if you have paid for something, the amount you paid for it is gone and is not recoverable, and therefore you should ignore it in future decisions.
Here is an example. You paid a significant amount of money to attend some concert, and that evening you get a text from a friend whose flight has been canceled while changing planes in Chicago, or wherever you live, and who says, I am free for dinner. Are you? Let us assume that on any other night of the year, given a choice between going to that concert and seeing this friend, whom you see only occasionally, you would say that is a no-brainer. But many people would make the mistake of saying, I paid all that money for the ticket, so I have to go.
So I had this list and I was not sure what to do with it. Then I discovered the work of the psychologists Daniel Kahneman and Amos Tversky. We are in the mid-1970s at this point. They had a big idea, which was that when dealing with complicated problems, people take mental shortcuts, which is entirely sensible, but those shortcuts lead to systematic deviations from the rational economic model, like the two we have just mentioned.
There was a guy called Herbert Simon who was a behavioral economist before the term existed. He got fed up with debating with economists and became a pioneer in artificial intelligence instead; maybe a better goal. He had the idea that people satisfice, that is, they do not try to solve a problem optimally; they figure out some solution that is good enough and quit. That is entirely sensible, but economists more or less ignored him. He was given a Nobel Prize, but that means he impressed seven Swedes. It does not mean he necessarily impressed the profession. I think the reason is that they did not know what to do with him.
Mounk: Let me see whether I understand this. The idea is that when you are satisficing, you are saying, I am just going to operate at a rough approximation of a solution. But those approximations do not necessarily go off in one direction or the other direction. It means that you do not get a very detailed answer, and you might sometimes be off five points in one direction, sometimes five points in the other direction. But since it does not systematically bias your estimates of what is going to happen, economists felt like, interesting point, well-deserved Nobel Prize, but we can ignore that. So in order to actually change your profession, you needed to find examples where those mental shortcuts or those ways in which we are not acting in a rational way systematically make us choose something that is different from what the model suggests. Is that the idea?
Thaler: That is right. Here is an example from this early Kahneman and Tversky work. This research was not about decisions but about judgments and forecasts. Suppose I say to you, in the United States, what is the ratio of deaths by homicide to deaths by suicide? Most people will say, homicides may be twice as likely as suicides, but the opposite is true. There are about twice as many suicides. Kahneman and Tversky said this is because people use what they call the availability heuristic. We estimate something’s likelihood by how easy it is to recall instances of it.
Mounk: Since murders are much more covered in the news than suicides, they are much more available to us when we are looking for these examples.
Thaler: This is in spite of the fact that, at least for people in our socioeconomic class, we almost certainly know more people who committed suicide than died by homicide. There is a different lens we could have looked through that, even with availability, would have gotten this right, but it is the news bias that creates this. I read those papers and became excited and said, wait, here we go. There is a way of doing this that cannot be ignored. That was the beginning.
Mounk: What is interesting is that you ended up collaborating with them in various ways, but they were mostly looking at social psychology more broadly. Some of the examples they were giving were economic or had applications to economics, but you then built on that to show that in a broad range of economic situations we do not act rationally, including sometimes sophisticated economic actors, not just the average employee who does not like to think about economics too much, who is choosing some random retirement plan and perhaps is not maximizing the way in which income will be smoothed over time. Some sophisticated actors in the economy end up deviating from expectations of rationality as well. Tell us about some of those examples in economics.
Thaler: Sometime in the mid-1980s, I was offered the opportunity to write a quarterly column in a brand-new economics journal called the Journal of Economic Perspectives, which still exists, is free to anybody, and is the only economics journal accessible to non-economists. The column I was asked to write was on anomalies.
These are departures from economic theory. I wrote that column for four years, and when I had enough that looked like a book, I stapled the columns together into a book called The Winner’s Curse. We will get to what the winner’s curse is in a minute. These were empirical facts that are embarrassing to economists. Recently, I collaborated with a young co-author to go back to those old columns and ask whether the findings held up. Many of the early ones were based on laboratory experiments with undergraduates playing for low stakes, motivated by the psychologists’ work. There is something called the replication crisis in some parts of psychology. Is there a replication crisis in behavioral economics? What we show in the new book is that there was not: many of the early studies based on student experiments have now been replicated with experts doing something for a living.
Let us take the example of the title chapter of that book, The Winner’s Curse. What is the winner’s curse? The idea is that in an auction where the item is worth the same to everyone, it is often the case that the high bidder, the winner of the auction, loses money. Here is a simple lab or classroom experiment. Fill a jar with coins and count them; say there is $75 worth of coins. You auction off the jar. The high bidder pays their bid and receives the cash value of the coins; they do not have to take the coins themselves. What happens? The average bid is conservative, perhaps $40, but the winning bid is almost always more than $75. The winner is cursed.
This was not discovered by psychologists or renegade economists. It was discovered by engineers at Atlantic Richfield Oil Company who were drilling for oil in what I stubbornly continue to call the Gulf of Mexico. What they noticed was that there were hundreds of plots that were divvied up. They bid for many of them. On the ones they won, there was usually less oil than their geologists had predicted. They began to wonder what was going on. Did they have lousy geologists? Were they just unlucky? Then they realized the bids they won were not a random set of bids.
Mounk: It is the bids where we were relatively optimistic compared to the guesses everyone else made. We might have bid on ninety other oil fields, and on each of those fields we were on the money or below what the true yield was. But those were the bids we did not win.
Thaler: Compare the strategy of hiring experts and trying to figure out the optimal bid for each plot with another strategy of bidding one dollar for every plot. You will win some, and you will never overpay. A friend of mine and I used to bid in wine auctions, and that was the strategy we used, making very low bids. There was one auction in Los Angeles where there were demonstrations and people could not show up at the physical site, and our bids that had been faxed—this is an old story—won. We ended up with quite a lot of very good inexpensive wine as a result. That is what a smart strategic bidder does to overcome the winner’s curse.
Mounk: Explain to me where the irrationality lies here. In itself, hiring the best experts to make bids is not irrational, and it does not sound like there were psychological biases going on. The bias of the bidders was not, as you might expect from some of the Kahneman examples, that this oil field had been in the news a lot or that it had a name that made it sound especially promising. It was not a case of someone thinking, I am confident about this; I am going to put a lot of money in, the way someone might bet on a horse because its name reminds them of a lucky word.
The structure of the auction is such that if ten really smart people make an estimate, it is natural that there will be some spread between the estimates, even if each of them is acting rationally. It is a second-order irrationality. The irrationality lies in not recognizing the structural feature of the market and adjusting what otherwise would seem like a very reasonable approach to it as a result. The bid itself was not irrational. It was the overall strategy of bidding, given that you would only win the auctions in the instances when you were higher than everybody else.
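A quick simulation shows the selection effect Mounk describes. Assume, purely for illustration, ten bidders whose estimates of a tract’s common value are unbiased but noisy, each bidding 90% of their own estimate:

```python
import random

# Winner's curse: every estimate is unbiased (correct on average), but the
# auction is won by the most optimistic bidder, so winners overpay.
# All parameters here are illustrative assumptions.

random.seed(1)
TRUE_VALUE = 100.0          # common value of every tract
N_BIDDERS, N_AUCTIONS = 10, 10_000
SHADE = 0.90                # each bidder bids 90% of their own estimate

errors, profits = [], []
for _ in range(N_AUCTIONS):
    estimates = [random.gauss(TRUE_VALUE, 20) for _ in range(N_BIDDERS)]
    errors += [e - TRUE_VALUE for e in estimates]
    profits.append(TRUE_VALUE - SHADE * max(estimates))  # winner pays their bid

print(f"mean estimate error:  {sum(errors) / len(errors):+.2f}")    # roughly 0
print(f"mean winner's profit: {sum(profits) / len(profits):+.2f}")  # well below 0
```

Even with a 10% haircut the winner loses money on average; a rational bidder has to shade more aggressively the more rivals are bidding.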
Thaler: That is right. Compare it with a famous example from Keynes. John Maynard Keynes, one of the great twentieth-century economists, in his famous book The General Theory, offered a very sexist example from the time. The newspapers used to run contests where there would be photographs of one hundred attractive women, and the contestants, presumably nearly all of whom were men on the train—I always visualize them looking at their newspaper—had the job of picking the five who would be judged the prettiest by everyone else.
Keynes said this was his model of the stock market. The goal is not to correctly say who is the prettiest; it is to say who others will think are the prettiest. Or, as Keynes said, it is actually who others will think others will think others will think.
Here is a version of this, a little game that economists now call the beauty contest game in honor of Keynes.
We say that everybody in this room—imagine we have one hundred people in a room—is going to guess a number from zero to one hundred, with the goal of guessing as close as possible to two-thirds of the average guess. We might look around the room and say these guys are mostly asleep, not paying much attention. Maybe they will guess at random. If people guess at random, their average guess will be fifty, so we should guess two-thirds of that, which is thirty-three. Then we say, wait, maybe some people are awake and have realized they should guess thirty-three, so I should guess twenty-two. But some people may think that. The question is how many steps people take.
The rational economic model of this guessing game, the Nash equilibrium, is to guess zero. Let us say everyone guesses three; you want to guess two. If everyone guesses two, you want to guess 1.3. I have played this game dozens of times, and zero has never won. I played this game in the Financial Times with two business-class tickets, London to the United States, offered to the winner, and the winning guess was thirteen.
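The step-counting logic can be written down directly. Here is a minimal sketch of what game theorists call level-k reasoning: a level-0 player guesses at random, averaging fifty, and each higher level best-responds to a room full of players one level below.

```python
# Level-k reasoning in the "guess two-thirds of the average" game.
# Level 0 guesses 50 on average; level k best-responds to level k-1.

guess = 50.0
for k in range(1, 8):
    guess *= 2 / 3
    print(f"level {k}: guess {guess:.1f}")
# level 1: 33.3, level 2: 22.2, level 3: 14.8, ... -> 0 in the limit
```

The iteration converges to the Nash equilibrium of zero, but a winning guess of thirteen, as in the Financial Times contest, suggests typical players take only about three steps.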
Mounk: You go some way toward that and realize some of the logic, but you do not go all the way because some people do not understand the logic of the game and therefore do not go there, or because you expect that some people will not fully follow the logic and therefore it is rational not to go to zero. Seemingly it is rational to go to zero, assuming that everyone else is rational, but if you correctly assume that everyone else is not rational, then guessing zero is irrational, and guessing somewhere in the middle is better.
Thaler: What you guess depends greatly on who you are playing with. If I play that with University of Chicago MBA students, then a number in the low teens is a good guess. I played this in my granddaughter’s high school economics class, and a number in the twenties won. She submitted it, and everyone thought it was rigged.
Mounk: She has just listened to you at the dinner table too often. There is a nice story that vastly oversimplifies the state of research: you can have people play a Prisoner’s Dilemma game. If you want to hear what a Prisoner’s Dilemma is, Steven Pinker and I discussed it in some detail in a recent podcast I had with him. The idea is that most people in the world do not act as the theory of the Prisoner’s Dilemma instructs. There have been many studies of people all over the world, in different cultures and different professions, and by and large people do not behave as the model predicts, except apparently, and this may be somewhat apocryphal, economics grad students, who reliably act as the Prisoner’s Dilemma instructs them to and therefore find it much harder to cooperate with each other than any other group of humans.
Thaler: That is a somewhat controversial finding, but it is approximately right. One of the things we have done in this book is ask how we can get outside the lab and raise the stakes. I have managed to write three academic papers using data from game shows. There is a delightful UK game show with the odd title Golden Balls that ends with a high-stakes Prisoner’s Dilemma. They call it split or steal.
There are two people who have won some money, and each must choose “split” or “steal.” If they both choose split, they split the money. If one steals and the other splits, the one who steals gets everything. If they both steal, they both get nothing. You can find two really interesting episodes on YouTube, one for a hundred thousand pounds, which is well beyond the research budget of most universities. I will not spoil the ending.
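For readers who want the payoffs laid out, here is the game in matrix form, with a hypothetical pot of £100,000. Steal weakly dominates split, which is why standard theory predicts mutual theft:

```python
# Split-or-steal payoffs for a pot P. Stealing never pays less than
# splitting, whatever the other player does (weak dominance).

P = 100_000  # hypothetical pot, in pounds
payoffs = {
    ("split", "split"): (P / 2, P / 2),
    ("split", "steal"): (0, P),
    ("steal", "split"): (P, 0),
    ("steal", "steal"): (0, 0),
}
for (mine, yours), (pay_me, pay_you) in payoffs.items():
    print(f"I {mine}, you {yours}: I get £{pay_me:>9,.0f}, you get £{pay_you:>9,.0f}")
```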
Mounk: I believe that one person normally reassures the other that they are going to split. Trust me, of course I am going to split. I am going to be the good upstanding citizen. Sometimes, when they are relatively sure that the other person has been induced by those assurances to be the good upstanding citizen and to split, they secretly steal and get all of the money. In one episode someone inverts the strategy—I know that is the episode you are thinking of—and says, I am going to steal. Whatever you do, I am going to steal, and it leads to a hilarious outcome.
Thaler: Yes, that episode is priceless. He broke the game, but everyone else played by the rules, and the pattern of behavior is much the same as in the lab. There is a little more cooperation in that game than in a laboratory experiment for ten dollars.
There is another old economics game called the ultimatum game, where you and I are asked to share one hundred dollars. The rules are that I make you an offer you can accept or reject. If you accept, you get what I offered. If you reject, we both get nothing. Economic theory says you will take anything, so I should offer you a dollar or maybe a penny. Real people find low offers insulting. The results of that game are that offers of less than twenty percent are usually rejected. Profit-maximizing offers are about forty percent. The results do not change if you multiply the stakes by ten or one hundred or go to a country where you can afford to pay a month’s wages.
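A back-of-the-envelope calculation shows why roughly forty percent maximizes profit. Suppose, as a stylized assumption rather than Thaler’s actual data, that an offer of fraction f is accepted with probability f/0.4, capped at one, so that stingy offers risk rejection:

```python
# Ultimatum game: the proposer's expected payoff under a stylized
# acceptance curve. ASSUMPTION (illustrative, not empirical): an offer
# of fraction f is accepted with probability min(1, f / 0.4).

STAKES = 100

def expected_payoff(f):
    p_accept = min(1.0, f / 0.4)
    return p_accept * (1 - f) * STAKES

best = max((f / 100 for f in range(1, 100)), key=expected_payoff)
print(f"best offer: {best:.0%}, expected payoff: {expected_payoff(best):.0f}")
# best offer: 40%, expected payoff: 60 -- far above the "offer a penny" prediction
```

Under any acceptance curve with this shape, fear of rejection pushes the profit-maximizing offer well above the theoretical penny.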
Mounk: We have done a tour d’horizon of the different ways in which economists once assumed that people behave rationally. Some economists still assume that people behave rationally, but we see in lab experiments, in behavior on game shows, and in the way people bid on oil fields that they do not behave as rationally as was assumed. What are the implications of that for economics, and what are the implications for public policy?
Presumably one thing that might change—and this is where your work with another recent podcast guest, Cass Sunstein, comes in—is that if we assume people make rational choices, then all we need to do is offer them a set of fair choices, such as where to invest their retirement savings or whether to save for retirement at all, and then it is up to them. If we recognize that people are likely to make choices that are irrational in ways detrimental to their well-being, we might start to think about how to manipulate the choice architecture. We might ask whether we can nudge them toward the right course of action so that they still have freedom of choice and can decline the nudge, but we end up with many more people saving adequately for retirement.
Talk us through a variety of implications of this research, including your influential book on nudging with Cass Sunstein.
Thaler: Sure. Let us take the following example. Listeners who are not from America have read about the horrors of the U.S. healthcare system, and it is pretty horrible. We spend more than anyone else and get average care. The system is also very complicated.
There was one company that decided, probably at the urging of some economist who worked for them, to offer health insurance to their workers with four variables to choose from: the size of the deductible and similar cost-sharing details. Think of it as a restaurant menu: how many courses, fish or meat, how many desserts. They gave employees every possible combination and said, pick one. There were forty-eight combinations.
It turned out that many of these options were dominated by one of the alternatives. Dominated meaning there was an option that was guaranteed to cost less no matter how much healthcare you consumed. A rational agent would start by eliminating all the dominated options. In game theory that is an explicit assumption: agents cross out the choices they absolutely should not take and then pick from the remaining ones.
Remember, this was not an experiment. This was something a company did with twenty-five thousand employees, and more than half chose dominated options.
Mounk: They were leaving money on the table. They could have chosen combinations that meant less money paid up front and less money even if they got sick.
Thaler: The average household spent at least $400 too much, and possibly more depending on how much healthcare they consumed. That shows in a blunt way that if you give people a complicated problem, they do not solve it. Giving people all the options is not a good way to do it. Go back to the restaurant example: you would not go to a restaurant that hands you a list of all its ingredients. That would be horrible. The best dinner is, here are five things you would never have dreamed of ordering; watch what we do and taste what we do.
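The dominance check Thaler describes is mechanical enough to automate. Here is a minimal sketch with three invented plans, each summarized by an annual premium and a deductible, with full coverage above the deductible:

```python
# Eliminating dominated health plans (all numbers invented).
# Total annual cost = premium + min(spending, deductible). Plan X is
# dominated if some other plan costs no more at every spending level
# and strictly less at some level.

plans = {
    "A": {"premium": 1_200, "deductible": 3_000},
    "B": {"premium": 2_400, "deductible": 2_500},
    "C": {"premium": 2_000, "deductible": 1_000},
}
spend_grid = range(0, 20_001, 100)  # possible annual healthcare spending

def cost(name, s):
    p = plans[name]
    return p["premium"] + min(s, p["deductible"])

def is_dominated(name):
    return any(
        other != name
        and all(cost(other, s) <= cost(name, s) for s in spend_grid)
        and any(cost(other, s) < cost(name, s) for s in spend_grid)
        for other in plans
    )

for name in plans:
    print(name, "dominated" if is_dominated(name) else "worth considering")
# B is dominated: plan A costs less at every possible spending level.
```

A rational agent, or a helpful benefits website, would strike plan B from the menu before anyone had to think.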
The book Cass Sunstein and I wrote called Nudge came out in 2008, and then we did an update that we called The Final Edition, a title I insisted on to prevent Cass from rewriting it. We suggested that for hard problems, you should give people some help. I would have eliminated all the dominated options in that plan if they had hired me as the consultant.
The irony is that they used one of our favorite tricks: they offered a default plan. If you do not make a choice, here is a plan. People did not take it. I think the reason is that it is like going to a salad bar and composing a Caesar salad. You are being nudged to do it yourself.
Anyway, our book argued that there are many places where we can help people make better choices. The example we like to talk about is GPS. When we wrote the book, we had each just bought our first iPhones, and we did not have GPS in our pockets. I am very geographically challenged, so having GPS in my pocket is very good for me.
What we would like is for life to be more like wandering around in a strange city with GPS in your pocket. If I had worked at that company, I would have wanted an app that could help. Maybe now, with the right prompts, you could feed the choices into ChatGPT and give it a little nudge, and the AI might help you choose. A good one would start by eliminating the dominated options.
Our book is full of ways in which we can make the world more like wandering around the streets of a strange city with GPS in your pocket, rather than using one of those old maps I could not even figure out how to refold, much less navigate.
Mounk: I just went through a great American annual tradition of choosing my benefits for the upcoming year. There were many windows asking whether I wanted to spend $4.74 per pay period to get some kind of accident insurance and all kinds of other questions, all of which I outsourced to ChatGPT. I have no idea whether ChatGPT gave me good answers, but it was certainly rational to spend more of my time thinking about the world and less of my time making those low-stakes decisions.
Thaler: Here’s some free advice to your listeners: decline all extended warranties. There you go. Free.
Mounk: The other point I want to make is that I have a longstanding theory that the only invention of the last thirty years that is not just net good for humanity, but virtually a costless gain for humanity, is Google Maps. I remember so many stressful car drives as a child where you were looking at a map and forgot where exactly you were on the map and had to stop and ask seven people, with no idea when you would arrive.
Later, when there was MapQuest, you had what I think was an even worse equilibrium: these step-by-step instructions. If you managed on a long drive to follow all of the instructions exactly, then you were golden. But inevitably you would miss one of those turns, and at that point you might not even have a map in your hands, and you would be completely lost. I think that led to a lot of fights between couples, for example.
It is not just that Google Maps means people now on average arrive faster because they are less likely to take a wrong turn. I would not be surprised if it leads to fewer accidents because you are not, at the last second, saying, no, this is the exit, and trying to get over to the exit at the last moment. The domestic tranquility of being able to follow these instructions is substantial. Of all the different inventions over the last thirty or forty years, if there were a Nobel Prize for inventions, this one should get it.
Thaler: I absolutely agree, and that is why I always use it as the metaphor for what we would like to do to the rest of life.
Mounk: One of the other examples you give is free money left on the table: when an employer will contribute significant money if you put a minimum amount into retirement savings, many employees do not participate unless there is a default setting. That is another area in which a nudge helps. Employees who have not thought about it and have not gotten around to making a selection are automatically opted in, and if for some reason, financial or otherwise, you do not want to put in that three or four percent of pre-tax salary, you remain free to opt out.
There are many other examples. With health insurance, for instance, an employer that offers coverage should opt you in by default, so that if you forget to sign up you do not suddenly find yourself uninsured. If you have another form of coverage, you can still opt out proactively.
One way you describe this framework is as “libertarian paternalism.” There is a paternalistic element, which is the idea that under most circumstances policymakers know what is beneficial: most people should save for retirement, and most employees want health insurance. The nudge pushes you in that direction. There is also a libertarian element because it is a defeasible pre-selection—you can opt out. This appears to solve a longstanding problem in ethics. On the one hand, people often make choices that undermine their own well-being, and we want to intervene to improve outcomes. On the other hand, we ask whether the state should be telling people how to live. This framework appears to offer a middle ground.
It also relies on a background assumption about a pretty competent state and a benevolent government, about experts who know what is good for you. We are in a moment where there is, sometimes wrongly and sometimes perhaps rightly, a lot of skepticism about all of those things. How do you think about this framework for improving the world in the year 2025, with the particular government we have in the United States and the governments waiting in the wings in other countries? Setting aside the specific empirical examples, which I am less interested in, does the ethical appeal of this idea of libertarian paternalism remain as strong? Or, in an age where people have these fundamental concerns about expertise, about the government, and about the reliability of those who are in government, should we grow a little bit more skeptical of that paradigm?
Thaler: One of the things that makes us comfortable about that agenda is that we have critics on both sides. There is the argument you just sketched: if we have an authoritarian and, in the United States, obviously incompetent government, do we want to give it more power? On the other hand, we have critics who say that we have been too wimpy, that we should not stop at nudging but should mandate. My response to those critics is to say, really? During the first Trump term, I would ask them whether they were aware of who was president and whether they actually wanted to give him more power.
Even competent governments make mistakes, so we view the opt-out feature as an insurance policy. The choice architect—and the choice architect need not be the government; it could be the employer—decides how to devise the best route. The engineers at Google Maps decide the trade-off between saving time and making turns every block. My impression is that they have improved that app over the years toward fewer unnecessary turns. I do not need to make twelve turns to save one minute; I will stop at a few lights.
There are other critics who view all nudging as evil and say that we should boost instead of nudge. I do not know exactly what the difference is, but part of it is meant to be education. If we go back to that health insurance example, what are we going to do—give people a PhD in economics to enable them to solve that problem? I do not think so. I am always in favor of making it easier. That is my goal for all of these policies: to make it easier for people to choose the option that is best for them. Sometimes the solution will be to create a default. Here is a counterexample, and one where people think they know what policy Cass and I prefer and are wrong. That is organ donations.
Mounk: Well, I have read enough of your work that I know the answer, unfortunately. A naive reader of Nudge might think, obviously they are going to want people to be opted in to organ donation by default, because it is important that we have enough organs to save the people whose lives could be saved by them. But relatives still get a say at the point of decision. If they do not feel that you, their relative who has had this horrible car accident, really thought it through, they can override that choice in the end. That is why the simple default architecture does not work. Therefore we should do what instead, Richard?
Thaler: Right, excellent, A plus answer. In fact, the default does work in the sense that if the default is to become an organ donor, almost no one opts out, but that does not save any lives because the family members are told, well, your loved one did not opt out of being an organ donor. So do you not think we should go with his or her wishes? What wishes? We prefer a prompted choice. Ask people when they renew their driver’s license, would you like to be an organ donor? Keep asking until they say yes, then stop asking.
In the United States, there is a rule called first-person consent, which means that the wishes of the person who made the choice count. That is the choice architecture we prefer. It is also the one that saves the most lives. In the UK, at one of the first meetings I had after the Cameron coalition government was elected, we discussed this topic in Number 10 and decided not to adopt a presumed consent rule. Sometime later, I think during the Boris Johnson era, they switched to what we consider a worse policy, and people blame me, and I say, do not blame me.
Mounk: We were joking earlier about ChatGPT, but it occurs to me that I was just going through twenty-three or so screens to make my choices among various benefit options and additional things I can opt into, which range from key questions like health insurance to some kind of special commuter or flexible spending account, where I have to figure out whether I am actually going to use the money and actually submit the claims. If the account is not used up, you lose it. On the one hand, it is rational to put a bunch of money in, because contributions are pre-tax, so the money that goes into it is not taxed. On the other hand, there is a chance you will not use it up, in which case you lose whatever is left. It is not high stakes, but it is pretty complicated.
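That use-it-or-lose-it account is, in disguise, a classic optimization problem sometimes called the newsvendor problem. Here is a sketch under invented assumptions, a 30% marginal tax rate and uncertain medical spending: each pre-tax dollar you contribute and use saves you the tax, while each unused dollar forfeits the take-home pay you gave up.

```python
import random

# Flexible spending account as a newsvendor problem (illustrative numbers).
# Contributing a dollar costs (1 - TAX) in take-home pay. If you spend it
# on care, you gain TAX; if it goes unused, you lose the (1 - TAX).

random.seed(0)
TAX = 0.30                 # assumed marginal tax rate
draws = [max(0.0, random.gauss(1_500, 800)) for _ in range(20_000)]  # spending

def expected_gain(c):
    return sum(TAX * min(s, c) - (1 - TAX) * max(c - s, 0)
               for s in draws) / len(draws)

best_c = max(range(0, 4_001, 100), key=expected_gain)
print(f"best contribution under these assumptions: ${best_c:,}")
# Lands well below expected spending of ~$1,500, because a forfeited
# dollar costs more (70 cents) than a used dollar saves (30 cents).
```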
There is a preset option that the interface gives me. I do not know if that was always the case or if it is a result of Nudge, but they do nudge you toward some option or another. It is not that there are three tick boxes with nothing preselected; if you just click next, one is already chosen. On a lot of these screens, what I did was copy and paste the text into ChatGPT and ask, what shall I do? For a lot of these things, ChatGPT has become the nudger-in-chief.
Is that a good thing or a bad thing? Do you think that ChatGPT on the whole is going to be better at prompting us in the right decisions than the preset nudges? Is this the thing that OpenAI and other AI companies should do to make sure that those AI systems actually nudge us in a healthy rather than unhealthy direction? How is nudging transformed in the age of AI?
Thaler: Yeah, well, you really should invite my young co-author Alex Imas, who knows a thousand times more about AI than I do, to do an interview with you. I think AI certainly has the potential to help a lot, but you would want to tell it a lot about your preferences. I have a friend, an active AI user, who has had ChatGPT read all of his emails and learn as much about him as possible in order to help him be a better version of himself.
We are so early into this. Yes, it certainly has the potential to help, but it can also hallucinate. I would want it asking me the right questions. There is a lot of talk about regulation of AI, which is not a topic I am qualified to speak on, but I do think that what we absolutely need is for AI to be auditable, meaning it should be possible for us to give it tests and see what it does.
Certainly, AI could be very helpful. Choosing the right health insurance options is not harder than navigating a route from New York to Chicago.
In the rest of this conversation, Yascha and Richard discuss why we need behavioral economics. This part of the conversation is reserved for paying subscribers…