The Good Fight

Audrey Tang on “Misinformation”

Yascha Mounk and Audrey Tang on how social media can boost civic engagement.

Thanks for reading! The best way to make sure that you don’t miss any of these conversations is to subscribe to The Good Fight on your favorite podcast app.

If you are already a paying subscriber to Persuasion or Yascha Mounk’s Substack, this will give you ad-free access to the full conversation with Audrey, plus all full episodes and bonus episodes we have in the works! If you aren’t, you can set up the free, limited version of the feed—or, better still, support the podcast by becoming a subscriber today!


And if you are having a problem setting up the full podcast feed on a third-party app, please email our podcast team at leonora.barclay@persuasion.community


Audrey Tang is Taiwan’s Cyber Ambassador and served as Taiwan's first digital minister and the world’s first nonbinary cabinet minister.

In this week’s conversation, Yascha Mounk and Audrey Tang discuss what makes social media so divisive, how to tackle misinformation without undermining free speech, and how online tools can deepen participation in democracy.

This transcript has been condensed and lightly edited for clarity.


Yascha Mounk: We've had a lot of debates about misinformation over the last few years. I feel really torn on the subject because, on the one hand, I recognize that misinformation is a real problem. If you go on social media, there are false statements, doctored videos, conspiracy theories, just crazy stuff that gets a lot of attention, and that really informs how people think about the world and about politics. Clearly, that's a problem.

At the same time, I have this concern that a lot of the time when we talk about misinformation, first, we might get wrong what is true and what is false. During the pandemic, for example, some ideas were labeled as misinformation that later turned out to be plausible or perhaps true. Second, this whole discourse about misinformation can really be an excuse for censorship. It can be an excuse to say: we in power are going to tell you what's right and what's wrong, and we're just going to censor anybody who disagrees with us. What is your approach to this field? Because you take the problem very seriously, but I think you share my suspicion that censorship is not the way to respond.

Audrey Tang: Definitely. I served as the Digital Minister of Taiwan from 2016 to last year. Now I'm the Cyber Ambassador, and Taiwan is ranked top in Asia when it comes to internet freedom. We're also top in Asia in terms of civic space, and so on. We've never believed in censorship, because we had martial law for almost four decades and people don't want to go back. I do feel that the term “misinformation” is a little bit misinforming, if you will. Since 2016, when I went into the cabinet and we tackled this issue, we have always called it “contested information.” That is to say, it’s not about whether something is absolutely true or absolutely false, but rather about its potential to drive engagement through enragement, so to speak. Instead of the usual metrics, like fact-checks, we measure how polarized people become when they receive such information, and how much they retweet or post because of the enragement it produces.

The reason why is that since 2015, many of the online platforms have switched their algorithms from a shared feed or a follower feed to what's called a “for you” feed. This feed maximizes only one metric: addictiveness, which is essentially how much time you spend on the touchscreen. Along with that came a lot of autoplay and recommendation algorithms that strip the social fabric, so that people no longer have shared experiences. That drives this individualized kind of rage machine, so that people can waste a lot of time shadow-boxing on the extremes. If you look at the content in itself, it is not necessarily true or false. Sometimes it has nothing to do with factual information at all; all it has to do with is polarization. This matters because Taiwan is one of the most internet-connected countries. In 2014, the trust level between the citizens and the government was 9%. In a country of 24 million people, anything President Ma said at the time had 20 million people against him. We want to fight that polarization instead of specific bits of misinformation.

Mounk: One of the interesting things about Taiwan, for those who don't know the island well, is that even though it is a relatively small place with some significant ethnic minorities, a clear majority of the population is Han. So, compared to the United States, it is a reasonably ethnically homogeneous country. But polarization is very extreme on questions about Taiwan's future status, and that polarization often runs right through families and institutions. When I was in Taiwan for a month to try to learn Mandarin, I had some teachers who were very vocal about supporting greater independence for Taiwan, and some teachers who were very vocal about wanting a much closer relationship with the mainland.

I'm also really struck by the fact that these algorithms drive how we behave online. What was true of the old Twitter, and what is true, I think, of X, is that the first thing it shows you in your feed is the post that's going to be most divisive: one that some people are going to laugh at and other people are going to hate. It's going to stir up controversy, people will shout at each other, and it wants to get you involved, certainly as a spectator, perhaps as a participant in this kind of mob rule. When you look at something like Reddit, it has some problems of its own, but I think it tends to be much better, in part because the first thing it displays is the post with the biggest net positive engagement. So if 100 people like a post and 100 people dislike it, it's not going to be very high up. If 50 people like it and one person dislikes it, it's actually going to be displayed further up. So is part of the solution just to incentivize these social media companies to adopt algorithms that don't prioritize divisive content in the same kind of way? Can that work? It feels like Twitter plays a bigger part in our conversation than Reddit, perhaps in part because ultimately users are drawn to the thing that makes them full of rage, and sometimes full of joy. Even if current social media companies changed their algorithms, would that just allow a new set of social media companies that offer this more adversarial model to take over the market, because that's actually what we're drawn to?
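To make the contrast concrete, here is a minimal sketch of the two ranking rules being compared: a net score that rewards broadly liked posts, and an engagement score that rewards sheer volume. The field names and numbers are illustrative, not any platform's actual API.

```python
# Two toy ranking rules; the numbers match the example above.
posts = [
    {"id": "evenly-split",    "likes": 100, "dislikes": 100},
    {"id": "quiet-consensus", "likes": 50,  "dislikes": 1},
]

def net_score(post):
    """Reddit-style: reward net positive reception."""
    return post["likes"] - post["dislikes"]

def engagement_score(post):
    """Divisiveness-friendly: reward total volume, regardless of valence."""
    return post["likes"] + post["dislikes"]

print(max(posts, key=net_score)["id"])         # quiet-consensus
print(max(posts, key=engagement_score)["id"])  # evenly-split
```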

Tang: Well, I would note that it's not just Reddit, but also LinkedIn, which uses a much more bridge-making algorithm. There are people using LinkedIn who feel it has better quality feeds. So it's not like the anti-social feeds are the only ones on the market. There's also pro-social feeds.

Mounk: I'm not on LinkedIn, but whenever I vicariously see a little bit of LinkedIn, it seems to breed a culture that's awful in its own way: this fake-kumbaya, “here are the 17 things I learned by walking my dog this morning” form of corporate humblebrag self-promotion. That may just be my taste, and perhaps I'm being unfair to a lot of what's on LinkedIn.

Tang: No, I totally see that. But the point I was making was just about whether a feed is divisive in promoting the content that's most likely to result in extreme views of one group toward the other, like a caricature, as the X algorithm does. I'm just saying that LinkedIn doesn't do that.




Within X, there's also an algorithm that is more bridge-making, and that is the community notes algorithm. Basically, for each trending post, people can volunteer to add context to clarify it. So they’re not necessarily fact-checking, but rather providing useful context. It's not the most upvoted note that gets displayed. Instead, the algorithm first separates raters into different clusters: one cluster of people who consistently upvote certain kinds of notes, and another cluster of people who consistently upvote other kinds of notes, with little overlap between them. If votes from both sides propel a note to the top, that note is more likely to stick and will be displayed next to the post. The original poster cannot take it down. That's the model X uses instead of third-party independent fact-checkers. This model has been adopted by YouTube and very soon by Meta—on Facebook—as well, as a different, more horizontal approach compared to the vertical institutions of specialized journalists and fact-checkers. Within X, two algorithms coexist: for the main posts, there is the divisive one, but for the clarifications, the community notes, there's the bridge-making algorithm.
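As a rough illustration of that bridging rule, here is a simplified sketch: it assumes raters have already been split into two clusters and keeps only notes that a clear majority of both clusters rated helpful. The production Community Notes algorithm actually uses matrix factorization over the full rating history, so treat this as a conceptual toy with made-up thresholds and counts.

```python
# Toy version of the bridging rule: a note survives only if BOTH rater
# clusters found it helpful at or above the threshold.
def bridging_notes(notes, threshold=0.8):
    return [
        n["text"] for n in notes
        if all(n["helpful"][c] / n["total"][c] >= threshold
               for c in ("cluster_a", "cluster_b"))
    ]

notes = [
    {"text": "adds missing context",
     "helpful": {"cluster_a": 9, "cluster_b": 8},
     "total":   {"cluster_a": 10, "cluster_b": 10}},
    {"text": "one-sided dunk",
     "helpful": {"cluster_a": 10, "cluster_b": 1},
     "total":   {"cluster_a": 10, "cluster_b": 10}},
]
print(bridging_notes(notes))  # ['adds missing context']
```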

Mounk: I think quite highly of the community notes system, and it sounds like you do as well. I was really struck, and perhaps this is itself part of the polarized reactions we now have to everything in our politics, that when Meta announced it was no longer going to fact-check but rather rely on the community notes model, the reaction from my kind of circles and from people who care about democratic institutions was nearly uniformly negative. I understand that it came in a context where Mark Zuckerberg was making these overtures to Donald Trump. But I thought—and I know less about this topic than you do—that this was a mistake, a knee-jerk reaction to say that we must prioritize fact-checkers over any other kind of system. Even though the community model seems to me like the best feature of the new Twitter, people who may be skeptical might be coming to this conversation thinking: no, Zuckerberg is giving up on democracy by getting rid of the fact-checkers. What case would you make for the benefits of a community notes model?

Tang: Full disclosure: I'm in touch with the people who implement Meta's community notes, and also with another group at Meta that implements so-called community forums, a deliberative platform, like a jury system, where people can go online and steer the Meta system. I would note that neither of these two systems is completely sabotage-proof; if you really want to take one over, mount an attack, and pollute it with a lot of resources, there's a chance you will succeed. So I'm not saying this is a foolproof system. With that said, our experience in Taiwan with both the bridge-making algorithm and online community forums did show that it is actually possible to strengthen civic muscles and people's solidarity across differences if you implement these kinds of mechanisms.

We first tried out an in-person way of facilitating conversations to uncover common ground between people who were drastically polarized in March 2014, when we occupied our parliament peacefully to show that this kind of method works. Since 2015, we’ve been working with the cabinet on an online version of this process. In 2015, when Uber first came to Taiwan, we asked people: how do you feel about a driver with no professional driver’s license picking up strangers they met on an app and charging them for it? People just shared their own feelings. And just like community notes, the poll system that we used has upvotes and downvotes, but no room for trolls to grow. There's a visualization that shows whether you're in the camp of the pro-Uber drivers or the pro-taxi unions, and so on. Different clusters are grouped automatically. Every day, people see more and more of these bridging statements. For example: undercutting existing meters is very bad, but surge pricing is fine. That is one sentiment that all sides can actually agree on. After a while, people start competing on the widely acceptable bridging items. At the end of the process, the nine or so statements that got more than 85% approval from all the different groups became the agenda. Then we made a law based on those consensus ideas.
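The pipeline Tang describes can be sketched in a few lines: cluster participants by their vote patterns, then keep the statements that clear roughly 85% approval in every cluster. The vote matrix, the choice of k-means, and k=2 are illustrative assumptions; Polis's real pipeline uses dimensionality reduction plus clustering on much larger matrices.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; 1 = agree, 0 = disagree.
votes = np.array([
    [1, 1, 0], [1, 1, 0], [1, 1, 0],   # one opinion group
    [0, 1, 1], [0, 1, 1], [0, 1, 0],   # another opinion group
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is "bridging" if every cluster approves it at >= 85%.
bridging = [
    s for s in range(votes.shape[1])
    if all(votes[labels == c, s].mean() >= 0.85 for c in set(labels))
]
print(bridging)  # [1] -- the one statement both camps agree on
```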

So it's a friendly competition about how much distance you can cross to talk to people on the other side. Another story is from last year. We used the same kind of method, but in a synchronous way. We asked people: how do you feel about fake online advertisements, like the ones featuring Jensen Huang of NVIDIA? If you scrolled through Taiwanese Facebook or YouTube last year, you would constantly see fraudulent advertisements featuring Jensen wanting to sell you some stock or crypto, and if you clicked, Jensen actually talked to you. Of course, it's a deepfake. So we asked our population what to do, and we sent SMS text messages to 200,000 random numbers in Taiwan. Then we asked people to volunteer, and thousands volunteered to join each other in an online conversation. We chose 450 people statistically representative of the Taiwanese population.

In 45 rooms of 10 people, each facilitated by an AI system, people came up with really good ideas that do not extend the state's censorship power but successfully limit this fraudulent advertising. For example, one room would say: Jensen Huang's advertisements need to be digitally signed by Jensen. If one is not, you should assume it's a scam, display it as such, and probably ban it. Another room would say: if Facebook shows these scam ads and somebody gets scammed out of $5 million, Facebook needs to be liable for that $5 million. We used AI models to weave those ideas together and passed a law just a couple of months after the consultation. Today, if you open Facebook or YouTube in Taiwan, you don't see those fraudulent ads anymore. But we're still the most free in Asia in terms of online freedom. So these examples show that, when designed with care and with good sense-making capabilities, these processes can strengthen the civic muscle, such that people join together with others they would not have thought of as allies and propose something that is very acceptable to both sides.

Mounk: That's very interesting. I was thinking of this space in terms of three different buckets and I'm starting to realize that you're mixing some of them up. But perhaps let me lay those out and then we can think through what the solution to each of these buckets is and to what extent they're the same solutions.

One is you have a problem with misinformation. This is a great example; fraudulent advertising pretending that this influential figure you would respect is asking you to buy something. In fact, it's a deep fake. There's an obvious public interest in trying to rein that in. But of course, if you give the government power to declare anything misinformation or a deep fake, it's quite easy to see how that could be abused. How do you deal with that kind of misinformation?

The second is: how can you inform decision making? You have legislators in a chamber in Taipei or in Washington, DC or in Rome, and they're insulated from ordinary people in the way that elites and politicians often are. How can we use digital technology to facilitate more public input, both in terms of great ideas that people in the capital may not happen to come up with, and in terms of getting a sense of where public opinion really lies, giving people a sense of ownership over political decisions?

A third kind of question is this: we have democracies—like the United States, and many other countries as well—that were designed on the basis of institutions founded in the 18th and 19th centuries. But now we live in a very different place, with very different kinds of technologies. It's clear that if we invented democracy from scratch today, it would probably look different in some ways. How should we rethink the democratic space? Is there a need to renovate, or to rebuild from scratch? It feels to me like some of the examples you were just giving straddle at least the first and the second bucket, though perhaps they don't quite reach the third. Let’s round out our conversation about the first of these questions. What's the paradigm for how we should think about misinformation? How do we combat misinformation without censorship?

Tang: What I believe is that we shouldn’t lump the actor, the behavior, and the content layers together. The government could easily say that any content triggering a keyword, such as a minister's name, must be clarified by the minister, who could then force the journalistic piece or any other online content to be taken down, and so on. That is very easy to abuse, which is why the Taiwanese population doesn't want the government to take any content-level countermeasures. But if you don't have those, you need to focus on the actor level, which is why we rolled out this KYC (know-your-customer) requirement for all advertisements, so that when people claim they're Jensen Huang, they’d better sign as Jensen Huang. This is not moderation; it is basically saying that our constitution does not give fake robots freedom of expression. I think that's a generally understandable position.

Of course, people worry about non-advertisement communications, where you could be a whistleblower, or in an information-asymmetry situation where you want to reveal something but don't want to reveal your actual identity. This is why, by the end of this year, we'll roll out in Taiwan the infrastructure for what's called selective disclosure. You can sign with just a narrow claim, a part of your identity: saying, for example, that you're 16 years old or older without revealing your birthday, or that you're a resident of Taipei, or a citizen, without revealing your address, so that you can show that you're not a robot. You have a personal credential, or can attach a verifiable credential, so that people know roughly who you claim to be without doxing yourself or revealing too much of your identity. That's on the actor level.

On the behavior level, we believe in what's called pre-bunking over debunking. Pre-bunking works by making sure that most of society receives a depolarizing message in advance. It's like inoculation. A couple of years ago, I deepfaked myself and showed everyone how it's done. By the time last year's general election happened and deepfakes started to appear, people were already inoculated, because they had been exposed to two years of pre-bunking material. We also invite civics teachers, people in middle schools, high schools, and so on, to participate in collaborative fact-finding. It's not just a single checked fact that inoculates those young minds, but rather the process of going through the fact-checking, thinking like a journalist, in a group and in conversation networks. Taken together, this kind of pro-social fact-finding behavior inoculates minds against sensational outrage quite predictably.
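Returning to the actor level for a moment: here is a minimal sketch of how selective disclosure can work, loosely in the spirit of schemes like SD-JWT. The issuer signs salted hashes of each claim, so the holder can later reveal a single claim (say, "over 16") without exposing the rest. The key handling, claim names, and HMAC "signature" are stand-ins; a real deployment would use public-key signatures and a vetted credential format.

```python
import hashlib, hmac, json, os

ISSUER_KEY = os.urandom(32)  # stand-in for a real issuer signing key

def commit(claim, value, salt):
    """Salted hash commitment to one claim."""
    return hashlib.sha256(f"{salt}:{claim}:{value}".encode()).hexdigest()

# Issuance: one salted commitment per claim, signed together as a bundle.
claims = {"over_16": True, "birthday": "2008-05-01", "city": "Taipei"}
salts = {c: os.urandom(8).hex() for c in claims}
digests = {c: commit(c, v, salts[c]) for c, v in claims.items()}
bundle = json.dumps(digests, sort_keys=True).encode()
signature = hmac.new(ISSUER_KEY, bundle, hashlib.sha256).hexdigest()

# Presentation: the holder reveals ONLY the "over_16" claim and its salt.
disclosed = {"claim": "over_16", "value": True, "salt": salts["over_16"]}

# Verification: recompute that one digest, then check the signed bundle.
assert commit(disclosed["claim"], disclosed["value"], disclosed["salt"]) == digests["over_16"]
expected = hmac.new(ISSUER_KEY, bundle, hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, expected)
print("verified: over 16, without revealing birthday or address")
```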

Mounk: Yeah, that's really interesting. It speaks to me both in an American context—obviously, the legal constraint of the First Amendment—and in a more philosophical one: my commitment to the idea of free speech, and my deep awareness of how easy it is for governments to abuse restrictions on it. What you’ve said shows that there are ways to tackle the infrastructure of the internet more effectively without giving government bureaucrats or powerful tech executives the power to decide what is in and what is out, what is true and what is false, which is something we've gotten wrong in a lot of important contexts. It’s a power that, in my mind, the government and these tech executives shouldn't have.

Before we move on to the next topic: to what extent do you feel that people have actually taken that on board? You are very influential in this field, and you're a little bit of a celebrity when it comes to misinformation. But when I look at the way European politicians talk about this topic, like politicians in a lot of places, they revert very quickly to: we have to put in place a set of laws around hate speech and falsehoods, those laws will allow the authorities to decide when something is dangerous, and that content will then get shut down. And if social media companies don't comply, we can fine them so much that they become incapable of operating here. Even though you're a big participant in that conversation, it sometimes feels to me as though you're invoked and then ignored. Is that fair, or are you more optimistic about what, for example, countries in Europe are doing to tackle this problem?

Tang: A lot of things like digital signatures, selective disclosure, and forced interoperability are being taken up by the EU. The same decentralized wallet is now also being rolled out, and I think by next year the European Digital Identity Wallet (EUDIW) will go online—maybe slightly later than Taiwan's, but roughly at the same time. I think that's a positive sign. What the Digital Markets Act says is that once you are large enough to be a gatekeeper in instant messaging, you cannot trap your customers in the same messaging system; you have to provide interoperability. So if they want to switch to a different system that offers a better experience, they do not lose their contacts. They do not lose their existing conversations. They can actually send the same messages. We have the same kind of portability when it comes to ATMs: if you have a bank card, you can withdraw cash from other participating banks, not just your own, even internationally. All these interoperability and portability measures are now shaping up to be part of the EU toolkit for digital governance. You can easily imagine the next step, which is that if you post on TikTok, you should be able to view the same piece of content on Bluesky or on Truth Social or on the Fediverse, on any participating interoperable network. Then it becomes almost impossible for big tech operators to dictate censorship rules. If people do not feel safe in a certain place, they can just take their connections and their content to Mastodon or Bluesky and enjoy a different regime, where “lawful but awful” content is kept instead of moderated, or the other way around. I'm cautiously optimistic about the EU seeing interoperability as one possible lever.

Mounk: Tell us a little bit more about interoperability. Obviously, it sounds very appealing: if I build up a big profile and following on one network, that network can shut me down for entirely arbitrary reasons. That is a very powerful form of censorship. It means that if my livelihood comes from advertising on that platform, it can be wiped out in very arbitrary ways for which I normally have no legal recourse. Now, if I can take my followers to another platform at the drop of a hat, then I suddenly have the power to evade that kind of censorship, and that's a very positive thing. On the other hand, I'm not a very avid user of social media in general, but I certainly wouldn't want some of the people I follow on Twitter to suddenly show up in my Instagram feed, where I'm now getting photos of their families or whatever they may be doing. It's a very different platform. So how do you, on a technical level, combine my ability to port followers from one platform to another with the ability of my followers to say: hang on a second, I'm here for your political content, but not for your holiday pictures?

Tang: Yeah, definitely. Bluesky is a good example. It is a reimagination of the old Twitter on a different substrate, by much of the same team that built the old Twitter. If you go to Bluesky, you can see that there is a Twitter-like timeline. You can follow people in a Twitter-like way. There's also the discovery feed, which is something you can curate and share with your friends. But where it differs from X is that, at any given time, you can choose a different way to view Bluesky. There are different experiences, like Blacksky, that have been built on the same substrate. So even if Bluesky one day decides to censor you for some reason, you can go to one of the alternatives and keep your existing connections.

On top of the same protocol there are other applications. For example, there's something called Flash, which is like Instagram and has a different social graph, so that if you follow me on Bluesky, it doesn't necessarily mean that you follow my photo posts, my flashes. Just because the system is interoperable does not automatically mean that one follow translates into a follow across all the different modalities of content. It just means that when you post something or establish a connection in one application, that application cannot forbid other applications or products from using the fact that you have posted something, or followed someone, or pressed like. It's all broadcast to the entire ecosphere. The AT Protocol network and other applications can make use of it, but they don't have to. That's the technical explanation.
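In code, that consumption model looks roughly like this: every record is broadcast with a collection name, and each client simply chooses which collections to render. app.bsky.feed.post and app.bsky.graph.follow are real AT Protocol collection names; the photo collection and the sample records are invented for illustration.

```python
# A made-up slice of a firehose: records are public, tagged by collection.
firehose = [
    {"repo": "did:plc:alice", "collection": "app.bsky.feed.post",
     "text": "thoughts on interoperability"},
    {"repo": "did:plc:alice", "collection": "photo.example.flash.post",  # invented name
     "image": "holiday.jpg"},
    {"repo": "did:plc:alice", "collection": "app.bsky.graph.follow",
     "subject": "did:plc:audrey"},
]

def feed_for(wanted_collections, events):
    """Each app renders only the record types it cares about."""
    return [e for e in events if e["collection"] in wanted_collections]

microblog = feed_for({"app.bsky.feed.post"}, firehose)        # no holiday photos
photo_app = feed_for({"photo.example.flash.post"}, firehose)  # no hot takes
print(len(microblog), len(photo_app))  # 1 1
```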

Now, as part of the Project Liberty Institute, we're also advising something called Free Our Feeds. What we're trying to do is build an alternate system that automatically backs up everything that happens on Bluesky, and therefore keeps Bluesky in check, so that it cannot arbitrarily target people and censor them. Not that they're doing that right now, but in the future, if their shareholders tell them to, the executive team can say: actually, it would have no effect, because people already have this alternate relay. It's like a pub or a club with a fire exit; people will just go to the next room. So we're not just building this, but also offering it as a common public good. I'm also involved in advising the “People's Bid for TikTok.” If they win TikTok’s U.S. assets, it's also possible that TikTok will join the same underlying infrastructure.

Mounk: That's very interesting. It feels to me like there's a potential paradox here. You want to put as much decision-making power as possible in the hands of users. That is obviously a way to prevent censorship, and also to be more in control of our digital experiences in general. But of course, a lot of users prefer to be stuck in a partisan echo chamber, perhaps to silence the voices that they don't like. Certainly, the experience of Bluesky appears to have been that it both attracted a very particular political slice of the U.S. population—it is actually more of an echo chamber than Twitter at any point was—and that a lot of people are using some of these innovative technological options to block people en masse. You can now opt into algorithms that say: if anybody follows Audrey Tang, I'm going to block them. That, of course, creates a very strong social incentive for nobody to follow you, because they know that the moment they follow you, all of the people who've installed this algorithm are going to stop seeing their posts as well.

Are we in danger of a new techno-utopianism here? Twenty years ago, in the early stages of the internet and social media, we thought this would lead to all of these connections among people and actually help us overcome our deep divisions, make our societies less identitarian, and so on. What has happened is the opposite. Might the same thing happen here, if we give users of these social media platforms all of these freedoms to really curate their own experiences? It sounds like something really positive, and as you're talking, I find myself nodding along. But then I consider what Bluesky actually looks like today. I don't know that it has in fact accomplished these things. Is it actually making things better in the way that we hoped?

Tang: Yeah, the two largest free-software implementations of these protocols are Bluesky and Truth Social, and, exactly as you've witnessed, they may actually be even more echo-chamber-ish than Twitter was back when Twitter was still called Twitter. But this is not necessarily something we cannot recover from. There's a recent paper that I co-wrote with Glen Weyl, Luke Thorburn, Emillie de Keulenaar, Jacob Mchangama, and Divya Siddarth. What we designed is essentially a way for Truth Social on one side and Bluesky on the other to keep all their echo chambers and their different communities, but also to curate what's called a surprising validator. If people on the other side say something that my community very much appreciates, despite the fact that we don't usually agree on much of anything, that's called bridging content.

There is a way to algorithmically surface that bridging content in both communities' feeds, and also to let both communities, or sub-communities, see that the others are watching this content too. The reason we want to create this kind of common knowledge is that we believe it is one of the main ways people can see that common ground actually exists. Despite our ideological differences, and our differences across generations, regions, and genders, there is content that both sides really appreciate and would in fact invite more of. That becomes a business model. The hypothesis is that I belong to many different communities: spiritual, professional, and so on. If there's some form of content that can heal the divides I usually feel across all these communities, so that I can share it with all of them to bring them together a little bit, it's worth paying or subscribing for.
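One way to operationalize the "surprising validator" idea is to rank content by how well it does across communities rather than in total, for instance by multiplying per-community approval rates so that one-sided hits score low. This scoring rule is an illustration, not the formula from the paper Tang mentions, and all the numbers are invented.

```python
# Rank items by the product of approval rates in two communities, so
# content loved by only one camp sinks toward the bottom.
items = [
    {"title": "in-group dunk",        "a": 0.95, "b": 0.05},
    {"title": "flood rescue story",   "a": 0.80, "b": 0.75},
    {"title": "niche policy wonkery", "a": 0.40, "b": 0.45},
]

for item in sorted(items, key=lambda it: it["a"] * it["b"], reverse=True):
    print(f'{item["title"]:22s} bridging score {item["a"] * item["b"]:.2f}')
# The flood rescue story ranks first: both communities appreciate it.
```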

Many communities, on a local scale or a professional scale, would also like to sponsor the kind of content that heals their sub-communities, while of course still accurately representing the divisions within them. There is a market for that. And I totally agree that just because anyone can curate any experience, it doesn’t follow that they'll all be positive experiences. But what we have witnessed from the Community Notes experiments and from Taiwan is that people really do appreciate having some of this, especially humorous pre-bunking content, which is depolarizing. A very quick example from early 2020: we already saw two very polarized camps that were blocking each other and really fighting. One camp, because of our SARS experience a few years earlier, believed only in N95, the highest-grade mask, and said everything else was a scam. The other side said that it’s aerosols and ventilation that matter. Now, if we had just amplified these two extremes, we wouldn't actually know which one was misinformation, because science had not resolved it yet. What we did was use the same bridging algorithm to find the uncommon ground, and that's why we pushed out this pre-bunking message very quickly: a very cute dog, a Shiba Inu, putting her paw to her mouth and saying, wear a mask to remind each other to keep your dirty and unwashed hands away from your face. What it does is change what the mask signifies: if you see somebody wearing a mask, it's not putting pressure on you, it's just reminding you to wash your hands. We measured tap water usage, and it did increase after that. People who laughed about this message could not be polarized again by the two messages I just mentioned.

Mounk: Ha, amazing. It's very funny that it actually increased the use of tap water. I personally find people who tell me to wash my hands to be very aggressive, but I may be a minority of one. But, if a Shiba Inu tells me that, I'm happy to listen.

What's really interesting about this is the frame of how to reimagine government. One set of questions is what institutional changes can really accomplish, because political science has been very influenced by the institutionalist tradition over the last decades. There's this hope that if we change our electoral system, that is somehow going to bring an end to the extremes in our politics. I've argued about why proportional representation in America would not be a solution to the specific problems of the country, because, after all, in many systems of proportional representation you have extremes rising in politics and often managing to get into government. I think we're placing too much expectation on a change in the electoral system if we think it's somehow going to make extreme voices miraculously disappear from our politics.

Perhaps what you're making is a somewhat parallel argument about the design of these digital infrastructures. They can make a difference. They can change incentives. They can allow us to see common ground that current algorithms and current forms of infrastructure occlude. But they're not going to miraculously transform our public space and make all of those problems disappear. That may just be too much to ask. Nevertheless, I would like to ask a set of slightly more fundamental questions. We've dealt with the first bucket of misinformation. We've started to touch on the second bucket of how to inform policymaking processes in clever ways. But I'd like to spend a little bit of time thinking about the third bucket. If we were designing the first democracies today, and the founding fathers of the United States, having come ashore in the New World and sundered their ties to the United Kingdom, had had available the technologies we do today, what would that political system look like? More broadly, what does it look like to hold on to the principles that, in my mind, undergird our political system? I assume we have a similar conception of them, though perhaps it differs in some ways. The idea that we want to make decisions collectively, that we govern ourselves rather than allowing a dictator or a religious authority or a military general or a political party to make decisions for us. The idea that, at the same time, we want to preserve certain basic individual rights, like the freedom of speech we've been talking about. But we want to radically reimagine the framework for how to institutionalize those values in a digital world. What would that look like? What do we keep from the current system, and what might be reimagined from scratch?

Tang: That's a great question. When democracies were first founded, there already existed communication technologies that worked across distances, like telegrams. Very quickly there came broadcasting technologies, such as radio and, later on, television. What did not exist was what's called broad-listening technology. It was possible for one person to talk to another person, and it was possible for one person to broadcast to millions of people, but there were no technologies that let one person listen to conversations among a million or so people, and that let those million or so people also listen to one another, to facilitate understanding that was not previously possible. People have tried. In the Obama White House, for example, there were letters to the president, and an entire staff of humans, working like a small language model, parsed through all the different letters and chose five or so every day as a representative sample of what people had in mind, for the president to read. But that is very time-consuming, and the letters are individualized, so these people don't actually talk to each other. It's still very much a hierarchical arrangement.

What we are now seeing is a new generation of what are called broad-listening tools. For example, in the U.S., in Bowling Green, Kentucky, as we speak, a tool is being rolled out called What Could BG Be? (WhatCouldBowlingGreenBe.com). If you go there, you can see the feelings of your fellow citizens. You can see a set of listening partners, people who are locally important in Bowling Green and have agreed to respond to the uncommon ground that is discovered, and you can agree or disagree with other people's ideas. What it provides is not just increased bandwidth for one decision maker to listen to all the people in Bowling Green, but the ability for people there to see the group picture: what we have in common, the common values, and also what the main differences are, what defines them, and so on. This is called sense-making. There are now open-source tools that can provide such sense-making services for arbitrarily large conversations, both online and offline.

In California, a couple of weeks ago, we launched a similar effort with Governor Newsom called Engage California. It allows people to have a real-time conversation, or an asynchronous one, with each other on the common topic of how to recover from the Eaton and Palisades wildfires. Again, this is a nonpartisan topic that can really be resolved if people from the whole state contribute, instead of just one specialized department. It has the same shape of broad listening: not just listening to the governor, but people listening to each other. I think what will increase is this symmetric capability of broad listening, instead of just broadcasting.

Mounk: That sounds really interesting. But I guess it feels to me somewhat limited in ambition and impact, and I mean that in two ways. First, it feels like a reinvented form of the traditional New England town hall. Rather than coming together once a week or once a month to speak at the town hall, we are now able to do so asynchronously. That obviously makes it easier for people to participate, and it has all kinds of advantages. But it feels somewhat limited in the extent to which it transforms the functioning of government, particularly because the ultimate decision-making power remains with bodies like a city council or the national legislature that are elected in traditional forms.

The second concern I have is that it also retains—this is perhaps inevitable—the problems of participatory democracy. Who is most likely to participate in this kind of forum? It's a little easier now. Perhaps the busy mom who can't get a babysitter can post once the kids are asleep, rather than having to miss a meeting that takes place when she has to take care of the children. But I'm sure there continues to be a very strong skew in terms of socioeconomic status, educational status, and perhaps especially political ideology, where we see—this is the fundamental problem of the primary system in the United States—that the more ideologically extreme you are, the more motivated you are to participate in politics. If you have a system like traditional elections, in which, hopefully, in some countries, 80 percent or more of the population participates, you're not giving extra voice to the extremes. If you have something like a primary or a caucus in the United States, or some of these forms of participatory democracy where you hold a town hall, the people who show up are 10, 5, perhaps 1% of the population. They're not representative. They are selected to be more ideologically extreme. So you're actually boosting ideological extremes; you're doing the opposite of the infrastructural things we've been talking about in the context of social media.

I guess my question is twofold: how do you avoid that pitfall, and don't we need to reinvent government in an even more radical way? Don't we need to think more radically about whether there is a way of making deliberation the core of our political system that doesn't involve elected city councils or parliaments in the same way? Is there some way of reimagining where the ultimate decision-making power lies in a modern democracy? Or is the current model in fact the right one, which we just need to augment through these digital channels?

Tang: To your first question about avoiding pitfalls: the good thing about the pro-social, bridge-making algorithm is that it's clone-proof, in the sense that if somebody mobilized thousands of ideologically motivated people to come to the Polis platform and vote exactly the same way, it would have no effect on the outcome. Again, the algorithm first calculates the clusters, the groups of people with different thoughts. Say there are two clusters, one of 5,000 people and another of just 50. Because the algorithm measures the area, the plurality of their thoughts, 5,000 people voting exactly the same way are just one dot; they actually cover a smaller area. So, by the design of the clustering algorithm, even if there are 5,000 people on one side, they still have to get more than 85% approval from the other, smaller group in order for their statement to count as bridge-making. The same property that protects the community notes algorithm protects this one.

But I totally agree with you: there may be people who are not motivated to upvote or downvote in the first place, and this system will systematically exclude their voices. That is why not only is broadband as a human right important but, in the Taiwan case, we also use a lottocracy, a sortition. When I say we sent SMS messages to 200,000 random numbers, they are truly random numbers, and the message says that if you want to volunteer some time to consider this question, we will actually pay you for it. It's like jury duty, but for administrative functions. In the in-person case, you can pay people even more and call it a citizens' assembly. There are already a lot of successes, especially at the local level, with citizens’ assemblies in Japan and many other places around the world. This is an existing form of decision making, not a future form. What I'm describing is essentially using the current generation of grounded AI that does not hallucinate to speed up the summarization phase, the reflection phase, and some of the informing phase of this kind of citizens' assembly. That is my answer to your first question.
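The clone-proofing Tang describes can be seen in a tiny model: if a cluster's weight is measured by the spread of distinct opinions rather than by headcount, then 5,000 identical ballots collapse to a single point. Counting distinct vote vectors is a deliberate simplification of the real area measure.

```python
def opinion_footprint(ballots):
    """Identical ballots collapse to one point, whatever their count."""
    return len({tuple(b) for b in ballots})

organic = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 0, 0)]  # 4 distinct views
brigade = [(1, 0, 1)] * 5000                             # 5,000 clones

print(opinion_footprint(organic))  # 4
print(opinion_footprint(brigade))  # 1
```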

The second question is interesting, because parliaments also started as consultative groups. They did not start as binding decision-making bodies. That changed after people started comparing the quality of the parliament's deliberations with that of the monarch's team. After a while, people thought: this is consistently higher quality, so let's switch over to it. It's like the Buckminster Fuller quote, right? It's not about destroying the old system; it's about building a new system that gradually makes the old one obsolete. That is what I believe in.

In the rest of this week’s conversation, Yascha and Audrey discuss whether social media helps or hinders democracy, and if there is a case for optimism about the future of our democracies. This part of the conversation is reserved for paying subscribers…
