Renée DiResta on How (Not) to Fix Social Media
Yascha Mounk and Renée DiResta also discuss her involvement in the movement to recall key officials in San Francisco.
Renée DiResta is a writer and the technical research manager at Stanford Internet Observatory.
In this week’s conversation, Yascha Mounk and Renée DiResta discuss how we can turn down the temperature on debates around social media and find real solutions; why terms like “censorship” and “misinformation” fail to capture the complex difficulties in reforming social media; and why, as a resident of San Francisco, she supported the school board and District Attorney recalls.
The views expressed are those of the speakers, not those of Persuasion. The transcript has been condensed and lightly edited for clarity.
Yascha Mounk: As recently as six or seven years ago, I was teaching a class called Democracy in the Digital Age, and I felt that my role was to talk my students out of their unfounded optimism. They came in thinking that the internet would make everything better; the world would be richer and the Middle East more democratic.
But now, all of the conversation about the internet is, “Here's why it's going to destroy democracy, help out autocrats all over the world, why we need to censor online, and why mis- and disinformation are dangerous.” What do you think the potential of the internet for politics is, and where do you think the real dangers lie?
Renée DiResta: There was a lot of optimism pretty early on—the internet as the blogosphere, and the proliferation of voices that came about as a result of the elimination of gatekeepers and the ability to publish content. Then came the period where we moved into social networks. There was a dynamic of content being curated for you, with you being kind of pigeon-holed into being a particular type of person. We see these networks being assembled in the early 2010s, as platforms try to connect you to people who are just like you, independent of geography. And there are some really fantastic things that come about: people with rare diseases find each other. People who are in particular circumstances that they feel they can't discuss with people they know in real life find each other.
But what you also start to see is recommendation engines connecting together all sorts of other communities as well, including those that we might consider a little bit more toxic, like the highly conspiratorial communities. People become really deeply engaged with a highly factional way of engaging on the internet. You start to see it in the 2015 presidential campaign in the US, with people putting certain emojis in their social media bio; you see the Pepe the Frog community come about as a very strong community in favor of then-candidate Trump. And over the next five or six years, you start to see all of these different types of people with a particular factional politics. The norm that gets established is the norm of, “We use this platform to fight. We use Facebook as a platform to argue, to insulate ourselves in particular communities, and then go and wage war against other communities.”
Mounk: If what you're saying is that fundamentally, what's going on is that the structure of social media just makes people self-assemble in very small but very, very passionate groups, then it feels like we need a much, much more structural fix for our institutions to be able to cope. What is the implication of that?
DiResta: First of all, I don't think that misinformation is a particularly useful framing of the dynamics of the problem today. Most of the things that really rile people up are not demonstrably falsifiable. You're not saying that a fact about the world is untrue, and you need to be corrected on that thing through a label or a fact check. And even in those neat cases, people might not believe it. There's this fundamental problem: if you're in Tribe A, you distrust the media of Tribe B and vice versa. And so even the attempt to correct the misinformation, when it is misinformation, is read with a particular kind of partisan valence. “Is this coming from somebody in my tribe, or is this more manipulation from the bad guys?”
One of the more useful frameworks for what is happening today is rumors: people are spreading information that can maybe never be verified or falsified, within communities of people who really care about an issue. They spread it amongst themselves to inform their friends and neighbors. There is a kind of altruistic motivation. The platforms find their identity for them based on statistical similarity to other users. Once the network is assembled and people are put into these groups or these follower relationships, the way that information is curated is that when one person sees it, they hit that share button—it's a rumor, they're interested, and they want to spread it to the rest of their community. Facts are not really part of the process here. It's like identity engagement: “this is a thing that I care about, that you should care about, too.” This is rewarmed media theory from the 1960s: the structure of the system perpetuates how the information is going to spread. Social media is just a different type of trajectory, where the audience has real power as participants. That's something that is fundamentally different from all prior media environments. Not only can you share the rumor, but millions of people can see in aggregate the sharing of that rumor.
Then the question is, what do you do about that? And that's where I think we can examine which rumor is resonating within which community at which time. This claim about COVID is really getting traction within the black community, and this other claim about COVID is really getting traction in the wellness community. But those claims are not the same claims. We don't see the wellness community sharing the claims that the black community finds impactful. And that means that the responses have to also be much more tailored, much more targeted.
Mounk: If the structural architecture of Twitter and Facebook determines the way in which these rumors spread, don't we need a structural fix in order to contain the danger?
DiResta: I think the answer is yes. I'll give a specific example here, to get out of the realm of theory. When you pull up your Twitter feed, there are "Trends" on the right-hand side, and they're personalized for you. And sometimes there's a very, very small number of participants in the trend, maybe just a few hundred tweets. But it's a nudge, it says you are going to be interested in this topic. It's bait: go click this thing that you have engaged with before that you are probably going to be interested in, and then you will see all of the other people's tweets about it. Then you engage. And in the act of engagement, you are perpetuating that trend.
Early on, I was paying attention to the anti-vaccine movement. I was a new mom, and I was really interested in what people were saying about this on Facebook. I was kind of horrified by it, to be totally candid. I started following some anti-vaccine groups, and then Facebook began to show me Pizzagate, and then QAnon. I had never typed in Pizzagate, and I had never typed in QAnon. But through the power of collaborative filtering, it understood that if you were an active participant in a conspiracy theory community that fundamentally distrusts the government, you are probably similar to these other people who maybe have a different flavor of the conspiracy. And the recommendation engine didn't understand what it was doing. It was not a conscious effort. It just said: here's an active community, you have some similarities, you should go join that active community. Let's give you this nudge. And that is how a lot of these networks were assembled in the early and mid-2010s.
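The nudge DiResta describes is the core move of collaborative filtering: the engine never looks at what a group is about, only at who else belongs to it. A minimal sketch, with entirely hypothetical users and group names chosen to mirror her example, might look like this:

```python
# A toy collaborative-filtering recommender. Users and group names are
# hypothetical; the point is that similarity alone drives the nudge.
memberships = {
    "alice": {"anti_vaccine", "natural_parenting"},
    "bob":   {"anti_vaccine", "pizzagate"},
    "carol": {"pizzagate", "qanon"},
    "dave":  {"gardening", "cooking"},
}

def jaccard(a, b):
    """Overlap of two users' group sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def recommend_groups(user, memberships, k=1):
    """Suggest groups joined by the user's k most similar neighbors.

    The engine never inspects what a group is *about*: it only sees that
    similar users belong to it, which is how a member of an anti-vaccine
    group can be nudged toward Pizzagate without ever searching for it.
    """
    mine = memberships[user]
    neighbors = sorted(
        (u for u in memberships if u != user),
        key=lambda u: jaccard(mine, memberships[u]),
        reverse=True,
    )
    suggestions = []
    for u in neighbors[:k]:
        suggestions.extend(g for g in memberships[u] - mine if g not in suggestions)
    return suggestions

print(recommend_groups("alice", memberships))  # → ['pizzagate']
```

Nothing in the code is "conscious" of conspiracy content, which is exactly the dynamic described above: an active community plus statistical similarity is enough to assemble the network.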
There are many people who feel that any change to the design or any change to the algorithm, how somebody is nudged or prompted, is censorship. And so what we're starting to see is these arguments around censorship, which historically have been tied to viewpoints, instead being applied to just the basic phenomenon of design—what is amplified and what is not? There is inherently a weighting and a value judgment there. I would argue that we could experiment with that far more than we have, and that is this question of how you use design. What does an optimal system design look like? I think that's really where the interesting research should be at this point.
Mounk: Do you think that people are going to be as skeptical about structural changes as they are about continual attempts by Twitter, Facebook, and other social media platforms to “moderate” speech by making people delete tweets or suspending accounts, or even terminating them? And what are the kinds of structural reforms that social media platforms should be experimenting with to make healthier platforms?
DiResta: I mean, there will always be complaints anytime a powerful platform does something that changes a particular individual or community's perception of their distribution. You may remember, some of the earliest allegations that the platforms were behaving unfairly came about when large numbers of automated accounts were taken down. We used to jokingly refer to these as “bot raptures.” So people wanted fewer bots, but when they saw their follower counts decrease, there was a sense that something egregious had happened. But I think one way that platforms can address concerns about this is through a little bit more transparency around what came down and why, as opposed to you just waking up and you've lost, say, 20,000 followers. Because that moment creates an opportunity for powerful influencers to frame that reduction of automated or spam accounts as inherently anti-conservative, as opposed to a massive cull across the platform in which politicians in India lost many tens of thousands of followers as well.
I think the legitimacy question is what you're really getting at here: who has the moral authority to decide how the system is run? That is a very complicated question, because of course, there's “ref-working” (pressuring the referees) by powerful people on all sides of the political spectrum, and not limited to the United States. We see ref-working in many, many different countries, including authoritarians who rely on intimidation campaigns, propaganda bots, and trolls, to create particular perceptions of their popularity or the popularity of their policies. Although this is a very heated debate in the United States, these phenomena are happening globally. And that's one of the real challenges for the platforms.
Mounk: I think there are at least three different questions. One is about who should be allowed to make the rules that govern big platforms like Twitter and Facebook, which are private companies, but also have systemic importance for being able to engage in political debate in this country and in other countries. Then the second kind of question is, once we figure out who should be able to make those decisions, what kinds of decisions should they be able to make, and what kind of decisions are the correct decisions? Is it just a question of making structural adjustments so that Twitter isn't built to be an adversity-and-hating-each-other platform?
And then the third question is transparency. Even if we said that some committee in Silicon Valley should be able to rule on what happens in Facebook and Twitter, even on arbitrary grounds, should they have to inform users about what happens? I think it’s certainly become obvious how negative the consequences are when people are driven into this kind of paranoia. It becomes very tempting for people to think, if a few tweets in a row didn't perform that well, “Hang on a second, I somehow offended some engineer at Twitter headquarters, and they just shadow banned me.”
DiResta: I completely agree. First, there's the question of transparency. As you noted, people are more attuned to takedowns on their own side. And yet one thing you notice when you look at the stats on how many millions of accounts come down, is that everybody feels this same sense of aggrievement that you're describing. So, to be able to assess that, we would want to have transparency in the takedown data so that researchers could look at these questions. There were a couple of audits within Facebook, a civil rights audit and an anti-conservative bias audit. And these things were happening kind of concurrently. I don't remember the final outputs, but I do recall that the anti-conservative bias investigators really didn't find very much. There was no smoking gun there. Glenn Beck and a bunch of other people went to Facebook, had this meeting, decided that their hearts were in the right place, that there probably wasn't much of it there. But Facebook ends up eliminating human oversight. They take the human editors off the trending topics feature, and all of a sudden Trending Topics is promoting all kinds of—I remember getting, in the science section, a Wiccan blog about how Mercury was in retrograde. Facebook ultimately wound up killing that feature, because there was no way for them to curate it in a way that would surface reliable information.
The kind of work that Facebook does internally is using data that only Facebook has. So when we make arguments for transparency, if you want to actually assess the extent to which a particular community may, in fact, be disproportionately silenced by a policy or by a design choice, then there has to be a way for someone outside of the platform to participate in that auditing process. You mentioned regulation. One of the bills that I was most excited about was the Platform Accountability and Transparency Act, which argues that we can't answer these questions without access to data. And there is support for that idea, even from people on the right, like Ted Cruz and Marsha Blackburn, and also on the left, like Senator Blumenthal. You see this bipartisan recognition that in order to actually assess what is happening, we should create systems that enable the research to be done.
The problem, though, is that regulation to simply punish the platforms is much more likely to get picked up in the media and become a topic of conversation. And that is much more likely to get passed at some state house level, as we've seen in Texas. I think people would like to neatly shoehorn this into a “censorship versus not censorship” choice, but it's not really that simple. I think we have to be advocating much more for transparency, if we want to actually assess the extent to which this is a problem. For any given user, though, there is one other thing I'll just flag: there is now the Facebook Oversight Board, which I think provides an interesting model by which you can appeal a decision that was made about your account. It’s a potential opportunity to plead your case, to have an outside body, not within the platform and operating under certain incentive structures take a look at it. That is, I think, a useful model. But as you noted earlier, the questions are, how do you scale that, who funds it, and who sits on the jury?
Mounk: Let me transition to a related but very different topic. You lived in San Francisco for a long time, and you became quite active in challenging some of the political trends within the city. You refer to yourself as center-left, I assume you identify broadly with the Democratic Party, but you became worried about what progressives within the San Francisco city government were doing, particularly on education and crime.
Tell us a little bit about why those headline stories about San Francisco really should be concerning to people, both in terms of what's happening to the city and in terms of the mistakes that other cities might make if they follow that model.
DiResta: It was very interesting how we felt about it locally, versus how it was cast in the national media. This created some interesting tensions. I lived in San Francisco for a decade from 2011 to 2021, and it wasn't theoretical for me, right? My house and car were broken into. My children learned to recognize needles at the playground. And I felt like we could do better. When the schools closed during the pandemic, there was a lot of real goodwill. Early on in March we all said, “Great, close the schools.” We were all homeschooling, and everybody was very tolerant of the circumstances for a very long time—through the end of the summer, actually. But when it came to the end of the summer, and we began to realize that there was no plan to do anything differently, and that what had happened from March to May was going to be normal for the entire next school year, that was when people really started galvanizing and saying, “OK, things are really going off the rails here.”
At the same time, the board was very focused on things like renaming the schools. We're renaming schools that we can't go into? That doesn't make any sense. And it wasn't right versus left, it was moderate versus progressive (which in San Francisco means something more like far left)—a blue versus bluer kind of community. And so parents did really galvanize, and it was very interesting because there was this galvanization that was happening nationwide, as people were calling for reopening and different communities had different kinds of things that they were very angry about. The San Francisco moderates were not anti-mask. They were very pro-vaccine. They were very much advocating for teachers to get vaccines first, to help them jump to the front of the line, to keep this moving—can we fundraise for PPE, or ventilation in the classrooms? All of these sorts of things. But it was very interesting, because within San Francisco, the far left, including the teachers’ union, tried to frame this as the right-wing manifestation of the reopener movement coming to San Francisco. And it doesn't help, of course, that right-wing media loves to talk about San Francisco, because there are real problems in that city and nobody has been held accountable for them, because our political machine just moves its friends into higher and higher offices. What you started to see was this pushback from parents who were saying enough is enough. But it was read as, “the right wing cabal comes to San Francisco politics.”
And of course, that comes with a bunch of smears—“You are saying the same thing that Tucker Carlson is saying.” Then you find yourself in that awkward position of having to say, “Unfortunately, yes. We might find their politics unfavorable, but just because they said the same thing doesn't mean that we should be living in these circumstances.” It harkens back to the very start of our conversation. When you have these highly factional dynamics, everything is cast as, “Your political identity is opposed to my political identity, and therefore, what you're advocating for, I will demonize as an identity function, as opposed to engaging with the substance of the argument”—which was that we should reopen schools because kids are not doing well. Making that a partisan thing was extraordinary to watch.
Mounk: This is what, drawing on an idea by Emily Yoffe, I've called “180ism”—the sense that politics has become so polarized that you often want to say the opposite of whatever your opponent is saying. And of course, the ironic thing about that is that as soon as Tucker Carlson jumps onto some bandwagon, you suddenly have to change positions and start saying the opposite of what he's saying, and you're actually outsourcing your own political positioning to the people you most abhor.
What about the crime aspect of this? Give us a bit of a context for why it is that your children by the age of two needed to be able to recognize needles on the playground.
DiResta: There is a lot of deep empathy for the plight of the unhoused and addicted people who are on the streets of San Francisco, and I think the entire blue spectrum in San Francisco feels that empathy. There are debates about what should be done about it, and there is actually a fair amount of support for safe injection centers and similar things, which the right would be horrified by, but which is perfectly within the realm of acceptable solutions. Many of us have argued for that at a state level as well. But there are extraordinary amounts of money that are thrown at the problem. Most of the vehicular break-ins in San Francisco are organized crime rings that operate outside of the city, come into the city because there are a lot of tourists, break in, and leave again. And this has been known for a very, very long time. But in the campaign for District Attorney when Chesa Boudin was running, there were arguments about creating a system by which people who had their windows smashed could apply for the city to cover the cost of the repair. So we're just going to keep paying to repair people's windows? And there's a lot of finger pointing from the DA to the police, and from the police back to the DA: Why aren’t police arresting more? Why isn't the DA charging anybody?
This, again, fed into the larger national conversation about decarceral prosecutors, and how prominent people who were aligned with that movement nationwide saw Chesa as being under attack by this national machine. But what polling showed was just that the people of San Francisco—I think there was some crazy stat that came out a couple of weeks ago that said something like [half had been a victim of theft]. But the arguments were always that the crime stats don't support the sentiment, ergo it must be that a vast propaganda operation is brainwashing the good people of San Francisco, as if they're all reading right-wing media. There was a rather surreal conversation about it, where if somebody on the right is criticizing the crime wave in San Francisco, that means that we should just be happy with the status quo in the city, even though many of us had this very personal experience of being a victim of crime or knowing somebody who was a victim of crime. The idea that you've just been brainwashed by some right-wing pundit was actually really insulting, and so a lot of communities got very galvanized around this and felt that it was time to send a message back to the city leadership by saying that these policies are not working, and we cannot continue with them. I had left by the time of the Chesa recall, but that was where the momentum was.
Mounk: How did this countermovement, first on the school closings, and then on the Chesa Boudin recall, happen? How did people manage to organize, even though the people in political power were arrayed in defense of these individuals and institutions? And how did you message those campaigns to make it harder for people to falsely claim that you were just foot soldiers in this vast right-wing conspiracy?
DiResta: It was a huge effort by a number of different organizations in the moderate center of the city, so I don't get any real credit for this. But the focus had to be on competence: Is this policy working, what might a different policy look like, and who might better execute that policy? As much as the far left in the city tried to turn this into a referendum on the identity of the school board, as if this was just kind of punishing black and Latino school board members, what was really happening were arguments that would have led to the entire board being recalled if they could have been. But these three folks were eligible, and it was really just on competence—filing sunshine requests for city records and email records, to see if any work was being done to reopen, if there were emails between the school board members coming up with the plans. And they didn't exist. That was the thing that was so staggering, and that's when you start to see local media beginning to write the story of competence. Are these the best people to be running this school district at this time? And in the referendum, the voters overwhelmingly found that the answer was no. The focus was on how we might have a school board that is not just a stepping stone to higher political office, where the DCCC and the folks in San Francisco who do the endorsements are trying to position their people for a subsequent Board of Supervisors role or something. Is there a way to break that model and to argue for a parent-centered, student-centered school board? That, I think, above all else, was what really motivated the parents. The union advocated for the teachers, and the school board had been endorsed by the union, and there was nobody advocating for the kids. That was the gap that a lot of concerned parents decided to step in and fill.
Mounk: What advice would you give to people who may be listening to this and saying, “Hey, my city has similar problems, or some institution I'm a part of has been taken over by deeply ideological and not very competent people—how can I argue against this without ruining my good name or being smeared as part of this right-wing conspiracy?”
DiResta: When I talk about the kind of factional dynamics online today, it does make it very, very easy to smear people. That is almost the cost of participation in the public sphere today. I think it's really unfortunate that that's where the norms are, but that is where the norms are. What we tried to do was to create a positive message: these are ways that we want to help children—not counter-smearing, not wading in, to the greatest extent possible. But the recognition that you can impose significant costs on a person by smearing them and by making it impossible for them to speak up, because of a risk of firing—it’s my sincere hope that institutions and companies become more aware that this is a tactic that is widely used online by every facet of the factional spectrum at this point. And it is something that institutions should really take care not to fall prey to. There are occasionally times when something egregious has been uncovered or done, and an institution has a responsibility to investigate. But the idea that so-and-so is a fascist for advocating for a competent government is not that.
Please do listen and spread the word about The Good Fight.
Podcast production by John T. Williams and Brendan Ruberry. Podcast cover image by Joe O’Shea.