Twitter's Flawed Justification for Censorship
The platform plays an important role in public discourse. It should stop creating and enforcing ambiguous content restrictions.
In the past few months, Spotify, Substack, and Reddit have all resisted calls to censor content on their platforms. Joe Rogan is still on Spotify despite demands to punish him for spreading COVID misinformation; Substack reaffirmed its commitment to journalistic freedom after pressure to ban contentious authors; and Reddit held firm against calls to centralize content moderation of its community forums. In our era of censoriousness, when the values of free speech and open discourse are under constant pressure, these developments are worth celebrating.
But for every action, there is an equal and opposite reaction. While some tech platforms are standing up for free expression, others are restricting it. Chief among the restrictors is Twitter. And while only about a quarter of Americans use the platform, dismissing it as a niche corner of the internet would be a mistake. Twitter is where breaking news spreads fastest and where much of the news cycle is made. If you doubt the platform’s influence on our public discourse, just consider the number of news cycles that were dominated by Donald Trump's tweets—at least until he was booted from the platform.
All this is to say that Twitter plays a special role in public discourse, and this influence makes its efforts to restrict speech all the more concerning. Twitter’s poor form on this issue—banning accounts, removing tweets—has been pointed out repeatedly, and the company has deflected with a set of justifications. We think it is worth addressing these justifications one by one, detailing the flaws and potential for abuse in each of them.
Twitter’s self-defense splits into three categories: it justifies censorship on the basis of privacy, on the basis of harm, and on the basis of misinformation.
Let us begin with the first: censorship justified in defense of privacy. This is well characterized by Twitter’s recent update to its media policy. Late last year, the company announced that it would be a violation to share media depicting individuals without their permission. To our knowledge, there has been no effort by Twitter to obtain such permission for the vast majority of videos on the site. Doing so would be a monumental, probably impossible, task, considering that the number of videos on the platform is presumably in the millions or billions.
Under the rule, it is conceivable that the video of George Floyd being murdered by police officer Derek Chauvin could never have been allowed, or that the videos of the January 6 attack on the Capitol would have been in violation of company policy. It should not need stating that a warped notion of privacy is no reason to potentially ban every piece of media that is not explicitly consensual. (Inauspiciously, the policy was announced on the first day of the company’s new CEO, Parag Agrawal.)
Now let us turn to the second category: Twitter censoring speech in order to remove “harmful” content. In 2020, when asked about Twitter’s troubling speech record, then-Twitter CTO Parag Agrawal explained that the company’s policy is to focus on the potential harm that a tweet might cause: “We attempt to not adjudicate truth, we focus on potential for harm.”
Perhaps this would be workable if Twitter were to narrowly define harmful speech worthy of restriction in the same way that the Supreme Court has—as “words which by their very utterance inflict injury or tend to incite an immediate breach of the peace.” But Twitter has offered no such satisfying answer to what it actually means by harm. Instead, the labyrinth of Twitter’s rules on the subject is so tangled, broad, and ambiguous that they tell us very little about how Twitter, in practice, determines what speech is harmful.
Since nobody at Twitter seems to know—or at least nobody seems intent on informing the public—exactly how it applies these rules, the company will probably spend years playing language games to justify its approach, and eventually come to learn that nobody can agree on the definition of harm. In the meantime, we are left with a standard of harm that is vague, nebulous, and easy to marshal in service of ideological ends. The problem with restricting free speech is not that there aren't odious opinions; it is that it is naive to trust any institution to determine what they are.
The third justification for Twitter’s censorship is calling it a crackdown on misinformation—a justification that has become particularly fashionable since the dawn of COVID. According to Twitter’s “COVID-19 misleading information policy,” users “may not use Twitter’s services to share false or misleading information about COVID-19 which may lead to harm.” If Twitter decides that a user has broken this rule, the company might flag the tweet as misinformation, delete it from the site, or permanently ban the user.
If we have learned anything from the pandemic, it should be intellectual humility. What we accept as true may be false; what we accept as false may be true; institutions and experts can be wrong. Just consider the changing guidance from experts on masks. At the start of the pandemic, experts were saying that masks did little to stop the spread of COVID and were unnecessary. Then they changed their guidance, advocating for masks, including cloth ones. Now, they are saying that cloth masks are almost useless. Twitter’s policy is not equipped to handle content that contradicts these many iterations of expert opinion.
Our point is not to dismiss experts or institutions, but to acknowledge that the scientific consensus is ever-changing and that pinning down “the truth” is not easy. The belief in free speech and open discourse is, at root, an expression of intellectual modesty. It takes an incredible amount of hubris to believe that one has an incontestable grasp on the truth.
Twitter, of course, is a private company. It has every right to monitor and curate its platform however it likes. Ideally, Twitter would opt for a light touch—only banning content and people that break the law, directly incite violence, or clearly cross the line into hate speech. And if Twitter is absolutely insistent on strictly monitoring misinformation, harm, and privacy, it must be absolutely transparent about its rules, and ensure it applies those rules evenly to all users. Or, better still, decentralize the process by which it arrives at those rules (its Bluesky project is one step in this direction, but it has an awfully long way to go).
Given Twitter's role in our public discourse, if and how the platform censors content is a concern for all of us—even those who have never signed up for an account. This is a significant responsibility, and Twitter should follow the example set by Spotify, Substack, and Reddit in defending open discourse.
Sahil Handa is a contributing editor at Persuasion.
Seth Moskowitz is an associate editor at Persuasion.