The "Twitter Files" Show It’s Time to Reimagine Free Speech Online
Platforms tried speech codes. There’s a better way.
A few years ago I was invited to an off-the-record meeting with senior executives at a major social media company. The topic was free speech. I’d just written a piece for the New York Times called “A Better Way to Ban Alex Jones.” My position was simple: If social media companies want to create a true marketplace of ideas, they should look to the First Amendment to guide their policies.
This position wasn’t adopted on a whim, but because I’d spent decades watching powerful private institutions struggle—and fail—to create free speech regulations that purported to permit open debate at the same time that they suppressed allegedly hateful or harmful speech. As I told the tech executives, “You’re trying to recreate the college speech code, except online, and it’s not going to work.”
I’ve been thinking about that conversation ever since Elon Musk took over Twitter, and particularly since Matt Taibbi and Bari Weiss last week began releasing selected internal Twitter files at Musk’s behest. These files detail, among other things, Twitter’s decisions to block access to a New York Post story about the contents of Hunter Biden’s laptop ahead of the 2020 election, Twitter’s decision to eject Donald Trump from the platform, and the ways in which Twitter restricted the reach of tweets from a number of large right-wing Twitter accounts.
The picture that emerges is of a company that simply could not create and maintain clear, coherent, and consistent standards to restrict or manage allegedly harmful speech on its platform. Moreover, it’s plain that Twitter's moderation czars existed within an ideological monoculture that made them far more tolerant toward the excesses of their own allies.
In other words, Twitter behaved exactly like public and private universities in the era when speech codes ruled the campus.
At the risk of oversimplifying history, here’s the short story of modern university censorship. As American universities grew more diverse, a consensus emerged in universities both public and private that schools should strive to create a “welcoming” environment for students and faculty, with particular attention paid to protecting students from discrimination on the basis of protected categories such as race, sex, sexual orientation, and gender identity.
Federal and state laws required colleges and universities to protect students from harassment on the basis of protected characteristics. But schools wanted to go further. They wanted to make sure that students and faculty were protected from psychological discomfort. The speech code was born.
At the same time, however, schools were still eager to proclaim their support for academic freedom and free speech. So the message to the campus community boiled down to something like this—all speech is free except for hate speech. But what was hate speech? The definitions were broad and malleable.
Temple University, for example, banned “generalized sexist remarks.” Penn State University declared that “acts of intolerance will not be tolerated,” and defined harassment as “unwelcome banter, teasing, or jokes that are derogatory, or depict members of a protected class in a stereotypical and demeaning manner.”
One of the worst speech codes I ever read was enacted at Shippensburg University, a public university in Pennsylvania. The policy was remarkably broad: “Shippensburg University’s commitment to racial tolerance, cultural diversity and social justice will require every member of this community to ensure that the principles of these ideals be mirrored in their attitudes and behaviors.”
It doesn’t take a legal genius to realize that these speech rules were so broad that they granted administrators extraordinary power over free speech. Combine that power with the ideological blinders that are inherent to any political monoculture, and you have a recipe for staggering double standards in censoring political and religious speech. I could fill an entire newsletter with stories of such abuses.
Back in my litigation days, I led legal teams that followed a few simple rules. First, public institutions must comply with the First Amendment, and they should be sued if they don’t. Second, private universities have the freedom to craft their own rules, but if they promise free speech, they should deliver, and there is no better model for delivering free speech than the First Amendment.
The same message should apply to social media. As a private company, you can choose to become, say, a “progressive social media platform” or a “website for Christian connection and expression” and govern yourself accordingly. But if you hold yourself out as a place that welcomes all Americans, then you’re courting disaster if you depart from the lessons learned from constitutional law.
To be clear, to say that First Amendment principles should guide private platforms is not the same thing as saying “anything goes” any more than protecting the First Amendment on campus creates chaos. Far from it. Campuses must and do protect individuals from targeted harassment, for example, and they can use reasonable time, place, and manner regulations to channel speech into particular places and specific hours of the day.
For example, it’s one thing to yell, “Trump 2024!” on the quad in the middle of the day; it’s another thing entirely to walk up and down the halls of a dorm at 2:00 a.m. yelling the same thing. Yelling in the quad is free speech, while interrupting sleep in the dark of night can be a form of harassment.
To take another example of appropriate speech restrictions, while there are sharp limits on the ability of the government to regulate pornography, it can absolutely restrict access to graphic images when children are present. For example, the FCC prohibits “obscene, indecent, and profane broadcasts” on the radio and network television.
The FCC exercises authority over radio and television network content because the federal government controls access to the airwaves. It grants licenses to use the finite number of frequencies available. It does not exercise that same control over subscription services, which is why prime-time programming on CBS looks very different from prime-time programming on HBO.
But not even the FCC has the power to prefer one political point of view over the other. If it promulgated regulations that granted Democrats preferential access, they’d be struck down immediately. The reason is related to the core principle of the First Amendment, a core principle that social media platforms should adopt as well: viewpoint neutrality.
The principle of viewpoint neutrality means that any regulation of speech, including time, place, and manner regulations, should be crafted and enforced without regard to the underlying viewpoint of the speaker. The same rules apply to Democrats and Republicans alike, to Christians and atheists, to soldiers and pacifists. The same rules apply even to people who hold the most reprehensible viewpoints, including communists and fascists.
Along with viewpoint neutrality, there’s another key constitutional principle that’s critical to maintaining the marketplace of ideas—clarity. Rules that are vague or overbroad can chill free speech every bit as effectively as a rule that specifically targets disfavored speech for censorship. Even otherwise-acceptable time, place, and manner regulations can be unlawful if they grant to public officials too much discretion to restrict speech.
How does all this apply to Twitter, Facebook, and every other large social media platform on the planet? First, it means giving up the quest for a free speech utopia and embracing viewpoint neutrality. There is no way to create any meaningful free speech environment that allows for actual debate while protecting participants from hurtful ideas or painful speech. Executives at Twitter or Meta are no better than college administrators at crafting the perfect speech code. The brightest minds have already made that effort, and even the brightest minds have failed.
Second, it means moderating on the basis of traditional speech limits. Even institutions that embrace viewpoint neutrality will place limits on speech. They’ll have to. If there is one thing we know from decades of experience with the internet, it is that completely unmoderated spaces can and do become open sewers that are often unsafe for children and deeply unpleasant for adults. Unmoderated spaces can become so grotesque that they’re simply not commercially viable.
“Viewpoint neutral” is thus not a synonym for “unmoderated.” Consistent with viewpoint neutrality, a platform can impose restrictions that echo offline speech limitations. Defamation isn’t protected speech. Neither is obscenity. Harassment is unlawful. Invasions of privacy (doxxing, for example) should face sanctions. Threats and incitement violate criminal law. A platform can say, “Children are present. No nudity.”
It is easy to imagine different rules that make it easier to talk about issues and harder to target individuals. Examples of viewpoint-neutral time, place, and manner regulations that could prevent, for example, some of the worst conduct on Twitter could include limiting or eradicating the quote-tweet function, limiting the visibility of replies to other users’ tweets, or limiting the ability of users to reply or interact with tweets of people they don’t follow.
Third, it means embracing clarity and transparency. Make rules clear. Create an appeals process when users are penalized. No human institution is ever going to apply its rules perfectly, and accountability is necessary. Secrecy in decision-making can impair trust every bit as thoroughly as flaws in the substance of the decisions made.
Indeed, one of the interesting lessons of the last few years is that social media censorship is both divisive and ineffective. It often backfires. In a free society, attempts to censor speech often create a demand for that speech. Twitter censoring the Hunter Biden story, for example, didn’t squelch its reach. Internet searches for Hunter Biden skyrocketed after Twitter took action.
The idea that censoring speech can have the opposite effect is so well-known that it has a term—the Streisand Effect. In 2003, Barbra Streisand sued to have a picture of her home removed from an internet site. At the time she filed the suit, the image had only been downloaded a grand total of six times (twice by her lawyers). After her suit hit the news, the image was downloaded 420,000 times in a single month.
The reality of the Streisand Effect can create perverse incentives. Bad actors will intentionally court suspensions or flirt with outright bans to generate attention and sympathy.
I don’t believe that Twitter or any other social media company monopolizes the marketplace of ideas. I also believe they have the right to set their own policies. If Twitter wants its moderation policy to simply be, “Elon Musk decides,” then it’s Twitter’s right to set that policy. I don’t have to use the service, and I can take my speech to countless other platforms to share my excellent takes on Aquaman and Ja Morant.
Universities—even private universities—eventually learned an important lesson in free speech. The latest speech code survey by the Foundation for Individual Rights and Expression (FIRE) indicates that only 18.5 percent of surveyed universities have a “red light” (clearly speech-restrictive) speech policy. That’s down more than 50 points from 2009, a year I filed multiple free speech lawsuits against public universities.
Social media companies should take note. The upheaval caused by Elon Musk’s Twitter takeover—along with the controversy generated by the “Twitter Files”—represents an ideal opportunity for a free speech rethink. New platforms can benefit from old principles, and when it comes to managing a marketplace of ideas, centuries of First Amendment jurisprudence can help light the way.
David French is a Senior Editor at The Dispatch, a contributing writer for The Atlantic, and a member of Persuasion’s Board of Advisors. He’s the author most recently of Divided We Fall: America’s Secession Threat and How to Restore Our Nation.
This article was originally published in The Dispatch.
I don't think free speech can be so nuanced and not become a mess of power abuse. I am a free speech absolutist. "Hate speech" as a concept was always flawed and dangerous. It was THE mistake. Hate is an emotion. Speech is an action. Nobody can read the emotions of others, and there should never be a justification for prosecution of people based on the assumption or projection of an emotion. All that should matter are actions, and if those actions cause MATERIAL harm. Psychological discomfort is NOT material harm.
Material harm should be the one and only benchmark. My words might sting. And that sting might be justified, or not... but it is never a right of someone to silence or prosecute me because my words sting them. Now, if I glue myself to the road in protest over a climate change hoax, and that action prevents people from traveling to their school, place of business, doctor appointment, etc... well then THAT is speech that causes material harm and should be prosecuted for the harm it causes.
Prosecute for the material harm, not the speech.
"Material" is a legal and business term. It is subjective in that two people can have a different opinion about the line. However, there is a line... and that is the key. Today there is no line... everything can be claimed to be "hate speech".
What has happened is that by allowing in this concept of "hate speech" we have empowered Karens, crybullies and political charlatans to exploit it for their own power and greed pursuits. And within that cabal of terrible actors are the teachers unions... a radical third-wave feminist Marxist hive that has been executing a plan for decades to brainwash every K-12 student with the toxic Critical Theory fake scholarship so that these little bots of destruction would go out into the world and create the social chaos needed to secure a cultural revolution to transform America into yet another failed collectivist hell hole.
Canceling conservative voices was always going to be a necessary step for that plan to succeed. Idiot virtue signaling Republicans (Mitt Romney types) failed to fight back when the left began pushing this concept of hate speech... when it was Republicans that freed slaves and passed Civil Rights legislation. What the dolts on the right failed to recognize is that the left was basically laying the groundwork for a form of neoracism. Woke is their manifestation of that. Racism as practiced is simply defined as attempts at tribal dominance by denigrating and discriminating against other tribes that pose a competitive threat. Today the left has adopted the woke ideology because it empowers them to be biased against an entire class of people that they compete with for social and economic dominance. This was always the plan of the radical left... and we all fell into it.
The remedy is civil rights 2.0... a new day where people are judged 100% on demonstrated character, behavior and ability and not anything else... and that free speech is absolute.
What pre-Musk Twitter was doing, as revealed in the Twitter Files, is worse than described here. They were engaging in Calvinball censorship. It's not simply that the rules were flawed or inconsistently enforced across political lines (though both are also true). It's that Twitter execs were simply making up the rules as they went along.
In the case of the Hunter Biden laptop story, first they censored it, and then when the PR team asked the content moderation team what the rationale was, they openly admitted they didn't have that part figured out yet. They were hoping to hang their hat on "hacked materials," even though they knew full well that wouldn't pass muster, as there was no evidence of hacking being involved. With the World Cup going on, an easy analogy is a ref giving a player a red card, kicking them out of the game, and only then trying to determine whether any rules were actually broken in order to justify a decision that's already been made. Or, in the words of Alice in Wonderland: "Sentence first. Verdict afterwards."
In the case of Libs of TikTok, when Twitter execs couldn't find any rules being broken (which makes sense since LoTT just recirculates content already posted by others), they had to make up brand new bullshit on the spot like "indirect rule violations". If a rule isn't being directly violated, then it's not being violated at all. They also couldn't even say which specific rule was allegedly being "indirectly" violated.
Straight up Calvinball.