In a recent Wall Street Journal op-ed, President Biden argued that “Big Tech companies” must “take responsibility for the content they spread and the algorithms they use.” To that end, Biden wants to “fundamentally reform” the law commonly known as Section 230, which protects online platforms from liability for most content their users post.
The president’s not alone.
For many politicians and critics on both sides of the aisle, the law has become a scapegoat for everything they don’t like about social media. Democrats think it facilitates the spread of hate speech and misinformation. Republicans complain that it lets social media companies freely censor conservatives. Proposals to reform or eliminate Section 230 abound.
But the attacks on Section 230 miss one important thing: it’s vital to free speech and innovation on the internet.
To understand why, you first have to understand the context in which the law was introduced. Congress passed Section 230 as part of the Communications Decency Act of 1996 in the wake of two court decisions that addressed when online services could be held liable for third-party content.
The first case involved the early internet giant CompuServe, which hosted numerous discussion forums. Because CompuServe did not moderate what users posted in the forums, the court ruled that it acted like a newsstand or bookstore rather than like a publisher. CompuServe was therefore not liable for defamatory statements posted on its forums. There was a catch, however: this was only the case so long as the company was unaware of potentially illegal content. If someone informed the company of specific instances of illegal content, CompuServe would become liable for it.
The second court case came a few years later, when the brokerage firm Stratton Oakmont sued CompuServe’s competitor Prodigy in New York state court for libel, based on user posts on Prodigy’s message board accusing the company of fraud. Unlike CompuServe, Prodigy did exercise editorial control over its message boards by moderating posts. In the court’s view, that made Prodigy less like a newsstand and more like a newspaper legally responsible for everything in its pages.
These two cases had the potential to place onerous burdens on tech companies. Platforms like Prodigy that engaged in moderation faced the prospect of having to review everything their users posted, just like a newspaper. But even in that early internet era, Prodigy’s message boards hosted about 60,000 new posts each day, far more content than you’ll find in any newspaper. This threat of liability would have created strong incentives for platforms to pre-screen user content—destroying real-time engagement—and to remove anything that could result in a lawsuit, if not stop hosting user content altogether.
Meanwhile, platforms like CompuServe, which did not moderate, also faced severe burdens, because they would be responsible for any potentially unlawful content brought to their attention. People upset with others’ speech could flood platforms with complaints. As one federal court put it: “Each notification would require careful yet rapid investigation of the circumstances surrounding the posted information, a legal judgment concerning the information's defamatory character, and an on-the-spot editorial decision whether to risk liability by allowing the continued publication of that information.”
If adopted by courts more widely, these two rulings threatened to stall the internet’s growth and cap its potential as a democratizing force for free expression.
Enter Section 230. It frees websites from the specter of crushing liability for hosting or refusing to host third-party speech. So if someone defames you on Twitter, you can sue that person, but you can’t sue Twitter. Widespread focus on social media also makes it easy to forget all the other online services Section 230 protects—crowdsourced encyclopedias, Amazon and Yelp reviews, dating sites, crowdfunding platforms, online marketplaces like eBay and Etsy, Substack-style publishing, blog and newspaper comment sections… the list goes on.
There are some exceptions—the statute doesn’t, for example, affect the enforcement of federal criminal law. But outside these exceptions, Section 230 immunizes websites from liability for what others say. It protects free speech by ensuring that platforms’ decisions about what to keep up or take down don’t hinge on fear of exorbitant legal costs. Platforms don’t have to spend time and money litigating whether a user’s speech was defamatory or otherwise unlawful, which would impose a tremendous burden even in cases where the platform ultimately prevailed.
Those wishing to reform or abolish Section 230 make several common complaints. One is that social media companies are not neutral platforms: they often remove or reduce the visibility of content for ideological reasons. Critics argue that platforms that arbitrarily moderate content in this way shouldn’t receive special protection.
It’s certainly true that large social media companies have come to police users’ speech in sometimes arbitrary and vexing ways, and their concentrated power gives them outsized influence over public discourse. But the solution is not, as some have proposed, to mandate that platforms adhere to “viewpoint neutrality” or “political neutrality.”
This requirement would raise First Amendment issues. Under the First Amendment, platforms are free to decide what speech to keep up, remove, or promote. The government cannot lawfully punish those decisions by eliminating Section 230 protections only for those platforms that moderate content in ways the government doesn’t like.
“Neutrality” is also a vague and unworkable standard. It’s often difficult, if not impossible, to tell if a post was taken down for “political” reasons or because it, say, violated a platform’s “abusive speech” policy. And the government—with its own political motivations—would be tasked with making these subjective determinations. Other solutions, such as measures to increase competition in the industry, are far preferable.
Other proposed reforms to the law target the way information is presented online. In Gonzalez v. Google, the Supreme Court is currently considering whether algorithmic recommendations are protected under Section 230. The argument in favor of stripping this protection is that Section 230 immunity should not apply to platforms’ decisions to amplify or suppress certain content.
But algorithmic recommendations are a key feature of today’s internet. Search engines, social media platforms, and many other websites rely on algorithms to help users sift through a mass of information to find what interests them. Even a top Google search result could be considered a “recommendation.” Making websites liable for the way they organize and display third-party content wouldn’t be meaningfully different from making them liable for the content itself, undermining the purpose of Section 230.
Finally, some people have proposed seemingly modest reforms that target specific forms of disfavored content: it is a common left-wing criticism, for example, that Section 230 is an engine of “hate speech” and/or “misinformation.” Yet the vast majority of that speech is protected by the First Amendment—and for good reason. Even without Section 230 protection, platforms wouldn’t be liable for hosting it.
And history shows that targeting specific types of content can have far-reaching, unexpected consequences. In 2018 Congress passed FOSTA/SESTA, which carved out new, broadly worded exceptions to Section 230 immunity, exposing online services to liability for speech that “promotes” or “facilitates” prostitution or that “supports” or “facilitates” sex trafficking offenses. Craigslist reacted by shuttering its personals section entirely, and other platforms limited discussion of these topics. As the Woodhull Freedom Foundation and others challenging the law argue, it could even reach speech that provides health advice to sex workers or advocates decriminalizing prostitution. Carving out additional exceptions to Section 230 will result in further unintended consequences of this sort.
In the last 25 years, the internet has democratized the exchange of ideas on an unprecedented scale. Section 230 was—and remains—essential to that democratization. It has fostered the creation and growth of a wide variety of online communities and platforms for people to speak their minds and trade ideas, information, and creative content. Without it, these communities would be rarer and less freewheeling, with controversial and unpopular speech facing the greatest risk of suppression. Big Tech’s competitors and other websites—those less able to afford litigation and content-moderation costs—would suffer most. If there’s any doubt about this, pay attention to which market participants are inviting or advocating Section 230 reform. Spoiler alert: It’s not the upstarts and new entrants.
Whatever problems the internet has created, undoing the law that has done the most to promote online free expression and viewpoint diversity is not the solution.
Aaron Terr is Director of Public Advocacy at the Foundation for Individual Rights and Expression (FIRE).