The Supreme Court Is About to Decide the Future of Free Speech
A cluster of rulings this month could define online communication for years to come.
This month represents a critical moment for freedom of speech—with a series of pending Supreme Court decisions likely to set standards for speech that, for better or worse, could last for generations. We turned to the expert on these issues—longtime ACLU president Nadine Strossen—to help disentangle the Court cases in play and explain why they matter.
As ever, if you value what we do, please consider joining thousands of other readers and becoming a paid supporter of Persuasion!
– Sam and the editorial team.
The current Supreme Court term includes a cluster of cases that could well shape the future of online free speech. These cases invite the Court to determine the power of both government officials and social media platforms over “content moderation” policies, which in turn define platform users’ speech rights. Given the unparalleled importance of these platforms for all manner of communication—personal, professional, and political—meaningful free speech rights depend on the platforms’ policies. It is hardly an exaggeration to say that the Court’s rulings in the coming weeks may well determine the shape of online speech for years to come.
Current moderation policies have warranted strong critiques, including for unjustifiably disfavoring certain expression and speakers. Yet government efforts to regulate these policies, even those with the asserted purpose of making them fairer, raise serious free speech problems of their own. Government control of the companies’ editorial decisions violates the free speech rights not only of the platforms themselves but also of all of us who use them. Four cases on the Court’s current docket address these critically important issues: three that directly involve social media, which the Court has not yet decided, and one involving an analogous offline situation, which the Court decided last week.
The Court also recently ruled on two social media cases that present yet another key issue that will shape free speech in our time: the extent to which government officials may selectively exclude certain speakers or messages from social media accounts that the officials use for both personal and governmental communications.
While these six cases are complicated, both legally and factually, a few core principles help to illuminate their likely impact on the future of free speech.
Re-fighting the Battles for Online Free Speech
In its landmark 1997 Reno v. ACLU ruling, the Court unanimously held that online expression is entitled to the same robust First Amendment protection as print media, rejecting the Clinton administration’s then-fashionable argument that the internet, like broadcast media, should be subject to extensive government regulation. The Court hailed the internet’s unparalleled potential as “the most participatory form of mass speech yet developed.”
Two decades later, in its 2017 Packingham v. North Carolina decision, the Court—again without dissent—reaffirmed its strong speech-protective stance toward online expression, declaring that “social media in particular” constitutes “the most important places…for the exchange of views.”
However, as ACLU founder Roger Baldwin observed, “no fight for civil liberties ever stays won.” That insight applies to the Court’s current cases. They force defenders of online free speech to fight for it yet again, with the outcome unclear.
Since the 2017 Packingham ruling, social media expression has been increasingly demonized, across the ideological spectrum, as the purported cause of a wide range of individual and societal problems. Correspondingly, many government measures now seek to curb social media expression, even though some experts have questioned the evidentiary basis for this targeting of social media.
The State Action Doctrine
A core First Amendment theme links all of the free speech cases that the Court is hearing this term and is crucial to understanding them: the “state action” doctrine, which holds that the First Amendment’s bar on measures “abridging the freedom of speech” generally constrains only “state”—i.e., government—actors, not private sector actors. Each of these cases involves a knotty set of circumstances that seems to straddle the government/private sector divide. For instance, in the cases concerning informal government action, the question is whether the government has exerted sufficient pressure on private companies—including over social media companies’ content moderation decisions—that the companies’ ostensibly private actions should in fact be attributed to the government and hence be subject to First Amendment constraints.
Although the Court has grappled with similar First Amendment issues in prior cases involving other communications media, unique aspects of social media have fueled sharp debates about how the old principles apply to this new media environment.
Public Officials’ Social Media Pages
In its two recent decisions involving government officials’ use of social media accounts for both personal and public business, the justices unanimously agreed on a newly formulated legal standard.
The bellwether case of Knight First Amendment Institute v. Trump turned on President Trump’s use of his private Twitter account for official business and his penchant for blocking critical commenters. Lower courts had ruled that Trump’s blocking of commenters constituted state action that violated the “bedrock” First Amendment principle of “viewpoint neutrality”—i.e. that government may never disfavor expression solely because of disagreement with its viewpoint. The Court never reviewed the Trump case on the merits (because he had left office), but in this term’s two cases on this topic, it stressed that an official with authority to speak for the government “cannot insulate government business from scrutiny by conducting it on a personal page.” In such situations, the Court held, a communication will be attributed to the state, and hence subject to First Amendment constraints, only if the official both possessed actual authority to speak on the government’s behalf and purported to exercise that authority when making the communication.
Since virtually all government officials across the country maintain social media accounts, often mixing personal and official content, these newly announced standards will have an enormous impact on our freedom to engage in the most important speech we have: speech about public policies and officials, including critical commentary.
Laws Regulating Companies’ Content Moderation Decisions
Meanwhile, challenges to laws in Florida and Texas raise the issue of whether government may regulate private platforms’ content moderation policies for the asserted purpose of protecting users’ free speech. The rationale behind those laws is essentially that the companies should be treated as public utilities, required to honor the same viewpoint-neutral speech policies as the government itself. These laws reflect concerns that giant social media platforms were disproportionately suppressing conservative speakers. While the laws’ proponents accuse the companies of engaging in “censorship,” their opponents retort that the companies are in fact engaged in “editing,” as is their First Amendment right.
The most direct precedent here is the Supreme Court’s unanimous 1974 decision in Miami Herald v. Tornillo, which overturned a highly analogous Florida law that also sought to restrict editorial decisions by powerful media companies—in that case, major newspaper publishers with monopoly power in important media markets. Rebuffing arguments that were uncannily similar to those now being made about social media companies, the Court stressed that the First Amendment conclusively treats private editorial judgments—no matter how flawed and consequential they might be—as the lesser of two evils when compared with government regulation of those judgments. The Court declared: “A responsible press is an undoubtedly desirable goal, but press responsibility is not mandated by the Constitution, and, like many other virtues, it cannot be legislated.”
Based on the justices’ questions during oral argument, a majority appears poised to reaffirm that position. If so, social media companies would retain editorial discretion to design and implement their content moderation standards.
Informal Censorship
The final set of cases, which asks whether the government exerted undue pressure in seeking to influence social media companies’ treatment of speech, may well be the most consequential of all.
In the pending Murthy v. Missouri case, the lower courts held that the Biden White House and various executive branch agencies had improperly pressured social media platforms to block expression that was inconsistent with administration policies, especially concerning COVID. The plaintiffs claim that the administration worked with social media giants to suppress “truthful information… under the guise of combating ‘misinformation.’” NRA v. Vullo, which the Court decided on May 30, raised similar questions, although not in a social media context. The fundamental issue in both is how to draw the line between legitimate government efforts to persuade or encourage private actors not to promote certain communications and efforts that amount to undue pressure or coercion. That distinction turns on the factual circumstances of each case.
In Vullo, the justices all agreed that the facts alleged in the plaintiff’s complaint supported a conclusion that the officials had crossed the line from permissible encouragement to impermissible coercion. In Murthy, the factual record is more complicated and contested, making it hard to predict how the justices will rule.
Even free speech advocates have been sharply divided about Murthy, which calls into question wide-ranging Biden administration efforts to influence social media’s content moderation practices (although it should be noted that some of these efforts began in 2018, under the Trump administration) and hence could have an outsized impact on those practices. It is worth recalling, however, the critical role that the state action doctrine plays in the Court’s First Amendment analysis: its central concern is to bar undue government intervention in private sector communications channels.
Online platforms’ content moderation practices have been subject to many just critiques. But free speech may well be even more endangered by government regulation of those practices, whether through overt laws or covert backstage strong-arming. Perhaps all of these cases should turn on the same timeless verity that a unanimous Court forcefully embraced in Miami Herald v. Tornillo: namely, that we should avoid a cure that is worse than the disease.
Nadine Strossen is a Senior Fellow with FIRE (the Foundation for Individual Rights and Expression), a former national president of the American Civil Liberties Union, and a professor emerita at New York Law School. Her most recent book is Free Speech: What Everyone Needs To Know.
"the “state action” doctrine" is why authoritarians want the government to take over full industries like Health Care.
I think if the Justices vote to allow the social media tech companies to keep censoring, banning, etc. they are no longer qualified for Section 230 protections. They will be in fact seen as publishers and not platforms.
Pick one, but you don't get both.