A Dangerous Victory For Social Media Companies
In a pair of rulings, the Supreme Court gives Big Tech control over content moderation.
In a piece for Persuasion in early June, First Amendment scholar Nadine Strossen contended that the Supreme Court’s decisions this term could “well determine the shape of speech online for years to come.”
Well, the Court has ruled, the results are in, and what we are left with is… the worst of all possible worlds.
In the companion cases Moody v. NetChoice and NetChoice v. Paxton, the Court espoused a doctrine in which social media companies are viewed as curators or compilers of online material, with the feed constituting a "distinctive expressive offering"—even when those curatorial choices are in fact made by algorithms or AI tools. That philosophy gives the social media companies carte blanche to moderate—or censor—as they wish. Meanwhile, in the highly consequential case Murthy v. Missouri, the Court found that the plaintiffs lacked standing to sue the government even though the government had copiously interfered in tech companies' moderation practices.
The Court's decisions missed a golden opportunity to treat social media companies as "common carriers"—like telephone or electric companies—that have a "general requirement to serve all comers." That framing would have helped put the public back in control of its own discourse. Instead, social media companies got everything they could possibly have wanted, with the Supreme Court effectively washing its hands of the matter.
The cases themselves are a muddle, and in a sense it was that sprawl that proved decisive—with Justice Samuel Alito complaining in his dissent in Murthy that the Court’s majority wanted “a series of ironclad links” and blithely ruled in favor of the defendants when the facts of the case were messier than that. Press reports have tended to focus on the narrow partisan implications, with The New York Times for instance describing the Murthy decision as handing “the Biden administration a major practical victory” against “a Republican challenge.” But liberals will feel less sanguine about the Court’s decision if it should happen that, let’s say, officials from an incoming Trump administration browbeat social media companies to “demote” content about climate change or about Trump’s felony conviction—and the companies have Murthy to shield them from any legal consequences.
It’s necessary to understand the cases—and the divisions they sparked within the Court—at a fairly deep level to recognize how far-reaching a victory the social media companies have had.
Of the cases, the NetChoice ruling is a bit easier to parse. The Court was addressing two state laws—one in Texas, one in Florida—both of which limited the tech companies' ability to engage in content moderation that disfavored users' expression on the basis of viewpoint. State legislatures were clearly a peculiar venue from which to impose restrictions on tech companies with global reach, and no one really expected the Florida or Texas laws to endure. The Court's ruling remanded the cases to the lower courts for further consideration of the issues.
From the beginning, the laws were intended to force the courts to grapple with social media’s reach. And more significant than any judgment on the laws themselves was the philosophy the Court’s majority laid out in its opinion—a philosophy which gives social media companies all the prerogatives (and none of the responsibilities) of traditional publishers.
The Court chose to view NetChoice as directly continuous with the seminal 1974 case Miami Herald v. Tornillo, in which a state "right of reply" law requiring newspapers to give political candidates space to answer criticism was deemed unconstitutional for interfering with a newspaper's "exercise of editorial control and judgment." Somewhat complacently, the Court in NetChoice deemed social media companies to be carrying out the same functions as editors, if on a greater scale. "[Platforms] include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression," the majority wrote of social media platforms. "While much about social media is new, the essence of that project is something this Court has seen before."
One figure who wasn’t having any of this logic was Justice Samuel Alito, who contended that what social media companies do is categorically different from newspaper editors. In a dyspeptic concurrence (which was really more of a dissent), he argued that the scale alone invalidated the analogy: Facebook and YouTube each produced around 1.3 billion times as many bytes as there are in a single issue of The New York Times. The content moderation work is done predominantly by algorithms and increasingly by AI—and represents an entirely different exercise from editors marking up typescripts in blue pencil, as was the backdrop for Miami Herald v. Tornillo.
For Alito (and for Thomas and Gorsuch, who agreed with his analysis), the better analogy was to a telephone directory rather than a newspaper—but a directory in which an algorithm constantly manipulated the results that users saw. By "mechanically accepting [the] analogy" to Miami Herald, the Court's majority, Alito argued, simply missed that social media is a very different entity. "Are such decisions [content moderation done by AI algorithm] equally expressive as the decisions made by humans? Should we at least think about this?" he wrote, pleadingly.
The majority’s instructions make clear that algorithms—even ones that amount to censorship—are to be treated as a form of expression. Under a “common carrier” framework, by contrast, the Court could have opened a path for the government to regulate social media companies who adopt discriminatory algorithms, on the basis that the companies have a duty to provide a service to all users.
The decisive 6-3 ruling in Murthy v. Missouri was even more of a gift to Big Tech.
Murthy v. Missouri was the culmination of what the Court called a "sprawling suit," one centered on the Biden administration's concerted effort, particularly in 2021, to have social media companies "demote" content critical of the administration's Covid-19 response. The Court conceded that the plaintiffs who brought the suit "faced speech restrictions on different platforms, about different topics, at different times"—but because the social media companies were engaged in their own censorious content moderation practices, it was never entirely possible to establish a "causal link" showing that the White House had pressured the companies, against their own wishes, into restrictions that directly injured the plaintiffs. For the majority, the decision was all about the plaintiffs' standing: "We begin—and end—with standing," the Court modestly held, unable to find a smoking gun from a White House official that resulted in the direct censorship of any one of the plaintiffs.
Justice Alito, on the other hand, was more confident in his powers of deduction. In a furious dissent (in which he was once again joined by Thomas and Gorsuch), he argued that "For months in 2021 and 2022, a coterie of officials at the highest levels of the Federal Government continuously harried and implicitly threatened Facebook with potentially crippling consequences if it did not comply with their wishes about the suppression of certain COVID–19-related speech." That "browbeat[ing]" occurred in public statements, such as President Biden's "They're killing people" remark in July 2021. Even more consequentially, it took place in a steady drumbeat of private correspondence, with officials making various demands of social media companies and backing them up with threats. "Internally, we have been considering our options of what to do about [Facebook's foot-dragging on algorithm changes]," White House Senior Advisor for the Covid-19 response Andy Slavitt wrote in an email to Facebook in March 2021, and White House Press Secretary Jen Psaki made clear two months later that one of the options being considered was "a robust anti-trust program"—i.e., a clear threat to the social media companies.
As Alito put it, "The picture is clear … The message was unmistakable, and it was duly received," and Facebook reacted as… well… something less than a brave and independent-minded defender of the Fourth Estate. Facebook told the White House that it would "work … to gain your trust," that "we thought we were doing a better job," and, at an ebb in the relationship, asked how it was possible "to get back to a good place" with the White House. Internal communications revealed that Facebook simply didn't care that much about protecting the freedom of expression of its users. It had "bigger fish … to fry with the Administration," like an ongoing EU-U.S. dispute over data privacy. As Alito wrote, "Facebook's responses resembled that of a subservient entity determined to stay in the good graces of a powerful taskmaster."
But even with Alito doing his best Hercule Poirot, the Court's majority remained somehow unconvinced that the months of browbeating by the administration and the concurrent "demoting" of millions of posts by social media companies were causally linked—a see-no-evil approach that sets a troubling precedent. To Alito, the Court's ruling "thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think." For the social media companies, the lesson is obvious: by hiding their robust content moderation practices under various terms of art and by maintaining a threadbare degree of deniability in their relationships with administration officials, they are able both to stay on the "good side" of the White House and to withstand scrutiny from the Supreme Court.
The deep concern here is that, when it comes to social media, the Supreme Court just doesn’t get it. Social media has, as the Court has previously recognized, become “the modern public square.” It’s where social and political discourse—in its most democratic, often rawest form—takes place. When people want to freely express themselves, per their First Amendment rights, their immediate impulse is, as often as not, to take to social media. Certainly, nothing else has that immediate, far-ranging reach.
But here’s the thing. Nobody ever expected, at the time of the Constitution’s framing, that the “public square” would be mediated by a group of monopolistic, for-profit private corporations each customizing users’ experiences so as best to generate advertising revenue. The Supreme Court, with the Murthy and NetChoice rulings, has simply kicked over control of that public square to the social media companies, who are deemed the “curators” of it and now have almost limitless control over moderation decisions—and, in so doing, the Court forgot who a public square is really for. As the Solicitor General for Louisiana put it in oral arguments, “Remember that the third party [i.e. the public] is completely absent from this discussion.”
And that’s what we’re left with. In Murthy, the Court found that plaintiffs lacked standing to sue the government for what Alito regarded as “blatantly unconstitutional” pressure on the social media companies. And in NetChoice, the Court was untroubled by the social media companies’ use of AI algorithms to make their content moderation decisions—including ones which may constitute viewpoint censorship. “Algorithms are people too” was the gist of the Court’s rulings—and the social media companies now find themselves unburdened by judicial scrutiny as they censor in any way they wish and kowtow to administration officials in any way they find politically expedient.
Sam Kahn is an associate editor at Persuasion and writes the Substack Castalia.