The Internet’s Ultimate Censors
A few companies have the power to banish entire websites and applications from the web. They have begun caving to activist pressure.
Debating whether social media platforms should suspend or ban offensive users has become a national pastime. Only rarely do the behind-the-scenes services that power the internet get caught up in this—and thank goodness, since the stakes for free speech are much higher there. But there is worrying evidence that internet infrastructure companies—entities that provide the essential tools and services other websites and applications need to stay online and accessible—will soon be subject to the same scrutiny as their more public-facing counterparts in social media.
Take, for instance, the recent dustup involving Cloudflare and Kiwi Farms.
Cloudflare provides a range of backend website services, including domain registration and protection from coordinated attempts to take sites down by overwhelming them with traffic. It provided both services to Kiwi Farms, a fringe web forum known for mocking people it deems "lolcows"—“people with eccentric behavior who can be ‘milked’ for entertainment,” per the forum’s definition. Kiwi Farms users have been accused of instigating harassment campaigns against some of these people, including Canadian Twitch streamer and transgender activist Clara Sorrenti. Over the summer, Sorrenti—better known as Keffals—launched a campaign to get Cloudflare to drop Kiwi Farms.
Cloudflare initially resisted. "Some argue that we should terminate these services to content we find reprehensible so that others can launch attacks to knock it offline," wrote Cloudflare CEO Matthew Prince and Head of Public Policy Alissa Starzak in an August 31 post on the company’s blog.
They continued: "That is the equivalent argument in the physical world that the fire department shouldn't respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities."
But a few days later, Cloudflare backtracked. "We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post," Prince wrote in a September 3 blog post. He swore that Kiwi Farms presented a special case—"an unprecedented emergency and immediate threat to human life." Yet Cloudflare has done this twice before—first with neo-Nazi site The Daily Stormer in 2017 and then, in 2019, with the infamous fringe-right message board site 8chan, which came under fire after three mass shooters posted manifestos there. The "dangerous precedent" Prince and Starzak warned about had already been set.
The 8chan and Daily Stormer decisions likely made dropping Kiwi Farms easier, and canceling services to Kiwi Farms will make it easier for Cloudflare to take similar action in the future. Once you've accepted the premise that your company is in the business of policing the acceptable bounds of online speech, there will always be more bad actors to evaluate and find beyond the pale. And Cloudflare's decisions don't just set a precedent for the company itself but for other internet infrastructure companies too. If evaluating the moral character of clients is categorically not something your business does, then it shouldn't be difficult to resist pressure to make exceptions to that policy. But once other backend service providers start making those judgment calls… well, who wants to be known as the one provider still protecting internet trolls or Nazis?
The Kiwi Farms situation echoes what happened last year with Parler. In the wake of the January 6 riots, Google and Apple both banned the MAGA-leaning, Twitter-esque social platform from their app stores, saying that it had failed to properly moderate content promoting violence. Amazon Web Services also stopped providing web hosting to Parler.
Of course, as private companies, Amazon, Apple, Cloudflare, and Google are well within their rights to make decisions like these. I'm certainly not suggesting that tech companies should be forced to provide services to entities or individuals they find questionable. Nor do I think that the government should mandate that these companies accept any and all potential customers engaged in legal activity. Most, if not all, of the policies proposed by politicians aimed at addressing censorship online—new tech-targeted antitrust laws, banning algorithmic recommendations, or rescinding the law that helps protect tech companies from liability for user and customer speech—would make the internet less pleasant for the vast majority of us.
But scenarios like the one involving Kiwi Farms raise fundamental questions about how internet infrastructure and intermediary companies should ideally operate. It was once pretty uncontroversial that the backend businesses of the digital sphere didn't—and shouldn't—exercise editorial discretion over whom they provide services to. Registering a domain was not an endorsement of that domain. Helping secure a site against malicious attacks didn't mean you were aligned with that site's message.
Unfortunately, the public—or at least the extremely online or political portions of it—seems to be moving away from this understanding. More and more, we see social, legal, and political pressure being applied to internet infrastructure companies, urging them to stop providing services for social media applications or websites because of content they publish, communication they permit, or users they allow. Sometimes, as with Kiwi Farms, 8chan, and Parler, these pressure campaigns are successful.
When this happens, it is fundamentally different from an individual person or group being kicked off of platforms like Twitter, Facebook, or YouTube. Even if someone is booted from all of the most popular social media platforms, they can still do things like keep a WordPress blog, start a Substack newsletter, post to web forums like Kiwi Farms, join alternative networks like Parler, or start a website of their own. But when internet infrastructure companies stop providing services to entire websites and social media applications, it really does threaten to leave certain people and groups with few or no options for speaking and operating publicly online.
And leaving people without such options is bad for a number of reasons.
One key problem—which unfortunately seems to be the least persuasive argument these days—is that a pluralistic, liberal society should be comfortable letting "bad" ideas, speech, and enterprise exist. The correct remedy for bad speech is more speech, not prohibition. While the former can educate, persuade, and ultimately change some minds, the latter simply hides hate, misinformation, and offensive views.
There are also utilitarian reasons to oppose this trend. For instance, it's arguably better for even the worst sorts of speech to be more public—where people can keep tabs on it, counter it, and subject it to legal scrutiny if necessary—than to exist only in more private spaces, like encrypted messaging apps. The combination of persecution and privacy is only more likely to let extremism flourish.
And while people might not have sympathy for, say, neo-Nazis being deplatformed, these things tend to quickly reverberate beyond just the worst cases, thus threatening all sorts of speech that people—including progressives—do sympathize with. For instance, websites that discuss how to procure an abortion, promote the decriminalization of prostitution, or plan political rallies for controversial causes could all find themselves burned by an increasingly risk-averse internet infrastructure.
Meanwhile, private calls for backend services to make moral decisions about their clients may also embolden authoritarian governments—something Cloudflare's Prince and Starzak noted back in August. After dropping 8chan and The Daily Stormer, they wrote, the company "saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations—often citing the language from our own justification back to us."
The more internet infrastructure companies start acting as arbiters of online thought, speech, and conduct, the more intense the bid to control how these companies operate will become. Such a norm would be a disaster for internet freedom—and ratchet up, not decrease, social tensions.
Elizabeth Nolan Brown is a senior editor at Reason magazine.
"Of course, as private companies, Amazon, Apple, Cloudflare, and Google are well within their rights to make decisions like these. I'm certainly not suggesting that tech companies should be forced to provide services to entities or individuals they find questionable. Nor do I think that the government should mandate that these companies accept any and all potential customers engaged in legal activity. "
I fully support regulation (government) in these cases. The Post Office and the phone company don't get to decide if I can get mail or a phone number based on my politics.
Amazon, Apple, Cloudflare, and Google should be treated as common carriers and not allowed to engage in any discrimination. If it works for the Post Office/phone company, it will work for them to. If someone posts illegal content, they should be prosecuted for it. Otherwise, tech companies should not engage in any form of censorship. That includes Twitter, Facebook, YouTube, etc.. by the way.
So, are we just going to end up with two (more?) distinct politically segregated internet infrastructures? I don't have any idea what the implications of that are, but it would be a grimly amusing end to the techno-optimism that many of us once felt. Who would have thought one could be wistful thinking about the halcyon days of 2014 🤣. How fuckin depressing.