Even in the world’s largest democracy, freedom has its limits. January 17 saw the release of the first part of “The Modi Question,” a British documentary examining the role played by Indian Prime Minister Narendra Modi in the 2002 riots in Gujarat state. The documentary presented newly disclosed evidence that Modi, then Gujarat’s Chief Minister, permitted Hindu rioters to kill hundreds of Muslims with impunity in one of the worst outbreaks of religious violence in the country’s modern history.
The government’s response to the documentary was quick and effective. By January 21, the Ministry of Information and Broadcasting had issued orders banning “The Modi Question” from social media. Indian users on Twitter and YouTube were confronted with a legal notice stating that the content had been deleted.
The move did not come out of the blue. Like many countries, India has long tried to regulate content, including online content, and in recent years these efforts have ramped up significantly. In 2014 Twitter received a total of 20 legal demands to remove disfavored material in India. In 2021 it received almost 9000.
Social media platforms haven’t always complied: In 2021 the government publicly denounced Twitter for failing to adhere to requests related to coverage of farmers protesting a new agricultural law. But days later, the Ministry of Electronics and Information Technology issued new regulations strengthening the government’s ability to force companies to remove content deemed harmful to the “sovereignty and integrity of India, the security of the State, friendly relations with foreign States, or public order.” With this month’s act of suppression, the full extent of the power grab became clear.
Can we blame the social media companies for how they acted? They are subject to the laws of the countries in which they operate, and the consequences of non-compliance can be severe. In 2019, the government threatened seven-year jail sentences for Indian Twitter executives who failed to adhere to requests to remove “objectionable and inflammatory content.” Removing a few dozen posts begins to look like the lesser of two evils.
And it’s not like content suppression actually works. Unlike one-party authoritarian states, democratic countries like India are highly susceptible to the Streisand effect, in which the act of censorship causes the contraband material to spread further. Accordingly, searches for “The Modi Question” were highest in the days immediately following the announcement of the crackdown, and viewing parties sprang up all over the country.
Nevertheless, there is an important principle at stake. Protecting individuals from their government is Free Speech 101—the very bedrock of a free society. India has been creeping towards overt censorship for years, and social media companies must know that once states realize they can successfully force platforms to remove content, the requests to do so will come thick and fast.
What’s more, this incident did not take place in a vacuum. Beyond questions of legal suppression, social media is embroiled in a serious cultural reckoning over content moderation and the bounds of acceptable speech. Last year Elon Musk became Twitter’s CEO with the explicit promise of promoting free speech—but by caving to censorship in India, he has failed a crucial test.
Surveying this landscape, it’s hard to escape the feeling that social media companies lack a principled and consistent strategy for handling speech. Tech executives are essentially reactive, squashing problems if and when they arise. They bend to cultural and legal pressures whenever they are compelled to, jeopardizing consistency and transparency in the process.
There is a solution. Social media companies can pivot in the one direction that will allow them to take a principled stance while avoiding the embarrassments that have plagued them over recent years: they can become active platforms for speech, rather than simply passive hosts of speech.
What does this entail? On the thorny issue of moderation, it means adopting a clear set of policies informed by First Amendment principles, something 1A experts are already calling for. Platforms should prioritize preventing imminent likely harm and work from there. This does not mean that anything goes: efforts to protect minors from pornographic material or to suspend people calling for direct violence would still be permitted. But it does mean an end to the status quo whereby armies of moderators haphazardly try to stamp out fake news, or worse, make decisions based on political pressure.
Importantly, companies must strive for consistency and clarity rather than the seat-of-the-pants improvisation we’ve seen over recent years. Any new rules must be public, transparent, and rational. Users should know where these companies stand and what they can expect from them.
It would also require them to abandon markets that force them to blatantly violate their principles. True, Twitter has 24 million users in India who presumably don’t want to be disconnected. But social media are far from the only means by which people communicate and access information. Facebook and Twitter are more than simple messaging services, and these companies must accept that, powerful as they are, they cannot be everything to everyone. They can deny censorship requests for as long as it is safe to do so—but ultimately, withdrawing from jurisdictions that do not respect free speech is the only way of cutting the Gordian knot that is currently ruining their reputation.
One consequence of platforms reorienting themselves to be “for speech” would be the continued proliferation of fake news, something many people will find unacceptable. But there are sensible and proportional ways to mitigate this that don’t involve compromising on principles. For one, speech can be used to fight speech. Just last year, Twitter introduced a feature allowing users to “add context” beneath misleading tweets. This simple tool has the potential to seriously deflate sensationalism and falsehoods.
Additionally, companies can implement structural reforms to create a healthier public square. Alterations can be made to the retweet function or the ordering of timelines to ensure that the most incendiary voices are not amplified by default. Jonathan Haidt has proposed that anyone wishing to create a social media account, even an anonymous one, must first verify that they are a human being (perhaps through a third-party non-profit to protect privacy). If implemented, this reform is likely to be controversial. Yet the idea that people have a “right” to, say, create bots with impunity is a strange product of the Internet age, one which doesn’t chime with any reasonable vision of free speech handed down to us by history. Being “for speech” should mean adopting a wide range of tools for making platforms saner and more truth-friendly.
These changes are unlikely to occur anytime soon. They represent a significant psychological and financial leap, and there’s just no major incentive for social media companies to abandon a market like India or to rethink the current click-bait model of engagement.
And yet the incentives are ultimately driven by users continuing to inhabit the platforms. As negative stories like the restriction of “The Modi Question” continue to emerge, people will start asking themselves how many capitulations, hypocrisies, and U-turns they are willing to put up with from social media executives. If reform comes from anywhere, it will come from below. Isn’t that the democratic vision Big Tech has tried to sell us all along?
Luke Hallam is a senior editor at Persuasion.
I am shocked, shocked that Musk turned out to be a hypocrite on free speech. He's got plenty of company; Nat Hentoff's title "Free Speech For Me But Not For Thee" seems like the default position for most people. (FIRE seems to be the most prominent exception these days.) Then again, even Hentoff violated his proclaimed civil liberties absolutism when he opposed the not actually a mosque that wasn't actually at Ground Zero.
As the Twitter Files show, the greatest threats to free speech (in the USA) don't come from government. They come from 'private parties' that censor any opinion they disagree with. The statement that "platforms should prioritize preventing imminent likely harm" allows for all sorts of mischief. Recently, Stanford University went after a student who was caught reading a book ('Mein Kampf'). What was the charge? PIH (Protected Identity Harm).
Of course, the old Twitter was all too willing to cooperate with some government officials to censor opinions they didn't like.
Twitter should follow the law, nothing more, nothing less. Child pornography is illegal. Twitter should enforce the law. Death threats are illegal. Once again, Twitter should enforce the law. Since 'harm' can be (and has been) used for any and all types of censorship, the standard should be 'legality', not 'harm'.
The new Twitter is better than it was. The following are quotes from "The 'Twitter Files' have opened the company's censorship decisions to public scrutiny" (https://www.thefire.org/news/twitter-files-have-opened-companys-censorship-decisions-public-scrutiny).
"New Twitter owner Elon Musk is rocking the worlds of both politics and the internet with the release of what he calls the “Twitter Files,” exposing the internal workings of how the social media platform decided what speech was acceptable — and just how acceptable — under prior management.
Releases to independent journalists Matt Taibbi (here and here), Bari Weiss (here and here), and Michael Shellenberger (here) exposed what many have long suspected: Twitter’s “trust and safety” team was far from an objective referee of the company’s stated rules. Instead, Twitter relied on politics, prejudice, and cronyism in how it would treat both fact and opinion, with the shadow of federal law enforcement looming nearby."