The shooting in Buffalo last month was but the latest example of terrorist attacks becoming a lethal outlet for white supremacist grievances. What connects much of this violent right-wing extremism is that the perpetrators are radicalized or share their ideas—sometimes even stream their terrorist attacks—online. In fact, several terrorists have explicitly cited other terrorists as inspiration for their deadly deeds in online messages or manifestos.
Understandably, the toxic combination of hate-infused violence and online virality has prompted many politicians and experts to call for tougher regulation of social media and hate speech. After the Buffalo attack, New York governor Kathy Hochul took aim at social media platforms and called for the imposition of “a legal responsibility to ensure that such hate cannot populate these sites.” Similarly, Bloomberg technology columnist Parmy Olson decried the First Amendment’s protection of hate speech and argued that “the world’s best hope for weeding out extremism on mainstream social media is coming from Europe, and specifically from two new laws—the Online Safety Act from the United Kingdom and Digital Services Act from the European Union.” These regulatory efforts follow in the footsteps of the German Network Enforcement Act of 2017 and oblige online platforms to remove illegal content, including categories such as hate speech and glorification of terrorism, or risk huge fines. However, in liberal democracies committed to both equality and free expression, this approach raises a number of questions and dilemmas.
It is undoubtedly true that words and ideas can cause harm by inspiring or inciting people to commit violent acts. Susan Benesch, an expert on the interaction between words and violence, has coined the term “dangerous speech” to describe speech that might “increase the risk that its audience will condone or commit violence against members of another group.” Dangerous speech can contribute to mass atrocities like the Rwandan genocide of 1994, in which radio played a vital role.
Social media can sometimes serve as a vector for dangerous speech, from fueling jihadist and white supremacist terrorism to large-scale campaigns of ethnic cleansing. But while online expression may sometimes lead to real-life harm, placing restrictions on free speech is not necessarily an effective remedy. Nor is it certain that any benefits of repression outweigh its negative and unintended consequences.
On the contrary, studies suggest that freedom of expression is associated with less rather than more violent extremism, terrorism, and social conflict in democracies. A 2017 study concluded that in Western Europe, violent far-right extremism was accelerated by “extensive public repression of radical right actors and opinions.” Other research has come to similar conclusions, suggesting that free speech is more likely to serve as a safety valve than a lightning rod for extremist violence and that people are more likely to view violence as justified when governments repress free expression.
Thus, the idea that tougher laws against hate speech constitute an effective deterrent against violent hate crimes rests on shaky empirical ground. Indeed, the Buffalo shooting shared many characteristics with recent deadly right-wing extremist terrorist attacks carried out in Norway, New Zealand, and Germany. And yet, in all three of these countries, hate speech laws prohibit various types of extreme speech. Notably, in 2019—two years after Germany adopted the Network Enforcement Act to counter the dangerous effects of hate speech—the German government estimated that the country was home to more than 30,000 far-right extremists, about 40% of whom were inclined toward violence. The following year, German authorities recorded the highest level of violent right-wing extremist crime in 20 years, including several murders, prompting the government to warn that right-wing extremism constituted the “greatest threat to security in our country.”
If repressive policies have been unable to stop violent right-wing extremism, it is unlikely that further speech restrictions will succeed. Moreover, current hate speech laws have already caused collateral damage to political speech and protests in Europe. Further restrictions risk suffocating pluralism and open debate, the vital oxygen without which democracies cannot thrive.
An alternative strategy to outright legal bans is to combine legal measures with social and political pressure on technology companies to more aggressively police hate speech on their platforms. The benefit of this, from the perspective of pro-censorship advocates, is that since these are private companies, they do not have to follow the same legal procedures as government bodies.
To a significant extent, this has already happened. Platforms like Facebook and YouTube have expanded their definitions of hate speech and extremism and have adopted automated content moderation that flags and removes such content before most users can view it. However, while such purges can help limit the visibility of hate speech on larger platforms, they cannot cleanse the internet of hate speech altogether. Additionally, such policies put lawful and empowering speech at risk, with evidence suggesting that zealous content moderation leads to over-removal that can affect anti-racist activists and human rights defenders.
Moreover, far-right extremists and white supremacists often migrate to smaller alternative platforms or messaging services when banned from mainstream platforms. The very lightly moderated website 4chan, and its even more ghastly cousin, 8chan, have become landing spots for radicals who find themselves kicked off Facebook or Twitter. Encrypted messaging services like Telegram similarly allow extremists to reconnect and network with minimal publicity.
The withdrawal of extremists from popular platforms onto more obscure and anonymous ones not only impedes law enforcement agencies’ efforts to detect and prevent future attacks but also hinders targeted counterspeech, which some studies have shown to be effective in reducing hate speech. In the words of one researcher, the evolution of far-right communities on alternative platforms “cast[s] doubt on the effectiveness of deplatforming for curbing the influence of far-right and other extremist actors.”
In the aftermath of deadly terrorist attacks, promising a zero-tolerance policy toward hate speech and extremism might be a forceful way for activists and politicians to demonstrate their resolve and an emotionally satisfying rallying cry for shocked populations. But continuing demands to restrict ever more online speech, whether through government fiat or technology companies’ own policies, are unlikely to deter this kind of violence effectively.
Fortunately, combating extremist violence does not consist of a binary choice between indifference and repression. Evidence and experience show that the most effective strategy would be to develop trustworthy public institutions that can identify potential violent extremists and intervene before it’s too late, create a digital sphere that encourages trust and cooperation rather than outrage and polarization, and strengthen our collective ability to engage in controversial and difficult conversations. To a great extent, however, these initiatives will rely on expanding, rather than restricting, freedom of expression and access to information.
Jacob Mchangama is the founder of Justitia, a Copenhagen-based think tank promoting the rule of law and fundamental human rights. He is also the author of Free Speech: A History From Socrates to Social Media.
"The shooting in Buffalo last month was but the latest example of terrorist attacks becoming a lethal outlet for white supremacist grievances."
No, because, no more than the black Darrell Brooks plowing a car into a parade of white people killing 6 and injuring 60, it is not terrorism and it is not any evidence of a lethal outlet for white , not any other color, supremacist grievances. This is woke language and the user of this language is branding himself as being indoctrinated into that cult. Please stop.
The manifesto from the Buffalo shooter was that of a mentally disturbed person. He says he was a communist. That he rejects Christianity. Rejects conservatism. Rejects capitalism. Is a "green nationalist" and an "echo-fascist". One could more easily make the case that he is a liberal environmental extremist.
I have to ask, is going there... going to the "white supremacist" catch phrase without any real evidence that it is a material thing today... is that not an example of hate speech? Is that not an example of institutional neoracist impulses? Because the impetus of institutional racism is to punch down a racial group of people so that the punchers feel superior. It seems to me that coastal liberals afflicted with the woke mind virus are the new racists in this modern political arena. For some reason they feel the need to punch down working class whites... the same demographic having their lives shattered by coastal liberals running the government and media (same industry). The voters are really tired of it.
Exactly! Banning of "hate" speech further divides and polarizes attitudes and serves as justification for their actions and behaviour. The larger point is who gets to define and monitor what is "hate" speech? That scares me even more. Censorship by mob, hectoring, canceling or shouting makes things worse and will motivate other to take even worse action. It shows a total lack of respect or even civility. Too many college campuses permit silencing of differing views that do not conform to their orthodoxy. This has to stop.