Twitter's Other Free Speech Problem
Social media platforms are arbitrarily banning users because their AI algorithms can't take a joke.
It took a single tweet about autism for Twitter to suspend me for life. The tweet, part of my “life with #autism” series, quoted a clumsy joke from my autistic son. It contained the words “smash your head.”
The fate of those who accidentally post the wrong words on social media should set off alarm bells for anyone concerned about due process and free speech.
Shortly after posting what turned out to be my last tweet about life with autism, I discovered I had been permanently suspended for violating Twitter’s rules against violent threats. I also discovered that Twitter won’t tell you what your offending tweet was. But when, stunned, I scrolled through my history, I found that one tweet—and one only—had been expunged.
It is highly unlikely that a human would mistake the quotation of a joke threat for an actual one. But artificial intelligence has no sense of humor, and most AI moderation looks only for keywords and phrases, not for whether those phrases are embedded in quoted dialogue.
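To illustrate the failure mode (a hypothetical sketch, not Twitter's actual system; the phrase list and function names are my own invention), a naive keyword filter flags any tweet containing a blacklisted phrase, with no attempt to notice that the phrase sits inside a quotation or a joke:

```python
# Hypothetical illustration of naive keyword-based moderation.
# A production system is more elaborate, but the failure mode is the same:
# matching happens on surface strings, not on conversational context.

VIOLENT_PHRASES = ["smash your head", "kill you"]

def flags_tweet(text: str) -> bool:
    """Return True if the tweet contains a blacklisted phrase,
    regardless of whether it is quoted, joked, or reported speech."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in VIOLENT_PHRASES)

# A quoted joke and an actual threat are indistinguishable to this filter:
joke = 'Life with #autism: my son\'s favorite punchline: "I\'ll smash your head!"'
threat = "I will smash your head"
print(flags_tweet(joke))    # True -- the quoted joke is flagged
print(flags_tweet(threat))  # True
```

A human reviewer would use the surrounding quotation marks and hashtag as context; a string matcher cannot.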
I am a computational linguist and have long known about the limitations of AI. But only after becoming a Twitter outcast did I learn the dirty secret of content moderation on social media. Twitter's policy for reviewing tweets is ambiguous, likely deliberately so, but prominent figures, like the former president, are almost certainly monitored by real humans who examine their every utterance. Ordinary users, by contrast, are more often left to an AI that not only erases tweets but indefinitely suspends entire accounts. And though Twitter claims not to ban accounts solely on the basis of AI, my own experience and many similar anecdotes leave me deeply skeptical of that claim.
A scroll through tweets directed at @TwitterSupport, Twitter’s customer support account, shows scores of people using alternative accounts, along with their supporters, protesting that no Twitter rules were violated. Some report making joke threats like “I’ll kill you”; others have no idea what went wrong.
But this problem has flown under the radar. Most people writing about free speech and social media are focused on partisan politics, not on artificial intelligence. They appear to be unaware of, or unconcerned about, the thousands of ordinary folks who are suspended indefinitely because a clumsy and indifferent AI flagged a perfectly legitimate tweet.
Nor are we ordinary folks merely tweeting cat videos and political outrage. Many of us use Twitter for advocacy and awareness, making connections, promoting our work, and furthering our careers. Aspiring writers, for example, know they must attract substantial followings on social media before they can land a good book deal.
Twitter, both quantitatively and qualitatively, is unique. It is a place where ideas are exchanged, relationships are forged, and reputations are built among hundreds of millions of strangers around the world. It is a place from which permanent banishment can stop career building and advocacy work in its tracks.
Adding insult to injury, once Twitter suspends you, anyone who searches your name and lands on your profile—your supporters, detractors, colleagues, students, employers, potential employers, potential publishers—will now see nothing but a stark notice that the account has been suspended for violating the Twitter Rules.
Technically speaking, Twitter has an appeals process: You submit a ticket and provide details. But since it is Twitter’s policy not to disclose what the offending tweet was, or who (if anyone) reported it, there are often few details to provide. All that most appellants can say is that their accounts were suspended and that they didn’t violate any rules. Some people never hear back; others get a form letter denying the appeal. Repeated appeals yield similar results.
Few people manage to bypass Twitter's AI and obtain human review. One who did is Kate Klonick, a professor of internet law who was suspended back in September, only to be reinstated that same day. Her offense, like mine, was quoting a joke threat (her tweet contained the words "kill you"). Unlike me, Klonick had over 22,000 Twitter followers. Many of them, amused by "a 'leading expert' on content moderation being banned for violating content rules," as Klonick recounts in an article for The Atlantic, retweeted a screenshot of her suspension notification. Klonick adds that she also communicates regularly with journalists.
In other words, while she likely wasn’t prominent enough for automatic human review, Klonick had the following and connections needed to motivate Twitter’s humans to quickly override its AI. But what about new voices—budding advocates, undiscovered writers, people exchanging ideas and content, folks who are only just starting to make connections and build followings? While only a fraction of them will trigger AI, those who do might never reappear. Twitter may have begun as an egalitarian platform, but the growing elusiveness of its human review only widens the gap between establishment insiders and everyone else.
The underlying problem is that social media has been increasingly incentivized—by both governmental and market forces—to eliminate all posts that possibly contain hate speech or violent threats. Human moderation is costly; so is better AI—and AI is nowhere near being able to distinguish joke threats from actual ones.
So what is the solution?
Some legal scholars note that social media platforms, though privately owned, have become the equivalent of public squares, and they are calling for legislation to make platform access a civil right. But if everyone has platform access, what are social media companies supposed to do about all the hate speech and incitements to violence?
The answer is clear: Go ahead and let AI disappear the posts and tweets that trigger it. But don’t put AI in charge of choosing which accounts to suspend indefinitely.
There is, after all, a huge difference between deleting a tweet and deleting a human.
Katharine Beals teaches courses on autism at the University of Pennsylvania and Drexel University and creates linguistic software for students with autism. She is the author of the forthcoming Cutting-Edge Language and Literacy Tools for Students on the Autism Spectrum.