7 Comments

Your ideas are good, but we should also work to reduce "safetyism" and the stigma attached to "bad speech". There'd be much less reason for Twitter to be so finicky if people could return to the sticks-and-stones attitude prevalent when I was growing up.

True enough, but as someone who works in the field I can say firsthand just how many legitimately bad, highly motivated actors there are out there. Given the volumes, you have to use automated means to identify and remove these posts (and accounts; one bad account can literally mean millions of bad posts). The crux of the problem, IMHO, is that social media succeeded in its vision to hyper-connect the world, and it turns out there is a lot of badness out there, enough to overwhelm the system if you let it. And we don't have the tools, social or technical, to sufficiently handle that problem, so we're stuck dealing with a highly imperfect system.

I don't think it's an indication of how much badness is out there, since a single bot can spew endless numbers of tweets. I'm not claiming to know how many actual bad actors there are; my point is that the volume makes it hard to tell.

It seems to me that the things you describe amount to social denial-of-service attacks, in that they make a part of the social network unusable by flooding it. We should look at how network DoS attacks are identified and thwarted -- perhaps we'll learn something useful.
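The analogy to network DoS defenses is apt: the most basic technique there is sliding-window rate counting per source. Here is a minimal sketch of how that idea might transfer to flagging flood-style accounts; the class name, threshold, and window are made-up illustrations, not anything Twitter actually uses.

```python
from collections import defaultdict, deque
import time

class FloodDetector:
    """Hypothetical sketch: per-account sliding-window post counting,
    the same basic mechanism used to spot network DoS floods."""

    def __init__(self, max_posts=30, window_seconds=60):
        self.max_posts = max_posts      # illustrative threshold
        self.window = window_seconds    # illustrative window
        self.history = defaultdict(deque)  # account -> recent post timestamps

    def record_post(self, account, now=None):
        """Record one post; return True if the account looks like a flood source."""
        now = time.monotonic() if now is None else now
        q = self.history[account]
        q.append(now)
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_posts
```

Real DoS mitigation layers on much more (reputation, distributed sources, adaptive thresholds), but even this simple per-source counter is the starting point the comment is gesturing at.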

Recognizing the bad actors spewing mountains of bad content is difficult because it's an arms race. It gets harder still with the nuance of individual communication: context matters more (which makes the AI harder to get right), and even when you get that right, it often comes down to a subjective call where, no matter what you do, one party is going to think you made the wrong decision and be seriously upset. And you can't hire enough humans, or get consistency across all those humans, to actually sort through all the cases. You could probably deal with the AI's margins of error with a robust customer service department, but again, with the volumes...

Not that I am defending Twitter in this case. I'm just pointing out that curating content online is a really difficult problem, and nobody has figured out how to do it well enough yet. And until (if?) that is done, these types of experiences will continue to occur.

Oh, I quite agree on the difficulty. My point is simply that the problem should be approached both from the side of preventing "bad" tweets and the side of raising people's resilience in the face of bad tweets. No need to waste resources policing things that people should be able to shrug off.

I had a similar experience — in the 2020 primary season I tweeted the Democratic Party would be “committing suicide” if it continued with the vicious fights between Bernie Bros and moderates. Was banned for “encouraging self-harm.”

There was a button to appeal the suspension, which I clicked; a denial came back in about 45 seconds, so obviously no human had seen it.

Apparently AI has a way to go. As does Twitter.

Thank you! I had long wondered about the seeming vagueness of Twitter bans; now it all makes sense.
