Twitter is in the hot seat this week for declining to take down tweets from an account that appears to belong to Cesar A. Sayoc, the man authorities have charged with sending pipe bombs to high-level Democratic leaders and news organizations last week.
A Twitter account with the handle @hardrock2016 appears to belong to Sayoc. It’s filled with hate speech, anti-Democrat memes, and violent threats to members of the media. Earlier this month, Sayoc’s account was flagged by Rochelle Ritchie, a political commentator who regularly appears on Fox News. Sayoc had been tweeting at her for some time with threatening messages, like on Oct. 11 when he instructed her to “Hug your loved ones real close every time you leave home.”
Twitter declined Ritchie’s request, saying Sayoc’s account didn’t violate Twitter’s rules for what constitutes abusive behavior. Just two weeks later, Sayoc was charged with sending explosive devices to the homes of prominent Democratic leaders and several media outlets.
The social media company has since apologized. On Oct. 26th, the company wrote, “An update. We made a mistake when Rochelle Ritchie first alerted us to the threat made against her. The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error.”
Instagram followed suit and apologized for refusing to delete a post by right-wing figure Milo Yiannopoulos praising the recent mail bombs sent to prominent Democratic officials, after the post was flagged on the platform for hate speech.
The thing is, this is nothing new. Twitter has a long history of allowing abusive content and hate speech on its platform, claiming that tweets must meet a high standard to qualify as a violation of its policy. Twitter’s rules state that abuse is not tolerated but that “context matters” when evaluating what constitutes abuse. Factors considered include whether “the behavior is targeted at an individual or group of people; the report has been filed by the target of the abuse or a bystander; the behavior is newsworthy and in the legitimate public interest.”
But enforcement of these rules varies widely. A 2017 BuzzFeed report found that Twitter has failed to block certain users even when they made “clear and credible threats of violence.” Victims of online abuse say Twitter’s failure to stop threats on its platform is frustrating because there doesn’t seem to be much accountability for spreading violence online, and it creates a frightening environment for users who are the target of this type of speech.
In the past year, Twitter has taken a few steps to address online harassment. According to BuzzFeed’s report, “Twitter rolled out a keyword filter and a mute tool for conversation threads, as well as a ‘hateful conduct’ report option. In February, the company made changes to its timeline and search designed to hide ‘potentially abusive or low-quality’ tweets, and added a policy update intended to crack down on abusive accounts from repeat offenders. (And)…Twitter rolled out a few more muting tools for users, including the ability to mute new (formerly known as egg) accounts, as well as accounts that don’t follow you.”
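Twitter hasn’t published how its keyword filter actually works, but the basic idea behind a mute tool is straightforward. The sketch below is purely illustrative; the function names and matching logic are assumptions, not Twitter’s real implementation, which is far more sophisticated (and handles variants, misspellings, and context).

```python
# Hypothetical sketch of a keyword mute filter. Twitter's actual
# implementation is not public; everything here is illustrative.

def is_muted(tweet_text: str, muted_keywords: list[str]) -> bool:
    """Return True if the tweet contains any muted keyword (case-insensitive)."""
    text = tweet_text.lower()
    return any(keyword.lower() in text for keyword in muted_keywords)

def filter_timeline(tweets: list[str], muted_keywords: list[str]) -> list[str]:
    """Hide tweets matching any muted keyword from a user's timeline."""
    return [t for t in tweets if not is_muted(t, muted_keywords)]

timeline = [
    "Great game last night!",
    "spoiler: the hero dies at the end",
    "Check out this recipe",
]
print(filter_timeline(timeline, ["spoiler"]))
# → ['Great game last night!', 'Check out this recipe']
```

Even this toy version shows why keyword filters alone fall short as an anti-abuse measure: harassment often contains no fixed keyword at all, which is part of why human review still matters.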
And, in June, Twitter bought Smyte, a San Francisco-based startup that aimed to help companies curb spam, fraud, and abuse online.
But, given the events of recent weeks, including the killing of 11 people at the Tree of Life synagogue in Pittsburgh, fueled by a rise in anti-Semitic tweets, Twitter is clearly not doing enough. Even in clear-cut cases of intimidation like Ritchie’s, Twitter is still either slow to act or decides not to ban accounts.
So where does that leave us? How can Twitter and other social media sites better block abusive behavior online? A couple of ways, actually:
- Twitter could hire more engineers and reviewers to apply its own rules more evenly. Twitter has promised to add new algorithms that will hide controversial content, and new tools to make reporting abuse easier, but humans are still better than bots at spotting harassment. Hiring more people would also let its employees escalate certain cases internally when needed.
- In Germany, Twitter is legally obligated to hide white nationalist and Nazi content from its users. Twitter could decide to offer that feature for every country.
- Twitter needs to ensure its anti-abuse measures aren’t themselves being abused. According to The Fader, “a worrying trend in 2017 has been for left-wing figures to be reported, en masse, by right-wing trolls and bots, until their accounts are suspended. This is an outright manipulation of measures that are there to protect people from bullying.”
- Twitter should lean into the idea of consulting with mental health professionals – something it’s already dabbling in – to better inform its policies.
At the end of the day, I can understand that finding a middle ground between free speech and harassment is a monumental problem that can’t be easily solved. But Twitter can do better and we, as users of social media, should push it to do so.