EU blasts Twitter, YouTube & Facebook for slow action on hate speech

US tech companies are too slow to remove hate speech after it is flagged, according to a European Union-commissioned report. Researchers found that of 600 instances of hate speech tracked over six weeks, only 40 percent were reviewed within 24 hours.

“The last weeks and months have shown that social media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalization, illegal hate speech or fake news,” said Justice Commissioner Vera Jourova in a statement.

Researchers tracked what happened to 600 instances of hate speech reported to the tech firms during a six-week period in October and November. During that time, 12 organizations from nine EU member states monitored racist and fascist movements across Europe, used the notification systems of Twitter, YouTube, and Facebook to report these instances, and tracked how long the firms took to act.

The grounds for reporting included race, color, national origin, anti-Muslim hatred, anti-Semitism, and gender-related insults. A large number of the instances involved some form of anti-immigrant speech. About 20 percent of the messages were anti-Muslim, 23.7 percent anti-Semitic, and 21 percent nationalistic. Three of the participating groups specialized in monitoring anti-Semitic hate speech online.

US tech companies pledged in May that they would review notifications of illegal hate speech on their services and remove or disable access to such content within 24 hours.
Under the pledge, they were also required to establish “rules of community guidelines” prohibiting “promotion of incitement to violence and hateful conduct.”

Of the 600 reported cases, 270 concerned Facebook, 163 Twitter, and 123 YouTube, while none concerned Microsoft. Seven reports involved platforms that had not signed the pledge.

In 169 cases, or 28.2 percent, the content was removed after being flagged as hate speech. Facebook removed content in 28.3 percent of its cases, Twitter in 19.1 percent, and YouTube in 38.5 percent.

Of the instances reported, only 40 percent were reviewed in less than 24 hours, and 43 percent in less than 48 hours.

The report comes as many governments look to tech companies to curb hate speech on digital platforms and to clamp down on how terrorist groups circulate information online.

Freedom of speech advocates have warned that such measures could limit people’s ability to communicate across the internet and threaten legitimate political discourse.