October 11, 2021
A voluntary hate-speech removal agreement among tech platforms in the European Union is trending in the wrong direction, according to the sixth evaluation report on the EU's Code of Conduct, which paints a mixed picture. Social networks reviewed 81 percent of notifications within 24 hours but removed an average of only 62.5 percent of content flagged as hate speech, a figure lower than the averages recorded in 2019 and 2020, according to the European Commission. The self-regulation scheme was launched in 2016, with Facebook, Microsoft, Twitter and YouTube agreeing to remove speech that falls outside their community guidelines in under 24 hours.
Since then, Instagram, Dailymotion, Google Plus, LinkedIn, Snapchat, TikTok and gaming site JeuxVideo have also signed on to self-police. "While the headline promises were bold, the reality of how platforms have performed has often fallen short of what was pledged," writes TechCrunch, noting that some positive trends have "now stopped or stalled" per the Commission, and citing Facebook and YouTube among those performing worse than in earlier rounds.
A driving force behind the EU's establishment of the code five years ago was concern over the spread of terrorist content online. Terrorist content, however, is no longer governed by the voluntary code alone: in April the EU adopted a law setting a default one-hour takedown timeframe for such material.
EU legislators have also proposed a regulatory update that would broadly expand requirements for how digital services handle illegal content and goods. "This Digital Services Act (DSA) has not yet been passed, so the self-regulatory code is still operational — for now," writes TechCrunch. "So whether the code gets retired entirely — or beefed up as a supplement to the incoming legal framework — remains to be seen."
The Commission says it intends to keep disinformation obligations voluntary while strengthening measures and linking them to compliance with the legally binding DSA for the larger platforms. EU lawmakers also flagged insufficient user feedback via notifications as a serious weakness of the voluntary code and are exploring legal requirements in that area as well; the DSA proposes rules for reporting procedures.
“Our unique Code has brought good results but the platforms cannot let the guard down and need to address the gaps,” the Commission’s VP for values and transparency Věra Jourová said in a statement. “And gentlemen agreement alone will not suffice here. The Digital Services Act will provide strong regulatory tools to fight against illegal hate speech online.”
For the first time signatories reported information about measures taken outside the monitoring exercise, including actions to automatically detect and remove offensive content. Other findings from the hate speech report, as documented in TechCrunch, include:
– Removal rates varied with the severity of the hateful content: 69 percent of content calling for murder or violence against specific groups was removed, as was 55 percent of content using defamatory words or pictures aimed at certain groups. By comparison, in 2020 the respective figures were 83.5 percent and 57.8 percent.
– IT companies gave feedback to 60.3 percent of the notifications received, which is lower than during the previous monitoring exercise (67.1 percent).
– In this monitoring exercise, sexual orientation was the most commonly reported ground of hate speech (18.2 percent), followed by xenophobia (18 percent) and anti-Gypsyism (12.5 percent).