The European Union unveiled a new code of practice on disinformation, offering a glimpse of the regulation Big Tech companies will face under upcoming digital content laws. Meta Platforms, Twitter, TikTok and Google have agreed to the new rules, which update voluntary guidelines. The revised standards direct social media companies to avoid placing advertising adjacent to intentionally false or misleading content. EU policymakers have said they will make parts of the new code mandatory under the Digital Services Act. Platforms agreeing to comply with the new rules must submit implementation reports by early 2023.
Signed last week, the new EU Code of Practice on Disinformation strengthens rules implemented in 2018, seeking a broader range of commitments from participating social media platforms. Described as a “first-of-its-kind tool,” the code contains “44 commitments and 128 specific measures,” according to an announcement by the European Commission. These include:
- Demonetizing financial incentives for purveyors of disinformation
- Making political advertising transparent
- Ensuring service integrity by reducing manipulative behavior (including fake accounts, bot-driven amplification and malicious deepfakes)
- Empowering users, researchers and fact-checkers
- Strengthening the monitoring framework by requiring “very large online platforms” (as defined in the Digital Services Act) to report every six months, while other signatories will report yearly
The new disinformation code integrates with the Digital Services Act, agreed to by European lawmakers and member states in April and pending final legislative approval by the European Parliament and Council. Once adopted, the DSA will be directly applicable across the EU. Although it is unclear whether the UK will enact similar legislation, as a practical matter, social media platforms doing business in the UK will find themselves subject to the DSA.
The DSA, which could take effect as early as next year, establishes rules for removing illegal content. “It will also require the largest social media platforms to conduct risk assessments on content that regulators view as potentially harmful,” writes The Wall Street Journal. “The largest platforms, defined as those with more than 45 million users in the EU, that repeatedly break the code and fail to properly address risks could be subject to fines of up to 6 percent of their global annual revenue once the new law comes into effect.”
The new anti-disinformation Code “comes at a time when Russia is weaponizing disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly,” European Commission vice president for values and transparency Vera Jourova said in a statement. “We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages. Users will also have better tools to flag disinformation and understand what they are seeing.”