Tech giants like Meta (parent of Facebook and Instagram), X (formerly Twitter), YouTube, and TikTok are stepping up their commitment to combat illegal hate speech online.
Under the European Union’s newly revised Code of Conduct on Countering Illegal Hate Speech Online Plus, these platforms have pledged to improve their content moderation efforts.
The new code is part of the EU’s Digital Services Act (DSA) framework, which aims to make online spaces safer and more transparent. First introduced in 2016, the code has now been updated to reflect the rapidly evolving digital landscape.
Under the “Code of Conduct Plus,” tech companies have agreed to:
- Review hate speech reports promptly: Platforms must evaluate at least two-thirds of flagged content within 24 hours of notification.
- Increase transparency: They’ll provide detailed reports on how they detect and reduce illegal hate speech.
- Allow third-party monitoring: Independent assessors will evaluate how effectively these platforms review hate speech reports.
These commitments aim to address the growing issue of online hate speech, which the EU sees as a threat to democratic values and individual rights.
Hate speech online isn’t just a personal attack; it’s a societal issue. According to EU Commissioner Michael McGrath, “Hatred and polarization threaten EU values, fundamental rights, and the stability of our democracies. The internet amplifies these effects.” By securing these voluntary commitments, the EU hopes to foster a healthier digital environment.
However, these rules remain voluntary. Companies face no legal penalties for opting out, as Elon Musk’s X did in 2023 when it withdrew from a related EU code on disinformation. This raises questions about whether such agreements can be effective without binding consequences.
Who’s Involved?
Major platforms participating in the EU hate speech code include:
- Facebook, Instagram (Meta)
- X (formerly Twitter)
- TikTok
- YouTube
- Snapchat
- Microsoft-hosted services
Other smaller platforms like Dailymotion have also signed on, demonstrating a broader industry commitment.
What Challenges Does It Face?
While the revised EU hate speech code is a step forward, it’s not without challenges. These include:
- Voluntary nature: Companies can withdraw at any time, which could undermine the initiative’s long-term impact.
- Scope of enforcement: The code covers only illegal hate speech, a category whose definition varies across EU member states and jurisdictions.
The EU’s efforts are part of a larger push to regulate Big Tech and hold platforms accountable for the content they host. For users, this means a potential reduction in harmful content and a safer online environment. For platforms, it’s a chance to demonstrate their commitment to ethical practices and maintain user trust.
The EU’s “Code of Conduct Plus” represents a crucial step in addressing hate speech, but its success will hinge on consistent implementation and industry buy-in. Without legal enforcement, there’s a risk that these voluntary measures could lose momentum over time.
The question remains: will tech giants rise to the challenge, or will this initiative falter under its voluntary nature?