Meta Platforms’ recent overhaul of its content moderation policies has drawn widespread criticism, most notably from its independent Oversight Board, as well as from civil society groups and fact-checking organisations.
The changes, implemented in January 2025, scaled back fact-checking efforts and relaxed controls on discussion of sensitive topics such as immigration, gender identity and other contentious issues.
The timing of these shifts, coinciding with the start of U.S. President Donald Trump’s second term, has raised concerns about the political motivations behind these decisions and their potential global implications, particularly in regions already grappling with misinformation and political instability.
The independent Oversight Board, which Meta funds but which operates separately, issued a harsh rebuke, accusing the company of implementing the policy changes hastily, without adequate consideration of their human rights implications or global impact.
The Board warned that the relaxation of content moderation efforts could lead to a surge in harmful content, including hate speech, misinformation, and incitement to violence, particularly in crisis-affected regions.
Meta’s decision to reduce its fact-checking programme and replace it with a crowd-sourced “Community Notes” tool was also met with scepticism.
Critics argue that the new system may not be as effective as professional fact-checking in tackling misinformation, especially in politically sensitive contexts where false narratives can be manipulated by various groups.
The Board responded by issuing 17 detailed recommendations. It called on Meta to reassess the global consequences of these changes, particularly in vulnerable regions where misinformation and harmful content can have serious real-world effects.
In Africa, the policy shifts have raised additional concerns. Civil society organisations such as PesaCheck and Africa Check, which have relied on Meta’s funding to combat misinformation across the continent, worry about the future of their work.
With the reduced emphasis on fact-checking and the programme’s replacement by the crowd-sourced model, these organisations fear that their ability to counter misinformation effectively could be severely diminished.
Meta has yet to respond directly to all of the Board’s recommendations, particularly those calling for content removal, but the company has stated that it supports efforts to promote free expression.
Despite the criticism, Meta has committed to continuing its funding of the Oversight Board through 2027, with resources secured in an irrevocable trust to ensure its independence.