Meta Loosens Content Rules, Ends Fact-Checking Amid Controversy
Meta, the parent company of Facebook, Instagram, and Threads, has announced significant changes to its content moderation policies. The social media giant is eliminating its third-party fact-checking program and adopting a community-sourced moderation model similar to X’s “Community Notes.” Additionally, Meta is relaxing restrictions on politically sensitive topics, a move critics warn will fuel hate speech and misinformation.
The decision comes amid speculation that the changes are designed to align with the policies of incoming U.S. President Donald Trump, a vocal critic of Meta. The shift also coincides with Meta’s efforts to strengthen ties with Washington, possibly to mitigate regulatory challenges and ensure favorable conditions for its business interests.
Meta’s new guidelines permit content previously flagged as hate speech, including derogatory comments about immigrants and LGBTQ+ individuals. The company has framed these changes as an effort to foster open debate on politically charged issues, while detractors argue the move will embolden harmful rhetoric. In tandem, Meta is phasing out its fact-checking program: CEO Mark Zuckerberg has criticized fact-checkers as politically biased, arguing that a community-sourced moderation model is more neutral.
Observers suggest that Meta’s policy shift is a strategic move to align with President Trump, whose administration could influence regulations that affect the company, including tariffs on Chinese imports critical to Meta’s hardware production, scrutiny from the Federal Trade Commission (FTC), and potential restrictions on AI development. Additionally, if Trump were to back a ban on TikTok and its affiliated platforms, Meta could benefit from reduced competition in the U.S. social media market.
The changes have drawn sharp criticism from advocacy groups and researchers, who warn that the rollback of fact-checking could exacerbate the spread of misinformation; past studies have shown that false information gains significantly more engagement on social platforms than accurate content. Yet even if the changes degrade the user experience and weaken overall moderation, as some argue, they may still prove a sound business move for Meta, both by driving engagement and by improving its standing with the incoming administration.