Meta Ditches Fact-Checkers, Embracing User-Generated Verification

Meta's decision to eliminate its third-party fact-checking program and replace it with a user-generated system, similar to the Community Notes approach on X, formerly Twitter, has raised eyebrows and sparked speculation about the company's motives, The Hill reports. The move comes as President-elect Donald Trump prepares to return to office, and some observers see it as an attempt to appease the Republican leader, who has long accused social media companies of censoring certain viewpoints.

Meta's shift away from independent fact-checkers, announced Tuesday, is part of its "return to roots" initiative, aiming to "restore" free speech on its platforms, CEO Mark Zuckerberg said. The company will instead rely on a "community notes" system, allowing users to flag potentially misleading content and provide context.

"Meta, like all companies, wants to make life as simple for themselves as they possibly can," said Peter Loge, a former Obama administration advisor and current director of George Washington University's School of Media and Public Affairs. "The President of the United States and others don't like fact-checking, so Meta will take fact-checking away."

Trump himself suggested the changes were partly in response to his previous criticisms of Zuckerberg and Meta. "Honestly, I think they've come a long way — Meta, Facebook, I think they've come a long way," Trump told reporters.

The move follows a pattern of Meta making personnel changes and policy adjustments that appear to align with Trump's priorities. The company recently appointed Joel Kaplan, a prominent Republican lobbyist, as its chief global affairs officer, and Dana White, a Trump ally, to its board of directors. Zuckerberg has also pledged to work with Trump to combat what he describes as a global push for censorship on social media platforms.

However, some observers contend that Meta's content moderation changes are driven by broader trends rather than solely appeasing Trump. Republican strategist Chris Johnson argued that the company is responding to a broader sentiment among voters who are frustrated with perceived elite control over public discourse.

Meta's new approach, which relies heavily on users to identify and correct misinformation, has drawn concern from experts, who warn that it could lead to an increase in disinformation.

"There are no gatekeepers," Loge said. "If the problem of mis and disinformation is a whole lot of people gathering online, shouting this nonsense and making that nonsense louder, then the solution is not to invite more people to participate in that conversation, which is what community standards ends up doing."

The change echoes Elon Musk's overhaul of Twitter's content moderation practices after his acquisition of the platform in 2022. The platform, since rebranded as X, is now frequently accused of being a breeding ground for unchecked hate speech and disinformation.

Meta's outgoing chief legal officer, Paul Farrell, last month acknowledged the company's high error rates in content moderation, while Zuckerberg wrote to Congress in August that he regrets not speaking out more forcefully against government pressure to remove COVID-19-related content. The company's shift to a community-based system aligns with Zuckerberg's stated goal of pushing back against what he sees as excessive government censorship.