
Meta to retool fact-checking system with focus on free speech


Meta CEO Mark Zuckerberg (Associated Press/Photo by Godofredo A. Vásquez, file)


The Instagram and Facebook parent company said Tuesday that it will no longer use a third-party fact-checking system on its platforms in the United States. Instead, over the next couple of months, Meta will pivot to a fact-checking system modeled on Community Notes, the feature on Elon Musk’s X, formerly known as Twitter, that allows users to add context to potentially misleading posts.

Additionally, the company said it will loosen restrictions on content discussing certain topics and instead focus its enforcement efforts on illegal content and high-severity violations of its content moderation policies. Meta also said it will allow political speech to spread more easily on its platforms.

Why is Meta doing all this? The company said it began using a third-party fact-checking system in 2016 but made clear at the time that it did not intend to be the arbiter of truth. It wanted to enlist independent experts who could use their understanding of certain topics to counter viral hoaxes online. But those experts brought their own biases and perspectives to their work and flagged content that was, in fact, legitimate speech, the company said. In doing so, they caused Meta’s automated systems to mislabel speech and reduce its distribution.

Meta said it had watched the Community Notes approach succeed on X. That approach allows a diverse collection of social media users to provide context on statements, videos, and photos, giving users more comprehensive and less biased information about what they’re seeing, the company said.

Regarding its plans to loosen restrictions on certain speech, the company admitted it had previously gone too far, regulating too much content and enforcing its rules in a heavy-handed way. Meta said that in December 2024 it removed roughly 1% of the total content posted on its site, amounting to millions of pieces of content daily. Roughly one in every five of those pieces did not actually violate the company’s content moderation policies, it admitted.

Meta had been using automated systems to scan for all policy violations, but focusing on every minor violation resulted in too many mistakes and too much censorship, the company said. It will now reengineer those systems to scan only for illegal content and high-severity violations of its rules, such as child sexual abuse material, scams, terrorism-related content, and drug use.

Dig deeper: Read Lauren Canterberry’s report in The Sift about how the European Union has cracked down on the trending social media platform Bluesky.


Josh Schumacher

Josh is a breaking news reporter for WORLD. He’s a graduate of World Journalism Institute and Patrick Henry College.

