Wubi News

Meta is ditching fact checkers for X-style community notes. Will they work?

2025-01-27 17:00:11
Meta boss Mark Zuckerberg

Adopting a fact checking system inspired by a platform owned by Elon Musk was always going to raise hackles. The world's richest man is regularly accused of using his X account to amplify misinformation and conspiracy theories.

But the system predates his ownership.

"Birdwatch", as it was then known, began in 2021 and drew inspiration from Wikipedia, which is written and edited by volunteers.

Mark Zuckerberg announced the changes in an online video

Like Wikipedia, community notes rely on unpaid contributors to correct misinformation.

Contributors rate corrective notes under false or misleading posts and, over time, some users earn the ability to write them. According to the platform, this group of contributors is now almost a million strong.

Mr Mantzarlis - who himself once ran a "crowd-sourced" fact checking project - argues this type of system potentially allows platforms to "get more fact checks, more contributions, faster".

One of the key attractions of community-notes-style systems is their ability to scale: as a platform's userbase grows, so does the pool of volunteer contributors (if you can persuade them to participate).

According to X, community notes produce hundreds of fact checks per day.

By contrast, Facebook's expert fact checkers may manage fewer than 10 per day, suggests an article by Jonathan Stray of the UC Berkeley Center for Human-Compatible AI and journalist Eve Sneider.

And one study suggests community notes can deliver good quality fact checks: an analysis of 205 notes about Covid found 98% were accurate.

A note appended to a misleading post can also organically cut its viral spread by more than half, X maintains, and research suggests they also increase the chance that the original poster will delete the tweet by 80%.

Keith Coleman, who oversees community notes for X, argues Meta is switching to a more capable fact checking programme.

"Community notes are already covering a vastly wider range of content than previous systems," he told me.

"That is rarely mentioned. I see stories that say 'Meta ends fact checking program'," he said.

"But I think the real story is, 'Meta replaces existing fact checking program with approach that can scale to cover more content, respond faster and is trusted across the political spectrum'."

The solution that X uses in an attempt to keep community notes trusted across the political spectrum is to take a key part of the process out of human hands, relying instead on an algorithm.

The algorithm is used to select which notes are shown, and also to ensure they are found helpful by a range of users.

In very simple terms, according to X, this "bridging" algorithm selects proposed notes that are rated helpful by volunteers who would normally disagree with each other.
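The idea can be sketched very roughly in code. X's actual system uses matrix factorisation over each rater's full rating history; the version below is only an illustration of the "bridging" principle, with hypothetical group labels and a hypothetical threshold: a note is surfaced only if raters from groups that usually disagree both find it helpful.

```python
# Illustrative sketch only: the real Community Notes ranking uses
# matrix factorisation, not fixed group labels. Groups ("A"/"B"),
# the function name and the 0.8 threshold are all hypothetical.
from collections import defaultdict

def select_bridged_notes(ratings, min_helpful=0.8):
    """ratings: iterable of (note_id, rater_group, is_helpful) tuples,
    where rater_group labels raters who normally disagree.
    Returns the note_ids that clear the bridging bar."""
    by_note = defaultdict(lambda: defaultdict(list))
    for note_id, group, helpful in ratings:
        by_note[note_id][group].append(helpful)

    shown = []
    for note_id, groups in by_note.items():
        # Require helpful ratings from at least two disagreeing
        # groups, not just a raw majority from one side.
        if len(groups) >= 2 and all(
            sum(votes) / len(votes) >= min_helpful
            for votes in groups.values()
        ):
            shown.append(note_id)
    return shown
```

A note rated helpful only by one side, however enthusiastically, never appears, which is also why (as discussed below) most proposed notes are never shown.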

The result, it argues, is that notes are viewed positively across the political spectrum. This is confirmed, according to X, by regular internal testing. Some independent research also backs up that view.

Meta says its community notes system will require agreement between people with a range of perspectives to help prevent biased ratings, "just like they do on X".

But this wide acceptance is a high bar to reach.

Research indicates that more than 90% of proposed community notes are never used.

This means accurate notes may go unused.

But according to X, showing more notes would undermine the aim of displaying only those that the most users will find helpful, and would reduce trust in the system.