Like Wikipedia, community notes rely on unpaid contributors to correct misinformation.
Contributors rate corrective notes under false or misleading posts and, over time, some users earn the ability to write them. According to X, this group of contributors is now almost a million strong.
Mr Mantzarlis - who himself once ran a "crowd-sourced" fact-checking project - argues this type of system potentially allows platforms to "get more fact checks, more contributions, faster".
A key attraction of community-notes-style systems is their ability to scale: as a platform's userbase grows, so does the pool of volunteer contributors (if you can persuade them to participate).
According to X, community notes produce hundreds of fact checks per day.
By contrast, Facebook's expert fact checkers may manage fewer than 10 per day, suggests an article by Jonathan Stray of the UC Berkeley Center for Human-Compatible AI and journalist Eve Sneider.
And one study suggests community notes can deliver good-quality fact checks: an analysis of 205 notes about Covid found 98% were accurate.
A note appended to a misleading post can also organically cut its viral spread by more than half, X maintains, and research suggests notes increase the chance that the original poster will delete the tweet by 80%.
Keith Coleman, who oversees community notes for X, argues Meta is switching to a more capable fact-checking programme.
"Community notes are already covering a vastly wider range of content than previous systems," he told me.
"That is rarely mentioned. I see stories that say 'Meta ends fact checking program'," he said.
"But I think the real story is, 'Meta replaces existing fact checking program with approach that can scale to cover more content, respond faster and is trusted across the political spectrum'."