
'I was moderating hundreds of horrific and traumatising videos'

Social media moderators check for distressing or illegal photos and videos, which they then remove

In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues because of their jobs.

The legal action was initiated by a former moderator in the US called Selena Scola. She described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives.

The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. Some had difficulty sleeping and eating.

One described how hearing a baby cry had made a colleague panic. Another said he found it difficult to interact with his wife and children because of the child abuse he had witnessed.

I was expecting them to say that this work was so emotionally and mentally gruelling that no human should have to do it – I thought they would fully support the entire industry becoming automated, with AI tools evolving to scale up to the job.

But they didn’t.

What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.

They saw themselves as a vital emergency service. One said he wanted a uniform and a badge, comparing himself to a paramedic or firefighter.

“Not even one second was wasted,” says a man we have called David. He asked to remain anonymous, but he had worked on material used to train the viral AI chatbot ChatGPT, so that it would not regurgitate horrific material.

“I am proud of the individuals who trained this model to be what it is today.”

Martha Dark campaigns in support of social media moderators

But the very tool David had helped to train might one day compete with him.

Dave Willner is the former head of trust and safety at OpenAI, the creator of ChatGPT. He says his team built a rudimentary moderation tool, based on the chatbot’s tech, which managed to identify harmful content with an accuracy rate of around 90%.

“When I sort of fully realised, ‘oh, this is gonna work’, I honestly choked up a little bit,” he says. “[AI tools] don't get bored. And they don't get tired and they don't get shocked… they are indefatigable.”

Not everyone, however, is confident that AI is a silver bullet for the troubled moderation sector.

“I think it’s problematic,” says Dr Paul Reilly, senior lecturer in media and democracy at the University of Glasgow. “Clearly AI can be a quite blunt, binary way of moderating content.

“It can lead to over-blocking freedom of speech issues, and of course it may miss nuance human moderators would be able to identify. Human moderation is essential to platforms,” he adds.

“The problem is there’s not enough of them, and the job is incredibly harmful to those who do it.”