Wubi News

Facebook and X must comply with UK law - minister

2025-01-12 21:00:08

Internet safety campaigners complain that there are gaps in the UK's laws including a lack of specific rules covering live streaming or content that promotes suicide and self-harm.

Kyle said current laws on online safety were "very uneven" and "unsatisfactory".

The Online Safety Act, passed in 2023 by the previous government, had originally included plans to compel social media companies to remove some "legal-but-harmful" content such as posts promoting eating disorders.

However, the proposal triggered a backlash from critics, including the current Conservative leader Kemi Badenoch, who were concerned it could lead to censorship.

In July 2022, Badenoch, who was not then a minister, said the bill was in "no fit state to become law", adding: "We should not be legislating for hurt feelings."

Another Conservative MP, David Davis, said it risked "the biggest accidental curtailment of free speech in modern history".

The plan was dropped for adult social media users and instead companies were required to give users more control to filter out content they did not want to see. The law still expects companies to protect children from legal-but-harmful content.

Kyle expressed frustration over the change but did not say if he would be reintroducing the proposal.

He said the act contained some "very good powers" he was using to "assertively" tackle new safety concerns and that in the coming months ministers would get the powers to make sure online platforms were providing age-appropriate content.

Companies that did not comply with the law would face "very strident" sanctions, he said.

He also said Parliament needed to get faster at updating the law to adapt to new technologies and that he was "very open-minded" about introducing new legislation.

Rules in the Online Safety Act, due to come into force later this year, compel social media firms to show that they are removing illegal content - such as child sexual abuse, material inciting violence and posts promoting or facilitating suicide.

They also say companies have to protect children from harmful material, including pornography, material promoting self-harm, bullying and content encouraging dangerous stunts.

Platforms will be expected to adopt "age assurance technologies" to prevent children from seeing harmful content.

The law also requires companies to take action against illegal, state-sponsored disinformation. If their services are likely to be accessed by children they should also take steps to protect users against misinformation.

In 2016, Meta established a fact-checking programme whereby third-party moderators would check posts on Facebook and Instagram that appeared to be false or misleading.

Content flagged as inaccurate would be moved lower in users' feeds and accompanied by labels offering viewers more information on the subject.

However, on Tuesday, Zuckerberg said Meta would be replacing the fact checkers and instead adopting a system, introduced by X, of allowing users to add "community notes" to posts they deem to be untrue.

Defending the change, Zuckerberg said moderators were "too politically biased" and it was "time to get back to our roots around free expression".

The step comes as Meta seeks to improve relations with incoming US President Donald Trump, who has previously accused the company of censoring right-wing voices.