Wubi News

'A predator in your home': Mothers say chatbots encouraged their sons to kill themselves

2025-11-12 00:00:12

Warning - this story contains distressing content and discussion of suicide

Megan Garcia had no idea her teenage son Sewell, a "bright and beautiful boy", had started spending hours and hours obsessively talking to an online character on the Character.ai app in late spring 2023.

"It's like having a predator or a stranger in your home," Ms Garcia tells me in her first UK interview. "And it is much more dangerous because a lot of the times children hide it - so parents don't know."

Within ten months, Sewell, 14, was dead. He had taken his own life.

It was only then Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen.

She says the messages were romantic and explicit, and, in her view, caused Sewell's death by encouraging suicidal thoughts and asking him to "come home to me".

Ms Garcia, who lives in the United States, was the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots.

"I know the pain that I'm going through," she says, "and I could just see the writing on the wall that this was going to be a disaster for a lot of families and teenagers."

The use of chatbots is growing incredibly fast. Data from the advice and research group Internet Matters shows the number of children using ChatGPT in the UK has nearly doubled since 2023, and that two-thirds of 9-17 year olds have used AI chatbots. The most popular are ChatGPT, Google's Gemini and Snapchat's My AI.

For many, they can be a bit of fun. But there is increasing evidence the risks are all too real.

So what is the answer to these concerns?

Remember that the government did, after many years of argument, pass a wide-ranging law to protect the public - particularly children - from harmful and illegal online content.

The Online Safety Act became law in 2023, but its rules are being brought into force gradually. For many the problem is it's already being outpaced by new products and platforms - so it's unclear whether it really covers all chatbots, or all of their risks.

"The law is clear but doesn't match the market," Lorna Woods, a University of Essex internet law professor - whose work contributed to the legal framework - told me.

"The problem is it doesn't catch all services where users engage with a chatbot one-to-one."

Ofcom, the regulator whose job it is to make sure platforms are following the rules, believes many chatbots, including Character.ai and the in-app bots of Snapchat and WhatsApp, should be covered by the new laws.

"The Act covers 'user chatbots' and AI search chatbots, which must protect all UK users from illegal content and protect children from material that's harmful to them," the regulator said. "We've set out the measures tech firms can take to safeguard their users, and we've shown we'll take action if evidence suggests companies are failing to comply."

But until there is a test case, it's not exactly clear what the rules do and do not cover.

Andy Burrows, head of the Molly Rose Foundation, set up in memory of 14-year-old Molly Russell who died by suicide after being exposed to harmful content online, said the government and Ofcom had been too slow to clarify the extent to which chatbots were covered by the Act.

"This has exacerbated uncertainty and allowed preventable harm to remain unchecked," he said. "It's so disheartening that politicians seem unable to learn the lessons from a decade of social media."

As we have previously reported, some ministers in government would like to see No 10 take a more aggressive approach to protecting against internet harms, and fear the eagerness to woo AI and tech firms into spending big in the UK has pushed safety into the back seat.

The Conservatives are still campaigning to ban phones in schools in England outright. Many Labour MPs are sympathetic to this move, which could make a future vote awkward for a restive party because the leadership has always resisted calls to go that far. And the crossbench peer, Baroness Kidron, is trying to get ministers to create new offences around the creation of chatbots that could make illegal content.

But the rapid growth in the use of chatbots is just the latest challenge in a genuine dilemma facing modern governments everywhere: how to protect children, and adults, from the worst excesses of the internet without losing out on its enormous potential - both technological and economic. That balance remains elusive.

Tech Secretary Liz Kendall has not yet made any moves on restricting phone use for children

But Ms Garcia is convinced that if her son had never downloaded Character.ai, he'd still be alive.

"Without a doubt. I kind of started to see his light dim. The best way I could describe it is you're trying to pull him out of the water as fast as possible, trying to help him and figure out what's wrong.

"But I just ran out of time."

If you would like to share your story you can reach Laura at [email protected]