In February the Internet Watch Foundation (IWF) - a UK-based charity partly funded by tech firms - said it had confirmed 245 reports of AI-generated child sexual abuse imagery in 2024, up from 51 in 2023 - a 380% increase.
"We know these apps are being abused in schools, and that imagery quickly gets out of control," IWF Interim Chief Executive Derek Ray-Hill said on Monday.
A spokesperson for the Department for Science, Innovation and Technology said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".
"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.
"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."
Dame Rachel also called for the government to:
Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns.
He said: "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
Media regulator Ofcom published the final version of its Children's Code on Friday, which places legal requirements on platforms hosting pornography, or content encouraging self-harm, suicide or eating disorders, to take more action to prevent children from accessing it.
Websites must introduce beefed-up age checks or face big fines, the regulator said.
Dame Rachel has criticised the code, saying it prioritises the "business interests of technology companies over children's safety".