The government said its proposed changes to the law would also enable AI developers and charities to ensure AI models have adequate safeguards against extreme pornography and non-consensual intimate images.
Child safety experts and organisations have frequently warned that AI tools, developed in part using huge volumes of wide-ranging online content, are being used to create highly realistic abuse imagery of children or non-consenting adults.
Some, including the IWF and the child safety charity Thorn, have said such images risk jeopardising efforts to police abuse material by making it difficult to identify whether content is real or AI-generated.
Researchers have suggested that demand for these images is growing online, particularly on the dark web, and that some are being created by children.
Earlier this year, the Home Office said the UK would be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison.
Ms Kendall said on Wednesday that "by empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought".
"We will not allow technological advancement to outpace our ability to keep children safe," she said.
Safeguarding Minister Jess Phillips said the measures would also "mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result".