"Eight months after Meta rolled out Teen Accounts on Instagram, we've had silence from Mark Zuckerberg about whether this has actually been effective and even what sensitive content it actually tackles," said Andy Burrows, chief executive of the Molly Rose Foundation.
He added it was "appalling" that parents still did not know whether the settings prevented their children being "algorithmically recommended" inappropriate or harmful content.
Matthew Sowemimo, associate head of policy for child safety online at the NSPCC, said Meta's changes "must be combined with proactive measures so dangerous content doesn't proliferate on Instagram, Facebook and Messenger in the first place".
But Drew Benvie, chief executive of social media consultancy Battenhall, said it was a step in the right direction.
"For once, big social are fighting for the leadership position not for the most highly engaged teen user base, but for the safest," he said.
However, he also pointed out there was a risk, as with all platforms, that teens could "find a way around safety settings".
The expanded roll-out of Teen Accounts begins in the UK, US, Australia and Canada on Tuesday.
Companies that provide services popular with children have faced pressure to introduce parental controls or safety mechanisms to safeguard young users' experiences.
In the UK, they also face legal requirements to prevent children from encountering harmful and illegal content on their platforms, under the Online Safety Act.
Roblox recently enabled parents to block specific games or experiences on the hugely popular platform as part of its suite of controls.