There is currently no legal obligation for streaming platforms to label AI-generated songs, despite increasing calls for them to signpost such tracks.
In January, the streaming platform Deezer launched an AI detection tool, followed this summer by a system which tags AI-generated music.
Deezer says its detection system can flag tracks made with the most prolific AI music creation tools, and it is working to expand detection to music made with others. It says the risk of false positives - e.g. incorrectly flagging a human-made track as AI-generated - is very low.
This week, the company said a third (34%) of content uploaded to its platform was fully AI-generated – about 50,000 tracks a day.
Manuel Moussallam, Deezer's director of research, says his team was so surprised by how many tracks were flagged up by the detector when it first launched that they were "pretty convinced we had an issue".
The tool quickly flagged music by The Velvet Sundown – the band that went viral over the summer – as "100% AI-generated".
Other platforms have recently announced steps toward more transparency.
In September, Spotify said it would roll out a new spam filter later this year to identify "bad actors", and prevent "slop" being recommended to listeners. In the past year, it has removed more than 75 million spam tracks.
It is also backing a system, developed by the industry standards consortium DDEX, that lets artists disclose where and how AI was used in a track. This information will be embedded in a track's metadata and displayed in the Spotify app.
Spotify says the move is about recognising listeners' desire for more information, as well as "strengthening trust".
"It's not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made."