Watchdog reports doubling of AI child sex abuse material online
Trigger warning: This article contains mentions of child sexual abuse.
Europe's biggest hotline dedicated to finding and removing AI-generated child sexual abuse material said it had removed more websites in the past six months than in the whole of the previous year.
Many of the images and videos of children being "hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children," said the UK-based Internet Watch Foundation (IWF).
The rise of powerful generative AI models has facilitated the creation of the material, which is regarded as criminal content in the UK.
From April 2023 to March 2024, the IWF said it had "actioned", or found and removed, 70 reports. From April to the end of September 2024, it actioned 74 reports, meaning the monthly rate has more than doubled.
Almost all the content was found on publicly available areas of the internet, with most of it hosted in Russia (36%), the United States (22%) and Japan (11%).
More than three-quarters of the reports came from members of the public who stumbled across the criminal imagery, with the rest actioned by IWF analysts.
"People can be under no illusion that AI generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited," said Derek Ray-Hill, interim chief executive of the IWF.
"To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.
"Recent months show that this problem is not going away and is in fact getting worse," he warned, urging lawmakers to bring legislation "up to speed for the digital age." (AFP)