IWF reports sharp rise in AI‑generated child sexual abuse material online

The Internet Watch Foundation (IWF) has reported that the amount of AI-generated child sexual abuse material found online rose by 14 per cent in 2025, with the majority of videos showing the most extreme type of content under UK law.

Mar 26, 2026
By Paul Jacques

Across the year, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse material (CSAM). Of the 3,443 videos analysed, 65 per cent were classified as Category A, the most severe material classification under UK law.

In comparison, 43 per cent of non-AI videos fell into the same category, highlighting the escalating severity of AI-generated material.

The National Crime Agency (NCA) has warned that policing cannot tackle AI CSAM alone and needs industry around the world to “invest its money, expertise and innovation in stopping this harm at source”.

The Government has announced new measures that will allow designated AI companies and child safety organisations to examine generative AI models, in order to strengthen protections and prevent the creation of illegal content.

Helen Rance, Deputy Director of CSA threat at the NCA, said: “AI generated child sexual abuse material is illegal. It harms children. And it fuels and escalates offending.

“Alongside policing colleagues, we are arresting nearly 1,000 offenders and safeguarding over 1,200 children every month in relation to online sexual abuse. Offenders should be under no illusion that they will be caught and the consequences for them and their families will be life changing.

“However, policing cannot tackle AI CSAM alone. We need industry around the world to invest its money, expertise and innovation in stopping this harm at source. We need to keep investing in the tools that help policing protect children at scale. And we need to equip children, parents, carers and professionals with the confidence and skills to navigate the challenges that AI brings.

“We welcome this important report from IWF and will continue to work with them and other partners to disrupt this evolving ecosystem and keep children safe.”

The IWF report, titled ‘Harm without limits: AI child sexual abuse material through the eyes of our Analysts’, also gives “unsettling” insight into the kind of offender conversations IWF analysts are witnessing as criminals vie with each other to create ever more lifelike and extreme child sexual abuse scenarios.

Chillingly, offenders even discuss setting up and using hidden cameras to source footage of real children, which they can then transform into AI sexual abuse video content.

They also predict that, within a few years, agentic AI tools may be able to create full child sexual abuse ‘movies’ from a single prompt fed to an uncensored AI agent.

In January, the IWF, which is Europe’s largest hotline dedicated to disrupting the spread of child sexual abuse imagery online, published data showing a more than 260-fold increase in videos of AI-generated child sexual abuse.

This new report shows the combined surge in still images and videos, as well as horrifying details of the intentions of those producing them.

One IWF senior analyst said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material.

“Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or make more immersive, any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting or even being able to successfully manipulate imagery of a real child known to an offender.

“Instead of being a vehicle for connection, the technology only deepens offenders’ capacity to view children and victims as abstract playthings, whose likenesses can be altered endlessly for their own enjoyment.

“We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse.”

The IWF is calling on the Government to tighten laws around AI and make it mandatory for tech companies to evaluate and safeguard AI models before release, making it harder for criminals to abuse AI image generators to create child sexual abuse imagery.

This is echoed by new polling that shows more than four in five, or 82 per cent, of UK adults say the Government should introduce regulation to ensure AI systems are safe by design and futureproofed from causing harm.

A further 78 per cent of survey respondents agreed that AI companies should be made to test for AI-related harms before products are released to market.

IWF CEO Kerry Smith said: “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.

“The UK Government has made great strides in recognising the wide-reaching harms of AI child sexual abuse imagery and we welcome the move to allow designated authorities like the IWF to test AI models.

“But this report’s in-depth view of the risks posed to children by AI, as well as emerging areas of concern, only serves to highlight the need for companies to adopt a safety-by-design approach that ensures child protection is baked into product development. This non-negotiable standard in AI development must be mandated by a clear government framework.

“Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.”

The report also highlights how offenders are already anticipating the next generation of AI tools and how they might exploit them.

IWF analysts have observed offenders discussing the possibilities of ‘agentic AI’, systems designed to carry out complex tasks autonomously. One offender wrote: “I believe in a year or two we will be able to create our own movies just by feeding a prompt to an uncensored AI agent. No skills with editing or tech will be required.”

AI child sexual abuse content with an audio component is also an emerging area of concern. This may take the form of audio deepfakes, recordings that synthetically generate the sexualised voices of children.

While the IWF does not typically assess audio-only reports, one example identified by analysts was a fully synthetic video showing a child who appeared to be between three and six years old speaking to the camera and performing a sexual act on an adult man. Both the video and audio were generated by AI.

Sean McConnell, GovTech lead at Datactics, commented: “The increase in AI-generated child sexual abuse material reflects a growing recognition that this is not just a content moderation issue, but a data infrastructure challenge. As harmful content becomes easier to produce and distribute, it can scale rapidly across platforms, requiring systems capable of detecting and responding to risk in real time.

“For these measures to deliver meaningful protection, technology providers need to strengthen the quality and use of their data to improve how harmful content is detected and prevented from reappearing. With better data practices and oversight in place, platforms can move beyond simply reacting to content and start identifying patterns earlier, ensuring faster intervention and safer online environments.”

Heather Barnhart, Cellebrite senior digital forensics expert and SANS Curriculum lead, said the sharp rise in AI-generated child sexual abuse material shows just how rapidly this threat is evolving.

“AI systems are increasingly becoming more powerful and accessible, and it is essential that child safety remains the top priority through robust safeguards built in alongside greater awareness and education at home,” she said.

“AI tools are becoming part of everyday life, and parents need to take an active role in guiding children on appropriate use. This gets at the larger issue, which is parents having open conversations about the dangers that lurk online – and having concrete guardrails on who their kids are interacting with and what they’re sharing.

“With proactive monitoring, education and healthy engagement, families can help their children navigate the online world, including AI tools, responsibly and safely.”
