New laws to crack down on AI-generated child sex abuse materials
Home Secretary Yvette Cooper has announced a raft of new legislation designed to curb the rise of life-like child sexual abuse material generated by artificial intelligence (AI).
The new rules will outlaw the possession and distribution of AI models that have been optimised to create child sexual abuse imagery, and will also criminalise the possession of manuals instructing offenders on how to use AI to generate such imagery.
The UK will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison.
Possessing AI ‘paedophile manuals’, which teach people how to use AI to sexually abuse children, will also be made illegal, punishable by up to three years in prison.
The Home Office says AI tools are being used to generate child sexual abuse images in “a number of sickening ways”, including by ‘nudeifying’ real-life images of children or by stitching the faces of other children onto existing child sexual abuse images.
“The real-life voices of children are also often used in this sickening material, meaning innocent survivors of traumatic abuse are being revictimised,” it adds.
“Perpetrators are also using those fake images to blackmail children and force victims into further horrific abuse including streaming live images. AI tools are being used to help perpetrators disguise their initial identity and more effectively groom and abuse children online.”
At the same time, the Home Office will introduce a specific offence for predators who run websites designed for other paedophiles to share “vile child sexual abuse content” or advice on how to groom children, punishable by up to ten years in prison.
Border Force will also be given the necessary powers to prevent the distribution of CSAM, which is often filmed abroad, by allowing officers to compel an individual whom they reasonably suspect poses a sexual risk to children to unlock their digital devices for inspection. Refusal to comply will be punishable by up to three years in prison, depending on the severity of the case.
All four measures will be introduced as part of the Crime and Policing Bill.
The Internet Watch Foundation (IWF) has warned that more and more AI-generated sexual abuse images of children are being produced.
Over a 30-day period in 2024, IWF analysts identified 3,512 AI CSAM images on a single dark web site. Compared with their 2023 analysis, the prevalence of Category A images (the most severe category) had risen by ten per cent.
New data from the charity shows that reports of AI-generated CSAM have risen by 380 per cent, with 245 confirmed reports in 2024 compared with 51 in 2023. Each report can contain thousands of images.
The charity also warns that some of this AI-generated content is so realistic that its analysts are sometimes unable to distinguish it from abuse filmed in real life. Of the 245 reports the IWF took action against, 193 included AI-generated images so sophisticated and life-like that they were actioned under UK law as though they were actual photographic images of child sexual abuse.
Derek Ray-Hill, interim chief executive of the IWF, said: “We have long been calling for the law to be tightened up, and are pleased the Government has adopted our recommendations. These steps will have a concrete impact on online safety.
“The frightening speed with which AI imagery has become indistinguishable from photographic abuse has shown the need for legislation to keep pace with new technologies.
“Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models. It is a nightmare scenario, and any child can now be made a victim, with life-like images of them being sexually abused obtainable with only a few prompts, and a few clicks.
“The availability of this AI content further fuels sexual violence against children. It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome today’s announcement, and believe these measures are a vital starting point.”
The increased availability of AI CSAM imagery not only poses a real risk to the public by normalising sexual violence against children, but can also lead those who view and create it to go on to offend in real life, the Home Office warns.
The Home Secretary said: “We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person. This government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats.
“These four new laws are bold measures designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public from new and emerging crimes.”
Ms Cooper said the predators who run or moderate websites designed for other paedophiles to share vile child sexual abuse content or advice on how to groom children are often among the most dangerous to society, encouraging others to view ever more extreme content.
Covert law enforcement officials warn that these individuals often act as ‘mentors’ for others with an interest in harming children, offering advice on how to avoid detection and how to manipulate AI tools to generate CSAM.
Technology Secretary Peter Kyle said: “For too long abusers have hidden behind their screens, manipulating technology to commit vile crimes and the law has failed to keep up. It’s meant too many children, young people, and their families have been suffering the dire and lasting impacts of this abuse.
“That is why we are cracking down with some of the most far-reaching laws anywhere in the world. These laws will close loopholes, imprison more abusers, and put a stop to the trafficking of this abhorrent material from abroad. Our message is clear – nothing will get in the way of keeping children safe, and to abusers, the time for cowering behind a keyboard is over.”
Deborah Denis, chief executive of child protection charity the Lucy Faithfull Foundation said: “Child sexual abuse, in all its forms, is preventable. We welcome the much-needed changes that the Crime and Policing Bill will bring. But as the legislation races to keep up with the fast-evolving technology, it’s clear that we cannot arrest our way out of this problem. We need to prevent child sexual abuse before it happens, and we need to be innovative in our solutions.
“Our research shows there are serious knowledge gaps amongst the public regarding AI child sexual abuse material and the harm it causes to children – there is a common and dangerous misconception that it is not harmful. People believe that the children in these images are not ‘real’, which helps to ease the guilt of many who view or make this material.
“But we need to be clear – not only does this material normalise the sexualisation of children, but AI is being used to manipulate images of real children, some of whom have previously been victims of sexual abuse.”
Gregor Poynton MP, chair of the All-Party Parliamentary Group on Children’s Online Safety, said the rise of AI-generated child sexual abuse material is “a deeply disturbing development” that threatens to escalate online child exploitation.
“Protecting our children must always be our top priority,” he said.
“While AI innovation offers many benefits, it must never come at the expense of child safety.”
Minister for Safeguarding and Violence Against Women and Girls, Jess Phillips, said: “As technology evolves so does the risk to the most vulnerable in society, especially children.
“It is vital that our laws are robust enough to protect children from these changes online. We will not allow gaps and loopholes in legislation to facilitate this abhorrent abuse.”
Baroness Kidron, crossbench peer and chair of the 5Rights Foundation, said it had been a long fight to get the AI child sexual abuse offences into law, and that the Home Secretary’s announcement was “a milestone”.
“AI-enabled crime normalises the abuse of children and amplifies its spread. Our laws must reflect the reality of children’s experience, and ensure that technology is safe by design and default,” she said.
“I pay tribute to my friends and colleagues in the specialist police unit that brought this to my attention, and commend them for their extraordinary efforts to keep children safe.
“All children whose identity has been stolen or who have suffered abuse deserve our relentless attention and unwavering support. It is they – and not politicians – who are the focus of our efforts.”
Policy manager for Child Safety Online at the NSPCC, Rani Govender, said: “Our Childline service is hearing from children and young people about the devastating impact it can have when AI generated images are created of them and shared. And, concerningly, often victims won’t even know these images have been created in the first place.
“It is vital the development of AI does not race ahead of child safety online. Wherever possible, these abhorrent harms must be prevented from happening in the first place.
“To achieve this, we must see robust regulation of this technology to ensure children are protected and tech companies undertake thorough risk assessments before new AI products are rolled out.”