Technology platforms pledge to combat ‘online scourge’ of AI-generated child sex abuse images
The Home Secretary has secured a commitment from technology platforms and international partners to tackle the “online scourge” of artificial intelligence (AI)-generated child sexual abuse images.
It comes as the Internet Watch Foundation (IWF) warned that thousands of AI images depicting the worst kind of abuse could be found on the dark web, and that these were realistic enough to be treated as real imagery under UK law.
Twenty-seven organisations, including the IWF, TikTok, Snapchat and Stability AI, together with the governments of the US and Australia, the National Crime Agency, National Police Chiefs’ Council, charities and academics, signed a pledge to tackle the threat of AI-generated child abuse imagery at an event hosted by Home Secretary Suella Braverman on Monday (October 30).
Ms Braverman said the signatories have committed to “work at pace” to clamp down on the recent spread of AI-generated child sex abuse material that threatens to “overwhelm” the internet.
“Child sexual abuse images generated by AI are an online scourge,” said Ms Braverman.
“This is why tech giants must work alongside law enforcement to clamp down on their spread. The pictures are computer-generated but they often show real people – it’s depraved and damages lives.
“The pace at which these images have spread online is shocking and that’s why we have convened such a wide group of organisations to tackle this issue head-on. We cannot let this go on unchecked.”
She said the government is also exploring further investment in the use of AI to combat child sexual abuse, and will continue to examine options for innovation to tackle the threat from AI-generated child sexual abuse material.
The IWF, which co-hosted the event in the lead-up to this week’s global AI Safety Summit at Bletchley Park, has warned that the increased availability of this imagery not only poses a real risk to the public by normalising sexual violence against children, but that some of the imagery is also based on children who have appeared in ‘real’ child sexual abuse material in the past.
This means innocent survivors of traumatic abuse are being revictimised, said the IWF.
It warned the surge in AI-generated images could also hinder law enforcement agencies in tracking down and identifying victims of child sexual abuse, and in detecting offenders and bringing them to justice.
Signatories to the joint statement have pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”. The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
Statistics released by the IWF last week showed that in a single month, it investigated more than 11,000 AI images which had been shared on a dark web child abuse forum. Almost 3,000 of these images were confirmed to breach UK law – meaning they depicted child sexual abuse.
Some of the images are based on celebrities whom AI has ‘de-aged’ and then depicted being abused, said the IWF. Others are based on entirely innocuous images of children posted online, which AI has been able to ‘nudify’.
Susie Hargreaves OBE, chief executive of the IWF, said: “We first raised the alarm about this in July. In a few short months, we have seen all our worst fears about AI realised.
“The realism of these images is astounding, and improving all the time. The majority of what we’re seeing is now so real, and so serious, it would need to be treated exactly as though it were real imagery under UK law.
“It is essential, now, we set an example and stamp out the abuse of this emerging technology before it has a chance to fully take root. It is already posing significant challenges. It is great to see the Prime Minister acknowledge the threat posed by the creation of child sexual abuse images in his speech last week following the publication of our report.
“We are delighted the Government has listened to our calls to make this a top international priority ahead of the AI summit, and are grateful to the Home Secretary for convening such a powerful discussion.”
Sir Peter Wanless, chief executive of the children’s charity NSPCC, said: “AI is being developed at such speed that it’s vital the safety of children is considered explicitly and not as an afterthought in the wake of avoidable tragedy.
“Already we are seeing AI child abuse imagery having a horrific impact on children, traumatising and retraumatising victims who see images of their likeness being created and shared.
“This technology is giving offenders new ways to organise and risks enhancing their ability to groom large numbers of victims with ease.
“It was important to see child safety on the agenda today. Further international and cross-sector collaboration will be crucial to achieve safety by design.”
The Home Secretary emphasised at the event that AI also presents opportunities to improve the way child sexual abuse is tackled.
Together with the police and other partners, the Home Office has developed the world-leading Child Abuse Image Database (CAID), which is already using AI to grade the severity of child sexual abuse material.
This helps police officers sort through large volumes of data at a faster pace, bringing certain images to the surface for officers to focus on and aiding investigations. This enables officers to identify and safeguard children more rapidly, as well as identify offenders. These tools also support the welfare of officers by reducing prolonged exposure to these images.
Ms Braverman said other tools are also in development, which will use AI to safeguard children and identify perpetrators more quickly.
“While the opportunities posed in this space are promising, AI is advancing much quicker than anyone could have realised,” the Home Office says.
“Without appropriate safety measures that keep pace with its development, this technology still poses significant risks, and that is why the Home Secretary is placing an emphasis on working constructively with a wide range of partners to mitigate these risks and ultimately, protect the public.”
In a joint statement, the 27 signatories of the pledge to tackle the threat of AI-generated child abuse imagery said: “Child sexual abuse takes many forms. It can occur in the home, online, or in institutions and has a life-long impact on the victim.
“WeProtect Global Alliance’s 2023 Global Threat Assessment finds that child sexual abuse and exploitation online is escalating worldwide, in both scale and methods. As the online world is borderless, we must work as an international community to tackle this horrific crime.
“AI presents enormous opportunities to help tackle the threat of online child sexual abuse. It has the potential to transform and enhance the ability of industry and law enforcement to detect child sexual abuse cases. To realise this, we affirm that we must develop AI in a way that is for the common good of protecting children from sexual abuse across all nations.
“Alongside these opportunities, AI also poses significant risks to our efforts to tackle the proliferation of child sexual abuse material and prevent the grooming of children.
“AI tools can be utilised by child sexual offenders to create child sexual abuse material, thereby leading to an epidemic in the proliferation of this material. Data from the IWF found that in just one dark web forum, over a one-month period, 11,108 AI-generated images had been shared, and the IWF were able to confirm 2,978 of these depicted AI generated child sexual abuse material.
“The increase in the creation and proliferation of AI-generated child sexual abuse material poses significant risks to fuelling the normalisation of offending behaviour and to law enforcement’s ability around the world to identify children who need safeguarding. In addition, AI can also enable grooming interactions, scripting sexual extortive interactions with children.
“Whilst these technologies are evolving at an exponential rate, the safety of our children cannot be an afterthought, and we must all work in collaboration to make sure these technologies have robust measures in place.
“Issues in tackling child sexual abuse arising from AI are inherently international in nature, and so action to address them requires international cooperation. We resolve to work together to ensure that we utilise responsible AI for tackling the threat of child sexual abuse and commit to continue to work collaboratively to ensure the risks posed by AI to tackling child sexual abuse do not become insurmountable. We will seek to understand and, as appropriate, act on the risks arising from AI to tackling child sexual abuse through existing fora.
“All actors have a role to play in ensuring the safety of children from the risks of frontier AI. We note that companies developing frontier AI capabilities have a particularly strong responsibility for ensuring the safety of these capabilities. We encourage all relevant actors to provide transparency on their plans to measure, monitor and mitigate the capabilities which may be exploited by child sexual offenders. At a country level, we will seek to build respective policies across our countries to ensure safety in light of the child sexual abuse risks.
“We affirm that the safe development of AI will enable the transformative opportunities of AI to be used for good to tackle child sexual abuse and support partners in their quest to prioritise and streamline their processes.
“As part of wider international cooperation, we resolve to sustain the dialogue and technical innovation around tackling child sexual abuse in the age of AI.”
This week’s AI Safety Summit will bring together key nations, technology companies, researchers and civil society groups to “turbocharge global action on the safe and responsible development of frontier AI around the world”.