UK and US pledge to combat rise in AI-generated images of child abuse
The UK and the US have pledged to work together to tackle the “sickening rise” in artificial intelligence (AI)-generated images of child abuse.
It comes after the Internet Watch Foundation (IWF) warned AI-generated imagery of child sexual abuse being shared online was growing, with some of the children depicted as young as three.
Between May 24 and June 30, the IWF said seven URLs it investigated contained AI-generated child sexual abuse imagery, including babies and toddlers, with some depicting the “worst kind of offending” under UK and US law.
The IWF has also uncovered an online ‘manual’ dedicated to helping offenders refine their prompts and train AI to return more and more realistic results.
AI-generated images of child sexual abuse are illegal in the UK, but the IWF says far from being a victimless crime, this imagery can “normalise and ingrain the sexual abuse of children”. It can also make it harder to spot when real children may be in danger.
Home Secretary Suella Braverman said: “Child sexual abuse is a truly abhorrent crime and one of the challenges of our age. Its proliferation online does not respect borders and must be combatted across the globe.
“That is why we are working to tackle the sickening rise of AI-generated child sexual abuse imagery which incites paedophiles to commit more offences and also obstructs law enforcement from finding real victims online.
“It is therefore vital we work hand-in-glove with our close partners in the US to tackle it. I commend the National Center for Missing and Exploited Children (NCMEC), who work tirelessly to keep children safe around the world. Social media companies must take responsibility and prioritise child safety on their platforms.”
During her visit to Washington this week, Ms Braverman and US Homeland Security Secretary Alejandro Mayorkas made a joint statement vowing to tackle the rise in images which hampers law enforcement and incites more child sexual abuse.
Together they have committed to exploring further joint action to tackle the “alarming rise in despicable AI-generated images of children being sexually exploited by paedophiles”.
The two countries have issued a joint statement pledging to work together to innovate and explore development of new solutions to fight the spread of this imagery, created by “depraved predators”, and have called on other nations to join them.
Ms Braverman and Mr Mayorkas say the rise is concerning, with law enforcement agencies and charities convinced an increase in child sexual abuse material will fuel a “normalisation of offending” and lead to more children being targeted overall.
The surge in AI-generated images could also hinder law enforcement agencies in tracking down and identifying victims of child sexual abuse, and in detecting offenders and bringing them to justice.
In addition, some AI technologies provide offenders with the capability to create new pictures from benign imagery. For example, through a process known as inpainting, offenders can remove articles of clothing completely or swap someone’s face into indecent images of real children.
The IWF says some examples are so “astoundingly realistic” they would be indistinguishable from real imagery to most people.
IWF chief executive Susie Hargreaves OBE said: “We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery. This would be potentially devastating for internet safety and for the safety of children online.
“Offenders are now using AI image generators to produce sometimes astoundingly realistic images of children suffering sexual abuse.”
The Home Secretary’s visit comes a week after she launched a campaign calling on Meta not to roll out end-to-end encryption on its platforms without robust safety measures that ensure children are protected from sexual abuse and exploitation in messaging channels.
Currently, 800 predators a month are arrested by UK law enforcement agencies and up to 1,200 children are safeguarded from child sexual abuse following information provided by social media companies. If Meta proceeds with its plans, it will no longer be able to detect child abuse on its platforms, says Ms Braverman.
The National Crime Agency estimates 92 per cent of Facebook Messenger and 85 per cent of Instagram Direct referrals could be lost – meaning thousands of criminals a year could go undetected.
The UK’s partnership with the US also follows the Online Safety Bill’s passage through Parliament last week.
AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not. Under the Government’s landmark Bill, tech companies will be required to identify such content and remove it.