New research reveals concerning public attitudes towards non-consensual, sexual deepfakes
Around 25 per cent of people agree with or feel neutral about the legal and moral acceptability of viewing, sharing, creating or selling a sexual or intimate deepfake – even when the person depicted has not consented, according to new research from Crest Advisory.
The new research, published today (November 24) and commissioned by the Office of the Police Chief Scientific Adviser, surveyed 1,700 people aged 16 and over in England and Wales to explore public understanding of, and attitudes towards, so-called deepfakes – a form of content becoming increasingly widespread, with 67 per cent of survey respondents saying they had seen, or might have seen, a deepfake.
The survey found that those who considered it morally and legally acceptable to create, view, share and sell sexual or intimate deepfakes were more likely to:
- Hold views that were aligned with misogyny;
- Currently watch pornography;
- Feel positively about AI (artificial intelligence); and
- Be male and under the age of 45.
By contrast, the survey highlighted that six in ten people are ‘very’ or ‘somewhat’ worried about having a deepfake made of them, with women more likely to feel this way.
Respondents most commonly described the content they had viewed as humorous (43 per cent), scam-related (43 per cent) or political (42 per cent), while fewer admitted to viewing a sexual deepfake – 21 per cent of someone they do not know and 14 per cent of someone they do know. But previous research shows the vast majority of video deepfakes online are sexually explicit in nature and disproportionately target women.
Crest researchers are calling for further research to understand this discrepancy, such as exploring whether a small proportion of people view and create a large proportion of sexual deepfakes.
Ninety-two per cent of those surveyed agreed that sexual deepfakes are harmful, and Crest Advisory’s review of existing evidence found that the psychological and emotional impact of deepfake abuse can mirror many of the effects of sexual assault. However, most people responding to the survey believed that other crimes, such as having a phone stolen, are more harmful than sexual deepfakes. Where the harm caused by sexual deepfakes is underestimated, victims of abuse may be hesitant to report the crime to the police because they do not think they will be taken seriously.
Only 14 per cent of those surveyed in April this year were aware of the current legislation relating to deepfakes – despite the Government having announced its intention to introduce a new offence making it illegal to create non-consensual sexually explicit deepfakes. Legislation making this a criminal offence was passed in June as part of the Data (Use and Access) Act 2025.
Report author Callyane Desroches, Head of Policy and Strategy at Crest Advisory, said: “The creation of deepfakes is rapidly rising and becoming increasingly normalised as the technology to make them becomes cheaper and more accessible.
“Existing surveys have focused on political or financial deepfakes. We have sought to fill this gap to better understand public attitudes towards the growing use of deepfake technology as a form of abuse against women and girls.
“While some deepfake content may seem harmless, the vast majority of video content is sexualised – and women are overwhelmingly the targets.
“We are deeply concerned about what our research has highlighted – that there is a cohort of young men who actively watch pornography and hold views that align with misogyny who see no harm in viewing, creating and sharing sexual deepfakes of people without their consent.”
She added: “People under the age of 45 are more likely to be aware of and exposed to deepfakes. And at the same time, there is a lack of awareness about the legal implications of creating and sharing deepfakes.
“More research is needed to understand how we can best educate the public about the harm that this content can cause – and how what is being watched online impacts a person’s attitudes and actions offline.
“Lastly, we want to look further into what policing and technology companies require to effectively enable intelligence-led policing in this new and evolving space.”
Detective Chief Superintendent Claire Hammond from the National Centre for VAWG and Public Protection, said: “Sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating.
“The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.
“However, taking away the technology is only part of the solution. Until we address the deeply engrained drivers of misogyny and harmful attitudes towards women and girls across society, we will not make progress.
“If someone has shared or threatened to share intimate images of you without your consent, please come forward. This is a serious crime, and we will support you. No one should suffer in silence or shame.”
Paul Taylor, Chief Scientific Adviser for Policing, added: “This research was commissioned as part of a wider programme to support policing in understanding and responding to the evolving threat of deepfakes, and aligns with government and policing priorities to improve the response to online harms and halve VAWG within a decade.
“Crest Advisory’s work provides a scientifically grounded and nationally representative view of public attitudes towards deepfakes, highlighting concerns around the growing use of deepfake technology as a form of gender-based violence against women and girls. Their focus on the psychological and emotional impact of this abuse is essential to influencing the approach required across the justice system to tackle this threat.”
The Internet Watch Foundation (IWF) has already welcomed proposed new rules that would allow AI tools to be thoroughly tested to make sure they cannot be used to create child sexual abuse imagery.
Currently, legal restrictions make it difficult to test AI products to ensure criminals cannot use them to make images or videos of child sexual abuse without committing an offence if they inadvertently created criminal imagery in the process.
A proposed new legal defence, announced by the Government earlier this month, would mean designated bodies such as the IWF, as well as AI developers and other child protection organisations, would be empowered to scrutinise AI models robustly to make sure they cannot be used to create nude or sexual imagery of children.
The announcement came as the IWF published new data showing reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 (January 1 to October 31) to 426 in the same period in 2025.
According to the data, the material being created has also become more extreme, with the most serious Category A content (which can include imagery involving penetrative sexual activity, sexual activity with an animal, or sadism) rising from 2,621 items in the same period last year to 3,086 this year.
Category A content now accounts for 56 per cent of all illegal AI material, compared with 41 per cent last year, suggesting criminals are using the technology to make the most extreme and serious imagery.
The data showed that girls have been most commonly depicted, accounting for 94 per cent of illegal AI images in 2025.