‘Deepfake abuse is abuse’
NEW YORK, 4 February 2026 – “UNICEF is increasingly alarmed by reports of a rapid rise in the volume of AI-generated sexualised images circulating, including cases where photographs of children have been manipulated and sexualised.
“Deepfakes – images, videos, or audio generated or manipulated with artificial intelligence (AI) and designed to look real – are increasingly being used to produce sexualised content involving children, including through ‘nudification,’ where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.
“New evidence confirms the scale of this fast-growing threat: In a UNICEF, ECPAT and INTERPOL study* across 11 countries, at least 1.2 million children disclosed having had their images manipulated into sexually explicit deepfakes in the past year. In some countries, this represents 1 in 25 children – the equivalent of one child in a typical classroom.
“Children themselves are deeply aware of this risk. In some of the study countries, up to two thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM). Deepfake abuse is abuse, and there is nothing fake about the harm it causes.
“When a child's image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children who need help.
“UNICEF strongly welcomes the efforts of those AI developers that are implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems. However, the landscape remains uneven, and too many AI models are not being developed with adequate safeguards. The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.
“UNICEF urgently calls for the following actions to confront the escalating threat of AI-generated child sexual abuse material:
- All governments expand definitions of child sexual abuse material (CSAM) to include AI-generated content, and criminalise its creation, procurement, possession and distribution.
- AI developers implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
- Digital companies prevent the circulation of AI-generated child sexual abuse material – not merely remove it after the abuse has occurred – and strengthen content moderation through investment in detection technologies, so such material can be removed immediately, not days after a victim or their representative reports it.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.”
About UNICEF
UNICEF is the world’s leading humanitarian organization focused on children. We work in the most challenging areas to provide protection, healthcare and immunizations, education, safe water and sanitation and nutrition. As part of the United Nations, our unrivaled reach spans more than 190 countries and territories, ensuring we are on the ground to help the most disadvantaged children. While part of the UN system, UNICEF relies entirely on voluntary donations to finance our life-saving work. Please visit unicef.ca and follow us on Twitter, Facebook and Instagram.