Deepfake Abuse Is Abuse: Statement by UNICEF on AI-Generated Sexualized Images of Children
SDG 5: Gender Equality | SDG 16: Peace, Justice and Strong Institutions | SDG 17: Partnerships for the Goals
Ministry of Electronics and Information Technology (MeitY) | Ministry of Women and Child Development | Ministry of Home Affairs
UNICEF has raised the alarm over the rapid rise in AI-generated sexualized images of children. In a statement released on February 4, 2026, the organization clarified that sexualized images created or manipulated using AI tools constitute Child Sexual Abuse Material (CSAM). The statement also highlights a growing trend of “nudification”, in which AI tools are used to strip or alter clothing in photographs to create fabricated nude images, directly victimizing children.
Scale of the Threat and Technological Risks
New research involving UNICEF, ECPAT, and INTERPOL has quantified the global crisis and the specific risks associated with generative AI:
Prevalence: At least 1.2 million children across 11 countries disclosed that their images had been manipulated into sexually explicit deepfakes in the past year.
Statistical Impact: In some countries, this affects 1 in 25 children, roughly one child in a typical classroom.
Social Media Integration: The risk is compounded when generative AI tools are embedded directly into social media platforms, allowing manipulated content to spread rapidly.
Impact on Law Enforcement: AI-generated CSAM normalizes the sexual exploitation of children and makes it significantly harder for investigators to identify and protect real victims.
What is “Nudification” in the context of AI-generated child abuse?
Nudification refers to the use of generative AI software designed to digitally remove or alter clothing in a photograph of a person, without the subject’s consent, to create a realistic nude or sexualized image. In the context of child protection, UNICEF warns that these tools are being used to turn everyday photos of children into fabricated sexual abuse material. Because these “deepfakes” are designed to look real, they cause genuine psychological harm to the victimized child and make it significantly harder for law enforcement to distinguish between real and synthetic exploitation.
Policy Relevance
The UNICEF statement underscores the urgent need to strengthen India’s digital safety laws and technological guardrails to protect children from synthetic abuse.
Legislative Alignment: Expanding the definition of CSAM under the POCSO Act and the Information Technology Rules to explicitly cover AI-generated content is necessary to criminalize its creation and distribution.
Safety-by-Design Mandates: MeitY can issue advisories requiring AI developers in India to implement robust guardrails and watermarking to prevent the misuse of models for “nudification”.
Proactive Content Moderation: Social media platforms operating in India should be encouraged to invest in AI-based detection technologies that block the upload of synthetic abuse material in real time, moving beyond reactive removal.
National Awareness Campaigns: The Ministry of Women and Child Development should lead initiatives to sensitize parents and children to the risks of AI-driven exploitation and to the available reporting mechanisms.
Relevant Question for Policy Stakeholders: How can the Ministry of Home Affairs collaborate with Indian AI startups to develop an indigenous ‘Deepfake Detection API’ that can be integrated into all domestic social media platforms to neutralize synthetic CSAM at the source?
Read the full statement here: Deepfake abuse is abuse | UNICEF

