From Deepfakes to Deterrence: How Can India Build a Safer Digital World for Women?
A coordinated approach to platform design, legal reform, and digital literacy is essential to protect women online
A background note can be accessed here: AI and Anonymity Fuel Surge in Digital Violence Against Women
Dr. Aastha Dang: Gender Strategy Specialist, Vertelo (Macquarie Asset Management)
SDG 9: Industry, Innovation and Infrastructure
Ministry of Electronics and Information Technology
UN Women notes that AI tools, especially deepfakes and algorithmic amplification, are worsening patterns of online harassment and abuse. What behavioural or platform-design factors enable this escalation, and how might policy or technical interventions shift these behaviours to reduce harm?
AI-driven tools such as deepfakes and algorithmic amplification escalate online harassment because platform design often overlooks gendered risks. Anonymity lowers accountability, while engagement-driven feeds prioritise sensational or hostile content, making abuse more visible and harder to contain. When AI systems learn from data shaped by existing social hierarchies, they replicate and accelerate these inequalities. Deepfakes intensify harm by giving fabricated content a sense of credibility, enabling harassment, blackmail, and reputational damage to spread rapidly. Embedding “gender by design” requires recognising how power structures shape women’s digital experiences and ensuring technology reflects the disproportionate risks they face.
Reducing harm depends on clearer regulation and better technical choices. Gender-disaggregated data and transparency on algorithmic outcomes can improve representation within AI systems, while greater inclusion of women in engineering and trust-and-safety roles strengthens product decisions. Iterative, feminist-informed algorithmic impact assessments help identify emerging risks in high-impact applications. Simpler, more responsive grievance-redressal mechanisms and sustained digital-safety awareness initiatives empower users. Platform-level friction, such as watermarking AI-generated media, limiting mass forwarding, and rate-limiting known abuse patterns, can disrupt escalation pathways. And when platforms know that deepfakes, doxxing, and targeted harassment attract clear legal consequences, incentives shift toward prevention, making digital spaces safer and more equitable.
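To make "platform-level friction" concrete, below is a minimal Python sketch of one such mechanism: a sliding-window rate limiter that caps how many times a single user can forward the same media item within an hour. The class, thresholds, and identifiers are illustrative assumptions for this article, not any platform's actual API.

```python
from collections import deque
import time

# Hypothetical friction thresholds (assumptions, not platform policy):
MAX_FORWARDS = 5        # forwards of the same item allowed per window
WINDOW_SECONDS = 3600   # one-hour sliding window

class ForwardLimiter:
    """Sliding-window rate limiter for mass forwarding of media."""

    def __init__(self, max_forwards=MAX_FORWARDS, window=WINDOW_SECONDS):
        self.max_forwards = max_forwards
        self.window = window
        # (user_id, media_id) -> timestamps of recent forwards
        self._events = {}

    def allow_forward(self, user_id: str, media_id: str) -> bool:
        """Return True if the forward may proceed, False if rate-limited."""
        now = time.monotonic()
        events = self._events.setdefault((user_id, media_id), deque())
        # Discard timestamps that have fallen out of the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_forwards:
            return False  # friction point: block, delay, or queue for review
        events.append(now)
        return True

# Usage: the sixth forward of the same item within an hour is stopped.
limiter = ForwardLimiter()
print([limiter.allow_forward("user42", "media7") for _ in range(6)])
# -> [True, True, True, True, True, False]
```

In practice a blocked forward might trigger a warning prompt or a review queue rather than a hard denial; the aim is simply to slow viral spread enough to disrupt the escalation pathways described above.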
The combination of anonymity, cross-border platforms and regulatory gaps leaves many women without clear avenues for protection or redress. What legislative or regulatory steps should India prioritise to ensure that anonymity does not translate into impunity, especially given AI’s role in scaling and personalising online violence?
Anonymity remains important for privacy, but without legal safeguards it can turn into impunity, especially as AI can personalise and scale harassment across borders. India's legal architecture has not fully caught up with harms such as deepfake pornography, identity manipulation, or algorithmically amplified targeting. The priority is not a standalone AI law but an update of existing statutes to reflect the full spectrum of digital-age abuse.
A key step is expanding gender-protection laws and workplace regulations to explicitly recognise online and technology-facilitated violations. The POSH Act, for instance, should cover online harassment, data misuse, and the preventive duties employers must undertake: awareness programmes, safer digital practices, and protective tools. Organisations also need robust gender-disaggregated data systems to surface bias and strengthen accountability for AI-enabled harms. Regulators should require platforms and employers to conduct periodic risk assessments, follow harm-reduction protocols, and publish transparent incident reports. Finally, embedding offences such as deepfake abuse, identity theft, and targeted digital harassment within anti-violence laws can create deterrence. These reforms ensure anonymity protects vulnerable users rather than enabling impunity or shielding perpetrators.
With an estimated 1.8 billion women and girls lacking legal protection from online harassment, digital violence now directly affects women’s participation, voice, and mobility online. How can India’s AI governance framework, platform-level rules, and user-education initiatives reinforce one another to safeguard women’s participation, not merely respond after violations occur?
With an estimated 1.8 billion women and girls lacking legal protection from online harassment, safeguarding participation requires a coordinated ecosystem that links national AI governance, platform rules, and community-level support. India's AI advisory guidelines signal ethical intent, but preventing harm demands operational, enforceable safeguards: clear standards for content provenance, watermarking, and reliable detection of synthetic media, alongside system-wide algorithmic fairness checks, including regular audits and risk-scoring of high-risk AI applications to catch disproportionate harms early.
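As one illustration of what such a "regular audit" could look like operationally, here is a minimal Python sketch, assuming hypothetical complaint records and a hypothetical four-fifths disparity threshold: it disaggregates grievance-resolution rates by reporter gender and flags any group served markedly worse than the best-served group.

```python
from collections import defaultdict

# Hypothetical adverse-impact cutoff (an assumption for illustration):
DISPARITY_THRESHOLD = 0.8

def resolution_rates(records):
    """Aggregate complaint-resolution rates per reporter gender."""
    totals = defaultdict(lambda: [0, 0])  # gender -> [resolved, total]
    for r in records:
        totals[r["reporter_gender"]][1] += 1
        if r["resolved"]:
            totals[r["reporter_gender"]][0] += 1
    return {g: resolved / total for g, (resolved, total) in totals.items()}

def audit(records):
    """Flag groups whose resolution rate falls below the threshold
    relative to the best-served group."""
    rates = resolution_rates(records)
    best = max(rates.values())
    return {g: rate / best < DISPARITY_THRESHOLD for g, rate in rates.items()}

# Illustrative data (invented for this demo): women's complaints are
# resolved far less often than men's.
sample = (
    [{"reporter_gender": "female", "resolved": i < 3} for i in range(10)]
    + [{"reporter_gender": "male", "resolved": i < 8} for i in range(10)]
)
print(resolution_rates(sample))  # {'female': 0.3, 'male': 0.8}
print(audit(sample))             # {'female': True, 'male': False}
```

A real audit would of course disaggregate along more dimensions, such as content type, language, and region, and feed its findings into the risk-scoring and transparent incident reporting this answer calls for.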
Protection also depends on building an offline–online safety continuum. Many women need practical training, guided support, and clear grievance channels to navigate digital spaces confidently. Civil society organisations working on violence against women can extend their expertise to technology-enabled abuse, offering awareness programmes through trusted community networks. Government infrastructure, such as Common Service Centres, can act as local access points for both information and support. Sustained digital-literacy initiatives, especially those highlighting gendered risks and reporting mechanisms, further strengthen women’s ability to engage online on their own terms. When governance, platform responsibility, and user empowerment operate in tandem, safety becomes preventive rather than reactive, and participation becomes genuinely secure rather than precarious.
Author:
Dr. Aastha Dang is a Gender Strategy Specialist at Vertelo (Macquarie Asset Management).
Views are personal.


