IT Rules Updates Strengthen Complaints Systems but Not Structural Accountability
The IT Rules Updates expand institutional recourse, but without systemic platform reform and due-process protections, both safety and expression remain vulnerable
A background note can be accessed here: IT Rules Updates 2026
Dr. Saumya Uma: Professor and Director, Centre for Women’s Rights, Jindal Global Law School
SDG 5: Gender Equality | SDG 16: Peace, Justice and Strong Institutions
Ministry of Electronics and Information Technology | Ministry of Women and Child Development
The expanded grievance and appellate mechanisms aim to improve user recourse. How effective are these institutional remedies likely to be in addressing gendered harms such as deepfakes, harassment, and non-consensual content circulation?
The National Cyber Crime Reporting Portal (NCCRP) recorded 22,188 cyber crimes against women in 2020, rising to 48,475 in 2024, an increase of roughly 118.5 percent. Against this backdrop, the 2026 Rules are significant in strengthening formal redress mechanisms, particularly where a complaint is successfully registered. Provisions requiring intermediaries to remove offensive content within two hours of receiving a complaint, and within three hours of receiving a government order, offer the possibility of prompt relief in cases involving harassment, deepfakes, morphing, doxxing, and identity theft. The establishment of a Grievance Appellate Committee with time-bound disposal of appeals seeks to reduce delays that have historically discouraged complainants.
However, institutional design alone does not guarantee accessibility. Not all women are aware that complaints may be registered through the NCCRP, or that the 1930 helpline provides assistance. Access barriers include digital literacy gaps, fear of retaliation, stigma associated with reporting, and uncertainty about evidentiary requirements in cases involving morphed or AI-generated content. Where evidentiary burdens fall heavily on victims, for example in proving manipulation or lack of consent, remedies may remain underutilised despite formal availability. Capacity-building efforts for law enforcement, including through Massive Open Online Course (MOOC)-based training platforms, must therefore incorporate gender-sensitive approaches that ensure respectful, non-judgmental treatment of complainants. Without awareness, institutional accountability, and victim-sensitive implementation, procedural reform will not translate into meaningful access to justice.
Do stricter takedown and transparency obligations meaningfully shift platform incentives in moderating misogynistic or abusive content, or are deeper algorithmic reforms required?
The strengthened takedown timelines and clearer obligations on intermediaries increase accountability, particularly in cases involving identity theft and morphed images. This is significant given the prevalence of non-consensual image circulation and targeted harassment of women online. By imposing time-bound compliance requirements, the Rules may alter platform cost calculations regarding delayed moderation and under-enforcement.
That said, the effectiveness of these measures depends not only on removal obligations but also on how platforms structure visibility and amplification. Misogynistic or abusive content often spreads rapidly due to ranking algorithms that privilege engagement, controversy, or virality. If harmful content is algorithmically amplified before it is reported, post-facto takedown may mitigate harm only partially.
Transparency requirements can improve oversight, but they do not automatically address uneven enforcement, where certain users may experience inconsistent moderation outcomes. A sustained shift in platform incentives may therefore require greater scrutiny of algorithmic design choices, recommender systems, and internal escalation protocols, alongside procedural compliance. Regulatory attention must extend beyond content removal to the systemic risk management practices embedded within platform design.
Could tighter content governance create unintended chilling effects on women’s online participation, especially in political or advocacy spaces? How should safeguards be structured to protect both safety and expression?
Efforts to enhance online safety for women must be carefully balanced with respect for privacy and the constitutional guarantee of freedom of speech and expression. While prompt intervention is essential where significant harm is caused or likely, overly broad or opaque enforcement may discourage legitimate expression, particularly in political or advocacy contexts where women already face disproportionate harassment.
Chilling effects may arise if content removal standards are applied inconsistently, or if fear of account suspension deters participation in contentious debates. Safeguards should therefore incorporate principles of proportionality, clarity in content standards, and accessible appellate remedies. The Grievance Appellate Committee provides one layer of review, but its credibility will depend on independence, transparency of reasoning, and timely resolution. Ensuring that interventions are targeted to demonstrable harm, rather than vague or overbroad categories, can help preserve expressive space while enhancing protection.
Ultimately, improving women’s safety in digital spaces requires a coordinated framework built on three interlinked elements: awareness among users regarding available remedies; accountability of perpetrators, intermediaries, platforms, and officials; and victim-sensitive institutional culture. Balancing safety and participation is not a zero-sum exercise, but a design challenge requiring careful calibration of enforcement speed, procedural fairness, and systemic platform governance.
Views are personal.


