When Speed Meets Due Process: Rethinking India’s 24-Hour Takedown Regime
India’s takedown model shifts power, responsibility, and expectations, while also clarifying who answers for what
A background note can be accessed here: MeitY’s SOP for 24-Hour Takedown of Non-Consensual Intimate Imagery
Abhishek Kumar: Founding Partner, Indicc Associates | Convenor, New India Consumer Initiative
SDG 9: Industry, Innovation and Infrastructure
Ministry of Electronics and Information Technology
A 24-hour takedown mandate compresses the window for verification, appeal, and due process. How can platforms and government agencies balance the urgency of removal with procedural fairness, especially where content legality is context-dependent?
A 24-hour takedown rule is justified for the most harmful content, but applying the same speed to everything risks mistakes and weakens due process. A more workable solution is a tiered response that aligns urgency with the nature of the content. First, content that’s clearly illegal and deeply harmful – non-consensual intimate imagery, child sexual abuse material, or explicit incitement to violence – should be removed immediately. The harm is unambiguous and often irreversible, so speed is the safeguard. Second, there’s content that isn’t inherently harmful but is highly context-dependent: political speech, satire, cultural expression, or reporting. These need a bit more time and careful human review because meaning changes entirely with context. Third, for borderline or uncertain cases, platforms should use provisional measures: temporarily limit visibility or add friction while a rapid human review determines whether permanent removal is warranted.
Across all three tiers, the system only works if humans remain in the loop; algorithms can flag content quickly but cannot interpret nuance. And any fast removal regime must be backed by an equally fast and credible appeals pathway, such as the Grievance Appellate Committee (GAC), so errors can be corrected without delay.
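To make the tiering concrete, the sketch below shows one hypothetical way a platform could route flagged items into the three tiers before a human reviewer gets involved. The category labels, confidence threshold, and function names are illustrative assumptions, not terms from the IT Rules or any platform’s actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tier(Enum):
    IMMEDIATE_REMOVAL = auto()   # clearly illegal, irreversible harm (e.g. NCII, CSAM)
    CONTEXT_REVIEW = auto()      # lawful but context-dependent (satire, reporting)
    PROVISIONAL_LIMIT = auto()   # borderline: limit reach while a human decides

@dataclass
class FlaggedItem:
    content_id: str
    category: str       # label from an upstream classifier or a user report
    confidence: float   # classifier confidence, 0.0 to 1.0

# Hypothetical category sets; a real system would map these from statutory
# definitions and platform policy rather than hard-coded strings.
CLEARLY_ILLEGAL = {"ncii", "csam", "incitement_to_violence"}
CONTEXT_DEPENDENT = {"political_speech", "satire", "news_reporting"}

def triage(item: FlaggedItem) -> Tier:
    """Assign a flagged item to a response tier; tiers 2 and 3 go to human review."""
    if item.category in CLEARLY_ILLEGAL and item.confidence >= 0.9:
        return Tier.IMMEDIATE_REMOVAL
    if item.category in CONTEXT_DEPENDENT:
        return Tier.CONTEXT_REVIEW
    # Uncertain or borderline content: add friction pending rapid human review.
    return Tier.PROVISIONAL_LIMIT

if __name__ == "__main__":
    print(triage(FlaggedItem("post-123", "ncii", 0.97)))      # Tier.IMMEDIATE_REMOVAL
    print(triage(FlaggedItem("post-456", "satire", 0.82)))    # Tier.CONTEXT_REVIEW
    print(triage(FlaggedItem("post-789", "unclear", 0.40)))   # Tier.PROVISIONAL_LIMIT
```

The design point is simply that automation only decides the obvious cases; everything ambiguous is routed toward humans, with provisional measures filling the gap so the 24-hour clock does not force a premature, irreversible call.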
The 24-hour rule shifts part of the enforcement burden to private platforms. Does this transfer of power risk blurring lines of accountability between regulator and intermediary, and how should India’s institutional design evolve to preserve both efficacy and oversight?
The 24-hour rule does shift more enforcement work onto private platforms, but it doesn’t necessarily blur accountability if India treats this as a deliberate co-regulatory model rather than an abdication of state responsibility. Under the current framework, platforms handle the first layer of moderation – taking down content quickly and meeting process standards – while the state sets the rules, defines what counts as prohibited content, and provides appellate oversight. In other words, platforms carry procedural accountability, the government retains normative accountability, and bodies like the GAC provide adjudicative accountability. This split is by design: it avoids the risks of pure self-regulation, but also prevents a single government agency from becoming the bottleneck for millions of moderation decisions.
India’s regulatory landscape is already moving toward greater specialisation. The GAC now handles content-related appeals; the Data Protection Board focuses on privacy; and MeitY can concentrate on policy design rather than day-to-day adjudication. The natural next step is better coordination across these specialised institutions. A Digital Regulation Coordination Council, similar to cross-regulator forums in mature jurisdictions, could help avoid contradictory directives, enable information-sharing on new risks, and support consistent, evidence-based enforcement. This would preserve both efficacy and oversight as the system matures.
If intermediaries must act within 24 hours or risk losing safe-harbour protection, how might this reshape upstream behaviour? Will platforms rely more on algorithmic moderation, stricter onboarding, or pre-emptive filtering? What trade-offs would these shifts create for innovation, user rights, and governance?
The threat of losing safe-harbour protection gives the 24-hour rule real bite, and it’s likely to change how platforms behave long before any takedown request arrives. Faster liability exposure pushes companies toward more proactive systems, moving from “notice-and-takedown” to a “notice-and-stay-down” mindset. We are already seeing this in the NCII protocol, where platforms use hash-matching to block re-uploads, crawler tools to detect new variants, and coordinated escalation pathways. Similar pressures could push platforms toward stricter onboarding, more automated flagging, and heavier pre-emptive filtering.
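A minimal illustration of the “stay-down” idea: once content is removed, its hash is recorded so that identical re-uploads are blocked before they go live. This is a sketch under stated assumptions, not any platform’s actual implementation; the class name is invented, and the exact-match SHA-256 hashing shown here is deliberately simple, whereas NCII protocols typically use perceptual hashing and shared hash databases so that edited or re-encoded variants still match.

```python
import hashlib

class StayDownRegistry:
    """Illustrative registry of hashes for content removed after a takedown decision."""

    def __init__(self) -> None:
        self._blocked_hashes: set[str] = set()

    @staticmethod
    def _digest(content: bytes) -> str:
        # Exact-match cryptographic hash; real deployments favour perceptual hashes.
        return hashlib.sha256(content).hexdigest()

    def register_removal(self, content: bytes) -> None:
        """Record the hash of content that has already been taken down."""
        self._blocked_hashes.add(self._digest(content))

    def is_blocked(self, upload: bytes) -> bool:
        """Check a new upload against the registry before it is published."""
        return self._digest(upload) in self._blocked_hashes

registry = StayDownRegistry()
registry.register_removal(b"<bytes of removed image>")

print(registry.is_blocked(b"<bytes of removed image>"))  # True: re-upload blocked
print(registry.is_blocked(b"<different content>"))       # False: allowed through
```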
But each of these shifts brings trade-offs. More automation can help platforms meet deadlines, yet algorithms still struggle with context, especially in regional languages. The result can be over-removal – satire mistaken for abuse; reporting flagged as incitement – creating a quieter but significant chilling effect on speech. Leaning too heavily on automated tools also raises legitimacy concerns: users are far more likely to accept a decision when a human has reviewed it, explained it, and provided an avenue for appeal.
On the innovation side, clear and predictable rules aren’t necessarily obstacles. In fact, they can create stability and trust, which are essential for long-term product building. The real risk is regulatory uncertainty, not reasonable guardrails.
Author:
Abhishek Kumar is a Founding Partner at Indicc Associates and Convenor at the New India Consumer Initiative.
Views are personal.


