In the Lok Sabha on March 11, 2026, the Government detailed the multi-layered legal safeguards established to prevent potential harms from Artificial Intelligence (AI), with a specific focus on protecting children and sensitive data. India's AI strategy is anchored in the vision of democratising technology while remaining focused on India-centric challenges and safety risks. Key protections are operationalised through the IT Act, 2000, the IT Rules, and the Digital Personal Data Protection (DPDP) Act, 2023, which collectively mandate that social media intermediaries remove harmful or unlawful content within strict timelines. The Government also highlighted the role of the DPDP Rules, 2025, in requiring verifiable parental consent for children's data and prohibiting behavioural tracking directed at minors. These legislative measures serve as the primary mechanism for building a human-centric and responsible AI ecosystem that prioritises the digital safety of vulnerable groups.
Key Safeguards and Legislative Measures
Intermediary Accountability (IT Act/Rules): Requiring platforms to remove unlawful content within 3 hours (and non-consensual sexual content within 2 hours) of notification.
Special Protections for Children (DPDP Act): Mandating verifiable parental or guardian consent before processing any personal data of a child, including data collected via AI-powered toys.
Prohibition of Behavioral Monitoring: The DPDP Act and Rules strictly prohibit tracking, behavioral monitoring, or targeted advertising directed at children.
Mandatory Reporting to Authorities: Obligating platforms to report offences against children under the Bharatiya Nagarik Suraksha Sanhita, 2023, and the POCSO Act, 2012.
Responsible AI Development Guidelines: Promoting risk assessment frameworks and regular monitoring of AI-related harms to guide appropriate governance responses.
Cybersecurity Awareness & Training: Executing over 4,300 awareness workshops nationwide, covering 9.63 lakh participants, including specific master trainer programmes for 1.13 lakh teachers and police personnel.
What are the "DPDP Rules, 2025" regarding Children? The DPDP Rules, 2025, prescribe the operational mechanisms for enforcing the data protections laid out in the 2023 Act. They provide the primary mechanism for obtaining "verifiable parental consent" through identity and age verification measures, including the use of virtual tokens. These rules serve as a functional prerequisite for any technology company—including AI firms—operating in India to handle children's data. By outlawing behavioural tracking and targeted ads for minors, the rules provide a robust safety net, ensuring that the commercial deployment of AI does not compromise the privacy or psychological development of young internet users.
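To make the "virtual token" idea concrete, the sketch below shows one way a consent-token flow could work in principle: an issuer that has verified a guardian's identity signs a short-lived token, and a platform checks the signature and expiry before processing a child's data. This is purely illustrative; the DPDP Rules do not prescribe this format, and the issuer secret, token layout, and function names here are all assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret held by the consent-token issuer (illustrative only;
# a real scheme would use asymmetric keys and a prescribed token format).
ISSUER_SECRET = b"demo-secret-key"

def issue_consent_token(child_id: str, guardian_id: str, ttl_seconds: int = 86400) -> str:
    """Issue a signed 'virtual token' asserting that parental consent was verified."""
    payload = {
        "child_id": child_id,
        "guardian_id": guardian_id,
        "expires_at": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_consent_token(token: str) -> bool:
    """Platform-side check: the signature is valid and the token has not expired."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # signature mismatch (tampered or wrong issuer)
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["expires_at"] > time.time()

token = issue_consent_token("child-001", "guardian-042")
print(verify_consent_token(token))        # valid, unexpired token -> True
print(verify_consent_token(token + "x"))  # tampered signature -> False
```

The point of the sketch is the division of responsibility the Rules imply: identity and age verification happen once at the issuer, while every platform can cheaply confirm the resulting token without re-collecting the guardian's identity documents.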
Policy Relevance: Ensuring a Safe AI Ecosystem
Operationalising Responsible AI: The legislative framework serves as the primary mechanism for MeitY to ensure that India's digital growth remains aligned with global safety and ethical standards.
Integrating Child Protection: Combining the POCSO Act with IT regulations provides a functional framework for law enforcement to address AI-generated harmful content involving minors.
Mitigating Emerging Tech Risks: Mandatory risk assessments are a prerequisite for AI developers to identify potential long-term harms before system deployment.
Link to National Cybersecurity: The role of CERT-In in issuing awareness posters and infographics is a foundational step in sensitizing citizens to the nuances of AI-related online safety.
Relevant Question for Policy Stakeholders: What institutional mechanisms are needed to ensure that "virtual tokens" for age verification are technically interoperable across all digital platforms without creating new privacy risks?


