Testing AI for a Billion: Can India’s New AI Framework Match Its Diversity?
The framework will push AISI’s institutional limits, demanding scalable partnerships, realistic scoping and durable capacity to evaluate AI systems in real-world settings
A background note can be accessed here: India’s AI Governance Framework to Balance Innovation with Safety & Accountability
Sumeysh Srivastava: Partner, The Quantum Hub
SDG 9: Industry, Innovation and Infrastructure
Institutions: Ministry of Electronics and Information Technology
AI governance is often framed as a response to risks like algorithmic bias, misinformation, and deepfake misuse. Are these challenges realistically solvable, or only manageable within limits? What should India’s AI governance framework ultimately aspire to achieve: prevention, accountability, or adaptive risk management?
AI harms like bias, misinformation, and deepfakes are not fully "solvable" in a binary sense, but their impact can be reduced when prevention, accountability, and adaptive risk management work together. Prevention lowers baseline risk through better data practices, responsible platform design, and watermarking. Accountability, through graded liability, transparency reporting, and grievance channels, creates incentives to correct harms. Adaptive tools, such as incident databases, sandboxes, and periodic review, help policy evolve with evidence.
In this context, the issue is not choosing one approach but integrating all three within the phased ethos of the AI Governance Guidelines, 2025. The Guidelines operationalise this by mapping existing laws (IT Act, DPDP Act, BNS) to AI-related harms and identifying where targeted amendments may be needed. They embed graded liability tied to function and due diligence, and build feedback loops through incident reporting and sandboxes.
This balanced ambition (reducing harms, ensuring proportionate responsibility, and allowing rules to mature as capacity grows) fits India's developmental context. Success depends on building institutional capacity now through the AI Governance Group and the AI Safety Institute, establishing clear enforcement mechanisms, and maintaining the voluntary-to-mandatory compliance pathways.
The proposed AI Safety Institute (AISI) aims to develop standards suited to India’s socio-cultural diversity. Can the AISI realistically test AI models for biases across India’s multiple languages, economic conditions, and regional contexts? What aspects of this mandate are likely to test its institutional and technical capacity the most?
AISI can play a meaningful role, but only if it is adequately resourced, organised for scale, and working closely with researchers and civil society. Testing AI models for bias across India's 22 scheduled languages, socio-economic strata, caste dynamics, and regional contexts involves a level of combinatorial complexity that few regulators worldwide have attempted.
The biggest capacity tests will be building and maintaining representative evaluation datasets, especially in regional languages, ensuring sufficient resources for model evaluations, and establishing regular post-deployment monitoring.
Pragmatic solutions exist. The hub-and-spoke model can enable partnerships with academic institutions, sectoral regulators, and state governments. Public-private testbeds, regulatory sandboxes, and existing infrastructure like Bhashini and AIKosh provide building blocks. The RBI's FREE-AI framework shows that sectoral regulators can set sophisticated requirements when needed.
But these approaches need rapid, durable funding and institutional independence to work. AISI’s success will depend on scoping its mandate realistically: starting with high-impact domains like financial services or critical infrastructure rather than attempting an all-sector evaluation from day one.
Should India’s AI framework establish explicit norms for ethical deployment, given the rapid expansion of AI use and its potential harms? Or might codifying ethics too tightly constrain innovation and experimentation? How can regulation strike a balance between fostering innovation and ensuring responsible use?
India should articulate explicit ethical norms for AI, but frame these as high-level principles rather than rigid rules that risk constraining innovation. Ethics should set the direction (child safety, non-discrimination, accountability, and respect for autonomy), while governance tools translate these principles into practice.
A principles-based approach avoids freezing technical development while still signalling clear expectations. India’s Guidelines already signal this balance: the sutras provide value-level guidance without hard-coding premature rules.
Operationalisation, however, belongs to governance. Ethical principles could translate into measurable obligations such as dataset provenance documentation, explainability, grievance redress mechanisms, and safety evaluations, supported by enablers like regulatory sandboxes, certifications, and technical toolkits. India's experience with self-regulatory regimes in advertising and media shows that such mixed models work when incentives align but are insufficient against bad actors or slow-emerging harms.
Some principles deserve bright-line protections. Child safety and non-discrimination, especially where constitutional rights are implicated, warrant interventions from the outset. Others can remain flexible, adapting as capabilities and expectations evolve.
Drawing on India’s Digital Public Infrastructure approach, governance can embed safeguards through design requirements, not just procedural compliance. Over time, evidence from voluntary adoption, audits, and incident databases can justify targeted, proportionate mandates in high-risk domains.
This adaptive model allows India to preserve space for innovation while ensuring responsible deployment. Move too early and innovation is chilled; move too late and harmful practices become entrenched. The challenge is to sequence ethical clarity with evolving governance tools in a way that keeps both innovation and rights intact.
Views are personal.


