SDG 8: Decent Work and Economic Growth | SDG 9: Industry, Innovation and Infrastructure | SDG 17: Partnerships for the Goals
Ministry of Electronics and Information Technology (MeitY) | NITI Aayog
The OECD working paper Exploring possible AI trajectories through 2030, released in February 2026, examines potential scenarios for Artificial Intelligence (AI) development by 2030. It draws on expert judgment and trend extrapolation to analyze how AI capabilities align with human cognitive tasks across different potential paths.
AI Progress Trends
AI systems have demonstrated rapid advancements in benchmarks for mathematics, coding, reasoning, and language understanding. However, systems still lag behind human performance in areas like creativity, continual learning, metacognition, and social interaction. Current benchmarks also face limitations, including data contamination and a lack of real-world relevance.
Key Uncertainties
Future AI progress is dependent on the scaling of compute and data inputs, as well as algorithmic efficiency gains. Critical challenges include potential power constraints, data scarcity, and the need for technical breakthroughs in memory, learning, and causal reasoning.
Four Core Scenarios for AI Progress by 2030
The report outlines four plausible trajectories for AI development:
Progress Stalls: Capabilities plateau at near-2025 levels due to technical or resource limits.
Progress Slows: Incremental gains deliver continued but slower progress as current approaches mature.
Progress Continues: Rapid progress persists, driven by ongoing investment and algorithmic innovations.
Progress Accelerates: AI achieves human-level or superhuman capabilities across most dimensions.
The OECD paper emphasizes the high uncertainty surrounding AI progress due to rapid innovation and unknown factors. It calls for governments to consider all plausible scenarios when developing policies for AI governance and adoption.
What is “Machine Unlearning” in the context of emerging AI trajectories? Machine unlearning is an emerging field of AI research focused on developing methods to selectively remove specific data—such as personally identifiable information—from a trained AI model without retraining the system from scratch. The goal is to purge the influence of that data from the model’s learned parameters, allowing individuals to retain agency over their digital footprints. As AI becomes integrated into public services, machine unlearning is considered foundational for fulfilling “right to be forgotten” mandates in a high-automation society.
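The idea can be illustrated with a minimal sketch of one published approach, SISA (Sharded, Isolated, Sliced, Aggregated) training: the training data is split into shards, each shard trains an independent sub-model, and forgetting a record only requires retraining the one shard that contained it. This is an illustrative toy (the "sub-model" here is just a shard mean), not the OECD paper's method; all function names are invented for the example.

```python
# Toy SISA-style unlearning sketch. Each shard trains an isolated
# sub-model; unlearning a record retrains only the affected shard,
# never the full ensemble.
from statistics import mean

def train_shard(records):
    """Toy sub-model: a shard's prediction is the mean of its values."""
    return mean(records) if records else 0.0

def train_ensemble(data, n_shards):
    """Split data round-robin into shards and train one model per shard."""
    shards = [data[i::n_shards] for i in range(n_shards)]
    models = [train_shard(s) for s in shards]
    return shards, models

def unlearn(shards, models, record):
    """Delete `record` and retrain only the shard that held it."""
    for i, shard in enumerate(shards):
        if record in shard:
            shard.remove(record)
            models[i] = train_shard(shard)  # cost: one shard, not all
            return True
    return False  # record was never in the training set

def predict(models):
    """Aggregate the sub-models (simple average here)."""
    return mean(models)

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # 100.0 is the record to forget
shards, models = train_ensemble(data, n_shards=2)
before = predict(models)
unlearn(shards, models, 100.0)
after = predict(models)  # outlier's influence removed cheaply
```

The design point is the privacy-by-architecture trade-off: because each record influences exactly one isolated sub-model, the cost of honoring a deletion request is bounded by one shard's retraining time rather than the full training run.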
Policy Relevance
The OECD trajectories provide a strategic framework for India’s labor market planning and sovereign technology positioning. As India scales its IndiaAI Mission, these trajectories highlight the urgent need for flexible regulatory frameworks that can adapt to shifts in autonomous cognitive capabilities.
Labor Market Resilience: India can proactively redesign vocational training to focus on AI-augmented roles, reducing displacement as AI approaches parity with humans on a growing share of workplace tasks by 2030.
Sovereign Data Protection: Development of Machine Unlearning protocols provides a technical path to enforce privacy rules within the Digital Personal Data Protection (DPDP) Act.
DPI Integrity: Ensuring pretrained models are free from bias is critical for the fair delivery of AI-driven public services in healthcare and education.
Global Governance Leadership: India can lead the Global Partnership on AI (GPAI) in establishing international standards for “safe-fail” AI trajectories.
Follow the full report here: Exploring possible AI trajectories through 2030

