Drawing the Line: Transparent and Accountable AI for the Judiciary
AI can transform courts if quality, capacity, and equitable deployment keep pace with ambition
A background note can be accessed here: Indian judiciary’s AI pilots across courts
Paakhhi Garg: Director, World Cyber Security Forum (WCSF)
SDG 8: Decent Work and Economic Growth
Ministry of Law and Justice | Ministry of Electronics and Information Technology (MeitY)
Current deployments keep judges as the sole decision-makers, with AI limited to assistive tasks such as summarisation, research and translation. What principles should guide which judicial functions are appropriate for AI assistance, and which must remain strictly human to preserve due process, fairness and the integrity of judicial reasoning?
The strategic integration of AI in courts requires a clear demarcation between tasks that improve judicial efficiency and functions that safeguard constitutional integrity. AI’s role must remain strictly augmentative – meaning it supports judges rather than replacing them. It should be limited to computational and procedural tasks that accelerate workflow: advanced legal research (Legal Research Assistance and Analysis, LegRAA), intelligent document summarisation, case-flow management (Digital Courts Version 2.1), and accurate translation/transcription tools such as PANINI and Automatic Speech Recognition – Speech to Text in Indic languages (ASR-SHRUTI). Such delegation is essential for reducing case backlogs and enabling better use of judicial time.
A strict policy boundary must be drawn when a task moves from information processing to ethical decision-making. This includes the determination of legal liability, the assessment of witness credibility, the exercise of judicial authority to decide punishment within legal limits, and the interpretation of facts against nuanced legal principles. These core judicial functions demand human empathy, ethical judgment, and contextual understanding, which no algorithm can replicate.
Policy must prohibit the use of AI as a system whose recommendations directly determine case outcomes. AI-generated outputs must remain advisory inputs that judges independently review and validate, thereby maintaining the judge as the sole authority accountable to the law and the public for the final verdict.
AI use in courts raises risks of bias, hallucinations, misinterpretation of pleadings, and opaque decision pathways. What transparency, audit and oversight mechanisms – such as documented versioning, human-in-the-loop checks, open or reviewable models, error logs, and appeals safeguards – are essential to maintain trust in judicial outcomes?
To sustain public trust in AI-enabled judicial processes, establishing a robust framework for accountability and transparency is paramount. This requires detailed, mandatory audit trails – automated logs that capture every interaction with an AI tool. These audit trails must record the specific model version used, the exact segment of input data provided, and the algorithmic confidence score – a numerical estimate of how sure the AI is about its answer – associated with the output. This transparent record is crucial for facilitating the appellate process and for addressing concerns regarding unexplained AI reasoning or algorithmic bias, defined as systematic errors in AI that disadvantage certain groups.
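As a purely illustrative sketch of what such an audit-trail entry could contain (the AuditRecord structure, field names and sample values below are hypothetical, not drawn from any deployed court system), consider:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    """Hypothetical audit-trail entry for one interaction with an AI court tool."""
    case_id: str              # court-assigned case identifier
    tool_name: str            # e.g. a summarisation or translation tool
    model_version: str        # specific model version used for this output
    input_segment: str        # exact segment of input data provided to the tool
    output_text: str          # the AI-generated output shown to the judge
    confidence_score: float   # numerical estimate of the model's certainty (0.0 to 1.0)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        # Serialise as an append-only JSON line for later audit and appellate review
        return json.dumps(self.__dict__, ensure_ascii=False)

# Example: recording a single summarisation call
record = AuditRecord(
    case_id="2025-CRL-0421",
    tool_name="document-summariser",
    model_version="v2.3.1",
    input_segment="Paragraphs 12-18 of the charge sheet",
    output_text="The prosecution alleges ...",
    confidence_score=0.87,
)
print(record.to_log_line())
```

The design point is that every field the policy names – model version, input segment, confidence score – is captured at the moment of use, in an append-only form that an appellate bench can later inspect.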
Additionally, a “Human-in-the-Loop” standard – a safeguard ensuring that humans make the final decision – must be enforced, requiring judges to provide a documented rationale for accepting or rejecting AI-generated suggestions. The newly formed Sub-Committee must conduct continuous, rigorous Performance Audits to proactively identify bias, detect “hallucinations” – instances where AI produces fabricated information – and ensure the models adhere to constitutional and ethical standards. Policy should mandate that the methodology and limitations of any deployed AI are openly communicated to all stakeholders, including judges, lawyers and litigants.
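A minimal sketch of how such a human-in-the-loop check might be recorded is shown below; the function and field names are assumptions made for illustration only, not features of any existing court system.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    MODIFIED = "modified"

@dataclass
class JudicialReview:
    """Hypothetical record of a judge's documented review of an AI suggestion."""
    audit_record_id: str    # links back to the audit-trail entry for the AI output
    judge_id: str
    decision: Decision
    rationale: str          # documented reason for accepting, rejecting or modifying

def record_review(audit_record_id: str, judge_id: str,
                  decision: Decision, rationale: str) -> JudicialReview:
    # Enforce the policy requirement: no review is logged without a written rationale
    if not rationale.strip():
        raise ValueError("A documented rationale is mandatory for every AI suggestion reviewed.")
    return JudicialReview(audit_record_id, judge_id, decision, rationale)

# Example: a judge rejects an AI-generated summary and records why
review = record_review(
    audit_record_id="2025-CRL-0421/summary/001",
    judge_id="J-1042",
    decision=Decision.REJECTED,
    rationale="Summary omits the defence's alibi evidence discussed in paragraph 16.",
)
print(review)
```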
AI tools could expand access through real-time translation, transcription and improved case-flow management, yet failures in regional-language models or uneven infrastructure could deepen inequity. What institutional safeguards are required to ensure that AI in courts strengthens, rather than undermines, accessible and equitable justice?
The real measure of India’s AI-for-justice initiative will be whether it advances inclusive digital justice, ensuring technology strengthens access rather than widening inequalities. A critical institutional safeguard is a rigorous multi-lingual quality-assurance protocol – a systematic process for testing accuracy and reliability – for tools like PANINI. Such a protocol must ensure high accuracy across India’s diverse linguistic landscape so that translation failures do not prejudice a case.
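To make the idea of a multi-lingual quality-assurance protocol concrete, the sketch below scores a translation tool against reference translations for each language and flags any language that falls below a deployment threshold. The translator interface, the similarity measure, the toy test data and the 0.9 threshold are all illustrative assumptions; a real protocol would pair standard machine-translation metrics with expert human review.

```python
from typing import Callable, Dict, List, Tuple

def character_overlap_score(candidate: str, reference: str) -> float:
    # Crude positional-overlap proxy for similarity, used only for illustration
    if not reference:
        return 0.0
    matches = sum(1 for a, b in zip(candidate, reference) if a == b)
    return matches / max(len(candidate), len(reference))

def run_language_qa(
    translate: Callable[[str, str], str],          # (source_text, target_language) -> translation
    test_sets: Dict[str, List[Tuple[str, str]]],   # language -> list of (source, reference) pairs
    threshold: float = 0.9,
) -> Dict[str, bool]:
    """Return, per language, whether average accuracy meets the deployment threshold."""
    results = {}
    for language, pairs in test_sets.items():
        scores = [character_overlap_score(translate(src, language), ref) for src, ref in pairs]
        results[language] = (sum(scores) / len(scores)) >= threshold
    return results

# Example with a stub translator (identity function) and a toy test set
stub = lambda text, lang: text
print(run_language_qa(stub, {"hi": [("नमस्ते", "नमस्ते")], "ta": [("வணக்கம்", "வணக்கம்")]}))
```

Languages that fail the check would be held back from deployment until accuracy improves, rather than being rolled out with known weaknesses.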
Policy must mandate that the deployment strategy prioritises underserved and regional courts, directing the ₹53.57 crore investment toward alleviating infrastructural and case-load disparities – differences in the volume of pending cases – outside of metropolitan hubs. This must be coupled with extensive capacity-building and technical-training programs for judicial officers, court staff, and the legal fraternity across all jurisdictions.
These measures are essential to ensure that disparities in technical capacity or language support do not translate into unequal access to justice across courts.
Author:
Paakhhi Garg is Director at the World Cyber Security Forum (WCSF).
Views are personal.


