As India embraces AI to reform its overburdened judicial system, Premium Justice examines the hidden trade-offs of optimising for speed and efficiency in this core public function. Set in a near-future Tamil Nadu, the story follows Judge Nathan as he encounters the consequences of continually layering technology on an overburdened public system. Through courtroom glitches, automated verdicts, and silent shifts in decision-making power from human judges to unaccountable systems, the story reflects on how technology can quietly exacerbate existing inequities. At its core, Premium Justice is a provocation: What happens when access to justice becomes a premium service, and human decision-making is quietly coded out?


For millions of Indians, the judiciary is a beacon of justice. At the same time, the judicial system is overburdened, and case processing has slowed to the point where, for hopeful litigants, seeking justice becomes a painful, long-drawn-out ordeal — the ‘process being the punishment’. At the end of 2024, there were 50 million pending cases in India, a 20% increase over the figure in 2020. This pendency has serious repercussions: around 76% of India’s prison population are undertrials — people awaiting the completion of investigation or trial.

Over the years, attempts have been made to restructure and improve judicial processes, with some success. In recent years, Indian courts have turned to AI-based tools to make the judiciary more efficient and help it deal with case backlogs: SUPACE (Supreme Court Portal for Assistance in Court Efficiency) provides judges with decision-support information, while SUVAS (Supreme Court Vidhik Anuvaad Software) translates judgments into various regional languages. Similarly, AI Saransh is a tool developed to give the courts a précis of pleadings.

There is little information in the public domain about how well these tools actually work, partly owing to how nascent they are. Despite that, there is considerable excitement about how AI could improve the judiciary: the government insists that the future of law and justice in India will be shaped by AI-powered legal research, blockchain-secured case records, judicial transparency through AI analytics, and enhanced cybersecurity in law enforcement. Much of this excitement rests on speculation about what AI could do for law and justice, without engaging in nuanced discussion of how these tools would operate in the current context of overburdened and delayed courts, or comprehensively considering the risks and potential harms of deploying them among populations that are not sufficiently digitally savvy.

Premium Justice is a story that explores the implications of unchecked adoption of AI technologies in the judicial system, including the perpetuation of existing inequities and the concentration of power in the hands of techno-legal gatekeepers.

Set in Tamil Nadu in 2035, the story follows Nathan, a techno-optimist judge who sees AI’s potential to make the judicial system more efficient and more accessible to the public. Through a series of unfortunate events, however, he soon realises that AI tools used irresponsibly and without forethought can perpetuate injustice on a large scale. Nathan begins to question the growing dominance of AI-based tools in delivering justice, and his own role as an actor within this system. It’s a fear echoed by real-life judges in India: Justice BR Gavai, the Chief Justice of India, has warned against overreliance on AI and called for caution when integrating it into justice systems.

The fear is warranted: ‘shadow use’ of chatbots is already prevalent globally, with court officials using AI tools in the judiciary with little transparency. Courts in the UK and US have recently sanctioned lawyers for relying on fake cases and precedents generated with the help of AI tools like ChatGPT. This reality forms the basis of Judge Nathan’s crusade for more responsible AI use in the judiciary: after lawyers in the Thalugur District Courts cite fake cases and fabricated facts, his superior discourages him from declaring a mistrial, since retrials slow down case disposal, a key metric for a judge’s promotion. Incorporating AI into imperfect systems can push people to favour speed over accuracy.

Convinced that things would work better in an environment where people were not subservient to the technology, Nathan signs up to become a ‘Soft Judge’, part of an initiative that pairs human judges with AI bots to oversee trials and reach verdicts more quickly. Nathan’s initial trust in the system highlights the deference of judicial officers like him and his superiors towards technology: a faith that technology is inherently neutral and, therefore, can only serve a just cause.

On his first day on the job as a ‘Soft Judge’, Nathan sees how AI-based legal support widens the very gaps it promises to bridge. The government’s free Nyaya Basic AI system barely supports litigants, while those who can pay for Nyaya Premium can use its sophisticated features to creatively blur the line between fact and fiction. Court delays, moreover, stem from more than a judge’s disposal rate: case disposal depends on factors such as court culture, parties deliberately slowing proceedings down, delays in adjacent sectors such as the police, and the fact that attending court is not equally accessible across socio-economic classes of litigants. The AI-based Ombudsman fails to catch and account for these distortions.

Eventually, a series of absurd revelations about the system’s limitations motivates Judge Nathan to join the ‘Code Scrubbers’, a new class of former-legal-staff-turned-tech-workers who identify mistakes in AI-generated outputs and train the systems to perform better. These behind-the-scenes actors now shape the evidence, correct errors, and determine what information reaches the court’s attention. In doing so, they wield a silent but profound influence over legal outcomes, leaving litigants and even judges powerless to intervene when AI systems go astray. Joining the Code Scrubbers is Nathan’s ironic last-ditch attempt to regain his agency in the evolving judicial system — by working to improve the very tools that eroded it in the first place.

The AI tools in the story — whether developed by the judiciary for its own use, provided by the government as legal aid, or commercially available and shadow-used by court officials — were meant to increase public access to justice. However, programming the tools to focus narrowly on ‘productivity’ left them unable to understand context, prone to hallucination, and liable to introduce errors at various stages of trials. In Premium Justice, these solutions end up reproducing some of the same inequities present today: wealthy litigants could secure better representation even when their cases were weak, for example. Beyond changing how judicial processes unfold, the introduction of technology disrupted the everyday roles of all key actors — judges, lawyers, and the administrative staff who keep courts running. Moreover, as the story portrays, when Code Scrubbers begin to take on or steer quasi-judicial functions, they risk quietly shifting core decision-making powers to people and systems not qualified to exercise them.

Premium Justice ultimately highlights a troubling shift: it shows how AI tools built on reductive assumptions can, if left unchecked, widen existing disparities for those seeking justice. Small, seemingly administrative tasks like summarising arguments or evidence can distort outcomes when they bypass human judgment or fail to capture the complexity of real-world cases. Decision-making can drift away from accountable institutions into the hands of unaccountable actors and flawed algorithms. The story underscores why AI systems in public functions such as the judiciary must be carefully designed, tested, and governed, as they can silently reshape decision-making power. A truly responsible approach requires rethinking not just what AI can do, but what justice demands.

Premium Justice is a reminder that:

  1. Technology cannot fix systemic inequities by itself, and may even worsen them: As seen with AI services stratified by a litigant’s ability to pay, AI tools built upon existing structural divides risk amplifying those very inequities. Poor litigants remained at a disadvantage despite access to subsidised AI tools, while wealthier parties could use superior tools to distort the truth, with no effective check in place. Without deliberate correction, AI may simply entrench existing patterns of exclusion and injustice.
  2. Even routine tasks performed by AI can have profound impacts: Functions like summarising arguments or sorting evidence seem minor when handed over to AI. There may even be a false comfort that these are merely assistive technologies augmenting a judge’s decision-making role. However, these administrative tasks can reshape case outcomes by filtering or misrepresenting the critical information on which final decisions rest.
  3. Features of AI tools must be built collaboratively with the relevant end-users: The development of AI for the judiciary must involve judges, courtroom staff, litigants, and civil society groups, not just engineers and vendors. As the story reveals, a narrow system design focused solely on case disposal rates ignores the fundamental tensions and incentives that shape a judge’s behaviour. Even as Judge Nathan wanted to maintain an excellent disposal rate, he felt compelled to review the litigants’ original submissions, not just AI-generated summaries, to ensure due process and dispense justice. Not being allowed to do so leaves him disillusioned with the system, and his belief in the technology wavers.
  4. Evaluation, oversight, and human autonomy are non-negotiable: Judge Nathan’s inability to review original documents or challenge AI summaries shows how deference to technology can minimise human involvement in judicial processes and, in the worst case, strip judges of their core judicial function. It highlights the danger of replacing human judgment with automated decisions built with minimal understanding of the nuances of judicial decision-making.

    Any AI-based administrative tool procured by the judiciary should be piloted and validated in real-world settings, with built-in mechanisms for continuous evaluation and regular updates, especially when used at scale. Checks and balances exercised by judges should also be maintained in judicial processes. As Premium Justice illustrates, stripping humans of autonomy in the name of efficiency can undermine the legitimacy and fairness of justice itself.