Orders based on AI-generated content will be construed as misconduct: SC

The Supreme Court’s Definitive Stance: AI-Generated Judgments as Professional Misconduct

The Indian judicial landscape is witnessing a watershed moment as the Supreme Court of India confronts the encroaching influence of Artificial Intelligence (AI) within the hallowed halls of justice. In a move that underscores the sanctity of human intelligence in the adjudication process, a bench of Justices PS Narasimha and Alok Aradhe has articulated a stern warning: any judicial order or judgment found to be primarily the product of AI-generated content, rather than a judge's independent application of mind, will be construed as professional misconduct. This pronouncement marks a critical juncture in Indian jurisprudence, balancing inevitable technological evolution with the non-negotiable requirements of judicial accountability and ethical integrity.

As a Senior Advocate with decades of practice, I view this development not merely as resistance to technology, but as a necessary safeguard for the rule of law. The court has recognized that while AI can be a formidable tool for research and administrative efficiency, it cannot, and must not, replace the discretionary power and moral weight of a human judge. The decision to issue notices to the Attorney General for India, R Venkataramani, the Solicitor General, Tushar Mehta, and the Bar Council of India (BCI) indicates that the apex court intends to lay down a comprehensive framework that will govern the use of AI in the legal profession for years to come.

The Gravity of “Misconduct” in a Judicial Context

To understand the weight of the Supreme Court’s warning, one must understand what “misconduct” implies in the legal and judicial hierarchy. Misconduct is not merely an error of judgment; it is a breach of the fundamental duties expected of a person holding a public office or a professional license. When the Supreme Court suggests that using AI to generate judgments constitutes misconduct, it is pointing toward a “failure to perform judicial duties.”

The core of a judgment is the “application of mind.” A judge is required to weigh the evidence, interpret the statutes in the context of unique facts, and provide a reasoned decision that accounts for the nuances of justice, equity, and good conscience. AI, by its very nature, operates on probability and pattern recognition. It does not “understand” justice; it predicts the next likely word in a sequence based on its training data. By outsourcing the drafting of an order to an algorithm, a judge effectively abdicates their constitutional responsibility. This abdication is what elevates the act from a technical lapse to an ethical violation of the highest order.

The Problem of Algorithmic Hallucinations

One of the primary concerns driving the Supreme Court’s scrutiny is the phenomenon known as “hallucination” in Large Language Models (LLMs). There have been documented instances globally where AI has cited non-existent case laws, fabricated precedents, and misinterpreted statutes with a high degree of confidence. In the Indian context, where the doctrine of stare decisis (precedent) is a cornerstone of the legal system, the introduction of a single fabricated precedent into a lower court judgment could contaminate the entire legal stream. If a judge fails to verify every citation produced by an AI, they are essentially introducing falsehoods into the official record, which is a direct affront to the integrity of the court.

Involving the Guardians of the Law: AG, SG, and the BCI

The Supreme Court’s decision to involve the Attorney General, the Solicitor General, and the Bar Council of India is a strategic move to ensure a multi-dimensional analysis of the problem. The Attorney General represents the state’s legal interests and acts as the “conscience keeper” of the legal system, while the Solicitor General manages the government’s litigation. Their input will be vital in determining how AI is currently being used within government legal departments and the extent to which it should be regulated.

The Bar Council of India’s involvement is perhaps even more critical for the practicing advocate. If a judge’s reliance on AI is misconduct, what about an advocate’s reliance on it for drafting pleadings? The BCI will need to update the Standards of Professional Conduct and Etiquette to explicitly address the use of AI. We are looking at a future where an advocate might be required to submit a certificate stating that their citations and arguments have been verified by human intelligence, ensuring that the “human element” remains at the forefront of the adversarial process.

The Constitutional Mandate and Article 142

The Supreme Court’s intervention can also be seen through the lens of its extraordinary powers under Article 142 of the Constitution, which allows it to pass any order necessary for doing “complete justice.” If the quality of justice is diluted by opaque algorithms, “complete justice” becomes an impossibility. The bench is likely exploring whether the use of AI infringes upon the right to a fair trial, as a litigant has the right to have their case heard and decided by a human mind that is capable of empathy, discretion, and contextual understanding—qualities that AI lacks.

The Risk to the Lower Judiciary and the Efficiency Trap

There is a significant concern regarding the over-burdened lower judiciary in India. With millions of cases pending, the temptation for a trial court judge or a magistrate to use AI tools for drafting routine orders is immense. While the intention might be to reduce pendency, the result could be a mechanical, heartless form of justice. The Supreme Court’s warning serves as a preemptive strike against this “efficiency trap.”

The integrity of the “reasoned order” is what distinguishes a democratic legal system from an arbitrary one. A reasoned order allows a higher court to understand why a particular conclusion was reached. If that reasoning is generated by an AI, the chain of accountability is broken. Who is responsible if the AI’s logic is biased? Who is responsible if the AI’s training data was skewed? By labeling such acts as misconduct, the Supreme Court is ensuring that judges remain the masters of their own pens and the architects of their own logic.

Addressing Bias and Transparency in Legal AI

Artificial Intelligence is only as good as the data it is trained on. In a society as diverse as India, the risk of “encoded bias” is high. AI models trained primarily on Western legal datasets or even biased historical Indian data could inadvertently perpetuate prejudices related to caste, gender, or socioeconomic status. A judge might not even realize that the “logical” conclusion suggested by an AI tool is rooted in a systemic bias. The Supreme Court’s detailed examination will likely delve into the “black box” nature of these algorithms, demanding transparency that AI companies are often unwilling to provide.

Drawing the Line: AI as a Tool, Not a Replacement

It is important to clarify that the Supreme Court is not advocating for a total ban on technology. In fact, the Indian judiciary has been at the forefront of the e-Courts initiative. AI can be used effectively for several administrative and research tasks, such as:

  • Automated translation of judgments into regional languages (a project already underway).
  • Case law research and indexing.
  • Summarizing voluminous case files to help judges identify core issues.
  • Administrative scheduling and courtroom management.

The distinction lies between “adjudicative functions” and “administrative assistance.” Adjudication—the act of deciding the rights and liabilities of parties—is a sovereign function that cannot be delegated to a machine. The Supreme Court’s impending detailed examination will likely create a “permissible use” policy, outlining exactly where the assistance ends and where the misconduct begins.

Global Precedents and the Indian Context

India is not alone in this struggle. In the United States, the case of Mata v. Avianca became a global warning sign when lawyers were sanctioned for submitting a brief full of fake citations generated by ChatGPT. Similarly, the UK's Judicial Office has issued guidance to judges acknowledging that while AI can help summarize information, it is not a reliable tool for legal research or analysis whose output cannot be independently verified. The Indian Supreme Court's move to categorize such reliance as "misconduct" is, however, among the most stringent positions taken by an apex court anywhere, reflecting the high value placed on judicial ethics in our tradition.

The Impact on Legal Education and Training

This development will necessitate a radical shift in how we train our future lawyers and judges. Law schools must move away from rote memorization and focus on “critical thinking” and “ethical AI usage.” If the Supreme Court labels AI-dependency as misconduct, then “AI Literacy” must become a mandatory part of continuing legal education for judges and practitioners alike. We must learn to audit the machine, to question its outputs, and to maintain the “human-in-the-loop” at every stage of the litigation lifecycle.

The Road Ahead: Defining the Framework

As the Attorney General, Solicitor General, and the BCI prepare their responses, the legal fraternity expects a detailed set of guidelines. These guidelines should ideally cover:

1. Mandatory Disclosure: If any part of a submission or a draft has been assisted by AI, it must be disclosed to the court and the opposing party.

2. Verification Responsibility: The ultimate responsibility for the accuracy of every fact and citation remains with the signing authority (the advocate or the judge).

3. Prohibition on Adjudicative AI: A clear ban on using AI to determine the outcome of a case or to draft the final reasoning of a judgment.

4. Data Privacy: Ensuring that sensitive case data is not fed into public LLMs, which could violate the privacy of litigants and the confidentiality of judicial deliberations.

Conclusion: Protecting the Soul of the Judiciary

The Supreme Court of India has once again proven to be the sentinel on the qui vive. By equating AI-generated judgments with misconduct, the bench of Justices PS Narasimha and Alok Aradhe is protecting the “soul” of the judiciary. Justice is not a mathematical equation; it is a human endeavor that requires compassion, context, and a deep understanding of the human condition—traits that no algorithm, no matter how advanced, can possess.

As we move into an era of unprecedented technological change, the legal profession must adapt without losing its moral compass. The Supreme Court’s notice is a timely reminder that while the tools of the trade may change, the ethical foundations of the law remain immutable. We must embrace technology as a servant of justice, but never allow it to become the master of the bench. The forthcoming detailed examination by the Supreme Court will undoubtedly serve as a lighthouse, guiding the Indian legal system through the foggy intersection of law and technology, ensuring that the light of justice is never dimmed by the shadows of an algorithm.