by Pavithra Manivannan, Susan Thomas and Bhargavi Zaveri-Shah.
How do we distinguish a well-performing judiciary from one that performs poorly? The literature on this question has focused on two types of metrics: inputs, such as the judge-to-population ratio, judicial budgets and physical infrastructure, and outputs, such as the number of resolved cases, the time taken per case and the costs involved. An emergent literature focuses on the litigant's experience of the judiciary. This approach involves criteria that the litigant uses to evaluate their experience of the judiciary, which have been found to differ from those used by judges, legal practitioners and planners (e.g., Tyler, 1984; Rottman and Tyler, 2014; Hagan, 2018).
In India, there is growing awareness of the need for the judiciary to be more citizen-friendly (example; example; example), which calls for a better understanding of a litigant's expectations when engaging with the judiciary. In a new working paper, we propose a measurement framework that focuses on the litigant's perspective. To construct the framework, we draw upon the literature to hypothesise what a litigant takes into consideration when she decides to take a dispute for adjudication at a court. These considerations are then translated into the metrics to be used when designing an evaluative framework to compare courts with similar functions. When this framework is applied to data from the legal system, it becomes an information system that can generate quantitative expectations of the time and costs involved in the process of litigation, which can potentially guide the litigant on whether to litigate.
In designing such a measurement framework, we recognise that no single set of metrics can be applied equally to all courts. This is because different courts perform functions that vary substantially in complexity, type and process. For instance, the evidentiary burden in a criminal matter differs from that in a civil matter, and the prosecution is led by the state. The intended relief to a litigant also varies across types of matters. In a civil matter, the relief is largely limited to compensation, specific performance and/or damages from the defendant. In constitutional matters, on the other hand, the relief sought may involve directions to the state or to lower courts. While some common metrics could be useful for evaluating different types of courts, a single set of metrics may make the evaluation framework over-expansive or deficient for some of them. Therefore, in this paper, we limit the scope of our discussion and the resulting framework to courts that adjudicate contractual disputes.
Features of the proposed framework
Given the focus on contractual dispute resolution, we identify from the literature five metrics that a litigant can usefully apply to evaluate the performance of a court: independence, efficiency, effectiveness, predictability and access. Since the literature offers multiple interpretations of each metric, we present arguments for why we settle on one interpretation over another from the litigant's perspective. We then identify proxies that can be used in the Indian context to measure the performance of the chosen courts on the selected metrics. Together, these make up the proposed framework for measuring the performance of courts that adjudicate contractual disputes.
The metrics, the proxies that can be meaningfully evaluated to assess each metric, and a description of each proxy are summarised in the table below. Finally, the paper also lays out the sources of the data and the process by which the information on each of these metrics can be collected.
Table: Metrics for evaluating court performance on contractual disputes
| Sr. No. | Metric | Proxy | Description |
| --- | --- | --- | --- |
| 1. | Independence | Procedural fairness | Adherence to procedure |
| | | Distributive fairness | Fairness and impartiality in judgements |
| 2. | Efficiency | Timeliness | Duration of disposed and pending cases |
| 3. | Effectiveness | Enforceability | Ratio of sum recovered to the total sum awarded in court orders |
| 4. | Predictability | Certainty of case trajectory | Clarity on stages of the case |
| | | Hearing date certainty | Certainty on the number of hearings per case; ratio of substantive to non-substantive hearings |
| 5. | Access | Monetary costs | Costs of approaching the court to the litigant |
| | | Convenience | Ease and user-friendliness for litigants |
There are two caveats to the measurement framework that we propose. First, we assume that the litigant assigns equal weight to each of these metrics when deciding whether to take a contractual dispute to court. This means that the litigant values (say) independence as much as predictability. This is a simplification and may not hold in reality, or for every litigant. Second, we do not identify an optimal or ideal level of court performance on these metrics. For example, we do not attempt to identify an ideal duration for the disposal of a case, an optimal number of hearings, or an optimal 'level' of independence. The aim of the proposed framework is simply to provide a transparent base of metrics on court performance, built from publicly accessible data sources, that we believe matter to the litigant.
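The equal-weights assumption can be made concrete with a short sketch: if each metric is first normalised to a common 0-to-1 scale, the composite score is simply their unweighted mean. The scores below are invented for illustration; the paper deliberately stops short of defining such a composite or any optimal levels.

```python
def composite_score(metric_scores):
    """Equal-weighted mean of metrics already normalised to [0, 1]."""
    return sum(metric_scores.values()) / len(metric_scores)

# Invented, already-normalised scores for one court (0 = worst, 1 = best).
court_a = {
    "independence": 0.70,
    "efficiency": 0.40,
    "effectiveness": 0.55,
    "predictability": 0.60,
    "access": 0.50,
}

print(composite_score(court_a))  # (0.70+0.40+0.55+0.60+0.50)/5 = 0.55
```

A litigant who cares more about, say, efficiency than independence would replace the unweighted mean with a weighted one, which is precisely why the equal-weights assumption is flagged as a simplification.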
The public-domain nature of the data used in the proposed framework supports regular updates of the metrics. This, in turn, will facilitate comparisons over time of the performance of courts adjudicating contractual disputes. If these measures can be calculated consistently across different platforms, they can provide the litigant with a relative performance evaluation, allowing her to decide when, if and how to avail of the justice delivery system with greater clarity and certainty.
Conclusion
While judicial under-performance is an over-used expression in both the academic literature and the broader policy discourse on Indian courts, the absence of an evaluative framework exacerbates the ambiguity associated with it. Our literature review in this paper shows that what is measured in the context of courts largely depends on who is undertaking the measurement. By considering the specific metrics that a litigant may prioritise in her experience with the judiciary, this paper provides a foundation for rolling out regular evaluation exercises of courts adjudicating contractual disputes, and ultimately for making judicial performance a more tangible and usable concept in India.
References
Hagan MD (2018). “A Human-Centered Design Approach to Access to Justice: Generating New Prototypes and Hypotheses for Intervention to Make Courts User-Friendly.” Indiana Journal of Law and Social Equality, 6(2), 199–239.
Rottman DB, Tyler TR (2014). “Thinking about Judges and Judicial Performance: Perspective of the Public and Court Users.” Oñati Socio-legal Series.
Tyler TR (1984). “The Role of Perceived Injustice in Defendants' Evaluations of Their Courtroom Experience.” Law & Society Review, 18(1), 51–74.
Pavithra Manivannan is a senior research associate at XKDR Forum, Mumbai. Susan Thomas is a senior research fellow at XKDR Forum, Mumbai, and Research Professor of Business at Jindal Global Business School. Bhargavi Zaveri-Shah is a doctoral candidate at the National University of Singapore.