
Monday, November 28, 2022

The litigant perspective upon courts

by Pavithra Manivannan, Susan Thomas and Bhargavi Zaveri-Shah.

How do we distinguish a well-performing judiciary from one that performs poorly? The literature on this question has focused on two types of metrics: inputs, such as the judge-to-population ratio, judicial budgets and physical infrastructure; and outputs, such as the number of resolved cases, the time taken per case and the costs involved. An emergent literature focuses on the litigant's experience of the judiciary. This approach involves criteria that the litigant uses to evaluate their experience of the judiciary, which have been found to differ from those used by judges, legal practitioners and planners (e.g., Tyler, 1984; Rottman and Tyler, 2014; Hagan, 2018).

In India, there is a growing awareness of the need for the judiciary to be more citizen-friendly (example; example; example), which calls for a better understanding of a litigant's expectations when engaging with the judiciary. In a new working paper, we propose a measurement framework that focuses on the litigant's perspective. In order to construct the framework, we draw upon the literature to hypothesise what a litigant takes into consideration when she decides to take a dispute for adjudication at a court. These considerations are then translated into the metrics to be used when designing an evaluative framework to compare courts with similar functions. When this framework is applied to data from the legal system, it becomes an information system which can generate quantitative expectations of the time and costs involved in the process of litigation, which can potentially guide the litigant on whether to litigate.

In designing such a measurement framework, we recognise that there cannot be a single set of metrics that applies equally to all courts. This is because different courts perform functions that vary substantially in complexity, type and processes. For instance, the evidentiary burden in a criminal matter is different from that in a civil matter, and the prosecution is led by the state. Additionally, the intended relief to a litigant varies with the type of matter. In a civil matter, the relief is largely limited to compensation, specific performance and/or damages from the defendant. On the other hand, in constitutional matters, the relief sought may involve directions to the state or lower courts. While there may be some common metrics that could be useful for evaluating different types of courts, a single set of metrics may make the evaluation framework over-expansive or deficient for some types of courts. Therefore, in this paper, we limit the scope of our discussion and the resulting framework to courts that adjudicate contractual disputes.

Features of the proposed framework

Given the focus on contractual dispute resolution, we identify a list of five metrics from the literature which can be usefully applied by a litigant to evaluate the performance of a court. The metrics are independence, efficiency, effectiveness, predictability and access. Based on the multiple interpretations of each metric available in the literature, we present arguments that justify why we narrow down on one interpretation over another from a litigant's perspective. We then identify proxies that can be used in the Indian context to measure the performance of the chosen courts on the selected metrics. These make up the proposed framework to measure the performance of courts that adjudicate contractual disputes.

The metrics, the proxies that can be meaningfully evaluated to assess each metric, and a description of each proxy are summarised in the table below. Finally, in the paper, we also lay out the source of the data and the process by which the information on each of these metrics can be collected.

Table: Metrics for evaluating court performance on contractual disputes

Sr. No.  Metric          Proxy                          Description
1.       Independence    Procedural fairness            Adherence to procedure
                         Distributive fairness          Fairness and impartiality in judgements
2.       Efficiency      Timeliness                     Duration of disposed and pending cases
3.       Effectiveness   Enforceability                 Ratio of sum recovered to the total sum awarded in court orders
4.       Predictability  Certainty of case trajectory   Clarity on stages of the case
                         Hearing date certainty         Certainty on number of hearings per case; ratio of substantive to non-substantive hearings
5.       Access          Monetary costs                 Costs of approaching the court to the litigant
                         Convenience                    Ease and user-friendliness for litigants

There are two caveats to the measurement framework that we propose. First, we assume that the litigant assigns equal weights to each of these metrics in making her decision on whether to take a contractual dispute to court. This means that the litigant values (say) independence as much as predictability. This is a simplification that may not hold in reality, or for every litigant. Second, we do not identify an optimal or ideal level of performance of the court on these metrics. For example, we do not attempt to identify an ideal duration for the disposal of a case, the optimal number of hearings, or the optimal 'level' of independence. The aim of the proposed framework is simply to provide a transparent base of metrics on court performance, constructed from publicly accessible data sources, that we believe matter to the litigant.
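The equal-weights assumption can be illustrated with a minimal sketch. The metric names follow the paper, but the per-court scores, and the assumption that each score is normalised to [0, 1], are purely illustrative:

```python
# Equal-weight aggregation of the five metrics, as assumed in the framework.
# The scores below are hypothetical; each is taken to be normalised to [0, 1].
METRICS = ["independence", "efficiency", "effectiveness", "predictability", "access"]

def composite_score(scores: dict) -> float:
    """Equal-weight average of the per-metric scores."""
    return sum(scores[m] for m in METRICS) / len(METRICS)

court_a = {"independence": 0.8, "efficiency": 0.5, "effectiveness": 0.6,
           "predictability": 0.7, "access": 0.9}
print(round(composite_score(court_a), 2))  # 0.7
```

A litigant-specific weighting would replace the uniform 1/5 weights; the framework deliberately leaves this choice open.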

The public domain nature of the data used in the proposed framework supports regular updates of the metrics. This, in turn, will facilitate a comparison of the performance of courts adjudicating contractual disputes over time. If these measures can be calculated in a consistent manner across different platforms, they can provide the litigant with a relative performance evaluation that allows her to decide when, if and how to avail of the justice delivery system with greater clarity and certainty.

Conclusion

While judicial underperformance is an overused expression in both the academic literature and the broader policy discourse on Indian courts, the absence of an evaluative framework exacerbates the ambiguity associated with it. Our literature review in this paper shows that what is measured in the context of courts largely depends on who is undertaking the measurement. By considering specific metrics that a litigant may attach priority to in her experience with the judiciary, this paper provides a foundation for rolling out regular evaluation exercises of courts adjudicating contractual disputes, ultimately making judicial performance a more tangible and usable concept in India.

References

Hagan MD (2018). “A Human-Centered Design Approach to Access to Justice: Generating New Prototypes and Hypotheses for Intervention to Make Courts User-Friendly.” Indiana Journal of Law and Social Equality, 6(2), 199–239.

Rottman DB, Tyler TR (2014). "Thinking about Judges and Judicial Performance: Perspective of the Public and Court Users." Oñati Socio-Legal Series.

Tyler TR (1984). "The Role of Perceived Injustice in Defendants' Evaluations of Their Courtroom Experience." Law & Society Review, 18(1), 51–74.

Pavithra Manivannan is a senior research associate at XKDR Forum, Mumbai. Susan Thomas is a senior research fellow at XKDR Forum, Mumbai and Research Professor of Business at Jindal Global Business School. Bhargavi Zaveri-Shah is a doctoral candidate at the National University of Singapore.

Saturday, November 05, 2022

Learning by doing and public procurement in India

by Aneesha Chitgupi, Abhishek Gorsi, and Susan Thomas.

Introduction

State objectives can feature a mix of `make' (i.e. build an organisation, recruit people) vs. `buy' (i.e. contract to a private person for the same task). While private persons are, in general, more efficient than a comparable government organisation, the `buy' pathway is not a panacea when there are bottlenecks in state capacity to contract. Understanding how to achieve capacity in government contracting is one of the critical components of the overall question of enhancing state capacity in India.

How has this tradeoff evolved over time? Generally, when contracting capabilities go up, the fraction of work that is contracted out goes up. Chitgupi and Thomas (2022) find that government procurement spending has been at a steady 17 percent of total expenditure across multiple years, which suggests a lack of movement on contracting capabilities.

What about cross-sectional variation in contracting capability between spending units? Sharma and Thomas (2021) find that procurement expenditure varied significantly across ministries in 2016-2017, with a few ministries accounting for the larger share of procurement. This would, of course, largely reflect differences in the nature of their work.

In this article, we obtain insights into contracting capabilities by examining procurement under-spend across ministries and across time. We recognise that state capacity in procurement has multiple dimensions: achieving a targeted level of quality, at the minimum cost, and on time. Our question is a more basic one: does the spending happen at all? Actually achieving the budgeted procurement expenditure requires a certain level of contracting capability. We then consider the role of learning by doing in procurement.

An insight in the field of state capacity is that it is extremely sticky; changes to state capacity arise only slowly (Kelkar and Shah, 2022). State capacity emerges from organisation design (mandate, governance, organogram, processes, and the feedback loops of accountability mechanisms). Repetition of the same task, over and over, under the influence of accountability mechanisms, induces the slow process of building capability. By this reasoning, we hypothesise that state entities which procure on a sustained basis will build the organisation design and capacity to do procurement efficiently, and vice versa. A lack of sustained procurement activity will then predict procurement under-spend.

Approach

In previous work, we developed the methodology to estimate the fraction of procurement in the accounts reported in the Detailed Demand for Grants (DDG) document published by a ministry. The DDG for a given year reports two sets of numbers by item head: the amount budgeted for the year, and the amount actually spent two years previously. So, the DDG in 2016 for (say) the Ministry of Health and Family Welfare will report the budgeted amounts for 2016, as well as the amount spent in 2014. These are reported by item head, and allow us to calculate the actual amount spent, with a lag of two years.

For our current analysis, we calculate what was budgeted and what was actually spent on procurement, across ministries and across years. We use these to estimate the procurement spending gap for the ministry as:

100 x (actual procurement spend - budgeted procurement) / budgeted procurement

A negative spending gap means the ministry spent less than planned. If procurement capacity is the dominant factor driving the ability to procure, then a ministry with good procurement capacity will have a procurement spending gap close to zero. Further, if the learning-by-doing hypothesis holds, then those ministries that budget consistently for procurement will have built procurement capacity, and will show a procurement spending gap close to zero.
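The spending-gap formula above translates directly into code. This is a minimal sketch; the rupee amounts used in the example are hypothetical:

```python
def spending_gap(actual: float, budgeted: float) -> float:
    """Spending gap as a percentage of the budgeted amount.

    Negative values indicate under-spending (or over-budgeting);
    positive values indicate over-spending (or under-budgeting).
    """
    return 100.0 * (actual - budgeted) / budgeted

# A ministry that budgeted 100 (real Rs. billion) for procurement but spent only 80:
print(spending_gap(80.0, 100.0))  # -20.0
```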

Other than procurement, a ministry spends on operational expenses like salaries and pensions, and on grants-in-aid and schemes. These do not depend upon the kind of specialised contracting capacity that procurement spending requires. As a benchmark, we also calculate the total spending gap: the difference between the actual and budgeted total spending of the ministry, as a fraction of the total budgeted spending. We use the total spending gap to benchmark a ministry's overall spending capacity.

We calculate the total spending gap of the ministry as:

100 x (actual total spent - total budgeted) / total budgeted

For our analysis, we do the following steps:

  1. Estimate the amount of procurement expenditure for each ministry for each year for which we are able to obtain the DDG. We use this to identify which ministries have a consistently higher fraction of their expenditure in procurement.
  2. Estimate the procurement under-spend for each ministry across different years.
  3. Examine whether there is a link between the ministries with a higher fraction of procurement expenditure and a relatively lower procurement under-spend.
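The three steps above can be sketched over toy data. The ministry names, figures and field names here are hypothetical stand-ins for the numbers extracted from the DDGs:

```python
# Step 0 (assumed done): per-ministry, per-year DDG figures, in real Rs. billion.
ddg = {
    ("Road Transport & Highways", 2015): {"proc_budget": 400.0, "proc_actual": 410.0, "total_budget": 500.0},
    ("Coal", 2015):                      {"proc_budget": 10.0,  "proc_actual": 6.0,   "total_budget": 200.0},
}

def proc_share(rec: dict) -> float:
    # Step 1: procurement as a fraction of the ministry's total budget.
    return rec["proc_budget"] / rec["total_budget"]

def proc_gap(rec: dict) -> float:
    # Step 2: procurement spending gap (%); negative means under-spend.
    return 100.0 * (rec["proc_actual"] - rec["proc_budget"]) / rec["proc_budget"]

# Step 3: compare the procurement share against the gap, ministry by ministry.
for (ministry, year), rec in sorted(ddg.items()):
    print(f"{ministry} ({year}): share={proc_share(rec):.2f}, gap={proc_gap(rec):+.1f}%")
```

In this toy example, the high-procurement ministry has a gap near zero while the low-procurement ministry under-spends, which is the pattern the learning-by-doing hypothesis predicts.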

Data

We collect the DDGs published by the ministries of the Union Government to estimate the procurement and total spending gaps. The estimation procedure described in Sharma and Thomas (2021) requires the object-head-wise expenditure to be disclosed in the DDG. This level of information has been consistently disclosed across all ministries (other than the Railways ministry, which has a more complicated accounting structure) from 2014-2015 onward. The latest available DDGs are from 2020-2021.

There remain challenges in accessing the DDGs reliably for all the Union government ministries. In our exercise, we are able to locate the DDGs consistently from 2014-15 to 2020-21 for 10 ministries: Civil Aviation, Coal, Environment, Finance, Health and Family Welfare, Home Affairs, Housing and Urban Affairs, Rural Development, Road Transport and Highways, and Law and Justice.

For seven of these ministries, the DDGs contain ministry-level object-head-wise expenditures directly. For the other three, we summed the expenditures across department DDGs: the Ministry of Finance is the sum of Economic Affairs, Financial Services, Revenue, Expenditure, and DIPAM; the Ministry of Health & Family Welfare is the sum of Health & Family Welfare, and Health Research; and the Ministry of Home Affairs (MHA) covers Home Affairs, Cabinet, Police, and 8 departments for the UTs. The expenditure of Home Affairs, Cabinet and Police makes up 74 percent of the MHA expenditure. In our analysis, we report the procurement expenditure for the MHA as the sum of these three departments, and do not include the UT departments.

All rupee values used in these calculations are adjusted for inflation using the CPI. The spending gaps are calculated for each year, from 2014-15 to 2018-19.
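The inflation adjustment is a standard CPI deflation, sketched below. The CPI values here are hypothetical placeholders, not the actual Indian series:

```python
# Hypothetical CPI series, indexed to 100 in the base year.
CPI = {2015: 100.0, 2016: 105.0, 2017: 110.0}

def to_real(nominal: float, year: int, base: int = 2015) -> float:
    """Deflate a nominal rupee amount to base-year prices using the CPI."""
    return nominal * CPI[base] / CPI[year]

print(to_real(110.0, 2017))  # 100.0, i.e. Rs. 110 in 2017 is Rs. 100 in 2015 prices
```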

Is there a link between consistent procurement and lower procurement under-spend?

We organise our findings into three figures, each in the form of a heat map. In these graphs, each row is a ministry, and the values going from left to right are those for 2014-2015 to 2018-2019. The deeper the colours, the higher the values. In all three figures, the order of the ministries stays the same: descending order of estimated procurement expenditure in 2014-2015, with the highest-procuring ministry on top.

In Figure 1, we present the estimated procurement spending of the 10 ministries. The Ministry of Road Transport & Highways had the highest estimated procurement spending (in Rs. billion) in 2014-2015, and the Ministry of Coal the least. There is large variation in procurement spend across the 10 ministries, as described in Sharma and Thomas (2021). Ministries such as Coal and Rural Development spend less than Rs. 1 billion on procurement and have consistently done so over the last five years. The Ministry of Road Transport & Highways is the top procuring ministry, followed by Home Affairs, of which the Department of Police is the largest procurer. There is more variation across the years for the remaining ministries. For example, the Ministry of Health & Family Welfare shows an increased estimated procurement expenditure in 2017-2018 compared to other years in this period. The Ministry of Law & Justice has a steadily increasing procurement expenditure during this period.

 

Figure 1: Estimated procurement expenditure by ministry, 2014-15 to 2018-19

 

Figure 2 presents the estimated procurement spending gap from 2014-15 to 2018-19. Green stands for positive values (over-spending or under-budgeting); red stands for negative values (under-spending or over-budgeting). Except for a few years, most ministries tend to under-spend. The exception is the Ministry of Road Transport & Highways, which is also the top procuring ministry.

 

Figure 2: Estimated procurement spending gap (%) by ministry, 2014-15 to 2018-19

Figure 2 also shows sharper under-spending towards the latter part of the period for some of the ministries: Finance, Civil Aviation, Rural Development and Coal. This contradicts the proposition that procurement capacity is systematically increasing over time.

These ministries also have a lower fraction of procurement in their spending, and are therefore less likely to develop procurement capacity. Among ministries with similar procurement spending patterns, the Ministry of Environment appears to be building procurement capacity, with a procurement spending gap closer to zero.

Lastly, Figure 3 presents the spending gap for total expenditure for these ministries across time. The ministries show much greater capacity to spend their overall budgets: most of the heat map shows colours mapping to values close to 0. In fact, compared with Figure 2, there is more evidence of over-spending than under-spending. The Ministries of Finance and Rural Development have more instances of over-spending in their overall spending, even though these are ministries with estimated procurement under-spend.

 

Figure 3: Total spending gap (%) by ministry, 2014-15 to 2018-19


Alternative explanations

Underspend can be driven by other compulsions also. In a public choice theory worldview, state organisations would give the highest priority to wage and pension expenditures, and sacrifice procurement expenditures when faced with formal or informal budget constraints. This makes procurement budgets the most vulnerable to sudden cuts.

However, this argument should apply equally across all ministries. In fact, the ministries which do higher amounts of procurement should be the prime target for mid-year budget cuts. This argument predicts that under-spend should take place roughly everywhere, and maybe to a greater extent in the high-procurement ministries. The evidence, however, shows that under-spend is more prevalent in low-procurement ministries.

Discussion

The main finding of this article is that ministries that procure consistently tend to have smaller procurement spending gaps. This is consistent with the idea that there is learning by doing: doing procurement on a sustained basis gradually creates organisational capability for procurement.

If the key claim of this article is true -- that there is learning by doing in procurement -- it has a few interesting implications. In a department with little experience of procurement, the early years of procurement work will go poorly: procurement under-spend is likely, with consequential failures in public expenditure programs and budgeting. In such a department, a sudden jump in the procurement budget is likely to be associated with a failure to spend. If the political leadership decides to triple the procurement spend of a department, it would make sense to (a) increase the budget by only 20% at first and (b) initiate a capacity building program for procurement capability within that department.

Procurement is an expertise. No government organisation can sporadically do this well. It is an expertise which can be built, albeit over many years. In any government organisation, people and processes can be organised to focus on this expertise, and to devote time and effort on the entire pipeline of government contracting. The process of developing this capability can be accelerated by bringing in people with this specialised expertise. Strengthening the entire life cycle is required to successfully spend budget amounts. But this is only the beginning of success in procurement where government can contract to deliver quality projects efficiently, on time and at low cost.

References

Aneesha Chitgupi and Susan Thomas. The make vs. buy decision of the union government, The Leap Blog. September 10, 2022.

Vijay Kelkar and Ajay Shah. In service of the republic: The art and science of public policy. Second edition, 2022.

Shubho Roy and Anjali Sharma. What ails public procurement: an analysis of tender modifications in the pre-award process, The Leap Blog, November 26, 2020.

Anjali Sharma and Susan Thomas. The footprint of union government procurement in India, XKDR Working Paper 10, November 2021.


Aneesha Chitgupi is a Research Fellow at XKDR Forum, Abhishek Gorsi is a doctoral candidate at the IGIDR and Susan Thomas is a Senior Research Fellow at XKDR Forum and a Research Professor of Business at Jindal Global University. We thank Josh Felman, Sudha Krishnan, Ajay Shah and Anjali Sharma for feedback and comments.