
Sunday, May 18, 2008

Measurement of LIBOR

The problem

In recent months, concerns have been expressed about LIBOR. Jayanth Varma's blog post on the subject reports on these issues. On 16 April, the British Bankers' Association (BBA) announced that banks giving bad quotations would be banned from participation in the calculation of LIBOR. Immediately after that, LIBOR rose by 18 basis points.

The BBA is believed to be reviewing the LIBOR methodology and is expected to release a report later this month. ICAP is believed to be launching a `New York Funding Rate' in response to the difficulties with LIBOR.

The statistical methodology that is employed for LIBOR could be a source of trouble.

Suppose there are N observations. There are two polar extremes in obtaining a location estimator: to compute the sample mean and to compute the sample median. The mean is statistically efficient, but vulnerable to outliers. The median is robust, but statistically inefficient.
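
A small simulation makes this tradeoff concrete. The numbers below (a panel of 16 quotes, a true rate of 5%, one quote shaded down by 50 basis points) are purely illustrative assumptions of mine, not BBA data:

```r
# Efficiency vs. robustness of the mean and the median (illustrative).
set.seed(1)
n <- 16; reps <- 10000

# Tranquil market: every quote is an honest draw around the true rate of 5%.
clean <- replicate(reps, rnorm(n, mean = 5, sd = 0.10))
# Stressed market: one panel member shades its quote down by 50 basis points.
dirty <- clean
dirty[1, ] <- dirty[1, ] - 0.50

# With clean data, the mean has the smaller sampling s.d. (more efficient):
sd(apply(clean, 2, mean)); sd(apply(clean, 2, median))
# But one bad quote drags the mean much further from 5 than the median (less robust):
mean(apply(dirty, 2, mean)) - 5; mean(apply(dirty, 2, median)) - 5
```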

For the longest time, LIBOR has used the strategy of deleting the two most extreme observations. Think of this as being mostly like the mean, with some safety thrown in. (Recall that if you kept deleting extreme observations symmetrically until only the middle one or two remained, you'd be at the median.) A sketch of such trimming is shown below.
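
One reading of `deleting the two most extreme observations' is symmetric trimming: sort the quotes and drop the lowest and the highest before averaging. The function below is mine, written only for illustration, on made-up quotes:

```r
# Location estimate after dropping the k lowest and k highest quotes.
# k = 1 deletes the two most extreme observations; pushing k towards
# floor((N - 1) / 2) takes you to the median.
trimmed_loc <- function(x, k = 1) {
  xs <- sort(x)
  mean(xs[(k + 1):(length(xs) - k)])
}

quotes <- c(4.95, 5.00, 5.02, 5.03, 5.05, 5.06, 5.08, 5.60)  # one suspect quote
trimmed_loc(quotes, k = 1)   # drops 4.95 and 5.60, then averages the rest
```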

This method has doubtless worked well for a long time. But the tradeoffs that determine how many observations should be deleted are not static. They depend on market conditions.

In an illiquid and volatile market, the dispersion of quotes is high and everyone is uncertain about what the true price is. (As an example, think of a typical illiquid product in India.) For years and years, the strategy of deleting 2 worked well for LIBOR - under normal market conditions. That same strategy is unlikely to be optimal under stressed market conditions. The mistrust of LIBOR under stressed market conditions that I am seeing out there sounds a lot like the mistrust that I have seen with such polling-based reference rates in illiquid and volatile markets in India.

One element of a solution

Roughly a decade ago, J. Cita and D. Lien proposed that instead of fixed trimming procedures, `adaptive' trimming could be done. In a nutshell, their idea was: use bootstrap inference to estimate the standard deviation of the location estimator under a few alternative trimming schemes, then pick the `best' trimming scheme by comparing these standard deviations.
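
A minimal sketch of this idea, as I read it, is below. This is my own illustrative implementation, not the Cita-Lien code (their full R source code is at the page linked below):

```r
# Adaptive trimming: bootstrap the s.d. of the trimmed mean at several trim
# levels and use the trim level with the smallest bootstrap s.d.
adaptive_trimmed_mean <- function(x, max_k = floor(length(x) / 2) - 1,
                                  B = 2000) {
  trim_mean <- function(y, k) {
    ys <- sort(y)
    mean(ys[(k + 1):(length(ys) - k)])
  }
  boot_sd <- function(k) {
    sd(replicate(B, trim_mean(sample(x, replace = TRUE), k)))
  }
  sds <- sapply(0:max_k, boot_sd)
  k_star <- which.min(sds) - 1       # best trim level (0 = plain mean)
  list(k = k_star, estimate = trim_mean(x, k_star))
}

# Example: adaptive_trimmed_mean(c(4.95, 5.00, 5.02, 5.03, 5.05, 5.06, 5.08, 5.60))
```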

The neat thing about this scheme is that when the market is liquid and tranquil, trimming can go down, but more trimming will kick in when market conditions change.

This strategy has been used in India for NSE's `MIBOR' reference rate, and for the commodity spot prices polled by CMIE for NCDEX. All these markets are much more troublesome than what LIBOR faced for a long time - but perhaps much like what LIBOR faces under stressed market conditions.

At this page are (a) a paper, (b) a frequently asked questions document and (c) full R source code.

Gyntelberg & Wooldridge, 2008

Jayanth Varma points to the paper Interbank rate fixings during the recent turmoil by Jacob Gyntelberg and Philip Wooldridge of the BIS. In it, they say:

To test the robustness of trimming procedures, we re-estimated the mean of the US dollar, euro and yen Libor panels using a bootstrap technique. This technique minimises the influence of non-random observations and outliers on the mean without disregarding any quotes (Efron and Tibshirani (1994)). The bootstrapped mean is not significantly different from Libor for any of the panels considered.

This I do not understand. We use bootstrap inference to compute the distribution of an estimator of interest (e.g. a mean, a median, a fixed trimmed mean, or an adaptive trimmed mean). Bootstrapping is not an alternative strategy for constructing estimators.
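
To make the distinction concrete, here is a hedged sketch of what bootstrap inference delivers for one already-chosen estimator - the fixed trim-by-2 mean - using made-up quotes:

```r
# Bootstrap inference attaches a sampling distribution to a chosen estimator;
# it is not itself a new location estimator.
trim2 <- function(x) { xs <- sort(x); mean(xs[2:(length(xs) - 1)]) }

quotes <- c(4.95, 5.00, 5.02, 5.03, 5.05, 5.06, 5.08, 5.60)
boot_est <- replicate(5000, trim2(sample(quotes, replace = TRUE)))
sd(boot_est)                          # bootstrap standard error of trim2
quantile(boot_est, c(0.025, 0.975))   # a 95% bootstrap confidence interval
```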

Gyntelberg & Wooldridge feel that estimation procedures are not an important issue, and they may well be right: estimation procedures may not be at the heart of the recent criticisms of LIBOR. However, the text above makes me suspect that the adaptive trimmed mean was not evaluated. If all they have done is bootstrap inference for the fixed trim-by-2 procedure, that does not address the core issue, which is to question trimming by 2 as a fixed rule that is appropriate for all kinds of market environments.

As argued above, a fixed trim-by-2 procedure that works well through many years of tranquil times might not be such a good engineering approximation in the unusual market stress observed lately. It's an empirical question, and one that would be settled by applying the adaptive trimmed mean to the BBA database in recent times.

A small correction for them: on page 62, in the table showing reference rates in various countries, the Indian MIBOR is not just a `trimmed mean'; it is an `adaptive trimmed mean'.

Larger links to physical settlement and cash settlement

This discussion is linked to the choices that product designers make between cash settled and physically settled derivatives. Some documents written by government agencies in India say fanciful things about cash settlement; they reflect a lack of knowledge of derivatives. The correct arguments are as follows.

Whenever good quality measurement of a reference rate is possible, cash settlement is always superior. A small reason is that settlement in money is cheaper than settlement of the underlying. The deeper reason is that physical settlement is much more vulnerable to short squeezes, so position limits for physically settled contracts tend to be set lower, which imposes on society the cost of a missing market when those limits are reached.

When the underlying is traded on an electronic exchange, there is no difficulty in obtaining a top quality reference rate, for the best buy and best sell prices in the entire market are visible on the limit order book. But when the underlying trades OTC, you are reduced to polling in order to construct a reference rate. For these underlyings, it boils down to a comparison: how bad is the problem of a short squeeze and a small cap on open interest, versus how bad is the problem of measuring the spot price? In most Indian settings, I have felt that it's cheaper to put time and money into improving measurement on the spot market, rather than running afoul of a short squeeze or sharply reducing the limits on open positions.

1 comment:

  1. "The mean is statistically efficient, but vulnerable to outliers. The median is robust, but statistically inefficient." . How you defined the term Statistically inefficient here? Do you mean "biased". i.e. do you mean to say Sample Median is NOT always an unbiased estimator of population median, although it might be robust?

