## Wednesday, July 22, 2009

### What risk models are useful?

by Ajay Shah.

Risk management failures have clearly taken place. It has become fashionable to criticise risk models.

A fair amount of the naive criticism is not well thought out. Too many people today read Nassim Taleb and pour scorn upon hapless economists who inappropriately use normal distributions. That's just not a fair depiction of how risk analysis gets done either in the real world or in the academic literature.
Another useful perspective is that a 99% Value at Risk estimate should fail 1% of the time. If a VaR implementation that seeks to find that 99% threshold does not have actual losses exceeding the VaR on 2-3 trading days each year, it is actually faulty. Civil engineers do not design homes for once-in-a-century floods or earthquakes. When the TED Spread did unbelievable things, the loss of a short position on the TED Spread should have been bigger than the Value at Risk reported by a proper model on many days.
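The exceedance-counting logic behind this point can be sketched numerically. This is a toy backtest on simulated P&L (not real TED Spread data): a correctly calibrated 99% one-day VaR should be breached on roughly 2-3 trading days per year.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily P&L series: ~10 years of trading days (illustrative data only).
pnl = rng.standard_normal(2500)

# A 99% one-day VaR threshold: the 1st percentile of the P&L distribution.
var_99 = np.quantile(pnl, 0.01)

# Backtest: count the days on which the actual loss exceeded the VaR.
exceedances = np.sum(pnl < var_99)
years = len(pnl) / 250

# A sound 99% VaR is breached ~2.5 times a year; zero breaches would
# signal an over-conservative (i.e. faulty) model, not a good one.
print(f"exceedances per year: {exceedances / years:.1f}")
```

Here the in-sample quantile breaches exactly 1% of days by construction; the point is that this 2-3 days/year failure rate is the *target*, not a defect.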

The really important questions lie elsewhere. Risk management was a new engineering discipline which came to be pervasively used by traders and their regulators. Does the field contain fundamental problems at its core? And does the use of risk management, in itself, create or encourage crises?

### Implementation problems

There are a host of practical problems in building and testing risk models. Model selection for VaR models is genuinely hard. Regulators and boards of directors sometimes push for Value at Risk at a 99.99% level of significance. This VaR estimate should be exceeded on one trading day out of ten thousand, so millions of trading days would be required to test the model with statistical precision. In most standard situations, there is a semblance of meaningful testing for VaR at a 99% level of significance [example]; anything beyond that is essentially untested for all practical purposes.
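The precision problem can be made concrete with a simple binomial calculation. This is an idealised sketch (it treats daily breaches as independent, which real P&L is not, and dependence only makes testing harder):

```python
import math

def backtest_precision(p, n_days):
    """Expected breach count and its binomial standard deviation
    for a VaR with daily breach probability p over n_days."""
    expected = n_days * p
    std = math.sqrt(n_days * p * (1 - p))
    return expected, std

# Ten years of daily data: roughly 2500 observations.
for p in (0.01, 0.0001):
    expected, std = backtest_precision(p, 2500)
    print(f"p={p}: expect {expected:.2f} breaches, sd {std:.2f}")
```

With ten years of data a 99% VaR should produce about 25 breaches, enough to check against; a 99.99% VaR should produce 0.25 breaches, so observing zero tells you essentially nothing about whether the model is right.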

Similar concerns afflict extrapolation into longer time horizons. Regulators and boards of directors sometimes push for VaR estimates with horizons like a month or a quarter. The models actually know little about those kinds of time scales. When modellers go along with simple approximations, even though the underlying testing is weak, model risk is acute.

In the last decade, I often saw a problem that I used to call the 'Riskmetrics illusion': the feeling that one needed only a short time-series to get a VaR going. What was really going on was that the Riskmetrics assumptions were driving the risk measure. Adrian and Brunnermeier (2009) emphasise that the use of short windows actually induced procyclicality: when times were good, the VaR would go down and leverage would go up, and vice versa. Today, we would all be much more cautious: (a) use long time-series when doing estimation, and (b) do not trust models estimated off short series when long series are unavailable.
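The procyclicality mechanism is easy to see in a simulation. The sketch below uses the RiskMetrics-style EWMA variance with the published decay factor lambda = 0.94 on toy returns (a calm regime followed by a stress regime); the regimes and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy returns: a long calm regime, then a short turbulent one.
calm = rng.normal(0, 0.5, 500)
stress = rng.normal(0, 2.0, 50)
returns = np.concatenate([calm, stress])

# EWMA variance with lambda = 0.94: the estimate is dominated by
# roughly the most recent 1/(1-lambda) ~ 17 observations.
lam = 0.94
var = returns[0] ** 2
path = []
for r in returns:
    var = lam * var + (1 - lam) * r ** 2
    path.append(var)

vol = np.sqrt(np.array(path))
# At the end of the calm window the measured risk is at its lowest,
# so a VaR-based leverage limit permits maximum leverage right
# before the storm -- the procyclicality Adrian and Brunnermeier flag.
print(f"vol at end of calm: {vol[499]:.2f}, after stress: {vol[-1]:.2f}")
```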

The other area where the practical constraints are onerous is that of going from individual securities to portfolios. In practical settings, financial firms and their regulators always require estimates of VaR for portfolios and not individual instruments.

Even in the simplest case, with only linear positions and multivariate normal returns, this requires an estimate of the covariance matrix of returns. Ever since at least Jobson and Korkie (JASA, 1980), we have known that the historical covariance matrix is a noisy estimator. The state of the art in asset pricing theory has not solved this problem. So while risk measures at the portfolio level are essential, this is a setting where our capabilities are weak. Real-world VaR systems that make do with poor estimators of the covariance matrix of returns are fraught with model risk.
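The noisiness of the historical covariance matrix is easy to demonstrate. In this sketch the true covariance matrix of 50 assets is the identity, so every true eigenvalue is 1; a year of simulated daily data produces sample eigenvalues scattered far from 1 (the asset count and sample size are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

# True covariance: 50 uncorrelated assets with unit variance.
n_assets, n_days = 50, 250  # one year of daily data
true_cov = np.eye(n_assets)

returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_days)
sample_cov = np.cov(returns, rowvar=False)

# Every eigenvalue of the true matrix is exactly 1; the sample
# eigenvalues spread widely -- the estimator is noisy even in
# this benign, stationary, multivariate-normal world.
eigvals = np.linalg.eigvalsh(sample_cov)
print(f"sample eigenvalues span [{eigvals.min():.2f}, {eigvals.max():.2f}]")
```

With 50 assets and 250 observations the spread is dramatic; a portfolio VaR built on this matrix inherits all of that estimation error.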

When we look at the literature on portfolio optimisation, there is a lot of caution about jumping into portfolio optimisation using estimated covariance matrices. See, for example, this paper by DeMiguel, Garlappi, Nogales and Uppal, one of the first papers to gain traction in actually making progress on estimating a covariance matrix that is useful in portfolio optimisation. The paper is very recent - it appeared in May 2009 - which highlights that these are not solved problems. It is easy to talk about covariance matrices, but obtaining useful estimates is genuinely hard.

Similar problems afflict Value at Risk in multivariate settings. Sharp estimates seem to require datasets which do not exist in most practical settings. And all this concerns only the simplest case, with linear products and multivariate normality. The real world is not such a benign environment.

### With all these implementation problems, VaR models actually fared rather well in most areas

There is immense criticism of risk models, and certainly we are all amazed at the events which took place on (say) the money market, which were incredible in the eyes of all modellers. But at the same time, it is not true that all risk models failed.

My first point is the one emphasised above: it was not wrong for VaR models to be surprised by once-in-a-century events.

By and large, the models worked pretty well with equities, currencies and commodities. By and large, the models used by clearing corporations worked pretty well; derivatives exchanges did not get into trouble, even in the case of the eurodollar futures contract at CME, which was explicitly about the London money market.

Fairly simple risk models worked well in the determination of collateral that is held by futures clearing corporations. See this paper by Jayanth Varma. If the field of risk modelling were as flawed as some make it out to be, clearing corporations worldwide would not have handled the unexpected events of 2007 and 2008 as well as they did. These events could be interpreted as suggesting that, as an engineering approximation, the VaR computations done here were good enough. Jayanth Varma argues that the key required elements are the use of coherent risk measures (like expected shortfall), fat-tailed distributions and nonlinear dependence structures.
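The distinction between VaR and a coherent measure like expected shortfall can be sketched on fat-tailed data. This is an illustrative example, not Varma's model: P&L is drawn from a Student-t distribution with 3 degrees of freedom (a common stand-in for fat-tailed daily returns), and both measures are computed empirically.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fat-tailed daily P&L: Student-t with 3 degrees of freedom
# (an assumption for illustration, not an estimate from data).
pnl = rng.standard_t(df=3, size=100_000)

# 99% VaR: the loss threshold breached on the worst 1% of days.
q = np.quantile(pnl, 0.01)
var_99 = -q

# 99% expected shortfall: the *average* loss on those worst 1% of days.
es_99 = -pnl[pnl <= q].mean()

# With fat tails, ES sits well above VaR: it asks "how bad is bad
# once the threshold is breached", which is why coherent measures
# matter when sizing clearing-corporation collateral.
print(f"VaR99 = {var_99:.2f}, ES99 = {es_99:.2f}")
```

VaR only locates the threshold; expected shortfall averages over the tail beyond it, so the gap between the two numbers grows exactly when the tails are fat.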