
Wednesday, July 22, 2009

What risk models are useful?

by Ajay Shah.

Risk management failures have clearly taken place. It has become fashionable to criticise risk models.

A fair amount of the criticism is naive and not well thought out. Too many people today read Nassim Taleb and pour scorn upon hapless economists who inappropriately use normal distributions. That is just not a fair depiction of how risk analysis gets done, either in the real world or in the academic literature.
Another useful perspective is that a 99% Value at Risk estimate should fail 1% of the time. If an implementation that targets the 99% threshold does not see actual losses exceed the VaR on two or three trading days each year, it is itself faulty. Civil engineers do not design homes for once-in-a-century floods or earthquakes. When the TED Spread did unbelievable things:

[Chart: the TED spread]

the loss of a short position on the TED Spread should have been bigger than the Value at Risk reported by a proper model on many days.
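
To make the arithmetic concrete, here is a minimal sketch (the 250-day trading year and the binomial assumption are mine, not from the post) of how often a correctly calibrated 99% VaR should be breached:

```python
# Minimal sketch (assumptions mine): exceedances of a correctly calibrated
# 99% one-day VaR over a ~250-day trading year are Binomial(250, 0.01).
from scipy.stats import binom

days, p = 250, 0.01
print("expected exceedances per year:", days * p)            # 2.5
print("P(no exceedance in a year):", binom.pmf(0, days, p))  # ~0.08
print("P(5 or more exceedances):", binom.sf(4, days, p))     # ~0.11
# A model that is never breached, year after year, is mis-calibrated
# (too conservative), just as one breached ten times a year is too loose.
```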

The really important questions lie elsewhere. Risk management is a relatively new engineering discipline that came to be pervasively used by traders and their regulators. Does the field contain fundamental problems at its core? And does the use of risk management itself have consequences that create or encourage crises?

Implementation problems


There are a host of practical problems in building and testing risk models. Model selection for VaR models is genuinely hard. Regulators and boards of directors sometimes push for Value at Risk at a 99.99% confidence level. Such a VaR estimate should be exceeded on one trading day out of ten thousand, so millions of trading days would be required to test the model with statistical precision. In most standard situations there is a semblance of meaningful testing for VaR at a 99% confidence level [example]; anything beyond that is essentially untested for all practical purposes.
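
A rough sketch of the sample-size arithmetic (my own numbers, using the standard binomial approximation) makes the point:

```python
# Sketch (my numbers): the standard error of an observed exceedance rate over
# n days is sqrt(p(1-p)/n); to pin p down to within rel_err*p we need roughly
# n = p(1-p) / (rel_err*p)^2 trading days.
def days_needed(p, rel_err):
    return p * (1 - p) / (rel_err * p) ** 2

for p in (0.01, 0.0001):
    n_loose = days_needed(p, rel_err=0.5)   # distinguish p from roughly 2p
    n_tight = days_needed(p, rel_err=0.1)   # pin p down to +/- 10%
    print(f"{1 - p:.2%} VaR: ~{n_loose:,.0f} days for a loose test, "
          f"~{n_tight:,.0f} days (~{n_tight / 250:,.0f} years) for a tight one")
# 99.00% VaR: a loose test fits in a couple of years of data.
# 99.99% VaR: even a loose test needs ~40,000 days; a tight one needs ~10^6,
# i.e. the millions of trading days mentioned above.
```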

Similar concerns afflict extrapolation into longer time horizons. Regulators and boards of directors sometimes push for VaR estimates with horizons like a month or a quarter. The models actually know little about those kinds of time scales. When modellers go along with simple approximations, even though the underlying testing is weak, model risk is acute.
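
The `simple approximation' in question is usually square-root-of-time scaling of a one-day VaR, which is only valid under i.i.d. returns with no volatility clustering; a quick sketch with hypothetical numbers:

```python
# Sketch (hypothetical numbers): the usual square-root-of-time shortcut.
# It assumes i.i.d. returns -- no volatility clustering, no fat tails --
# which is exactly the part that goes untested at long horizons.
import math

daily_var = 0.02                              # a hypothetical 2% one-day 99% VaR
monthly_var = daily_var * math.sqrt(21)       # ~21 trading days in a month
quarterly_var = daily_var * math.sqrt(63)     # ~63 trading days in a quarter
print(f"1-month VaR ~ {monthly_var:.1%}, 1-quarter VaR ~ {quarterly_var:.1%}")
```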

In the last decade, I often saw a problem that I used to call `the RiskMetrics illusion': the feeling that one only needed a short time-series to get a VaR going. What was really going on was that the RiskMetrics assumptions were driving the risk measure. Adrian and Brunnermeier (2009) emphasise that the use of short windows was actually inducing procyclicality: when times were good, the VaR would go down and leverage would go up, and vice versa. Today, we would all be much more careful to (a) use long time-series when doing estimation and (b) not trust models estimated off short series when long series are unavailable.
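
A minimal sketch of the mechanism (the 0.94 decay is the classic RiskMetrics daily setting; the return series here is simulated, not real data):

```python
# Sketch of a RiskMetrics-style EWMA variance,
#   sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
# on simulated data. With lam = 0.94 the effective memory is a few weeks,
# so a long calm spell drags measured VaR down (and permitted leverage up)
# just before the stress arrives -- the procyclicality noted above.
import numpy as np

rng = np.random.default_rng(0)
lam, z = 0.94, 2.33
calm = rng.normal(0, 0.005, 500)     # two years of calm markets (0.5% daily vol)
stress = rng.normal(0, 0.03, 60)     # then a stressed quarter (3% daily vol)
returns = np.concatenate([calm, stress])

sigma2 = 0.005 ** 2
for t, r in enumerate(returns):
    var_99 = z * np.sqrt(sigma2)     # forecast VaR for day t, before seeing r_t
    if t in (499, 520, 559):
        print(f"day {t}: 99% VaR = {var_99:.2%}")
    sigma2 = lam * sigma2 + (1 - lam) * r ** 2
# The end-of-calm VaR is tiny, and only catches up after losses have occurred.
```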

The other area where the practical constraints are onerous is that of going from individual securities to portfolios. In practical settings, financial firms and their regulators always require estimates of VaR for portfolios and not individual instruments.

Even in the simplest case, with only linear positions and multivariate normal returns, this requires an estimate of the covariance matrix of returns. Ever since at least Jobson and Korkie (JASA, 1980), we have known that the historical covariance matrix is a noisy estimator. The state of the art in asset pricing theory has not solved this problem. So while risk measures at a portfolio level are essential, this is a setting where our capabilities are weak. Real-world VaR systems that try to make do with poor estimators of the covariance matrix of returns are fraught with model risk.
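
For concreteness, the linear/normal case boils down to one formula (a sketch with made-up numbers; everything hinges on the estimate of Sigma):

```python
# Sketch (toy numbers): linear portfolio VaR under multivariate normality is
# z * sqrt(w' Sigma w). The entire burden falls on the estimate of Sigma.
import numpy as np

Sigma = np.array([[0.00040, 0.00010],    # two assets with 2% and 1.5% daily vol,
                  [0.00010, 0.000225]])  # covariance 0.0001 (correlation ~0.33)
w = np.array([0.6, 0.4])                 # portfolio weights
z = 2.33                                 # 99% normal quantile
var_99 = z * np.sqrt(w @ Sigma @ w)
print(f"99% one-day VaR: {var_99:.2%} of portfolio value")
```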

When we look at the literature on portfolio optimisation, there is a lot of caution about jumping into optimisation using estimated covariance matrices. See, for example, this paper by DeMiguel, Garlappi, Nogales and Uppal, one of the first papers to gain some traction in actually making progress on estimating a covariance matrix that is useful in portfolio optimisation. The paper is very recent (it appeared in May 2009), which highlights the fact that these are not solved problems. It is easy to talk about covariance matrices, but obtaining useful estimates is genuinely hard.
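
A toy illustration of the Jobson-Korkie problem (entirely my own construction, with a made-up covariance structure rather than the data in the papers above): feed one year of simulated returns into a `minimum variance' optimiser and compare the risk the estimated matrix claims with the risk the portfolio actually carries.

```python
# Sketch (my own toy setup): estimate a covariance matrix from one year of
# daily data for 50 assets, form the minimum-variance portfolio it implies,
# and compare the risk the model claims with the risk actually carried.
import numpy as np

rng = np.random.default_rng(1)
n, T = 50, 250                                # 50 assets, one year of daily data
vol, rho = 0.02, 0.3
Sigma = vol ** 2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
ones, z = np.ones(n), 2.33

claimed, actual = [], []
for _ in range(200):                          # 200 hypothetical histories
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=T)
    S = np.cov(X, rowvar=False)               # the noisy historical estimate
    w = np.linalg.solve(S, ones)
    w /= w.sum()                              # estimated minimum-variance weights
    claimed.append(z * np.sqrt(w @ S @ w))    # the VaR the model reports
    actual.append(z * np.sqrt(w @ Sigma @ w)) # the VaR the portfolio truly has

print(f"claimed 99% VaR (mean): {np.mean(claimed):.2%}")
print(f"actual  99% VaR (mean): {np.mean(actual):.2%}")
# The estimated matrix systematically reports less risk than is really there,
# and the gap widens as the number of assets approaches the number of days.
```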

Similar problems afflict Value at Risk in multivariate settings. Sharp estimates seem to require datasets which do not exist in most practical settings. And all this is when discussing only the simplest case, with linear products and multivariate normality. The real world is not such a benign environment.

With all these implementation problems, VaR models actually fared rather well in most areas


There is immense criticism of risk models, and certainly we are all amazed at the events which took place in (say) the money market, which were incredible in the eyes of all modellers. But at the same time, it is not true that all risk models failed.

My first point is the one emphasised above: it was not wrong for VaR models to be surprised by once-in-a-century events.

By and large, the models worked pretty well with equities, currencies and commodities. The models used by clearing corporations also held up well; derivatives exchanges did not get into trouble, even in the case of the eurodollar futures contract at CME, which is explicitly about the London money market.

Fairly simple risk models worked well in the determination of collateral that is held by futures clearing corporations. See this paper by Jayanth Varma. If the field of risk modelling were as flawed as some make it out to be, clearing corporations worldwide would not have handled the unexpected events of 2007 and 2008 as well as they did. These events could be interpreted as suggesting that, as an engineering approximation, the VaR computations that were done here were good enough. Jayanth Varma argues that the key required elements are the use of coherent risk measures (like expected shortfall), fat-tailed distributions and nonlinear dependence structures.
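
A small sketch of the first two of those elements, coherent measures and fat tails, on simulated Student-t returns (my own toy numbers, not from Varma's paper):

```python
# Sketch (simulated Student-t returns, my assumptions): with fat tails, a
# normal-based 99% VaR understates the empirical 99% VaR, and the expected
# shortfall -- the average loss beyond VaR, a coherent risk measure -- is
# larger again.
import numpy as np

rng = np.random.default_rng(2)
nu, daily_vol = 4, 0.02                   # Student-t with 4 degrees of freedom
r = rng.standard_t(nu, size=100_000) * daily_vol * np.sqrt((nu - 2) / nu)

losses = -r
var_normal = 2.33 * r.std()               # what a normal model would report
var_emp = np.quantile(losses, 0.99)       # empirical 99% VaR
es_emp = losses[losses > var_emp].mean()  # 99% expected shortfall
print(f"normal VaR {var_normal:.2%}, empirical VaR {var_emp:.2%}, ES {es_emp:.2%}")
```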

As boring as civil engineering?


In his article Blame the models, Jon Danielsson shows a very nice example of the simplest possible VaR problem: the estimation of VaR for a $1000 position on IBM common stock. He points out that across a reasonable range of methodologies and estimation periods, the VaR estimates range over a factor of two (from 1.77% to 3.26%).

This large range is disconcerting. But look back at how civil engineers work. A vast amount of sophisticated analysis is done, and then a safety factor of 2x or 2.5x is layered on. The highest aspiration of the field of risk modelling should be to become as humdrum and useful as civil engineering. My optimistic reading of what Danielsson is saying is that a 2x safety factor adequately represents model risk in that problem.
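
A sketch in the same spirit (simulated returns standing in for the IBM position, so the numbers are illustrative rather than Danielsson's): run a handful of standard methods and windows on the same series and see how far apart the answers land.

```python
# Sketch (simulated fat-tailed returns, not IBM data): the same position,
# a handful of textbook VaR methods and estimation windows, and a spread of
# answers wide enough that a blunt safety factor is what actually covers it.
import numpy as np

rng = np.random.default_rng(3)
nu = 5
r = rng.standard_t(nu, size=2000) * 0.015 * np.sqrt((nu - 2) / nu)  # ~8 years

estimates = {}
for window in (250, 500, 1000, 2000):
    tail = r[-window:]
    estimates[f"historical simulation, {window}d"] = -np.quantile(tail, 0.01)
    estimates[f"normal approximation,  {window}d"] = 2.33 * tail.std()

for name, v in estimates.items():
    print(f"{name}: {v:.2%}")
vals = list(estimates.values())
print(f"range: {min(vals):.2%} to {max(vals):.2%} "
      f"(a factor of {max(vals) / min(vals):.1f})")
# A 2x-2.5x safety factor on top of any one of these covers the whole range.
```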

This suggests a pragmatic approach. All models are wrong; some models are useful. Risk modelling would then go forward as civil engineering has: with an attempt at improving the scientific foundations, and with a safety factor layered on at the end. Civil engineering evolved over the centuries, learning from the cathedrals that collapsed and the bridges that were swept away, continually improving the underlying science and sharpening the horse sense about what safety factors represent a reasonable tradeoff between cost and safety.

Fundamental criticism: the `Lucas critique of risk management'


When an econometric model finds a reduced-form relationship between y and x, this is not a useful guide for policy formulation. Hiding inside the slope parameter on x is the optimisation of economic agents, which reflects a certain policy environment. When policy changes are made, these optimisations change, producing structural change in the slope parameter: the old model breaks down, and the modeller is surprised at the large deviations from the model that pop up. The Lucas critique is an integral part of the intellectual toolkit of every macroeconomist.

It should be much more prominent in the thinking of financial economists also. The most fundamental criticism of risk models is that they also suffer from the Lucas critique. As Avinash Persaud, Jon Danielsson and others have argued, risk modelling should not only be seen in a microeconomic sense of one economic agent using the model. When many agents use the same model, or when policy makers or clearing corporations start using the model, then the behaviour of the system changes.

As a consequence of this fundamental problem, an ARCH model estimated using historical data is vulnerable to getting surprised by what comes in the future. The coefficients of the ARCH model are not deep parameters; they are reduced form parameters. They suffer from structural breaks when enough traders start estimating that model and using it. The reduced-form parameters are time varying and endogenous to decisions of traders about what models they use, and the kinds of model-based prudential risk systems that regulators or clearing corporations use.

In the field of macroeconomics, the Lucas critique was a revolutionary idea, which pretty much decimated the old craft of macro modelling. Today, we walk on two very distinct tracks in macroeconomics. Forecasters do things like Bayesian VAR models where there are no deep parameters, but these models are not used for policy analysis. Policy analysis is done using DSGE models, which try to explicitly incorporate optimisations of the economic agents.

In addressing the problem of endogeneity of risk, or the Lucas critique, we in finance could do as the macroeconomists did. We could retreat into writing models with optimising agents, which is what took macroeconomists to DSGE models (though it took thirty years to get there). One example of this is found in Risk appetite and endogenous risk by Jon Danielsson, Hyun Song Shin and Jean-Pierre Zigrand, 2009.

In the field of macro, the Lucas critique decimated traditional work. But we should ask how empirically significant the problem is. While people do optimise, the extent to which the reduced-form parameters change when policy changes take place might not be large enough to render reduced-form models useless.

It would be very nice if we could now get a research literature going on this. I can think of three avenues for progress. Simulations from the Danielsson/Shin/Zigrand paper could be conducted under different policy regimes, and the reduced-form parameters compared. Researchers could look back at natural experiments where policy changes took place (e.g. a fundamental change in the rules for initial margin calculations at a futures clearing corporation) and ask whether this induced structural change in the reduced-form parameters of the data generating process. And experimental economics could contribute something useful: it would be neat to set up a simulated market with 100 people trading in it, watch what reduced-form parameters come out, then introduce a policy change (e.g. an initial margin requirement based on an ARCH model), and watch whether and how much the reduced-form parameters change.
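
A toy version of that last experiment can be done in simulation rather than a laboratory (the market mechanism and parameters here are entirely my own construction, not taken from any of the papers cited): switch a VaR-based margin rule on and off and watch what happens to two reduced-form parameters, the kurtosis of returns and the first autocorrelation of squared returns.

```python
# Toy sketch (my own construction): a market where a VaR-style margin rule
# forces deleveraging whenever estimated volatility rises. The same stream of
# fundamental shocks produces different "reduced form" parameters depending
# on whether the policy rule is switched on.
import numpy as np

def simulate(feedback, T=20_000, lam=0.94, z=2.33, seed=4):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0, 0.01, T)            # fundamental shocks, 1% daily vol
    sigma2, prev_margin = 0.01 ** 2, 0.0
    r = np.empty(T)
    for t in range(T):
        margin = z * np.sqrt(sigma2)        # VaR-based initial margin
        selling = feedback * max(0.0, margin - prev_margin)
        r[t] = eps[t] - selling             # forced deleveraging hits the price
        sigma2 = lam * sigma2 + (1 - lam) * r[t] ** 2
        prev_margin = margin
    return r

def reduced_form(r):
    x = r - r.mean()
    kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2
    s = x ** 2 - np.mean(x ** 2)
    acf1 = np.sum(s[1:] * s[:-1]) / np.sum(s * s)   # ACF(1) of squared returns
    return kurtosis, acf1

for fb in (0.0, 2.0):
    kurt, acf1 = reduced_form(simulate(fb))
    print(f"margin feedback = {fb}: kurtosis = {kurt:.2f}, ACF(1) of r^2 = {acf1:.2f}")
```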

In the field of macro, there is a clear distinction between problems of policy analysis and problems of forecasting. Even if the `Lucas critique' problem of risk modelling is economically significant (i.e. the parameters of the data generating process for IBM change appreciably once traders and regulators start using risk models), one might sometimes argue that a given risk modelling problem is not systemic, so that the critique does not bite. I suppose Avinash Persaud and Jon Danielsson would say that in finance there is no comparable situation: if a new time-series model is useful to you in forecasting, it is useful to a million other traders, and the publication of the model generates drift in the reduced-form parameters.

Regulators have focused on the risk of individual financial firms and on making individual firms safe. Today there is an increased sense that regulators need to run a capability which looks at the risk of the system and not just one firm at a time. A lot of work is now underway on these questions and it will yield improved insights and regulatory strategies in the days to come.

Why did risk models break down in some situations but not in others?


I find it useful to ask: Why did risk models work pretty well in some fields (e.g. the derivatives exchanges) but not in others (e.g. the OTC credit markets)? I think the endogenous risk perspective has something valuable to contribute in understanding this.

There are valuable insights in the 2006 ECB working paper by Lagana, Perina, von Koppen-Mertes and Persaud. They think of liquidity as made up of two components: `search liquidity' and `systemic liquidity'. Search liquidity is about setting up a good computer-driven market which can be accessed by as many people as possible. `Systemic liquidity' is about the consequences of endogenous risk. If a market is dominated by the big 20 financial firms, all of whom run the same models and face the same regulatory compulsions, this market will exhibit inferior systemic liquidity.

This gives us some insight into what went right with exchange-traded derivatives: the diversity of players on the exchanges (i.e. many different forecasting models, many different regulatory compulsions) helped to contain the difficulties.

The lesson, then, is perhaps this one. If a market is populated with a diverse array of participants, then risk modelling as we know it works relatively well as an engineering approximation. The big public exchange-traded derivatives markets fit this bill. We will all, of course, refine the practice of risk modelling, drawing on the events of 2007 and 2008, much as the civil engineers of old learned from spectacular disasters. But by and large, the approach is not broken.

Where the approach gets into trouble is in markets with just a few participants, i.e. `club markets'. A typical example would be an OTC derivative with just a handful of banks as players. In these settings, there is much more interdependence. When a market is populated by a small set of players, all of whom think alike and all of whom are regulated alike, it is a much more dangerous place for the use of risk modelling. The application of standard techniques is going to run afoul of economically significant parameter instability and acute liquidity risk.

Implications for harmonisation of regulation


Harmonisation of regulation is a popular solution in regulatory circles these days. But if all large financial firms are regulated alike, the likelihood of the failure of risk management could go up. Until we get the tools to do risk modelling under conditions of economically significant risk endogeneity, all we can say is that we do not know how to compute VaR under those conditions.

Harmonisation of regulation will give us more of those situations.

In the best of times, there seem to be limits of arbitrage; there is not enough rational arbitrage capital going around to fix all market inefficiencies. With non-harmonised regulation, if a certain firm is constrained by regulation to not take a rational trade, some other firm will be able to do so. The monoculture induced by harmonised regulation will likely make the world more unsafe.

Acknowledgement


Tarun Ramadorai, Avinash Persaud, and Viral Acharya gave me valuable feedback on this.

18 comments:

  1. I am extremely pleased that this subject has been addressed with this seriousness on such a forum.

    Too often, pop science followers read Nassim Nicholas Taleb and others and cry hoarse over a "conspiracy" to bankrupt Main St.

    Yes, I agree to a certain extent that our risk models are too sanitised. And yes, there are other ways as well to bring a lot of credibility to those models, using tools like robust statistics etc. But at the end, we must realise that risk management is a discipline which is closer to quantum physics than any other real-life subject.
    The more we try to pinpoint and nail and control the risk, the more it evades us. Lots of times, people have said, "Look what faulty risk management does to you. Look at LTCM" {often deriving from half-baked journalist stories}.

    But things become clearer {may I dare add, hazier} once the real inside story becomes known. A fantastic video of Eric Rosenfeld [ex-LTCM] speaking to a batch of MIT students is available on the net. Pretty good, as it discusses, refutes and clarifies a lot of the floating wisdom about it.

    My interpretation has been that risk management as a pursuit inches closer to haziness the more we try to chase safety {Monte Carlo, anyone?}.
    But of course that is no excuse; we ought to do it, because we must.
    And that is where, in my personal opinion, India should concentrate a lot. Think of it: it has 'almost' everything it needs. A growing financial ecosystem. A seedling scientific community. A statistical and advanced mathematics behemoth, ISI, etc.

    Soham

  2. Soham, can you tell us the URL for the video?

  3. Thanks for an interesting post and link to Prof. Jayanth's paper.

    Some major gripes though:
    Your section arguing that "VaR models fared rather well in most areas" is very confusing.
    In particular, you mention derivative exchanges and Prof. Jayanth's paper to conclude that "VaR computations that were done here were good enough". However, no derivative exchange uses VaR for margining purposes, and Prof. Jayanth's paper actually concludes the opposite: exchanges avoided the pitfalls of VaR since they used SPAN, and the events of 2007/2008 clearly show the pitfalls of using VaR (at banks) with normality and linear correlation assumptions.

    I actually found Prof. Jayanth in general agreement with Taleb in his paper since he argues that bank risk models should move towards those using non-normal, non-linear dependence.

  4. Here's the link to the video mentioned by Soham in his comment. A must-watch indeed!

    http://paul.kedrosky.com/archives/2009/04/eric_rosenfeld.html

  5. It is a misconception that every trading desk or portfolio manager uses VaR blindly (as suggested in the popular press and academia). VaR, or ARCH/GARCH, is a backward-looking measure. PMs and traders do put some human overlay (a safety measure) on top of it in reality.

  6. Ajay: I may be wrong, but I think you missed a key follow-on point in this last "punch" of your essay. Should you clarify that regulatory heterogeneity may help produce risk-mitigating heterogeneity in modeling and trading?

  7. Matthew,

    Yes, I'm saying that harmonisation of regulation will produce greater risk of encountering liquidity black holes.

    Households, hedge funds, these kinds of players are great because their behaviour is the least distorted by regulation. But firms like banks, insurance companies, pension funds: these are hugely influenced by regulation. If French regulators treat insurance companies differently from German regulators, then this gives the world market greater stability because not all of them think and behave alike.

  8. http://himalayancrossing.blogspot.com/2009/07/vive-la-heterogeneity-lessons-from.html

  9. Ajay,

    Excellent review of VaR. But the problem stems from the fact that people do not look at the linkages between VaR and accounting measures of risk. To determine capital adequacy we need to combine VaR measures with simple accounting measures like the Tangible Common Equity (TCE) ratio and the Tier-1 capital ratio; in other words, we should not look at VaR in isolation.

    http://www.reuters.com/article/BROKER/idUSN2335724020090223?pageNumber=2&virtualBrandChannel=0

    Nobody was talking about TCE prior to the credit crisis but right now regulators have become obsessed with it and have used it as the barometer to determine the capital requirement for banks.

    http://www.bloomberg.com/apps/news?pid=20601087&sid=anh9S6AJxcbY

    For example this link compares Goldman's daily VaR with Shareholder Equity at the end of each quarter.

    http://blogs.reuters.com/commentaries/2009/07/16/goldman-liquidity-and-var/

    Another issue with VaR is interpretation of Gross VaR and Net VaR as illustrated with Goldman's VaR numbers reported in the latest quarter.

    You are absolutely right when you say that we should adopt civil engineering's methodology of incorporating safety measures and combining them with sophisticated statistical measures. Those civil-engineering-style safety measures in finance should be construed as accounting measures like TCE and Tier-1 capital.

  10. three quick points

    1. While Taleb argued his position regarding risk models and VaR in some detail, one wonders why critics like you just dismiss his position with a broad stroke and never try to rebut it point by point.

    2. Using an analogy can be dangerous and often misleading. In the construction industry, the maximum load that can come onto a structure cannot be 100 times the anticipated value. Also, even if such a low-probability event does happen (through quakes etc.), technical limitations rather than managerial decisions are often the constraining factor. Using this analogy may thus be unfair.

    3. While as a private decision maker one may use any risk model, it is when one claims to the larger public, without justification, that one is using a robust risk management model that all the problems arise.

    Mahesh

  11. On Nassim Taleb: I apologise for being terse, but it's not high enough on my priorities to write a careful rebuttal to the things he's written.

    The journal publication game is a market. Easy ideas that get journal pubs are like 50000 dollar bills lying on the ground. Academics have been looking for decades at the issues of non-normality. The entire pose that Taleb has adopted - that the establishment is blinkered and does not look at non-normality - is simply wrong.

    The reason we do so much with the normal distribution is that our brains are feeble and our analytical tools get less traction with other distributions. (And it's not as if non-normality is not done. E.g. the original motivation for doing ARCH, which is simply HUGE in the literature, was explaining the unconditional non-normality of the distribution of financial returns.)

  12. Taleb criticises people like you and you criticise Taleb, no surprises there.

    However, a lot of real-world criticism of risk management has to do with the dishonesty of its application. In the real world, crises arise from the bonus-earning salesmen sticking lit matches under salary-earning risk geeks' toenails till the geeks give in and adjust their numbers.

    Taleb writes with full first-hand knowledge that this corruption is deeply embedded in the way finance is done--you write as if it's an implementation problem that can be wished away.

  13. It's disingenuous to pick on a person, any person, and then say you do not want to engage in a careful rebuttal.

    However, submitting to practical limitations, the reader would certainly have benefited from a short statement of what you think Taleb says that you do not agree with. It's not clear what you are arguing for or against in your post.

    About your last comment: Taleb talks about traders using heuristics which are similar to ARCH in his Dynamic Hedging book (1997). He discusses both the point of using ARCH and the issues with using it. My point is that the existence of work on ARCH doesn't rebut his arguments (if that was the intention).

    Apologies for the pushiness, but it's an interesting topic and it's eliciting a discussion. Maybe this is a manifestation of the allure of pop-science books and the popular views they engender? Maybe it's hard to believe that the views don't have merit when noticing that Taleb gets face time with people like Kahneman, Mandelbrot, Freedman, Wilmott, Gatheral, etc.?

    Enquiring minds would love to see a careful rebuttal. :)

  14. I did link to a page which discusses Taleb in a post.

    In short, non-normality is hardly new. The profession has been doing it for decades. The profession knows about it. Trust me, we do. Sure, there are ignorant people who don't, but then you can't blame medical science because some doctors are incompetent.

    Taleb has struck a fashionable pose, and it has gained mileage particularly amongst the folks who have always been suspicious about the scientific project in economics and finance. There will be others like him. I have looked hard in his text for some new ideas that one could actually do something with, without success.

  15. We would all like to see a post dedicated to your critique of some of the essential points in Taleb's work, including:


    1. usage of VaR

    2. assumptions regarding correlations between asset returns

    3. the normality assumption.

    Such a post could initiate a sound discussion.

  16. Please excuse me if I am being presumptuous. As people dealing with science, I would urge you not to appeal to the loyalty or credulity of blog readers. One has to argue with evidence to show that the profession is indeed robust with regard to the criticisms raised by Taleb. Repeated assertions, or invoking trust as an alternative, simply won't do. If it is a case of you not being interested in taking on Taleb, please say that.

  17. http://www.iimahd.ernet.in/~jrvarma/blog/index.cgi/Y2009/better-risk-models.html

  18. "Civil engineers do not design homes for once-in-a-century floods or earthquakes."
    LOL, this is fabulous. But nor do civil engineers make claims like "this is a 25-sigma event" when the earthquakes do occur. The data available for capital markets are too limited to even make comparisons of this order.

    The other major issue which people in the economics profession, having physics envy (including all the IIT+PhD kinds), generally have is accepting that human action is NOT stochastic. Mises and Hayek explained that wonderfully years ago.

    This is not simply a pedagogical quibble over methodology; it is far deeper. Those who pretend to be mainstream economists today are more mathematicians than economists.
    It's a laughable but pitiable condition indeed. The hubris, however, is impressive.


