Friday, July 31, 2009
Thursday, July 23, 2009
Wednesday, July 22, 2009
Risk management failures have clearly taken place. It has become fashionable to criticise risk models.
A fair amount of the naive criticism is not well thought out. Too many people today read Nassim Taleb and pour scorn upon hapless economists who inappropriately use normal distributions. That's just not a fair depiction of how risk analysis gets done either in the real world or in the academic literature.
Another useful perspective is that a 99% Value at Risk estimate should fail 1% of the time. If a VaR implementation that seeks to find that 99% threshold does not see actual losses exceed the VaR on 2-3 trading days each year, it is actually faulty. Civil engineers do not design homes for once-in-a-century floods or earthquakes. When the TED spread did unbelievable things, the loss on a short position in the TED spread should have been bigger than the Value at Risk reported by a proper model on many days.
The really important questions lie elsewhere. Risk management is a young engineering discipline which came to be pervasively used by traders and their regulators. Does the field contain fundamental problems at its core? And are there consequences of the use of risk management which, in themselves, create or encourage crises?
There are a host of practical problems in building and testing risk models. Model selection of VaR models is genuinely hard. Regulators and boards of directors sometimes push for Value at Risk at a 99.99% level of significance. This VaR estimate should be exceeded on one trading day out of ten thousand. Millions of trading days would be required to get statistical precision in testing the model. In most standard situations, there is a semblance of meaningful testing for VaR at a 99% level of significance [example], and anything beyond that is essentially untested for all practical purposes.
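The arithmetic behind this untestability can be sketched. Assuming exceedances are independent Bernoulli events (a simplification), the normal approximation to the binomial shows the precision available from roughly ten years of daily data:

```python
import math

def exceedance_ci_width(p, n_days):
    """Approximate 95% confidence interval width for the observed
    exceedance rate of a VaR model, via the normal approximation
    to the binomial distribution."""
    se = math.sqrt(p * (1 - p) / n_days)
    return 2 * 1.96 * se

# Roughly ten years of daily data: ~2500 trading days
n = 2500
w99 = exceedance_ci_width(0.01, n)      # target exceedance rate 1%
w9999 = exceedance_ci_width(0.0001, n)  # target exceedance rate 0.01%

# At the 99% level, the standard error is about a fifth of the 1%
# target rate, so ten years of data gives a backtest some power.  At
# 99.99%, the standard error is twice the target rate, and the whole
# sample is expected to contain only 0.25 exceedances: untestable.
print(w99, w9999, 0.0001 * n)
```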
Similar concerns afflict extrapolation into longer time horizons. Regulators and boards of directors sometimes push for VaR estimates with horizons like a month or a quarter. The models actually know little about those kinds of time scales. When modellers go along with simple approximations, even though the underlying testing is weak, model risk is acute.
In the last decade, I often saw a problem that I used to call `the Riskmetrics illusion': the feeling that one only needed a short time-series to get a VaR going. What was really going on was that the Riskmetrics assumptions were driving the risk measure. Adrian and Brunnermeier (2009) emphasise that the use of short windows was actually inducing procyclicality: when times were good, the VaR would go down and leverage would go up, and vice versa. Today, we would all be much more cautious, (a) using long time-series when doing estimation, and (b) not trusting models estimated off short series when long series are unavailable.
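A minimal sketch of this procyclicality, using the classic RiskMetrics EWMA recursion with a daily decay factor of 0.94. The returns and the regime shift below are simulated, not real data:

```python
import random, math

LAMBDA = 0.94  # the classic RiskMetrics daily decay factor

def ewma_var_series(returns, lam=LAMBDA):
    """EWMA variance recursion: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2."""
    sigma2 = returns[0] ** 2
    out = []
    for r in returns:
        sigma2 = lam * sigma2 + (1 - lam) * r * r
        out.append(sigma2)
    return out

random.seed(1)
# A stressed year followed by a calm year: daily volatility 2%, then 0.5%
rets = [random.gauss(0, 0.02) for _ in range(250)] + \
       [random.gauss(0, 0.005) for _ in range(250)]
s2 = ewma_var_series(rets)

# 99% one-day VaR under the normality assumption: 2.33 * sigma
var_stressed = 2.33 * math.sqrt(s2[249])
var_calm = 2.33 * math.sqrt(s2[-1])

# The calm-period VaR is a fraction of the stressed-period VaR, so a
# VaR-based position limit mechanically permits more leverage exactly
# when times are good -- the procyclicality at issue.
print(var_stressed, var_calm)
```

With a decay factor of 0.94, the effective estimation window is only a few weeks, so the risk measure forgets the stressed period quickly.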
The other area where the practical constraints are onerous is that of going from individual securities to portfolios. In practical settings, financial firms and their regulators always require estimates of VaR for portfolios and not individual instruments.
Even in the simplest case with only linear positions and multivariate normal returns, this requires an estimate of the covariance matrix of returns. Ever since at least Jobson and Korkie (JASA, 1980), we have known that the historical covariance matrix is a noisy estimator. The state of the art in asset pricing theory has not solved this problem. So while risk measures at a portfolio level are essential, this is a setting where our capabilities are weak. Real-world VaR systems that try to make do using poor estimators of the covariance matrix of returns are fraught with model risk.
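A small simulation conveys how noisy the historical covariance matrix is. Here the true covariance matrix is the identity, so any spread in the sample eigenvalues is pure estimation error; the asset count and sample length are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 50, 252  # one year of daily data on 50 stocks

# True world: independent assets, each with unit variance, so the
# true covariance matrix is the identity and every eigenvalue is 1.
returns = rng.standard_normal((n_days, n_assets))
sample_cov = np.cov(returns, rowvar=False)

eigvals = np.linalg.eigvalsh(sample_cov)
# The sample eigenvalues scatter widely around 1 -- roughly from 0.3
# to 2 in this configuration -- purely from estimation noise.  A VaR
# engine or portfolio optimiser fed this matrix will load up on the
# spurious low-variance directions.
print(eigvals.min(), eigvals.max())
```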
When we look at the literature on portfolio optimisation, there is a lot of caution about jumping into optimisation using estimated covariance matrices. See, for example, this paper by DeMiguel, Garlappi, Nogales and Uppal, one of the first papers to gain traction in actually making progress on estimating a covariance matrix that is useful in portfolio optimisation. The paper is very recent - it appeared in May 2009 - which highlights that these are not solved problems. It is easy to talk about covariance matrices, but obtaining useful estimates is genuinely hard.
Similar problems afflict Value at Risk in multivariate settings. Sharp estimates seem to require datasets which do not exist in most practical settings. And all this arises in the simplest case, with linear products and multivariate normality. The real world is not such a benign environment.
With all these implementation problems, VaR models actually fared rather well in most areas
There is immense criticism of risk models, and certainly we are all amazed at the events which took place on (say) the money market, which were incredible in the eyes of all modellers. But at the same time, it is not true that all risk models failed.
My first point is the one emphasised above: it was not wrong for VaR models to be surprised by once-in-a-century events.
By and large, the models worked pretty well with equities, currencies and commodities. By and large, the models used by clearing corporations worked pretty well; derivatives exchanges did not get into trouble even when we think of the eurodollar futures contract at CME which was explicitly about the London money market.
Fairly simple risk models worked well in the determination of collateral held by futures clearing corporations. See this paper by Jayanth Varma. If the field of risk modelling were as flawed as some make it out to be, clearing corporations worldwide would not have handled the unexpected events of 2007 and 2008 as well as they did. These events could be interpreted as suggesting that, as an engineering approximation, the VaR computations done here were good enough. Jayanth Varma argues that the key required elements are the use of coherent risk measures (like expected shortfall), fat-tailed distributions and nonlinear dependence structures.
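To illustrate the distinction between the two risk measures named here, this sketch computes 99% VaR and expected shortfall on a simulated fat-tailed loss distribution (Student-t with 3 degrees of freedom); the numbers are illustrative, not calibrated to any market:

```python
import random, math

random.seed(7)

def student_t3():
    """Draw from a Student-t with 3 d.o.f.: normal / sqrt(chi-squared_3 / 3)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3)

# Losses on a position whose returns are fat-tailed
losses = sorted(-student_t3() for _ in range(100_000))
k = int(0.99 * len(losses))
var99 = losses[k]                          # 99% VaR: the 99th percentile of losses
es99 = sum(losses[k:]) / len(losses[k:])   # expected shortfall: mean loss beyond VaR

# ES answers "how bad is it when the VaR is breached?" -- the question
# that matters when setting collateral -- and, unlike VaR, it is a
# coherent risk measure.  With fat tails, ES is well above VaR.
print(round(var99, 2), round(es99, 2))
```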
As boring as civil engineering?
In his article Blame the models, Jon Danielsson shows a very nice example of the simplest possible VaR problem: the estimation of VaR for a $1000 position on IBM common stock. He points out that across a reasonable range of methodologies and estimation periods, the VaR estimates range over a factor of two (from 1.77% to 3.26%).
This large range is disconcerting. But look back at how civil engineers work. A vast amount of sophisticated analysis is done, and then a safety factor of 2x or 2.5x is layered on. The highest aspiration of the field of risk modelling should be to become as humdrum and useful as civil engineering. My optimistic reading of what Danielsson is saying is that a 2x safety factor adequately represents model risk in that problem.
This suggests a pragmatic approach. All models are wrong; some models are useful. Risk modelling would then go forward as civil engineering has: with an attempt at improving the scientific foundations, and with a safety factor thrown in at the end. Civil engineering evolved over the centuries, learning from the cathedrals that collapsed and the bridges that were swept away, continually improving the underlying science and improving the horse sense on what safety factors represent a reasonable tradeoff between cost and safety.
Fundamental criticism: the `Lucas critique of risk management'
When an econometric model finds a reduced form relationship between y and x, this is not a useful guide for policy formulation. Hiding inside the slope parameter of x is the optimisation of economic agents, which reflect a certain policy environment. When policy changes are made, these optimisations change, giving structural change in the slope parameter. When policy changes take place, the old model will break down; the modeller will be surprised at what large deviations from the model have popped up. The Lucas critique is an integral part of the intellectual toolkit of every macroeconomist.
It should be much more prominent in the thinking of financial economists also. The most fundamental criticism of risk models is that they also suffer from the Lucas critique. As Avinash Persaud, Jon Danielsson and others have argued, risk modelling should not only be seen in a microeconomic sense of one economic agent using the model. When many agents use the same model, or when policy makers or clearing corporations start using the model, then the behaviour of the system changes.
As a consequence of this fundamental problem, an ARCH model estimated using historical data is vulnerable to getting surprised by what comes in the future. The coefficients of the ARCH model are not deep parameters; they are reduced form parameters. They suffer from structural breaks when enough traders start estimating that model and using it. The reduced-form parameters are time varying and endogenous to decisions of traders about what models they use, and the kinds of model-based prudential risk systems that regulators or clearing corporations use.
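The point can be made concrete with a simulation. Suppose the ARCH(1) coefficient shifts once enough traders adopt the model; a reduced-form estimate from the pre-break sample is then silently wrong. The parameter values and the crude moment estimator below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_arch1(omega, alpha, n):
    """Simulate an ARCH(1) process: sigma2_t = omega + alpha * r_{t-1}^2."""
    z = rng.standard_normal(n)
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = z[t] * np.sqrt(omega + alpha * r[t - 1] ** 2)
    return r

def estimate_alpha(r):
    """Crude reduced-form estimate of alpha: slope of r_t^2 on r_{t-1}^2."""
    y, x = r[1:] ** 2, r[:-1] ** 2
    return np.cov(x, y)[0, 1] / np.var(x)

# Hypothetical regime change: once enough traders use the model,
# feedback effects shift the ARCH coefficient from 0.2 to 0.4.
pre = simulate_arch1(omega=1e-4, alpha=0.2, n=20_000)
post = simulate_arch1(omega=1e-4, alpha=0.4, n=20_000)

a_pre, a_post = estimate_alpha(pre), estimate_alpha(post)
# The coefficient estimated on the pre-break sample is not a deep
# parameter; it is silently wrong in the new regime.
print(round(a_pre, 2), round(a_post, 2))
```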
In the field of macroeconomics, the Lucas critique was a revolutionary idea, which pretty much decimated the old craft of macro modelling. Today, we walk on two very distinct tracks in macroeconomics. Forecasters do things like Bayesian VAR models where there are no deep parameters, but these models are not used for policy analysis. Policy analysis is done using DSGE models, which try to explicitly incorporate optimisations of the economic agents.
In addressing the problem of endogeneity of risk, or the Lucas critique, we in finance could do as the macroeconomists did. We could retreat into writing models with optimising agents, which is what took macroeconomists to DSGE models (though it took thirty years to get there). One example of this is found in Risk appetite and endogenous risk by Jon Danielsson, Hyun Song Shin and Jean-Pierre Zigrand, 2009.
In the field of macro, the Lucas critique decimated traditional work. But we should be careful to ask about the empirical significance of the problem in finance. While people do optimise, the extent to which reduced-form parameters change (when policy changes take place) might not be large enough to render reduced-form models useless.
It would be very nice if we could now get a research literature on this. I can think of three avenues for progress. First, simulations from the Danielsson/Shin/Zigrand paper could be conducted under different policy regimes, and the reduced-form parameters compared. Second, researchers could look back at natural experiments where policy changes took place (e.g. a fundamental change in the rules for initial margin calculations at a futures clearing corporation) and ask whether this induced structural change in the reduced-form parameters of the data generating process. Third, experimental economics could contribute something useful: it would be neat to set up a simulated market with 100 people trading in it, watch what reduced-form parameters come out, then introduce a policy change (e.g. an initial margin requirement based on an ARCH model), and watch whether and by how much the reduced-form parameters change.
In the field of macro, there is a clear distinction between problems of policy analysis and problems of forecasting, and reduced-form models remain respectable for the latter. Even if the `Lucas critique' problem of risk modelling is economically significant (i.e. the parameters of the data generating process of IBM change significantly once traders and regulators start using risk models), one might hope for an analogous niche where risk modelling is used without systemic consequences. I suppose Avinash Persaud and Jon Danielsson would say that in finance there is no such comparable situation: if a new time-series model is useful to you in forecasting, it is useful to a million other traders, and the publication of the model generates drift in the reduced-form parameters.
Regulators have focused on the risk of individual financial firms and on making individual firms safe. Today there is an increased sense that regulators need to run a capability which looks at the risk of the system and not just one firm at a time. A lot of work is now underway on these questions and it will yield improved insights and regulatory strategies in the days to come.
Why did risk models break down in some situations but not in others?
I find it useful to ask: Why did risk models work pretty well in some fields (e.g. the derivatives exchanges) but not in others (e.g. the OTC credit markets)? I think the endogenous risk perspective has something valuable to contribute in understanding this.
There are valuable insights in the 2006 ECB working paper by Lagana, Perina, von Koppen-Mertes and Persaud. They think of liquidity as made up of two components: `search liquidity' and `systemic liquidity'. Search liquidity is about setting up a good computer-driven market which can be accessed by as many people as possible. Systemic liquidity is about the consequences of endogenous risk. If a market is dominated by the big 20 financial firms, all of whom run the same models and face the same regulatory compulsions, that market will exhibit inferior systemic liquidity.
This gives us some insight into what went right with exchange-traded derivatives: the diversity of players on the exchanges (i.e. many different forecasting models, many different regulatory compulsions) helped to contain the difficulties.
The lesson then, is perhaps this one. If a market is populated with a diverse array of participants, then risk modelling as we know it works relatively well, as an engineering approximation. The big public exchange-traded derivatives fit this bill. We will all, of course, refine the practice of risk modelling, drawing on the events of 2007 and 2008 much as the civil engineers of old learned from spectacular disasters. But by and large, the approach is not broken.
Where the approach gets into trouble is in markets with just a few participants, i.e. `club markets'. A typical example would be an OTC derivative with just a handful of banks as players. In these settings, there is much more interdependence. When a market is populated by a small set of players, all of whom think alike and all of whom are regulated alike, it is a much more dangerous world for the use of risk modelling. The application of standard techniques will run afoul of economically significant parameter instability and acute liquidity risk.
Implications for harmonisation of regulation
Harmonisation of regulation is a popular solution in regulatory circles these days. But if all large financial firms are regulated alike, the likelihood of the failure of risk management could go up. Until we get the tools to do risk modelling under conditions of economically significant risk endogeneity, all we can say is that we do not know how to compute VaR under those conditions. Harmonisation of regulation will give us more of those situations.
In the best of times, there seem to be limits of arbitrage; there is not enough rational arbitrage capital going around to fix all market inefficiencies. With non-harmonised regulation, if a certain firm is constrained by regulation to not take a rational trade, some other firm will be able to do so. The monoculture induced by harmonised regulation will likely make the world more unsafe.
Tuesday, July 21, 2009
Ila Patnaik looks at what is happening in the CMIE Capex database on investment in Gujarat, Maharashtra, West Bengal, Andhra Pradesh and Orissa. The measure of interest is: the time-series of the share of the state in overall projects `under implementation' in the CMIE database. Each of these states holds an interesting story.
Also see Kunal Sen in Financial Express on the evolution of governance at the state level in India.
Monday, July 20, 2009
- Bibek Debroy on Nandan Nilekani's appointment.
- Manish Sabharwal on the five big pending issues in the `employability, education, employment' agenda.
- Anders Aslund tells the story of East Europe in the crisis.
- Percy Mistry in Financial Express before the budget speech, and after it.
- Is China Really an East Asian success story? by John Lee.
- Peter Garnham reports on progress that China is making on the internationalisation of their currency. And, see an editorial in Financial Express on dreams of becoming a reserve currency.
- Stefan Gerlach, Alberto Giovannini, Cedric Tille, and Jose Vinals have written the Geneva Reports on the World Economy 10 on the contemporary topic of monetary policy, about how the global crisis has changed our views about the appropriate structure of monetary policy. Here is the report and here is a nontechnical summary.
- Watch me talk about a few contemporary things on CNBC-TV18.
- Richard McGregor in Financial Times on how not to run a financial system.
- Watch Jeff Hammer on Ila Patnaik's TV show.
- Thorsten Beck, Berrak Buyukkarabacak, Felix Rioja, and Neven Valev examine the evidence on the impact of financial sector development in a recent voxEU column. The quick summary: better financing of firms matters, but more credit to individuals does not.
I wrote a piece in Financial Express today about current developments in the economy.
This uses the website which we recently set up, with seasonally adjusted data for a few Indian monthly time-series. The data was updated this morning.
If indeed the debt management function is taken away from RBI, I don't think it will be a big loser as now it manages some 25 functions and each can be independently run by an institution. In the past, at least five institutions have been carved out of RBI (NHB, Nabard, Exim Bank, Industrial Development Bank of India (the earlier avatar of IDBI Bank Ltd) and Unit Trust of India) and the debt office will be yet another such body.
RBI should not bother much about losing its debt management function, and instead focus on changing the organization to be in sync with the new global order.
Thursday, July 16, 2009
- Lant Pritchett in Indian Express, 6 July 2009
- Jeff Hammer in Financial Express: 6 July 2009, and on 9 July 2009.
Lant and Jeff are the best deep thinkers about how to make the Indian government function better.
Tuesday, July 14, 2009
In order to make progress on doing macroeconomics in India, one weak link has been business cycle measurement. This, in turn, requires access to a wide range of seasonally adjusted time-series. In most countries, the infrastructure of seasonally adjusted data is produced by the statistical system, but in India this has not come about.
Seasonally adjusted series are particularly important in tracking current developments in the economy. The familiar year-on-year change is the moving average of the latest twelve monthly changes. In order to know what is happening in the economy, it is better to look at recent months, rather than looking back 12 months. The familiar y-o-y changes are a sluggish indicator of what is happening. Month-on-month changes are more informative: but this requires seasonal adjustment.
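A toy example of why the annualised month-on-month change of a seasonally adjusted series is more informative; the series below is made up:

```python
# Hypothetical seasonally adjusted index: flat for a year, then
# growing 1% per month for six months (a turnaround).
series = [100.0] * 12
for _ in range(6):
    series.append(series[-1] * 1.01)

latest, year_ago, prev = series[-1], series[-13], series[-2]

yoy = latest / year_ago - 1            # familiar year-on-year change
mom_saar = (latest / prev) ** 12 - 1   # month-on-month change, annualised

# The yoy number still averages in the flat months and understates
# the turnaround (~6%); the annualised m-o-m change sees the new
# 1%-a-month pace immediately (~12.7%).
print(round(yoy, 3), round(mom_saar, 3))
```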
We have initiated some computation and release at cycle.in.
At present, we have a dataset with seasonally adjusted levels for a few time-series. We will be updating this every Monday. At the above URL, you get a sense of what is happening with month-on-month changes of seasonally adjusted data in these series.
In the spirit of creating public goods, we make it easy for you to embed these graphs into your work products. We also have a .csv file with data for levels which can be the foundation of further work.
This will be useful in tracking current developments in the economy, and also make possible research in macroeconomics, which critically requires seasonally adjusted data.
We hope this is useful. Please use the comments on this blog post to give us feedback.
Monday, July 13, 2009
Thursday, July 09, 2009
The first graph in The setting for the budget speech tells the story of the importance of private corporate investment. The text there says:
The economic reforms of the early 1990s got private corporate investment up from the region of 4% to a peak of 10%. This six percentage point increase of private corporate investment generated a strong business cycle expansion.
The decline of private corporate investment back to values like 5% was the essence of the business cycle downturn of 2001-2002. After this, we have seen an immense expansion of private corporate investment -- all the way to 16%. This is the essence of the benign business cycle conditions of recent years.
The most important question that will shape business cycle conditions in 2009-10 and 2010-11 is: by how much will private corporate investment decline? It is important to see that today, each percentage point of GDP is Rs.55,000 crore, so we are discussing massive numbers. If a decline of private corporate investment takes place from 16% to 10%, then this is a reduction of demand by 6 percentage points of GDP or Rs.330,000 crore. When private corporate investment goes down, the overall impact on GDP is magnified through multiplier effects.
The shocks to GDP that are generated by these fluctuations of private corporate investment are so large that monetary or fiscal policy, as presently organised in India, can simply not counteract them. The only place where public policy can make a difference to this is to identify the policy instruments through which private corporate investment can flourish.
Some say the budget has done a fiscal stimulus by expanding the fiscal deficit by 0.8 percentage points. Others say that in tough times, it's not correct to engage in deficit reduction. These arguments have to confront the irrelevance of the direct impact of these fiscal numbers. Whether the fiscal deficit goes up by 0.8 percentage points or it goes down by 2 percentage points, these are tiny numbers when compared with the real prize: private corporate investment.
If we play things right, and persuade domestic and foreign investors that India is on the right track, this could perhaps add five to ten percentage points to the investment/GDP ratio -- which would drown out even a 2 percentage point of GDP reduction in the fiscal deficit, a reduction that (in my opinion) is itself a critical element of the policy stance required to reassure the private sector that India is on the right track. And if we play things wrong, a five to ten percentage point of GDP reduction in private corporate investment will drown out a 0.8 percentage point of GDP fiscal stimulus.
The key message: focus on private corporate investment, not on either monetary or fiscal stimulus. India does not have the institutional capability to use monetary or fiscal policy to do business cycle stabilisation on the scale that we are seeing in the West. Let us not transplant the intuition and rhythm of policy that we see in the US and the UK into our setting, where we have dysfunctional institutions. In the medium term, we need to do fiscal, financial and monetary institution building so as to have that kind of apparatus. In the short term, the only real lever we have, that can stabilise the economy, is the outlook for economic reforms.
What determines private corporate investment?
Suppose you spend Rs.100 to build a factory, and suppose (for the moment) that this is all-equity financing. Suppose you take the company public and the market gives you a valuation of Rs.200. In other words, the hard work that you put in to build the business using Rs.100 of risk capital has created wealth of Rs.100.
This ratio -- the market price of a project divided by the cost of building it -- is called Tobin's `Q'. When Q > 1, CEOs and entrepreneurs would feel like building projects. The bigger Q is, when it's beyond 1, the bigger the incentive to engage in investment. When Q is near 1 or below it, the incentive to invest withers away. The sword of entrepreneurship is guided by the eyes that seek out high valuations.
The price to book ratio of the broad market is a poor man's Tobin's Q. It is a poor estimator because while the numerator (market cap of Nifty) is correct, the denominator (the replacement cost of building the Nifty companies) is only roughly approximated by the book value of the Nifty companies.
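In code, the Q calculation from the example above is just a ratio; the point is what the numerator and denominator proxy for. The price-to-book figures below are hypothetical:

```python
def tobins_q(market_value, replacement_cost):
    """Tobin's Q: market price of a project divided by the cost of building it."""
    return market_value / replacement_cost

# The example from the text: Rs.100 to build, the market values it at Rs.200.
q = tobins_q(200, 100)  # Q = 2: building such projects creates wealth

# The broad market's price-to-book ratio as a rough proxy, with
# made-up numbers: aggregate market cap over aggregate book value.
# Book value only loosely approximates replacement cost, which is
# why this is a "poor man's" Q.
pb_proxy = tobins_q(market_value=5_000_000, replacement_cost=3_200_000)
print(q, round(pb_proxy, 2))
```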
This argument emphasises the clear and direct link between stock prices and the investment optimism of CEOs. When stock prices are high, CEOs are more likely to build factories, and vice versa. In other words, the path to high private corporate investment runs through high stock prices.
The chatter on television and newspapers frequently says "Oh, but a budget must not only be judged by what it does to stock prices" or "Oh, but the stock market does not understand these things". I would emphasise two points. First, whether the stock market is right or wrong, it is the driver of how CEOs behave. So regardless of what we think about the quality of information processing of the stock market, it matters. Whether we like it or not, the stock market drives the resource allocation. Second, there is an enormous academic literature which gives us reason to respect the sophistication and capability of the stock market in information processing. The future is of course un-knowable, and every forecast of the future will have a substantial zone of error. But there are few better estimators of what might come, than the pooling of knowledge and opinion of millions of financial market participants in an open, transparent setting.
So what are stock prices saying?
The graph (click on it to see it more clearly) superposes Nifty and the S&P 500 indexes. Both indexes start at 100 on 1 May 2009. The S&P 500 index shows little movement, which suggests that the dominant story lies in domestic events.
When the election results came out, Nifty achieved values like 118 in the graph, i.e. 18% up compared with pre-election conditions. This was good news for Tobin's Q and thus private corporate investment. It was the best news possible for the outlook for business cycle conditions.
Why did we feel optimistic at the time? It was felt that two scenarios had been avoided: the scenario of confused / weak governance with attendant political problems, and the scenario of a UPA-1 where economic reforms were blocked by the CPI(M). The hope that UPA-2 would do better gave us higher stock prices, going all the way to 27% up compared with pre-election levels.
We've had a sharp decline, from an indexed value of 127 to an indexed value of 112; a decline of 12%. In other words, a bit more than half the gains have been erased. This is not good for Tobin's Q, hence not good for the outlook for private corporate investment, and hence not good for the outlook for business cycle conditions.
Worldwide, there has been a political shift away from left parties given that voters are nervous about economics and want top quality leadership in terms of economic policy. When the going gets tough, voters turn to The Man. In India also, this should be seen in the context of a long-term secular decline of the vote share of the Left: both trend and cycle are going against the left. A key reason why the UPA-1 won the election was the benign business cycle conditions that prevailed 2004-2009, and the fiscal bounty that came with it. Conversely, things will be very painful in political terms if business cycle conditions go back to 2001-02 levels. It is odd for India to have shifted towards giving more power to the left within the Congress at a time like this.
Early in this blog post, I show the most recent available data which proxies for private corporate investment: the monthly time-series for IIP capital goods and for imports of capital goods. Both these are quite dated. In a few months, we will know what happened to the month-on-month changes of these two series in July and August, which would reflect the consequences of the budget speech.
Wednesday, July 08, 2009
Monday, July 06, 2009
These articles emphasise that the dominant story of Indian business cycle fluctuations is the situation with private corporate investment. When this analysis was written (April and May 2009), the drop in private corporate investment was only a conjecture. Now some data shows that there is indeed a problem. Here are the two most interesting measures of investment activity, using monthly data. In both cases, I show the average of the four most recent values of the seasonally adjusted annualised rate (SAAR). This is similar to the familiar year-on-year growth rate of monthly data, with one big difference: the yoy growth rate averages the latest 12 values, while here we average the latest 4 months so as to pick up the recent action:
|Month|IIP capital goods|Capital goods imports|
This data shows that there is a significant threat of a substantial dropoff in private corporate investment.
Fiscal, financial and monetary institutional reform
FM says he will `return to the FRBM target for fiscal deficit at the earliest and as soon as the negative effects of the global crisis on the Indian economy have been overcome'. Apart from that, there was nothing on fiscal, financial and monetary institutional reform. Pranab Mukherjee said:
Never before has Indira Gandhi's bold decision to nationalise our banking system exactly 40 years ago - on 14th of July, 1969 - appeared as wise and visionary as it has over the past few months. Her approach continues to be our inspiration even as we introduce competition and new technology in this sector.
Put together, I did not see progress on fiscal, financial and monetary institution building.
Financing of the government
There was no statement on using sale of government assets in order to pay down debt.
The GST is to be implemented from 1 April, 2010. I do get nervous given the immense complexity of that effort and the lack of accomplishment on the ground.
There are five major bad taxes in India: STT, cesses, customs, octroi and stamp duty. The budget speech tinkered with none of these. There was an `abolition' of the commodities transaction tax (which had never been levied anyway); this leaves the taxation of financial transactions distortionary, with some kinds of transactions taxed and others not. The `fringe benefit tax' was abolished.
There was no movement towards fiscal austerity that I could discern.
Put together, I did not see progress on financing of the State.
Core public goods
Core public goods are the genuine business of the State. There seem to be substantial increases of expenditure on defence and home. This might suggest that the fraction of public expenditure on core public goods might have gone up. I am, so far, not able to tell whether this change is significant.
There seems to be more money being spent on infrastructure. There is little evidence of institutional reform. The Ministry of Finance seems to be keen on building IIFCL, which seems worrisome. It is not clear that IIFCL will not suffer the fate of IDBI / IFCI / etc.
The spending on Sarva Shiksha Abhiyan (SSA) has not risen in nominal terms, which is good, but a new Madhya Shiksha Abhiyan has been created. If this ends up being run like SSA, then we'll know that there is little interest amongst politicians in actually getting India's children educated.
There are good noises about fertiliser and oil subsidies, but no action.
The role of the budget speech
Maybe we do wrong in asking for a significant workplan in the highlights of the budget speech. Maybe a lot of good things will get done even though they were not announced. I have an article in Financial Express titled Which type of budget speech is this?.
Here is a spreadsheet (.ods file) where I have a few years of data, with some value added, from `budget at a glance'. This has no corrections for the off-balance sheet stuff.
Tax revenues were at 9.17% of GDP in 2007-08. These dropped to 8.59% of GDP in 2008-09 (RE). The budget projection for 09-10 wisely places this number at 8.07% of GDP.
Non-tax revenues are projected to go up a bit: from 1.77% of GDP in 08-09 to 2.39% of GDP in 09-10. This is primarily on the back of revenues from the 3G spectrum auction.
Put together, revenue receipts are budgeted at 10.45% of GDP compared with 10.36% last year and 11.3% the year before. These projections seem reasonable to me.
Fiscal stress + gloomy revenue projections should have led to belt-tightening on expenditure. This did not happen, partly owing to the 6th pay commission.
Non-plan expenditure rose by 21.8% last year and is projected to rise by 12.6% this year. It will go from 10.59% of GDP in 07-08 to 11.83% of GDP in 09-10.
Interest payments to GDP - a key marker of fiscal stress - continue to be in troublesome territory, going from 3.57% in 07-08 to 3.84% in 09-10. This is despite the dramatic collapse in inflation, which should have made government borrowing much cheaper.
Plan expenditure is growing exuberantly: from 4.28% of GDP in 07-08 to 5.53% in 09-10.
With sombre revenues and a good deal of spending, we have dire deficits. The revenue deficit jumped from 1.1% in 07-08 to 4.45% last year and is budgeted at 4.81% the coming year. In other words, there is not even an attempt at fiscal correction.
The fiscal deficit was at 2.65% of GDP in 07-08; this went up to 6.02% last year and is budgeted at 6.82% for 09-10.
And finally, we switched around from a primary surplus of 0.92% in 07-08 to a primary deficit of 2.47% last year and are budgeted to have another big primary deficit of 2.98% in 09-10.
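The identity tying these numbers together is: primary deficit = fiscal deficit minus interest payments. A minimal sketch checking the figures quoted above (all numbers in percent of GDP; 08-09 interest is not quoted in the text, so only 07-08 and 09-10 are checked):

```python
# Primary deficit = fiscal deficit - interest payments (all % of GDP).
# Figures are those quoted in the text above.
fiscal_deficit = {"07-08": 2.65, "09-10": 6.82}
interest = {"07-08": 3.57, "09-10": 3.84}

for year in fiscal_deficit:
    primary = fiscal_deficit[year] - interest[year]
    print(year, round(primary, 2))
# 07-08: -0.92, i.e. a primary surplus of 0.92%; 09-10: 2.98.
# Both match the numbers in the text.
```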
There is a caveat on all these numbers when expressed as percent of GDP. Nominal GDP is projected to be up in 09-10 by 8.35% when compared with the previous year. It is possible to think of combinations of real growth and inflation which will get this, but I would have been happier with a somewhat lower projection.
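Nominal growth decomposes multiplicatively into real growth and inflation. A sketch of some combinations roughly consistent with the 8.35% projection (the specific pairs are illustrative, not the government's assumptions):

```python
# Nominal growth = (1 + real) * (1 + inflation) - 1.
# Illustrative (real, inflation) pairs that roughly deliver 8.35% nominal growth.
for real, infl in [(0.062, 0.02), (0.05, 0.032), (0.04, 0.042)]:
    nominal = (1 + real) * (1 + infl) - 1
    print(f"real={real:.1%} inflation={infl:.1%} -> nominal={nominal:.2%}")
```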
Don't I have any good news? I do. At NSE, derivatives on Nifty did turnover of Rs.707 billion or $14.7 billion. And, currency futures at NSE did turnover of $1.2 billion. So we're in good shape on having a strong equity market, and we're learning how to do currency trading also.
Sunday, July 05, 2009
It reminded me of the glorious melting pot that is Bombay. The people in the show speak in Hindi, Gujarati, Marathi, Malayalam, English. I was surprised at how little of the language of the terrorists I could understand. I feel I do better at parsing the local language when I'm in Pakistan.
Saturday, July 04, 2009
Microsoft has long faced a credibility gap in getting into mission-critical, enterprise settings. One initiative it embarked on was the `TradElect' system, which did trading at the London Stock Exchange. This trading system was built by Microsoft and Accenture, who were keen to prove that it could work, and it used a series of Microsoft technology components. Microsoft used it in ad campaigns [image credit], claiming that if they could handle the London Stock Exchange then they were ready for serious applications [example, until they take down the page].
This is not as big a deal as it might seem. The London Stock Exchange is a famous brand name, but it is not a particularly big exchange by world standards; that is, it is not a really demanding IT problem. Here's some data, from the June newsletter of the World Federation of Exchanges. At page 39, they show the number of trades through order matching at all member exchanges for Jan-May 2009, a period of five months. I have added one column where I translate this into trades per second, under the assumption that there were 110 trading days in these five months and trading took place for six hours a day.
| Exchange | Million trades (Jan-May 2009) | Estimated trades/s |
| NYSE Euronext (Europe) | 70 | 29 |
| Hong Kong Exchanges | 53 | 22 |
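The trades-per-second column follows from simple arithmetic under the assumptions just stated; a minimal sketch:

```python
# Convert trades over Jan-May 2009 into average trades per second,
# assuming 110 trading days and six trading hours per day.
SECONDS = 110 * 6 * 3600  # 2,376,000 seconds of trading

def trades_per_second(million_trades):
    return million_trades * 1e6 / SECONDS

print(round(trades_per_second(70)))  # NYSE Euronext (Europe): 29
print(round(trades_per_second(53)))  # Hong Kong Exchanges: 22
```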
This shows NYSE and NASDAQ at 590 and 491 trades per second, which is a challenging IT problem. The two big Indian exchanges -- NSE (rank 4) and BSE (rank 7) -- are also difficult problems at 253 and 93 trades per second.
These are averages for the system load; in this business there is an extreme peak-to-average ratio. E.g., NSE routinely exceeds 1000 trades/s and occasionally does a lot more. There are days when half of the day's activity happens in the last 30 minutes. So the IT challenge is much bigger than the average trades/s suggests.
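To see how extreme the peak can be: if half of a day's trades land in the last 30 minutes of a six-hour session, the rate in that window is six times the daily average. A sketch, using the "half in 30 minutes" scenario from the text and NSE's average of 253 trades/s quoted above:

```python
# If a fraction f of the day's trades lands in a window of w minutes
# out of a d-minute session, that window runs at (f * d / w) times
# the daily average rate.
day_minutes = 6 * 60    # six-hour trading day, as assumed above
peak_fraction = 0.5     # half the day's activity...
peak_minutes = 30       # ...in the last 30 minutes
multiplier = (peak_fraction * day_minutes) / peak_minutes
print(multiplier)       # 6.0: the peak window runs at 6x the daily average

# Applied to NSE's average of 253 trades/s:
print(253 * multiplier)  # 1518.0 trades/s in that window
```

This is consistent with the observation that NSE routinely exceeds 1000 trades/s.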
In this ranking, London Stock Exchange is not that big; it's ranked 9th in the world and does an estimated 30 trades per second on average. So it was a good choice for a certain kind of vendor who tries to make a point using a toy problem which does not sound like one. When sizing an IT system, it is peak load that matters, of course. But the ratio of peak to average is likely to be similar for all the above exchanges. Hence, NSE is likely to be a much bigger problem than LSE whether you measure by average load (as shown above) or by peak load.
The story seems to have gone badly wrong for Microsoft. LSE consistently failed to match rivals like Chi-X, which run Linux, in becoming a credible choice for algorithmic trading. Then there was a day when the trading system collapsed (9 Sep 2008).
This played a role in the CEO of LSE, Clara Furse, getting sacked. The new CEO, Xavier Rolet, is said to have decided to dump TradElect. Here's the story, by Steven J. Vaughan-Nichols.
To be sure, complex engineering projects can fail for many reasons. But it's ironic that the marquee adoption at an exchange, that was advertised by Microsoft as proof that they had arrived, should have flamed out like this despite direct staff involvement from Microsoft.
Friday, July 03, 2009
Comment by Anonymous on "What if India had a Hong Kong":
Interesting. Have a look:
Comment by bagdu on "Measuring the consequences for developing countries, of open access to the literature":
On reading this post, I contacted CEPR's coordinator Nicole Hunt, who told me that CEPR papers are free as well, like NBER's. Here is the gist of that communication:
- One needs to provide one's postal address and email address to CEPR, by sending a mail to CEPR and creating a profile at CEPR's website. This will enable free access to their papers.
- I suggested that CEPR make this process user-friendly, along the lines of NBER's; this suggestion is under consideration.
- It seems this is not a defined process yet, and access is granted on an ad-hoc basis. This being the weekend, I have not yet got access to their papers. If you create a profile at CEPR's website and try downloading a paper, the download fails and you are shown an email address to contact. Follow up at that address and you might get access. I will confirm here once I get it myself.
Comment by bagdu on "Measuring the consequences for developing countries, of open access to the literature":
This is to confirm that further to creating a profile at CEPR's website and my communication (as an individual from a developing country) with CEPR I have been granted free access to CEPR's papers.
One feels a little lump in the throat when one gets these high quality materials free by virtue of being a citizen of a developing country.
This is one privilege one would like to let go of. Let us quickly become developed!!