Monday, September 01, 2014

What does algorithmic trading do to market quality?

by Nidhi Aggarwal and Susan Thomas.

Electronics unsettled the world of organised financial markets when the trading floor and dealers became obsolete. In the late 1980s and early 1990s, this was the subject of great debate. `Program trading' and `portfolio insurance' were believed to have exacerbated the crash of October 1987. Many people believed that human market makers did things that computerised order matching could not. Millions of jobs were on the line.

When DTB won back the long bond contract from LIFFE by replacing the trading floor, the writing was on the wall. For some time, it was still claimed that electronic order matching exchanges were good for some things, like equities and derivatives, but not for the bond market and the currency market. That claim has broken down in the last decade; electronic order matching on exchanges has become important in these areas also.

The debate of the day is now about high frequency trading (HFT) and algorithmic trading (AT). Once organised financial trading is done electronically, it becomes possible to set up a man-machine hybrid, where a human controls a computer program which does the actual work of looking at information and sending back orders. This man-machine hybrid is faster than a human, is less error-prone than a human, and costs less than a human. Once again, millions of jobs are on the line.

Several concerns have, however, been expressed about whether an HFT / AT world is socially desirable. Critics argue that high levels of HFT / AT do nothing to improve market quality, exacerbate market volatility, and induce `flash crashes'. There are fears that liquidity provision in the AT world is transient: it is argued that in times of market stress, algorithms step away from this essential function, and instead become liquidity demanders, worsening the volatility in the markets and creating `liquidity black holes'. These are non-trivial concerns; many regulators have started exploring the extent to which the new AT/HFT world has new kinds of market failures, and the kinds of regulatory interventions that might be appropriate in that environment.

In the last five years, myriad papers have been written on the impact of HFT / AT on market quality. Most of these papers suffer from three flaws:
  1. In the US, the market structure is very fragmented across a large number of trading venues. Hence, observing HFT/AT activity at any one market venue gives an incomplete depiction of either the treatment (HFT/AT) or the outcome (market quality at the level of the whole country). The US is not a good laboratory to study AT/HFT.

  2. Most researchers do not observe a flag for each order or each trade about whether it was HFT/AT. A variety of proxies have been constructed by researchers, but all of these are fairly imprecise.

  3. Algorithmic traders self-select into being active in certain kinds of securities. This induces selection bias and hampers our ability to claim that AT/HFT has caused the observed changes. More generally, conventional regressions -- where a market quality measure is regressed on a bunch of explanatory variables -- are riddled with endogeneity bias and other statistical problems.

In a recent paper we make substantial progress on all three problems:

  1. We observe data from the Indian `National Stock Exchange' (NSE), which was the world's #1 exchange by number of trades in 2013. NSE accounts for over 75% of trading; there is no OTC trading and there are no dark pools. This yields an ideal, clean setting.

  2. NSE's data files precisely tag each order and the counterparties of each trade with an AT/HFT flag so there is no imprecision in identification.

  3. That leaves the problem of endogeneity bias. We utilise an exogenous event -- the launch of co-location at NSE in 2010. The effect of a treatment is best observed when the outcomes of the individuals who receive the treatment (the `treated') are compared with those of the individuals who do not (the `controls'), where the two sets are otherwise similar in all other characteristics. We follow this approach and use matching techniques to identify stocks that are otherwise similar, but where one set saw a significant surge in the level of AT activity after the introduction of co-location while the other did not. To ensure that days in the period prior to co-location (2009) and after it (2012-13) are comparable despite differences in macroeconomic conditions, we match dates on the volatility of the market index (Nifty). The matching on stocks, along with the matching on dates, allows us to set up a matched difference-in-difference analysis through which we can measure the causal impact of AT; a rough sketch of this setup follows below.
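
To make the design concrete, here is a minimal sketch, in Python, of a matched difference-in-difference setup of this kind. Everything specific in it is an assumption for illustration: the input file, the column names (stock, post, at_share, mcap, spread), the median split into treated and controls, and the one-dimensional nearest-neighbour match on pre-period market capitalisation. The paper's actual matching variables and estimator may well differ, and the matching of dates on Nifty volatility is omitted for brevity.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel with one row per stock per day; post = 1 after co-location.
    panel = pd.read_csv("stock_day_panel.csv")

    # 1. Change in AT intensity around co-location, stock by stock.
    pre_at = panel[panel.post == 0].groupby("stock").at_share.mean()
    post_at = panel[panel.post == 1].groupby("stock").at_share.mean()
    delta_at = (post_at - pre_at).rename("delta_at")

    # 2. Nearest-neighbour matching: pair each "treated" stock (large rise in AT)
    #    with an otherwise similar "control" stock, here matched on pre-period size.
    pre_mcap = panel[panel.post == 0].groupby("stock").mcap.mean().rename("mcap")
    stocks = pd.concat([delta_at, pre_mcap], axis=1).dropna()
    stocks["treated"] = (stocks.delta_at > stocks.delta_at.median()).astype(int)
    treated = stocks[stocks.treated == 1]
    controls = stocks[stocks.treated == 0]
    pairs = {s: (controls.mcap - row.mcap).abs().idxmin()  # closest control by size
             for s, row in treated.iterrows()}
    matched = set(pairs) | set(pairs.values())

    # 3. Difference-in-difference on the matched sample: the coefficient on
    #    treated:post is the estimated causal effect of higher AT on the outcome.
    sample = panel[panel.stock.isin(matched)].merge(
        stocks[["treated"]], left_on="stock", right_index=True)
    did = smf.ols("spread ~ treated * post", data=sample).fit(
        cov_type="cluster", cov_kwds={"groups": sample["stock"]})
    print(did.params["treated:post"])

Matching dates on Nifty volatility, as described above, would add one more matching step of the same flavour on the time dimension.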

We find that the adoption of AT was a gradual process. The community took nearly a year and a half after NSE started co-location before adopting it in a big way.

Figure: Evolution of AT intensity before and after co-location

Further, the adoption of AT was not uniform across stocks. This animated visualisation shows the fraction of traded volume due to AT for every large security traded on NSE from January 2009 to August 2013. Each circle is a security, with its size capturing the market capitalisation of the firm. Large market capitalisation firms all saw high AT adoption from the start to the end of the period. But AT adoption was highly varied among the smaller market capitalisation securities: some got high levels of AT and some got low levels. This gives us the opportunity to compare `treatment' stocks (which ended up with high AT) against `controls' (similar stocks which ended up with low AT).
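
As a concrete illustration of how such an AT-intensity series can be built from the exchange's tagged data, the fragment below computes, month by month, the fraction of each stock's traded volume that carries an AT flag. The file name and the column names (stock, timestamp, volume, at_flag) are hypothetical, and the paper's exact definition of AT intensity (for instance, whether one or both counterparties must be algorithmic) may differ.

    import pandas as pd

    # Hypothetical trade-level data; at_flag = 1 if the trade is tagged as
    # algorithmic in the exchange's data files, 0 otherwise.
    trades = pd.read_csv("trades.csv", parse_dates=["timestamp"])
    trades["month"] = trades["timestamp"].dt.to_period("M")

    at_share = (
        trades.assign(at_volume=trades["volume"] * trades["at_flag"])
              .groupby(["stock", "month"])[["volume", "at_volume"]]
              .sum()
              .eval("at_volume / volume")   # fraction of traded volume due to AT
              .rename("at_share")
    )
    print(at_share.head())

A per-stock series of this kind is what the treated versus control classification in the matching step rests on.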

Our analysis yields the following results:

Market quality measure                  Estimated coefficient
--------------------------------------------------------------
Transactions costs
  Spread                                            -0.35
  Impact cost                                       -0.79
Depth
  Total depth                                        0.33
  Top 1 depth                                        0.16
  Top 5 depth                                        0.33
  Order imbalance                                  -13.87
Liquidity risk
  IC volatility                                     -0.02
Volatility
  Realised volatility                               -2.65
  Range                                            -16.90
Efficiency
  Variance Ratio                                    -0.03
Crash risk
  Price movements in excess of 5%                   -2.39
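
Each entry above can be read as the coefficient on the interaction of the treatment indicator and the post-co-location indicator in a matched difference-in-difference regression for that market quality measure. A stylised version of such a specification (our notation; the paper's exact specification may include further controls) is

$$ y_{it} = \alpha + \beta \, \mathrm{Treated}_i + \gamma \, \mathrm{Post}_t + \delta \, (\mathrm{Treated}_i \times \mathrm{Post}_t) + \epsilon_{it} $$

where $y_{it}$ is the market quality measure for stock $i$ on day $t$, and $\delta$ is the estimated coefficient reported in the table.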

To summarise, our results suggest that higher AT has caused:
  • Transactions costs to drop.
  • Available liquidity to shift closer to the touch.
  • Total available depth to increase.
  • Orders on the buy side and the sell side to become more closely aligned.
  • Intra-day price volatility to fall.
  • Intra-day volatility of transactions costs to fall.
  • Intra-day prices to adjust faster.
  • The incidence of extreme prices outside the 5 percent band to fall.
These results do not support the skeptical view about algorithmic trading. There is no evidence of more mini flash crashes, or of greater liquidity risk, or of a more jittery and volatile market. On the contrary, greater algorithmic trading improves market quality.

There is one class of concerns which is not addressed in this work: the problems of transacting large quantities. The evidence about transactions costs that is visible in the order book is limited to small transactions (i.e. impact cost). Big orders are dribbled out through complex algorithms. We do not know whether transactions costs got much worse for institutional investors, as some fear. 
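
To see why measures derived from the visible order book only speak to small transactions, consider the toy impact cost calculation below, in which a buy order walks up the visible ask side of the book. The book snapshot and the order sizes are made up for illustration; they are not from the paper's data.

    # Toy impact cost: average execution price of a marketable buy order,
    # relative to the mid quote, using only the visible limit order book.
    def impact_cost(asks, bids, qty):
        """asks, bids: lists of (price, quantity), best prices first."""
        mid = (asks[0][0] + bids[0][0]) / 2.0
        remaining, cost = qty, 0.0
        for price, avail in asks:        # walk up the ask side of the book
            take = min(remaining, avail)
            cost += take * price
            remaining -= take
            if remaining == 0:
                break
        if remaining > 0:
            return None                  # visible book cannot absorb the order
        avg_price = cost / qty
        return 100.0 * (avg_price - mid) / mid   # impact cost in per cent

    asks = [(100.10, 500), (100.20, 800), (100.50, 1000)]
    bids = [(100.00, 600), (99.90, 900)]
    print(impact_cost(asks, bids, 300))      # small order: a few basis points
    print(impact_cost(asks, bids, 50000))    # large order: None, cost is not visible

A large institutional order is instead dribbled out over time by execution algorithms, so its true cost depends on how the book refills -- something a snapshot of visible depth cannot reveal.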

2 comments:

  1. Hi, if you have counted auto-triggered trades as algo-trading, it will give a wrong picture. Auto buy or auto sell orders triggered by price variables entered through human judgement are not the main problem creators. These have the advantages that you describe above.

    The main problem makers are algorithms that make the judgement on price. The strength of these algorithms depends on the algorithms themselves. There are a finite number of them -- mostly quant based, i.e. based on technical analysis. The problem is that at times these algorithms trigger a self-fulfilling spiral. Since most of them are similar, they in fact feed off each other, leading to catastrophic crises.

    I hope the algo-trading analysis focuses on this.

  2. Hi. Could you please tell us where you got the NSE data on AT and non-AT trades? Also, did you run a regression to find how the different variables depend on AT?

