Wednesday, August 10, 2011

Algorithmic trading -- the positive side

In researching a forthcoming article, I happened upon this recent empirical study in the Journal of Finance looking at some of the benefits of algorithmic trading. I've written before about natural instabilities inherent to high-frequency trading, and I think we still know very little about the hazards presented by dynamical time-bombs linked to positive feedbacks in the ecology of algorithmic traders. Still, it's important not to neglect some of the benefits algorithms and computer trading do bring; this study highlights them quite well.

This paper asks the question: "Overall, does AT (algorithmic trading) have salutary effects on market quality, and should it be encouraged?" The authors claim to give "the first empirical analysis of this question." The ultimate message is that "algorithmic trading improves liquidity and enhances the informativeness of quotes." In what follows I've given a few highlights -- some points being obvious, others less obvious:
From a starting point near zero in the mid-1990s, AT (algorithmic trading) is thought to be responsible for as much as 73% of trading volume in the U.S. in 2009.
That's no longer news, of course. By now, mid-2011, I expect that percentage has risen to closer to 80%.

Generally, when I think of automated trading, I think of two activities: market makers (such as GETCO) and statistical arbitrage high-frequency traders, of which there are many (several hundred) firms. But this article rightly emphasizes that automated trading now runs through the markets at every level:

There are many different algorithms, used by many different types of market participants. Some hedge funds and broker-dealers supply liquidity using algorithms, competing with designated market-makers and other liquidity suppliers. For assets that trade on multiple venues, liquidity demanders often use smart order routers to determine where to send an order (e.g., Foucault and Menkveld (2008)). Statistical arbitrage funds use computers to quickly process large amounts of information contained in the order flow and price moves in various securities, trading at high frequency based on patterns in the data. Last but not least, algorithms are used by institutional investors to trade large quantities of stock gradually over time.
One very important point the authors make is that it is not at all obvious that algorithmic trading should improve market liquidity. Many people seem to think this is obvious, but there are many routes by which algorithms can influence market behaviour, and they work in different directions:
... it is not at all obvious a priori that AT and liquidity should be positively related. If algorithms are cheaper and/or better at supplying liquidity, then AT may result in more competition in liquidity provision, thereby lowering the cost of immediacy. However, the effects could go the other way if algorithms are used mainly to demand liquidity. Limit order submitters grant a trading option to others, and if algorithms make liquidity demanders better able to identify and pick off an in-the-money trading option, then the cost of providing the trading option increases, and spreads must widen to compensate. In fact, AT could actually lead to an unproductive arms race, where liquidity suppliers and liquidity demanders both invest in better algorithms to try to take advantage of the other side, with measured liquidity the unintended victim.
This is the kind of thing most participants in algorithmic trading do not emphasize when raving about the obvious benefits it brings to markets.

However, the most important part of the paper comes in an effort to track the rise of algorithmic trading (over roughly a five-year period, 2001-2006) and to compare this to changes in liquidity. This isn't quite as easy as it might seem, because algorithmic trading is just trading, and not obviously distinct in market records from other trading:
We cannot directly observe whether a particular order is generated by a computer algorithm. For cost and speed reasons, most algorithms do not rely on human intermediaries but instead generate orders that are sent electronically to a trading venue. Thus, we use the rate of electronic message traffic as a proxy for the amount of algorithmic trading taking place.
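To make the proxy concrete, here is a minimal sketch in Python. The specific normalization -- messages per $10,000 of trading volume -- is my own illustrative choice, not necessarily the paper's exact measure; the idea is simply that more message traffic per dollar traded suggests more algorithmic activity.

```python
def at_proxy(num_messages, dollar_volume):
    """Electronic messages per $10,000 traded -- a rough proxy
    for algorithmic trading activity in a stock over some period."""
    if dollar_volume <= 0:
        raise ValueError("dollar volume must be positive")
    return num_messages / (dollar_volume / 10_000)

# Hypothetical daily figures for one stock:
print(at_proxy(num_messages=250_000, dollar_volume=50_000_000))  # 50.0
```

Under this convention, a rising proxy over time for a fixed universe of stocks would indicate growing algorithmic participation, which is how the authors read the trend in message traffic.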
The figure below shows this data, recorded for stocks with differing market capitalization (sorted into quintiles, Q1 being the largest fifth). Clearly, the amount of electronic traffic in the trading system has increased by a factor of at least five over a period of five years:

The paper then compares this to data on the effective bid-ask spread for this same set of stocks, again organized by quintile, over the same period. The resulting figure indeed shows a more or less steady decrease in the spread, a measure of improving liquidity:
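For readers unfamiliar with the measure: the proportional effective spread is conventionally defined as 2q(p - m)/m, where p is the trade price, m is the quote midpoint at the time of the trade, and q is +1 for buyer-initiated trades and -1 for seller-initiated ones. Trades executing closer to the midpoint mean cheaper immediacy, i.e. better liquidity. A minimal sketch (the quote values are hypothetical):

```python
def effective_spread(trade_price, bid, ask, buyer_initiated):
    """Proportional effective spread: 2 * q * (p - m) / m,
    with q = +1 for buyer-initiated trades, -1 for seller-initiated."""
    midpoint = (bid + ask) / 2
    q = 1 if buyer_initiated else -1
    return 2 * q * (trade_price - midpoint) / midpoint

# A buy executing at the ask against a 10.00 / 10.02 quote:
print(effective_spread(10.02, bid=10.00, ask=10.02, buyer_initiated=True))
```

Averaging this quantity across trades, stock by stock, gives the liquidity series the paper tracks against the message-traffic proxy.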

So, there is a clear correlation. The next question, of course, is whether this correlation reflects a causal process or not. I won't get into details, but what perhaps sets this study apart from others (see, for example, any number of reports by the Tabb Group, which monitors high-frequency markets) is an effort to get at this causal link. The authors do this by studying a particular historical event that increased the amount of algorithmic trading in some stocks but not others. The results suggest that there is a causal link.

The conclusion, then, is that algorithmic trading (at least in the time period studied, in which stocks were generally rising) does improve market efficiency in the sense of higher liquidity and better price discovery. But the paper also rightly ends with a further caveat:

While we do control for share price levels and volatility in our empirical work, it remains an open question whether algorithmic trading and algorithmic liquidity supply are equally beneficial in more turbulent or declining markets. Like Nasdaq market makers refusing to answer their phones during the 1987 stock market crash, algorithmic liquidity suppliers may simply turn off their machines when markets spike downward.

This resonates with a general theme across all finance and economics. When markets are behaving "normally", they seem to be more or less efficient and stable. When they go haywire, all the standard theories and accepted truths go out the window. Unfortunately, "haywire" isn't as unusual as many theorists would like it to be.

** UPDATE **

Someone left an interesting comment on this post, which for some reason hasn't shown up below. I had an email from Puzzler183 saying:

"I am an electronic market maker -- a high frequency trader. I ask you: why should I have to catch the falling knife? If I see that it isn't a profitable time to run my business, why should I be forced to, while no one else is?

You wouldn't force a factory owner to run their plant when they couldn't sell the end product for a profit. Why am I asked to do the same?

During normal times, bid-ask spreads are smaller than ever. This is directly a product of automation improving the efficiency of trading."

This is a good point and I want to clarify that I don't think the solution is to force anyone to take positions they don't want to take. No one should be forced to "catch the falling knife." My point is simply that in talking about market efficiency, we shouldn't ignore the non-normal times. An automobile engine which uses half the fuel of any other when working normally wouldn't be considered efficient if it exploded every few hours. Judgments of the efficiency of the markets ought to include consideration of the non-normal times as well as the normal.

An important issue is to explore whether there is a trade-off between efficiency in "normal times", as reflected in low spreads, and episodes of explosive volatility (the mini flash crashes which seem ever more frequent). Avoiding the latter (if we want to) may demand throwing some sand into the gears of the market (with trading speed limits or similar measures).

But I certainly agree with Puzzler183: no one should be forced to take on individual risks against their wishes.