Tuesday, August 30, 2011

Algorithms are smarter than people

On the topic of algorithmic trading, I recently posted on some evidence documenting the benefits it brings to markets -- more liquidity, lower spreads and trading costs, etc. On a related topic, Ole Rogeberg at Freakynomics has a nice post reviewing some of the evidence that automated decision tools actually make better decisions than real people when confronting many different kinds of problems. As he notes,
There’s a host of studies showing that human judgment is poor at synthesizing and weighting a large number of different types of evidence, and that simple, statistical models can outperform humans on tasks such as predicting recidivism, making clinical judgments (psychiatry and medicine), predicting divorce, predicting future academic success, etc. (for an entrypoint to this literature, see here for a blogpost I found that has some good quotes from J.D. Trout and Michael Bishop).

I guess the point is that algorithmic trading can be good or bad depending on the algorithm – and that the danger it brings is more if the ecology of trading algorithms active in a market is of a kind that could create cascading ripples destabilizing the market: One set of algorithms lowering the price of a set of stocks, triggering another set of algorithms to sell these stocks to avoid loss, triggering another set of… and so on.
This is precisely the point I've made before about the dangers of algorithms -- it's not one algorithm that might blow things up, but potentially explosive webs of feedback running between many.
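To make the cascade idea concrete, here's a toy sketch of the dynamic: a handful of algorithms, each with a stop-loss rule, where each forced sale pushes the price down and can breach the next rule's limit. Every number here is invented for illustration -- this models no real trading system or market.

```python
# Toy illustration of a sell-off cascade among threshold-based algorithms.
# All parameters are invented; nothing here models a real market.

def cascade(thresholds, initial_shock, impact_per_seller):
    """Return the price path as stop-loss rules trigger one another.

    thresholds: loss level (as a fraction) at which each algorithm sells
    initial_shock: initial fractional price drop
    impact_per_seller: further fractional drop caused by each forced sale
    """
    price = 1.0 - initial_shock
    sold = [False] * len(thresholds)
    path = [1.0, price]
    while True:
        # Any algorithm whose loss limit is breached dumps its position...
        triggered = [i for i, t in enumerate(thresholds)
                     if not sold[i] and (1.0 - price) >= t]
        if not triggered:
            break
        for i in triggered:
            sold[i] = True
        # ...and the selling pressure pushes the price down further,
        # which can breach the next set of loss limits.
        price -= impact_per_seller * len(triggered)
        path.append(price)
    return path

# Ten algorithms with stop-losses spread between 2% and 20%:
thresholds = [0.02 * k for k in range(1, 11)]
path = cascade(thresholds, initial_shock=0.03, impact_per_seller=0.02)
print(path)
```

In this run a modest 3% shock trips the tightest stop-loss, and each sale's price impact is just enough to trip the next one, so the whole chain unwinds and the price ends up down over 20%. Shrink `impact_per_seller` slightly and the cascade dies after one step -- which is the ecological point: the danger lives in the interaction between the rules, not in any single rule.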

But I think the superior performance of algorithms at making decisions is itself quite striking and not generally recognized. The article to which Rogeberg links makes the following all-too-plausible remark:
Training of large numbers of experts by universities has probably had the perverse effect of increasing the number of people running around making highly confident but wrong judgements. But the tendency to not notice our errors and to place excessive confidence in our subjective judgements is something that all humans suffer from to varying degrees.
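One reason even crude formulas can beat confident experts is sheer consistency: a formula weights the evidence the same way every time, while a human judge re-weights it from case to case. Here's a toy simulation in the spirit of Dawes's "improper linear models" idea -- purely synthetic data and invented weights, not a re-analysis of any real study.

```python
# Toy illustration: a consistent equal-weights formula vs. an inconsistent
# "expert" on synthetic data. Nothing here is fit to any real dataset.
import random

random.seed(1)
N_CUES, N_CASES = 5, 2000
true_w = [0.5, 0.4, 0.3, 0.2, 0.1]   # invented cue weights

cases, outcomes = [], []
for _ in range(N_CASES):
    cues = [random.gauss(0, 1) for _ in range(N_CUES)]
    # The outcome depends on the cues plus irreducible noise.
    outcomes.append(sum(w * c for w, c in zip(true_w, cues)) + random.gauss(0, 1))
    cases.append(cues)

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A crude "improper" model: equal unit weights on every cue,
# applied identically to every case.
model_pred = [sum(cues) for cues in cases]

# A stylized "expert": knows the same cues, even knows the right weights,
# but applies them erratically, re-weighting from case to case.
expert_pred = [sum((w + random.gauss(0, 0.5)) * c for w, c in zip(true_w, cues))
               for cases_cues in [cases] for cues in cases_cues]

print("model accuracy (corr): ", round(corr(model_pred, outcomes), 2))
print("expert accuracy (corr):", round(corr(expert_pred, outcomes), 2))
```

The equal-weights formula doesn't even know the true weights, yet it correlates with the outcome substantially better than the noisy expert does, simply because it never changes its mind about how to combine the cues.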
One final interesting read -- again thanks to Rogeberg for pointing this out -- is a profile in The Atlantic of Cliff Asness of the quant hedge fund Applied Quantitative Research. AQR was one of the hedge funds caught up in the infamous "quant meltdown" of August 2007, which was driven precisely by a positive feedback loop, in this case one that caused a violent de-leveraging among a number of hedge funds using similar strategies and invested in similar assets. This is one of the few cases in which we have a pretty good quantitative model explaining how these kinds of feedback loops emerge -- essentially in the same way violent storms (or hurricanes) do in the atmosphere, through ordinary processes which create the conditions in which explosive events become virtually certain. In the profile, Asness describes the dynamics behind the quant meltdown, which weren't as complex, mysterious or irrational as many people seem to think:
He told the New York Post that he blamed the sudden losses not on AQR's computer models but on "a strategy getting too crowded ... and then suffering when too many try to get out the same door" at the same time.