Tuesday, May 31, 2011

How has the crisis changed the teaching of economics?

The Economist has organized a discussion on how the teaching of economics may change in the wake of the recent crisis. Around 15 prominent economists have weighed in, with few suggesting anything radical. Harvard's Alberto Alesina suggests that indeed nothing so far has changed:
As for the methods of teaching and research nothing has changed. We kept all that is good about methods in economics: theoretical and empirical rigor. But one may say we kept also what is bad: a tendency to be too fond of technical elegance and empirical perfection at the expense of enlarging the scope of analysis and its realism. Those who found our methodology good should not worry about changes. Those who did not like it should not hold their breath for any sudden change due to the crisis.
But some of the others suggest that things are changing, and that the crisis has at least stimulated a renewed interest in economic history. Indeed, for all the consternation that economists didn't see this crisis coming, the failure of prediction isn't really the most surprising thing. More surprising is how many economists seemed convinced that an event of this magnitude simply couldn't happen, and believed this despite centuries of history showing a never-ending string of episodic crises in countries around the world. Some economists did foresee trouble brewing, precisely because they took the past, rather than mathematical theory, as their guide to what could happen in the future. As Michael Pettis writes,

ONE of the stranger myths about the recent financial crisis is that no one saw it coming. In fact quite a lot of economists saw it coming, and for years had been writing with dread about the growing global imbalances and the necessary financial adjustments. In 2002 for example, Foreign Policy published my article, “Will Globalization Go Bankrupt?” in which I compared the previous decade to earlier globalisation cycles during the past two hundred years and argued that we were about to see a major financial crisis that would result in a sharp economic contraction, bankruptcies of seemingly unassailable financial institutions, rising international trade tensions, and the reassertion of politics over finance. I even predicted that at least one financial superstar would go to jail.

How did I know? It didn’t require a very sophisticated understanding of economics, just some knowledge of history. Every previous globalisation cycle except one (the one cut short in 1914) ended that way, and nothing in the current cycle seemed fundamentally different from what had happened before. ... So how should the teaching of economics change? That’s easy. While mathematical fluency is very useful, it should not be at the heart of economics instruction. That place should be reserved for economic history.

That seems like an eminently sensible attitude. Moreover, modelling in economics ought to be much more strongly focused on understanding how past events and crises have emerged and why they appear to be inherent in the nature of economic systems -- just as natural as thunderstorms or hurricanes are in the Earth's atmosphere. This means, it would seem clear, moving outside the context of the profession's beloved general equilibrium models to study natural instabilities in a serious way.

In any event, I'm not convinced this discussion adequately reflects the deep dissatisfaction -- perhaps even embarrassment -- some economists feel over the state of their field. A good gauge of the stronger views held by a fraction of academic economists is the report written by participants in the 2008 Dahlem Workshop in economics. The opening paragraph sets the tone:
The global financial crisis has revealed the need to rethink fundamentally how financial systems are regulated. It has also made clear a systemic failure of the economics profession. Over the past three decades, economists have largely developed and come to rely on models that disregard key factors—including heterogeneity of decision rules, revisions of forecasting strategies, and changes in the social context—that drive outcomes in asset and other markets. It is obvious, even to the casual observer that these models fail to account for the actual evolution of the real-world economy. Moreover, the current academic agenda has largely crowded out research on the inherent causes of financial crises. There has also been little exploration of early indicators of system crisis and potential ways to prevent this malady from developing. In fact, if one browses through the academic macroeconomics and finance literature, “systemic crisis” appears like an otherworldly event that is absent from economic models. Most models, by design, offer no immediate handle on how to think about or deal with this recurring phenomenon. In our hour of greatest need, societies around the world are left to grope in the dark without a theory. That, to us, is a systemic failure of the economics profession.

How many "lost decades"?

It is perhaps a reflection of the perceived self-importance of the financial industry -- it being the axis about which the world revolves -- that the term "lost decade" refers to any ten-year period in which the stock market actually declines in value. An entire decade of history is deemed to have been lost. Fortunately, such events are exceedingly rare -- or so most people think.

Not quite. Economist Blake LeBaron finds that the likelihood of a lost decade -- as assessed from the historical data for U.S. markets -- is actually around 7%. That's the historical chance that the nominal value of a diversified portfolio of U.S. stocks falls over a decade. Calculating in real terms -- adjusting for inflation -- makes the probability significantly higher, probably over 10%: not really an extremely unlikely event at all. The figure below (Figure 1 in LeBaron's paper) plots the calculated return over ten-year windows across the past 200 years or so, and reveals maybe six episodes in which the real return descends into negative territory.


Not an earth-shaking result, perhaps, but a useful corrective to the widespread belief that long-term drops in the market are truly exceptional events. As LeBaron comments,
Lost decades are often treated as a kind of black swan event that is almost impossible. Results in this note show that while they are a tail event, they may not be as far out in the tail as the popular press would have us think.... A life long investor facing 6 decades of investments should consider a probability 0.35 of seeing at least one lost decade in their lifetime.
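For readers who like to see where numbers such as these come from, here is a minimal sketch of the kind of calculation involved -- rolling ten-year returns over a long series, plus the back-of-the-envelope lifetime figure. The return parameters are illustrative assumptions of mine, not LeBaron's calibration, so the output is only meant to land in the right ballpark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption (not LeBaron's calibration): i.i.d. annual log-returns
# with a 6.5% mean and 18% volatility, roughly in the range of long-run US data.
mu, sigma, n_years = 0.065, 0.18, 500_000
annual = rng.normal(mu, sigma, n_years)

# Log-return over every rolling ten-year window; a "lost decade" is a negative one.
decades = np.convolve(annual, np.ones(10), mode="valid")
print(f"P(lost decade) in this toy model: {np.mean(decades < 0):.1%}")

# LeBaron's lifetime figure follows from a ~7% per-decade chance if the six
# decades of an investing lifetime are treated as roughly independent.
p = 0.07
print(f"P(at least one lost decade in six): {1 - (1 - p) ** 6:.2f}")  # ~0.35
```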
UPDATE


Blake pointed out to me in an email that the term "lost decade" of course has much wider meanings than what I've discussed here, arising for example in discussions of Japan from 1990 to 2000 (and maybe longer).

Sunday, May 29, 2011

One 50-Miler down -- More To Come

I finished my 50 miler, and in fairly good shape, too. It took just about 3 hours 20 minutes, which is a bit more than I expected.

I'd hoped to take it easy for the first 30 miles or so, but there were a couple of hills (one at 5 miles, and another at 17 1/2) that were pretty stiff. Since my heart rate was up to 160 by the top of each hill, I knew I was in for a long ride. Then, to add insult to injury, there was another hill at about 30 that felt like riding up a telephone pole for 200-300 yards. Oh, what fun.

Ah well - next year I'll know to spend a lot more time working on hills beforehand (it's flat enough near Unknown University that I don't see a lot of hills unless I want to).

The legs aren't too bad right now, but I can tell tomorrow will be a real treat.

Next stop - a century!

Saturday, May 28, 2011

Deep Discounting Errors?

One of the papers currently on my list of "Breaking Research" (see right sidebar) has the potential to be unusually explosive -- perhaps even world-changing. Its conclusions are dynamite for everyone involved in the economic assessment (i.e. cost-benefit analysis) of proposals for responding to climate change or environmental degradation more generally. All this from a bit of algebra (and good thinking). Here's why.

Five years ago, the British Government issued the so-called Stern Review of the economics of climate change, authored by economist Nicholas Stern. The review had strong conclusions:
“If we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more.”
The review recommended that governments take fast action to reduce greenhouse-gas emissions.

In response, many economists -- most prominently William Nordhaus of Yale University -- have countered the Stern Review by criticizing the way it "discounted" the value of consequences in the future. They said it didn't discount the future strongly enough. In this essay in Science in 2007, for example, Nordhaus argued that the value of future economic losses attributed to climate change (or any other concerns about the environment) should be discounted at about 7% per year, far higher than the value of 1.4% used in the Stern Review. Here is his comment on this difference, providing some context:
In choosing among alternative trajectories for emissions reductions, the key economic variable is the real return on capital, r, which measures the net yield on investments in capital, education, and technology. In principle, this is observable in the marketplace. For example, the real pretax return on U.S. corporate capital over the last four decades has averaged about 0.07 per year. Estimated real returns on human capital range from 0.06 to > 0.20 per year, depending on the country and time period (7). The return on capital is the “discount rate” that enters into the determination of the efficient balance between the cost of emissions reductions today and the benefit of reduced climate damages in the future. A high return on capital tilts the balance toward emissions reductions in the future, whereas a low return tilts reductions toward the present. The Stern Review’s economic analysis recommended immediate emissions reductions because its assumptions led to very low assumed real returns on capital.
Of course, one might wonder whether four decades of data is enough to project this analysis safely into untold centuries in the future (think of the sub-prime crisis and the widespread belief that average housing prices in the US had never fallen, based on a study going back 30 years or so). That aside, however, there may be something much more fundamentally wrong with Nordhaus's critique, as well as with the method of discounting used by Stern in his review and by most economists today in almost every cost-benefit analysis involving projections into the future.

The standard method of economic discounting follows an exponential decay. Using the 7% figure, each movement of roughly 10 years into the future implies a decrease in current value by a factor of 2. With a discounting rate r, the discount factor applied at time T in the future is exp(-rT). Is this the correct way to do it? Economists have long argued that it is for several reasons. To be "rational", in particular, discounting should obey a condition known as "time consistency" -- essentially that subsequent periods of time should all contribute to the discounting in an equal way. This means that a discount over a time A+B should be equal to a discount over time A multiplied by a discount over time B. If this is true -- and it seems sensible that it should be -- then it's possible to show that exponential discounting is the only possibility. It's the rational way to discount.
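The time-consistency property and the factor-of-two rule of thumb are easy to verify numerically. Here is a small sketch; the 7% rate is Nordhaus's figure, used purely for illustration.

```python
import math

r = 0.07  # Nordhaus's figure, used here purely for illustration


def discount(T, rate=r):
    """Exponential discount factor exp(-rate * T)."""
    return math.exp(-rate * T)


# Time consistency: the discount over A + B years equals the discount over A
# years multiplied by the discount over B years.
A, B = 12.0, 30.0
print(discount(A + B), discount(A) * discount(B))  # equal, up to rounding

# At 7% per year, present value halves roughly every ln(2)/r years.
print(math.log(2) / r)  # ~9.9 years
```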

That would seem beyond dispute, although it doesn't settle the question of which discount rate to use. But not so fast. Physicist Doyne Farmer and economist John Geanakoplos have taken another look at the matter in the case in which the discount rate isn't fixed, but varies randomly through time (as indeed do interest rates in the market). This blog isn't a mathematics seminar so I won't get into details, but their analysis concludes that in such a (realistically) uncertain world, the exponential discounting function no longer satisfies the time consistency condition. Instead, a different mathematical form is the natural one for discounting. The proper or rational discounting factor D(T) has the form D(T) = 1/(1 + αT)^β, where α and β are constants (here ^ means "raised to the power of"). For long times T, this form has a power law tail proportional to T^-β, which falls off far more slowly than an exponential. Hence, the value of the future isn't discounted to anywhere near the same degree.

Farmer and Geanakoplos illustrate the effect with several simple models. You might take the discount rate at any moment to be the current interest rate, for example. The standard model in finance for interest rate movements is the geometric random walk (the rate gets multiplied or divided at each moment by a number, say 1.1, to determine the next rate). With discount rates following this fluctuating random process, the average effective discount after a time T isn't at all like that based on the current rate projected into the future. Taking the interest rate as 4%, with a volatility of 15%, the following figure taken from their paper compares the resulting discount factors as time increases:

For the first 100 years, the numbers aren't too different. But at 500 years the exponential is already discounting values about one million times more strongly than the random process (GRW), and it gets worse after that. This is truly a significant hole in the analyses performed to date on climate policy (or steps to counter other problems where costs come in the future).
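The averaging that produces this gap is simple enough to sketch in a few lines. The following is only an illustration of the geometric-random-walk idea with the parameters quoted above (4% starting rate, 15% volatility); it is a Monte Carlo toy, not an attempt to reproduce Farmer and Geanakoplos's figure exactly, and the exact numbers will wobble with the random seed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: 4% starting rate, 15% annual volatility.
r0, vol, horizon, n_paths = 0.04, 0.15, 500, 5_000

# The interest rate follows a geometric random walk: each year it is
# multiplied by a random factor close to one.
shocks = rng.normal(0.0, vol, size=(n_paths, horizon))
rates = r0 * np.exp(np.cumsum(shocks, axis=1))

# Effective discount factor at horizon T: the average over paths of
# exp(-(r_1 + ... + r_T)), compared with constant-rate discounting.
D_grw = np.exp(-np.cumsum(rates, axis=1)).mean(axis=0)
D_exp = np.exp(-r0 * np.arange(1, horizon + 1))

for T in (100, 500):
    print(f"T={T:>3}: exp(-r0*T) = {D_exp[T-1]:.3e}   GRW average = {D_grw[T-1]:.3e}")
```

The average is dominated by the paths on which rates happen to wander low, which is why the effective discount falls off so much more slowly than the exponential.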

Farmer and Geanakoplos don't claim that this geometric random walk model is THE correct one, it's only illustrative (but also isn't obviously unreasonable). But the point is that everything about discounting depends very sensitively on the kinds of assumptions made, not only about the rate of discounting but the very process it follows through time. As they put it:
What this analysis makes clear, however, is that the long term behavior of valuations depends extremely sensitively on the interest rate model. The fact that the present value of actions that affect the far future can shift from a few percent to infinity when we move from a constant interest rate to a geometric random walk calls seriously into question many well regarded analyses of the economic consequences of global warming. ... no fixed discount rate is really adequate – as our analysis makes abundantly clear, the proper discounting function is not an exponential.
It seems to me this is a finding of potentially staggering importance. I hope it quickly gets the attention it deserves. It's incredible that what are currently considered the best analyses of some of the world's most pressing problems hinge almost entirely on quite arbitrary -- and possibly quite mistaken -- techniques for discounting the future, for valuing tomorrow much less than today. But it's true. In his essay in Science criticizing the Stern Review, Nordhaus makes the following quite amazing statement, which is nonetheless taken by most economists, I think, as "obviously" sensible:
In fact, if the Stern Review’s methodology is used, more than half of the estimated damages “now and forever” occur after 2800.
Can you imagine that? Most of the damage could accrue after 2800 -- i.e., in that semi-infinite expanse of the future leading forward into eternity, rather than in the 700 years between now and then. Those steeped in standard economics are so used to the idea that the future should receive very little consideration that they find this kind of result crazy. But their logic looks to me to be seriously full of holes.

Thursday, May 26, 2011

Eugene Fama's first paper

University of Chicago financial economist Eugene Fama is famous for a number of things, perhaps foremost for his assertion in the 1960s of the Efficient Markets Hypothesis. A million people (including me) have criticized this ever-so-malleable idea as ultimately not offering a great deal of insight into markets. They're hard to predict, true, but who's surprised? Fama is still defending his hypothesis even after the recent crisis: witness his valiant if not quite convincing efforts in this interview with John Cassidy.

But the EMH isn't the only thing Fama has worked on, and he deserves great credit for a half-century of detailed empirical studies of financial markets. Way back in 1963, in fact, it was Fama who took pains in his very first published paper to bring attention to the work of Benoit Mandelbrot on what we now call "fat tails" in the distribution of financial returns. I may have known this before, but I had forgotten and only relearned it when watching this interview of Fama by Richard Roll on the Journal of Finance web site. The paper was entitled "Mandelbrot and the Stable Paretian Hypothesis." Fama gives a crystal clear description of Mandelbrot's empirical studies on price movements in commodities markets, showing a preponderance of large, abrupt movements -- far more than would be expected by the Gaussian or normal statistics assumed at the time. He explored Mandelbrot's hypothesis that the true empirical distributions might be fit by "Stable Paretian" distributions, which we today call "Stable Levy" distributions, for which standard statistical measures of fluctuation, such as the variance, may be formally infinite. All of this 48 years ago.

How did Fama know about Mandelbrot so early on, when the rest of the economics profession took so long to take notice (and in many cases still hasn't)? It turns out that Mandelbrot visited Chicago for several months in 1963, and he and Fama spent much time discussing the former's empirical work. As Fama says in the interview, he's always been convinced that a lot of research depends on serendipity. Good example.

Given much better data, we now know (and have for more than a decade) that the Stable Levy distributions aren't in fact adequate for describing the empirical distribution of market returns. If we define the return R(t) over some time interval t as the logarithm of the ratio of prices, s(t)/s(0) -- which centers the return roughly on zero -- then the distribution of R has been found in all markets studied to have power law tails, with P(R) inversely proportional to R raised to a power α = 4, at least approximately. See this early paper, for example, as one of many finding the same pattern. Stable Levy distributions can't cope with this, as they only yield tail exponents α between 1 and 3.
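To make the notion of a tail exponent concrete, here is a hedged sketch of how one can estimate it, using simulated Student-t returns as a stand-in for real price data (a t distribution with three degrees of freedom has a power-law tail broadly like the one described above). The Hill estimator and the cutoff k are standard but arbitrary choices of mine; none of this comes from Fama's paper or the empirical studies cited.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for market returns: a Student-t with 3 degrees of freedom has a
# power-law tail P(|R| > x) ~ x^(-3), i.e. a probability density falling off
# like |R|^(-4), roughly the behaviour described in the text.
returns = rng.standard_t(df=3, size=200_000)

# Hill estimator of the tail exponent of the cumulative distribution, using
# the k largest values of |R|.
abs_r = np.sort(np.abs(returns))[::-1]
k = 2_000
hill_alpha = 1.0 / np.mean(np.log(abs_r[:k] / abs_r[k]))
print(f"Estimated tail exponent (cumulative): {hill_alpha:.2f}")  # roughly 3
```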

Given that power laws of this sort arise quite naturally in systems driven out of equilibrium (in physics, geology, biology, engineering etc), these observations don't sit comfortably with the equilibrium fixation of theoretical economics -- or with the EMH in particular. But that's another matter. Fama clearly saw the deep importance of the power law deviation from Gaussian regularity, noting that it implies a market with much more unruly fluctuations than one would expect in a Gaussian world. As he put it,
"...such a market is inherently more risky for the speculator or investor than a Gaussian market."

Wednesday, May 25, 2011

An Unsurpassable Greenspan-ism

Former US Federal Reserve Chairman Alan Greenspan has been known to say some remarkable things (and some remarkably opaque things), but he really outdid himself in a recent Financial Times editorial. Not surprisingly, he's back at it, recycling his favourite story that markets know best and that any attempt to regulate them can only be counterproductive. But wonder at the paradoxical beauty of the following sentence:
With notably rare exceptions (2008, for example), the global "invisible hand" has created relatively stable exchange rates, interest rates, prices, and wage rates.
The markets work beautifully, and all on their own, but for those rare, notable exceptions. In other words, they work wonderfully except when they fail spectacularly, bring the banking system to the brink of collapse and throw millions of people out of work and into financial misery.

But Greenspan's illogic has at least spawned an amusing reaction. The blog Crooked Timber posted some further examples to illustrate how his delicate construction might be employed much more generally. For example,
"With notably rare exceptions, Russian Roulette is a fun, safe game for all the family to play."
or, from a commenter on the blog,
"With notably rare exceptions, Germany remained largely at peace with its neighbors during the 20th century."
See Crooked Timber (especially the comments) for hundreds of other impressive examples of Greenspan-ian logic.


Tuesday, May 24, 2011

Physics Envy?

I just came across this post from late last year by Rick Bookstaber, someone I respect highly and consider well worth listening to. Last year, when I was researching an article on high-frequency trading for Wired UK, insiders in the field strongly recommended Bookstaber's book A Demon of Our Own Design: Markets, Hedge Funds and the Perils of Financial Innovation. It is indeed a great book: Bookstaber draws on a wealth of practical Wall Street experience to describe markets in realistic terms, without resorting to the caricatures of academic finance theory.

In his post, he makes an argument that seems to contradict everything I'm writing about here. Essentially, he argues that there's already too much "physics envy" in finance, meaning too much desire to make it appear that market functions can be wrapped up in tidy equations. As he puts it,
...physics can generate useful models if there is well-parameterized uncertainty, where we know the distribution of the randomness, it becomes less useful if the uncertainty is fuzzy and ill-defined, what is called Knightian uncertainty.

I think it is useful to go one step further, and ask where this fuzzy, ill-defined uncertainty comes from. It is not all inevitable, it is not just that this is the way the world works. It is also the creation of those in the market, created because that is how those in the market make their money. That is, the markets are difficult to model, whether with the methods of physics or anything else, because those in the market make their money by having it difficult to model, or, more generally, difficult for others to anticipate and do as well.

Bookstaber goes on to argue that it is this relentless innovation and emergence of true novelty in the market which makes physics methods inapplicable:
The markets are not physical systems guided by timeless and universal laws. They are systems based on creating an informational advantage, on gaming, on action and strategic reaction, in a space without well structured rules or defined possibilities. There is feedback to undo whatever is put in place, to neutralize whatever information comes in.

The natural reply of the physicist to this observation is, “Not to worry. I will build a physics-based model that includes feedback. I do that all the time”. The problem is that the feedback in the markets is designed specifically not to fit into a model, to be obscure, stealthy, coming from a direction where no one is looking. That is, the Knightian uncertainty is endogenous. You can’t build in a feedback or reactive model, because you don’t know what to model. And if you do know – by the time you know – the odds are the market has changed.

I think this is an important and perceptive observation, yet it also strongly misrepresents what physicists -- the ones doing good work, at least -- are trying to do in modeling markets. Indeed, I think it's fair to say that much of the work in what I call the physics of finance starts from the key observation that "feedback in the markets is designed specifically not to fit into a model, to be obscure, stealthy, coming from a direction where no one is looking." The best work in no way hopes to wrap up everything in one final tidy equation (as in the cartoon version of physics, although very little real physics works like this), or even one final model solved on a computer. Rather, it aims to begin teasing out -- with a variety of models of different kinds -- the kinds of things that can happen, and might be expected to happen, in markets dense with interacting, intelligent and adaptive participants who are by nature highly uncertain and trying to go in directions no one has gone before.

A good example is a recent paper (still a pre-print) by Bence Toth and other physicists. It's a fascinating and truly novel effort to tackle in a fundamental way the long-standing mystery of market impact -- the widely observed empirical regularity that a market order of size V causes the price of an asset to rise or fall in proportion (roughly) to the square root of V. The paper doesn't start from the old efficient markets idea that all information is somehow rapidly reflected in market prices, but rather tries to actually model how this process takes place. It begins with the recognition that when investors have valuable information, they don't just go out and release it to the market in one big trade. To avoid the adverse effects of market impact -- your buying drives the price up, so you have to buy at ever higher prices -- those with large orders typically break them up into many pieces and try to disguise their intentions. As a result, the market isn't at all a place where all information rapidly becomes evident. Most of the trading is driven by people trying to hide their information and keep it private as long as possible.

In particular, as Toth and colleagues argue, the sharp rise of market impact for very small trades (the infinite slope of the square root form at the origin) suggests a view very different from the standard one. Many people take the concave form of the observed impact function, gradually becoming flatter for larger trades, as reflecting some kind of saturation of the impact for large volumes. Perhaps. But the extremely high impact of small trades is perhaps a more interesting phenomenon. The square root form in fact implies that the "susceptibility" of the market -- the marginal price change induced per unit of market order -- heads toward infinity in the limit of zero trade size. A singularity of this kind in physics or engineering generally signals something special -- a point where the linear response of the system (reflecting outcomes in direct proportion to the size of their causes) breaks down. The market lives in a highly unstable state.
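As a concrete illustration of that singular behaviour, here is a small sketch of the square-root law and its marginal (per-unit) impact. The prefactor, the daily volatility and the daily volume are placeholder values of mine, not numbers from the Toth et al. paper.

```python
import numpy as np

# Square-root impact law: a metaorder of volume V moves the price by roughly
# dP ~ Y * sigma * sqrt(V / V_daily). Y, sigma and V_daily are placeholders.
Y, sigma, V_daily = 0.5, 0.02, 1e6


def impact(V):
    return Y * sigma * np.sqrt(V / V_daily)


# Marginal impact per extra unit of volume behaves like 1/sqrt(V): it grows
# without bound as V -> 0, the "infinite susceptibility" discussed above.
for V in (1e2, 1e3, 1e4, 1e5):
    dV = 1e-6 * V
    marginal = (impact(V + dV) - impact(V)) / dV
    print(f"V = {V:>8.0f}   impact = {impact(V):.5f}   marginal = {marginal:.2e}")
```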

Toth and colleagues go on to show that this form can be understood naturally as arising from a critical shortage of liquidity -- that is, a perpetual scarcity of available small volume trades at the best prices. I won't get into the details here (as I will return to the topic in greater detail soon), but their model depends crucially on the idea that much information remains "latent" or hidden in the market, and only gets revealed over long timescales. It remains hidden precisely because market participants don't want to give it away and incur undue costs associated with market impact. The upshot -- although this needs further study to flesh out details -- is that this perpetual hiding of information, and the extreme market sensitivity it gives rise to for small trades, might well lie behind the surprising frequency with which markets experience relatively large movements up or down, apparently without any cause.

This is just the kind of the thing that anyone interested in a deeper picture of market dynamics should find valuable. It's the kind of fundamental insight that, with further development, might even suggest some very non-obvious policy steps for making markets more stable.

Much the same can be said for a variety of physics-inspired studies on complex financial networks, as reviewed recently in Nature by May and Haldane. These models also share a great resonance with ecology and the study of evolutionary biology, which Bookstaber suggests might be more appropriate fields in which to find insights of value to financial theory. These fields do have much that is valuable, but even here it is hard to get away from physics. Indeed, some of the best models in either theoretical ecology or evolutionary biology -- especially for evolution at the genetic level over long timescales -- have also been strongly inspired by thinking in physics. A case in point is a recent theory for the evolution of so-called horizontal gene flow in bacteria and other organisms, developed by famous biologist Carl Woese along with physicist Nigel Goldenfeld.  

This wide reach of physics doesn't show, I think, that physicists are smarter than anyone else. Rather, just that physicists have inherited a very rich modeling culture and set of tools -- developed in statistical physics over the past few decades -- that are incredibly powerful. If there is physics envy in finance -- and Bookstaber asserts there is -- it's only a problem because the wrong model of physics is being envied. Forget the elegant old equation-based physics of quantum field theory. Nothing like that is going to be of much help in understanding markets. Think instead of the more modern and much messier physics of fluid and plasma instabilities in supernovae, or here on Earth in the project to achieve inertial confinement fusion, where simple hot gases continue to find new and surprising ways to foil our best attempts to trap them long enough to produce practical fusion energy.

In his post, Bookstaber (politely) dismisses as nonsense a New York Times article about physics in finance (or so-called 'econophysics'). Along the way, the article notes that...
Macroeconomists construct elegant theories to inform their understanding of crises. Econophysicists view markets as far more messy and complex — so much so that the beauty and logic of economic theory is a poor substitute. Drawing on the tools of the natural sciences, they believe that by sorting through an enormous amount of data, they can work backward to find the underlying dynamics of economic earthquakes and figure out how to prepare for the next one.

Financial crises are difficult to predict, the econophysicists say, because markets are not, as some traditional economists believe, efficient, self-regulating and self-correcting. The periodic upheavals are the result of a cascade of events and feedback loops, much like the tectonic rumblings beneath the Earth’s surface.
As long as one doesn't push metaphors too far, I can't see anything wrong with the above, and much that makes absolutely obvious good sense. No one working in this field thinks there's going to be a "theory of everything for finance". But we might well get a deeper understanding than we have today -- and not be so misled by silly slogans like the old efficient markets idea -- if we accept the presence of myriad instabilities in markets and begin modeling the important feedback loops and evolving systems in considerable detail.

Monday, May 23, 2011

What's Efficient About the Efficient Markets Hypothesis?

The infamous Efficient Markets Hypothesis (EMH) has been the subject of rancorous and unresolved debate for decades. It's often used to assert that markets don't need regulation or oversight because they have a remarkable power to get prices just about right (stocks, bonds and other assets have their correct "fundamental values"), and so never get too far out of balance. Somehow the idea still gets lots of attention even after the recent crisis. Financial Times columnist Tim Harford recently suggested that the EMH gets some things right (markets are "mostly efficient") even if it also supports unjustified faith in market stability. In a talk, economist George Akerlof took on the question of whether the EMH can be seen to have caused the crisis, and concluded that yes, it could have, although there are plenty of other causes as well.

Others have defended the EMH as being unfairly maligned. Jeremy Siegel, for example, argues that the EMH actually doesn't imply anything about prices being right, and insists -- recent dramatic evidence to the contrary notwithstanding -- that "our economy is inherently more stable" than it was before, precisely because of modern financial engineering and the wondrous ability of markets to aggregate information into prices. Robert Lucas asserted much the same thing in The Economist, as did Alan Greenspan in the Financial Times. Lucas summed up his view (equivalent to the EMH) that the market really does know best:
The main lesson we should take away from the EMH for policy making purposes is the futility of trying to deal with crises and recessions by finding central bankers and regulators who can identify and puncture bubbles. If these people exist, we will not be able to afford them.

That debate over the EMH persists a half-century after it was first stated seems to reflect tremendous confusion and disagreement over what the hypothesis actually asserts. As Andrew Lo and Doyne Farmer noted in a paper from a decade ago, it's not actually a well-defined hypothesis that would permit clear and objective testing:

One of the reasons for this state of affairs is the fact that the EMH, by itself, is not a well posed and empirically refutable hypothesis. To make it operational, one must specify additional structure: e.g., investors’ preferences, information structure, etc. But then a test of the EMH becomes a test of several auxiliary hypotheses as well, and a rejection of such a joint hypothesis tells us little about which aspect of the joint hypothesis is inconsistent with the data.

So what does the EMH assert?

In trying to bring some order to the topic, one useful technique is to identify distinct forms of the hypothesis reflecting different shades of meaning frequently in use. This was originally done in 1970 by Eugene Fama, who introduced a "weak" form, a "semi-strong" form and a "strong" form of the hypothesis. Considering these in turn is useful, and helps to expose a rhetorical trick -- a simple bait and switch -- that defenders of the EMH (such as those mentioned above) often use. One version of the EMH makes an interesting claim -- that markets always work very efficiently (and rapidly) in bringing information to bear on prices which therefore take on accurate values. This (as we'll see below) is clearly false. Another version makes the uninteresting and uncontroversial claim that markets are hard to predict. The rhetorical trick is to mix these two in argument and to defend the interesting one by giving evidence for the uninteresting one. In his Economist article, for example, Lucas cites as evidence for information efficiency the fact that markets are hard to predict, when these are very much not the same thing.

Let's look at this in a little more detail. The Weak form of the EMH merely asserts that asset prices fluctuate in a random way, so that there's no information in past prices which can be used to predict future prices. As it is, even this weak form appears to be definitively false if it is taken to apply to all asset prices. In their 1999 book A Non-Random Walk Down Wall Street, Andrew Lo and Craig MacKinlay documented a host of predictable patterns in the movements of stocks and other assets. Many of these patterns disappeared after being discovered -- presumably because some market agents began trading on these strategies -- but their existence for a time proves that markets have some predictability.

Other studies document the same thing in other ways. The simplest argument for the randomness of market movements is that any patterns that exist should be exploited by market participants to make profits; the trading they do should act to remove these patterns. Is this true? Take a look at Figure 1 below, taken from a 2008 paper by Doyne Farmer and John Geanakoplos. Farmer and others at a financial firm called The Prediction Company identified numerous market signals they could use to try to predict future market movements. The figure shows the correlation between one such trading signal and market prices two weeks in advance, calculated from data over a 23-year period. In 1975, this correlation was as high as 15%, and it was still persisting at a level of roughly 5% as of 2008. This signal -- I don't know what it is, as it is proprietary to The Prediction Company -- has long been giving reliable advance information on market movements.



One might try to argue that this data shows that the pattern is indeed gradually being wiped out, but this is hardly anything like the rapid or "nearly instantaneous" action generally supposed by efficient market enthusiasts. Indeed, there's not much reason to think this pattern will be entirely wiped out for another 50 years.

This persisting memory in price movements can also be analyzed more systematically. Physicist Jean-Philippe Bouchaud and colleagues from the hedge fund Capital Fund Management have explored the subtle nature of how new market orders arrive in the market and initiate trades. A market order is a request by an investor to either buy or sell a certain volume of an asset. In the view of the EMH, these orders should arrive in markets at random, driven by the randomness of arriving news. If one piece of news is positive for some stock, influencing someone to place a market buy order, there's no reason to expect that the next piece of news is therefore more likely also to be positive and to trigger another. So there shouldn't be any observed correlation in the times when buy or sell orders enter the market. But there is.

What Bouchaud and colleagues found (originally in 2003, but improved on since then) is that the arrivals of these orders are correlated, and remain so over very long times -- even over months. This means that the sequence of buy or sell market orders isn't at all just a random signal, but is highly predictable. As Bouchaud writes in a recent and beautifully written review: "Conditional on observing a buy trade now, one can predict with a rate of success a few percent above 1/2 that the sign of the 10,000th trade from now (corresponding to a few days of trading) will be again positive."
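To see what long memory in order signs looks like, here is a toy simulation -- not Bouchaud's data analysis -- based on one mechanism commonly invoked to explain it: investors split large metaorders into runs of same-sign trades with heavy-tailed lengths. Measuring the sign autocorrelation of the resulting series shows correlations that decay slowly rather than vanishing after one trade.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy order-splitting model: each metaorder picks a random sign and is executed
# as a run of identical trades, with run lengths drawn from a heavy-tailed
# (Pareto) distribution. The parameters here are purely illustrative.
n_meta = 200_000
lengths = np.ceil(rng.pareto(1.5, n_meta) + 1).astype(int)
signs = np.repeat(rng.choice([-1, 1], size=n_meta), lengths)


def sign_autocorr(s, lag):
    """Average product of trade signs separated by `lag` trades."""
    return float(np.mean(s[:-lag] * s[lag:]))


for lag in (1, 10, 100, 1_000, 10_000):
    print(f"lag {lag:>6}: sign autocorrelation {sign_autocorr(signs, lag):+.3f}")

# An i.i.d. coin-flip sequence, by contrast, gives ~0 at every nonzero lag.
```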

Hardly the complete unpredictability claimed by EMH enthusiasts. To look at just one more piece of evidence -- from a very long list of possibilities -- we might take an example discussed recently by Gavyn Davies in the Financial Times. He refers to a study by Andrew Haldane of the Bank of England. As Davies writes,
Andy Haldane conducts the following experiment. He estimates the results of an investment strategy in US equities which is based entirely on the past direction of the stockmarket. If the market rises in the period just ended, the strategy buys stocks for the next period, and vice versa. In other words, the strategy simply extrapolates the recent trend in the market. The result? According to Andy, if you had been wise enough to start this procedure with $1 in 1880, you would have consistently shifted in and out of stocks at the right times, and you would now possess over $50,000. Not bad for a strategy which could have been designed in a kindergarten.

Next, Andy tries an alternative strategy based on value. This calculates whether the stockmarket is fundamentally over or undervalued, and buys the market only when value gives a positive signal. The criterion for measuring value is the dividend discount model, first devised by Robert Shiller. If you had been clever enough to devise this measure of value investing in 1880, and had invested $1 at the time, the procedure would have left you with a portfolio now worth the princely sum of 11 cents.

That, according to the weak version of the EMH, shouldn't be possible.
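For concreteness, here is what a Haldane-style momentum rule amounts to in code, run on synthetic i.i.d. returns rather than the actual historical record (the return parameters are arbitrary illustrations of mine, not Haldane's data). On data like this the rule has no edge, which is exactly what the weak EMH would predict; Haldane's point is that on the real 130-year record it did spectacularly well.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic i.i.d. annual returns standing in for the market -- arbitrary
# illustrative parameters, not Haldane's data.
returns = rng.normal(0.05, 0.16, 130)

# Momentum rule: hold stocks next year only if the market rose this year;
# otherwise sit in cash (earning nothing, for simplicity).
in_market = np.concatenate(([True], returns[:-1] > 0))
momentum = np.prod(np.where(in_market, 1.0 + returns, 1.0))
buy_and_hold = np.prod(1.0 + returns)

print(f"momentum rule: {momentum:,.1f}x   buy-and-hold: {buy_and_hold:,.1f}x")
```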

If weakened still further, you might salvage some form of the weak hypothesis by saying that "most or many asset prices are difficult to predict," which seems to be true. We might call this the Absurdly Weak form of the EMH, and it seems ridiculous to dignify such a puffed-up statement with the name "hypothesis" at all. Does anyone doubt that markets are hard to predict?

But the more serious point with regard to the weak (or absurdly weak) forms of the EMH is that the word "efficient" really has no business being present at all. The word seems to go back to a famous paper by Paul Samuelson, the originator (along with Eugene Fama) of the EMH, who established that prices should fluctuate randomly and be impossible to predict in a market that is "informationally efficient," i.e. one in which participants bring all possible information to bear in trying to anticipate the future. If such efficient information processing goes on in the market, then prices will fluctuate randomly. Informational efficiency is what Lucas and others claim the market achieves, and they take the difficulty of predicting markets as evidence. But it is not, in fact, evidence of anything of the sort.

Think carefully about this. The statement that informational efficiency implies random price movements in no way implies the converse -- that random price movements imply that information is being processed efficiently -- although many people seem to want to draw this conclusion. Just suppose (to illustrate the point) that investors in some market make their decisions to buy and sell by flipping coins. Their actions would bring absolutely no information into the market, yet prices would fluctuate randomly and the market would be hard to predict. It would be far better and more honest to call the weak form of the EMH the Random Market Hypothesis or the Market Unpredictability Hypothesis. It is strictly speaking false, as we just noted, although still a useful, crude first approximation. It's about as true as it is to say that water doesn't flow uphill. Yes, mostly, but then ordinary waves do it at the seaside every day.
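The coin-flip thought experiment is easy to make concrete. In the sketch below (my own toy construction, with an arbitrary impact coefficient), traders buy or sell at random and the price moves with the order imbalance; the resulting returns are serially uncorrelated and hence hard to predict, even though no information enters the market at all.

```python
import numpy as np

rng = np.random.default_rng(5)

# Coin-flipping traders: each period, every trader buys (+1) or sells (-1) at
# random, and the log-price moves in proportion to the order imbalance.
n_traders, n_steps, impact = 1_000, 5_000, 0.01
orders = rng.choice([-1, 1], size=(n_steps, n_traders))
log_price = np.cumsum(impact * orders.mean(axis=1))

# Returns are serially uncorrelated -- the price is hard to predict -- yet no
# information has been processed, efficiently or otherwise.
returns = np.diff(log_price)
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {lag1:+.3f}")  # close to zero
```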

So the weak version of the EMH isn't very useful. Perhaps it has some value in dissuading casual investors from thinking it ought to be easy to beat the market, but it's more metaphor than science.

Next up is the "semi-strong" version of the EMH. This asserts that the prices of stocks or other assets (in the market under consideration) reflect all publicly available information, so these assets have the correct values in view of this information. That is, investors quickly pounce on any new information that becomes public, buy or sell accordingly, and the supply and demand in the market works its wonders so prices take their fundamental values (instantaneously, it is often said, or at least very quickly). This version has one big advantage over the weak form of the EMH -- it actually makes an assertion about information, and so might plausibly say something about the efficiency with which the market absorbs and processes information. However, there are many vague terms here. What do we mean precisely by "public"? How quickly are prices supposed to reflect the new information? Minutes? Days? Weeks? This isn't specified.

Notice that a hypothesis formulated this way -- as a positive statement that a market always behaves in a certain way -- cannot possibly ever be proven. Evidence that a market works this way today doesn't mean it will tomorrow or did yesterday. Asserting that the hypothesis is true is asserting the truth of an infinite number of propositions -- efficiency for all stocks, for example, and all information at all times. No finite amount of evidence goes any distance whatsoever toward establishing this infinite set of propositions. The only thing that can be tested is whether it is sometimes -- possibly often or even frequently -- demonstrably false that a market is efficient in this sense.

This observation puts into a context an enormous body of studies which purport to give "evidence for" the EMH, going back to Fama's 1970 review. What they all mean is "evidence consistent with" the EMH, but not in any sense "evidence for." In science, you test hypotheses by trying to prove they are wrong, not right, and the most useful hypotheses are those that turn out hardest to find any evidence against. This is very much not the case for the semi-strong EMH.

If markets move quickly to absorb new information, then they should settle down and remain inert in the absence of new information. This seems to be very much not the case. More than two decades ago, a classic economic study by Lawrence Summers and others found that of the 50 largest single-day price movements since World War II, most happened on days when there was no significant news, and that news in general seemed to account for only about a third of the overall variance in stock returns. A more recent study (2002) found much the same thing: "Many large stock price changes have no events associated with them."

But if we leave aside the most dramatic market events, what about price movements over short times during a single day? Here too the evidence rather strongly contradicts the semi-strong EMH. Bouchaud and his colleagues at Capital Fund Management recently used high-frequency trading data to test the alleged EMH link between news and price movements far more precisely. Their idea was to study possible links between sudden jumps in stock prices and news items appearing in electronic news feeds, which might, for example, announce new information about a company. Without entering into the technical points, they found that most sudden price jumps took place without any conceivably causal news arriving on the feeds. To be sure, arriving news did cause price movements in many cases, but most large movements happened in the absence of such news.

Finally, with the evidence just cited, we can also immediately dismiss the strong version of the EMH, which claims that markets rapidly reflect not only all public information, but all private information as well. In such a market insider trading would be impossible, because insider information would give no one an advantage. If I'm a government regulator about to issue a drilling permit to Exxon for a wildly lucrative new oil field, even my personal knowledge won't permit me to profit by buying Exxon stock in advance of announcing my decision. The market, in effect, can read my mind and tell the future. This is clearly ridiculous.

So it appears that the two stronger versions of the EMH -- which make real claims about how markets process information -- are demonstrably (or obviously) false. The weak version is also falsified by masses of data -- there are patterns in the market which can be used to make profits. People are doing it all the time.

The one statement close to the EMH which does have empirical support is that market movements are very difficult to predict because prices do move in a highly erratic, essentially random fashion. Markets sometimes and perhaps even frequently process new information fairly quickly and that information gets reflected in prices. But frequently they do not. And frequently markets move even though there appears to be no new information at all -- as if they simply have rich internal dynamics driven by the expectations, fears and hopes of market participants.

All in all, the EMH then doesn't tell us much. Perhaps Emanuel Derman, a former physicist who has worked on Wall St. as a "quant" for many years, puts it best: you shouldn't take the thing too seriously, he suggests, but only take it to assert that "it's #$&^ing difficult or well-nigh impossible to systematically predict what's going to happen next." But this, of course, has nothing at all to do with "efficiency." Many economists, lured by the desire to prove some kind of efficiency for markets, have gone a lot further -- absurdly so -- trying to make a strength of the profession's ignorance about markets, indeed enshrining that ignorance as if it were a final, infallible theory. Derman again:
The EMH was a kind of jiu-jitsu response on the part of economists to turn weakness into strength. "I can't figure out how things work, so I'll make that a principle." 
In this sense, on the other hand, I have to admit that the word "efficient" fits here after all. Maybe the word is meant to apply to "hypothesis" rather than "markets." Measured for its ability to wrap up a universe of market complexity and rich dynamic possibilities in a sentence or two, giving the illusion of complete and final understanding on which no improvement can be made, the efficient markets hypothesis is indeed remarkably efficient.

Sunday, May 22, 2011

Another Good Ride

I did a 34 mile ride today - my longest so far this year. I've been using a heart monitor for a couple of years, and every year, it takes me a while to realize that I should pay attention to it - when I keep my pace slow enough in the early miles that my heart rate stays below 135 or so, a couple hour ride becomes pretty easy.

About time - my ride for the Hole in the Wall Gang Camp is only 7 days away.

Friday, May 20, 2011

The Queen of England criticizes economic models

It seems the Queen of England visited the London School of Economics a couple of years ago and listened to some discussion of the general equilibrium models used by central banks in their efforts to understand and manage entire economies. According to economist Thomas Lux, she astutely commented on the peculiarity that these models do not even include a financial sector, even though the financial industry has expanded markedly over past decades and clearly now plays an enormous role in any developed economy.

I'm hoping someone can tell me more about this anecdote, perhaps if they were present at the meeting. Lux's comments are listed in his response to an email sent out to various scientists by Dirk Helbing, seeking their views on the primary shortcomings of contemporary economic theory. The responses from those scientists hit on a number of themes - replacing equilibrium models with more general models able to include instabilities, going beyond the representative agent approximation, and so on. And, as the Queen rightly noted, acknowledging that the financial sector exists.

European Central Bank's Trichet on post-crisis economics

Italian physicist Luciano Pietronero recently pointed me to an address given by Jean-Claude Trichet, president of the European Central Bank. It was titled Reflections on the nature of monetary policy: non-standard measures and finance theory and was given at the ECB's 2010 Central Banking Conference, which brings together central bankers from around the world.

In the speech, Trichet aimed to identify "some main lessons to be learned from the crisis regarding economic analysis." After talking a little about monetary policy and inflation targets, he got to his main points about the shortcomings of current finance theory.

When the crisis came, the serious limitations of existing economic and financial models immediately became apparent. Arbitrage broke down in many market segments, as markets froze and market participants were gripped by panic. Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policy-maker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.

In the absence of clear guidance from existing analytical frameworks, policy-makers had to place particular reliance on our experience. Judgement and experience inevitably played a key role... In exercising judgement, we were helped by one area of the economic literature: historical analysis. Historical studies of specific crisis episodes highlighted potential problems which could be expected. And they pointed to possible solutions. Most importantly, the historical record told us what mistakes to avoid.

But relying on judgement inevitably involves risks. We need macroeconomic and financial models to discipline and structure our judgemental analysis. How should such models evolve? The key lesson I would draw from our experience is the danger of relying on a single tool, methodology or paradigm. Policy-makers need to have input from various theoretical perspectives and from a range of empirical approaches. Open debate and a diversity of views must be cultivated – admittedly not always an easy task in an institution such as a central bank. We do not need to throw out our DSGE and asset-pricing models: rather we need to develop complementary tools to improve the robustness of our overall framework.

This is a somewhat formal and wordy expression of a sentiment expressed quite beautifully two years ago by journalist Will Hutton of The Observer in London:

Economics is a discipline for quiet times. The profession, it turns out, ...has no grip on understanding how the abnormal grows out of the normal and what happens next, its practitioners like weather forecasters who don't understand storms.
In other words, when markets are relatively stable, unstressed and calm, the basic equilibrium framework of economic theory gives a not-too-misleading picture. But in any episode of even slightly unusual dynamics the standard theories give very little insight. The trouble is, of course, that unusual episodes are actually not so unusual. I haven't yet tried to count the number of financial and economic crises described in Charles Kindleberger's masterpiece Manias, Panics and Crashes: A History of Financial Crises, but it is surely a few hundred over the past two centuries (and this doesn't even touch on the tumults that frequently hit markets on shorter time scales).

Trichet went on to describe the kinds of ideas he thinks finance theory needs to turn to if it is going to improve:
First, we have to think about how to characterise the homo economicus at the heart of any model. The atomistic, optimising agents underlying existing models do not capture behaviour during a crisis period. We need to deal better with heterogeneity across agents and the interaction among those heterogeneous agents. We need to entertain alternative motivations for economic choices. Behavioural economics draws on psychology to explain decisions made in crisis circumstances. Agent-based modelling dispenses with the optimisation assumption and allows for more complex interactions between agents. Such approaches are worthy of our attention.

Second, we may need to consider a richer characterisation of expectation formation. Rational expectations theory has brought macroeconomic analysis a long way over the past four decades. But there is a clear need to re-examine this assumption. Very encouraging work is under way on new concepts, such as learning and rational inattention.

Third, we need to better integrate the crucial role played by the financial system into our macroeconomic models. One approach appends a financial sector to the existing framework, but more far-reaching amendments may be required. In particular, dealing with the non-linear behaviour of the financial system will be important, so as to account for the pro-cyclical build up of leverage and vulnerabilities.

In this context, I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analysing complex dynamic systems in a rigorous way. These models have proved helpful in understanding many important but complex phenomena: epidemics, weather patterns, crowd psychology, magnetic fields. Such tools have been applied by market practitioners to portfolio management decisions, on occasion with some success. I am hopeful that central banks can also benefit from these insights in developing tools to analyse financial markets and monetary policy transmission.

So, four things: get past the idea that economic agents must be rational and optimising, take note of human learning, include financial markets in the models used by central banks, and bring economic theories up to date with advanced ideas coming from physics and other sciences linked to the study of complex systems. This is quite an extraordinary statement for the president of the European Central Bank to make to central bankers from around the world. Were they listening?

On Trichet's speech, physicist Jean-Philippe Bouchaud had the following interesting comment:
Those not steeped in economic theory may not realize how revolutionary Mr. Trichet’s challenge is. Economics has traditionally been closely focused on developing a core set of ideas that are very different from those that Mr. Trichet champions above. It is truly remarkable for the president of the ECB to suggest such a radical departure from the traditional canon of economics, and it is a reflection of the seriousness of the crisis and the magnitude of the loss of confidence in existing tools. And it is not just Mr. Trichet who is asking these questions -- senior policymakers in finance and economic ministries, central banks, and regulatory agencies across the EU, as well as in the US and other countries are asking similar questions.

Wednesday, May 18, 2011

Stick A Fork In Me!

I'm done, done, DONE with grading for the semester. Now there's nothing left to do but wait for the complaints. Ah well - that I can deal with.

As a reward, I spent the night reading an anthology of short stories titled Strange Brew, edited by P.N. Elrod (author of the Vampire Files). It includes stories by some of my favorites, including Jim Butcher, Patricia Briggs, and Charlaine Harris, among others (what can I say - I'm a big fantasy/sci-fi nerd).

On the biking side, there's been nothing but rain for the last few days, so I went to the gym to use the exercise bike for about 40 minutes. It's a poor substitute for having wheels on the road, but my 50 miler (the Angel Ride) is only 11 days away, so it's better than nothing.

Enough goofing off - back to research.

Update: The rain stopped, so I got in another 26 miler. I rode like a circus bear on a bike, but I was still within a minute of my best time, so I'll take it. The good news is that I seem to be able to handle at least that distance at a pretty good pace even on an off day. So, with a bit more work, I should be able to do the 50 if I dial back a bit. It won't be pretty, but it's a ride, not a race.

The Physics of Finance

In the spring of 2009, in the wake of the recent financial crisis, economists gathered at a conference in Dahlem, Germany for five days of discussion on the economic modeling of financial markets. After the meeting, the group issued a joint statement on the economic profession's failure to either see the financial crisis coming or to judge its ultimate severity. The lack of understanding, they suggested in the conference report, is due

“... to a mis-allocation of research efforts in economics. We trace the deeper roots of this failure to the profession’s insistence on constructing models that, by design, disregard the key elements driving outcomes in real-world markets. The economics profession has failed in communicating the limitations, weaknesses, and even dangers of its preferred models to the public.”


The full report makes good reading. Most importantly, it singles out the lack of realistic market dynamics as the primary failing of the standard models used by economists. These models assume that markets tend to a state of balance or equilibrium, and pay no attention to potential positive feedbacks -- among asset prices, investors' views, new regulations and so on -- which might drive markets far away from a state of balance. Realistic models would seek to capture such processes from the outset.

The aim of this blog is to cover and comment on a wide range of new research -- much of it in physics, but some from elsewhere -- which is beginning to fill this gap. The idea is to accept that markets, like most other natural systems, have rich and complex internal dynamics. As with the weather, terrific storms can brew up out of blue skies through quite ordinary natural processes. If the equilibrium fixation of traditional economics has pushed the study of crises to one side -- as the study of those exceptional events that occur when markets fail -- the new perspective aims to understand how crises of many kinds emerge quite naturally from market processes. As any glance at history shows, they surely do, and quite routinely.

This work has been developing and growing more sophisticated since the first evolutionary models of financial markets were built in the mid-1990s at the Santa Fe Institute in New Mexico. It has come a long way since then, particularly in the past five years, and a growing number of economists and policy makers are beginning to take it very seriously. As just one example, Nature recently published a paper reviewing research on the stability of banking "ecosystems" -- looking at problems that can arise in networks of banks -- co-authored by the mathematical ecologist Robert May and Andrew Haldane of the Bank of England.
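
For readers wondering what a banking "ecosystem" calculation even looks like, here is a bare-bones sketch of default contagion on a random interbank network. It is a stylised illustration of the general idea only -- not the model in the May-Haldane paper -- and every number in it is invented.

```python
# Toy interbank contagion: how one failure can cascade through a network.
# A stylised illustration only -- not the May/Haldane model; numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

n_banks = 50
p_link = 0.1          # probability that any bank lends to any other bank
exposure = 1.0        # size of each interbank loan
capital = rng.uniform(1.5, 4.0, size=n_banks)   # loss-absorbing buffer per bank

# lends[i, j] = True means bank i has lent to bank j (so i loses if j fails)
lends = rng.random((n_banks, n_banks)) < p_link
np.fill_diagonal(lends, False)

def cascade_size(first_failure: int) -> int:
    """Knock out one bank and propagate losses until no new bank fails."""
    failed = np.zeros(n_banks, dtype=bool)
    losses = np.zeros(n_banks)
    failed[first_failure] = True
    newly_failed = [first_failure]
    while newly_failed:
        next_wave = []
        for j in newly_failed:
            # Surviving creditors of the failed bank j take a hit
            creditors = np.where(lends[:, j] & ~failed)[0]
            for i in creditors:
                losses[i] += exposure
                if losses[i] >= capital[i]:
                    failed[i] = True
                    next_wave.append(i)
        newly_failed = next_wave
    return int(failed.sum())

sizes = [cascade_size(b) for b in range(n_banks)]
print("average cascade size:", np.mean(sizes))
print("worst case:", max(sizes), "of", n_banks, "banks")
```

In toy networks like this, the interesting behaviour comes from varying the connectivity and the capital buffers: the same dense linkages that diversify away small shocks can transmit a large one through the entire system.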

Crude old ideas about efficient markets and market equilibrium are rapidly being buried -- good riddance -- to be replaced by more realistic and useful ideas emerging from the physics of finance.

Monday, May 16, 2011

It's Time To Bring The Crop In

It's that time of the semester - exams are done, projects are in (with one exception) and it's grading time. Some highlights/low lights:
  • My Student Managed Investment Fund class was a weak group, and they never seemed to "get with the program". As a result, they did a lot of the work for the end-of-semester presentation to our advisory board at the 11th hour.
  • Having said that, they did a pretty good job in the presentation. Not as good as last year's group (probably my strongest group in the last 5 years), but good enough.
  • My Investments class did terribly on my final exam. On the one hand, it means that grades will be lower than expected. On the other, since grades will depend a lot on the curve, it allows me a lot of flexibility.
  • I have THREE students that will be returning for my student-managed investment fund class next semester (they're three of the better ones, too). This makes my job a lot easier.
On non-job related things, I'm mostly trying to put in enough miles on my bike so that I'll be ready for the Hole In The Wall Gang Angel Ride. It's a relatively hilly 50 miler, and, while I'm pretty sure I can finish it, it'll be ugly. So, I'm trying to squeeze in a few more 25-30 milers in the next two weeks. It's a good cause (check out the link above).

Unfortunately, yesterday involved a pretty hard 26 miler followed in short order by my 1 1/2 hour "Yoga For Stiff Guys" class (fairly strenuous yoga done in a heated room). By the end of the day, I was beat to the bone.

Oh well - back to grading those last few student projects.

Tuesday, May 3, 2011

FMA Decisions Are Out!

I just heard from a coauthor - we got a paper accepted at the Denver FMA meeting in October. The paper came out of taking an idea we'd been working on and applying it to another data set we had available.

It's funny - we submitted two papers: this one was an early version, and the other was pretty much finished. However, to be fair, the results on this one were more interesting. And since we'd already gotten one paper on the program, we were actually glad we got the second one rejected - doing two papers at a conference means there's less time for catching up with friends.

This tale of two papers reminds me of a piece I read a while back (unfortunately, I can't recall its title). It discussed how there's a trade-off in research between "newness" and "required rigor". In other words, if you're working on a topic that's been done to death (e.g. capital structure or dividend policy), you'll be asked to do robustness tests out the yazoo. On the other hand, if it's a more novel idea, there's a lower bar on rigor, because the "newness" factor buys you some slack.

In general, however, the "rigor" bar has been ratcheting up for the last 20-30 years, regardless of the "newness" factor. To see this, realize that the average length of a Journal of Finance article in the early 80s was something like 16 pages - now it's more like 30-40. As further (anecdotal) evidence, a friend of mine had a paper published on long-run returns around some types of mergers in the Journal of Banking and Finance about 9 years back. They made him calculate the returns FIVE different ways.

In any event, to make a long story short, I'm hoping we got accepted at FMA because the reviewers thought our paper was a good, new idea.

But it's probably because we got lucky.

But either way, we'll take it - see you in Denver!

Journal Of Undergraduate Research In Finance

There's a new journal out geared towards undergraduate research. It's called (appropriately) The Journal of Undergraduate Research In Finance. Here's its description:
The Journal of Undergraduate Research in Finance publishes original work written exclusively by undergraduates. Accepted articles are largely the result of the highest quality senior or honors theses. Articles come from all areas of Finance, case studies and pedagogy. All articles are subject to blind review by faculty.

The JURF exists to encourage exceptional undergraduate students to pursue high quality research in Finance, to provide these students with an outlet for their research, and to prepare these students for success in graduate school or industry. To maintain a focus on contributions made by the students, faculty involvement is limited to the guidance typically given during the writing of a senior thesis. Initial submissions must be made while the author is an undergraduate student.

The JURF is published annually.

So, if you have a student who has done some good research and who might be looking for an outlet, have them send it in - the submission deadline for this year's edition is May 15. As an added inducement, the authors of the top three articles in this year's issue will be invited to present their research at the FMA meeting in Denver, and will be considered for the annual Mark J. Bertus prize ($1,000).