Monday, October 31, 2011

Halloween Scares - Greek Referendum.

In my last post I mentioned that risks were high and that we would see a correction soon. Sure enough, we got one. The market first killed the bears, and when the MOMO lemmings joined the bulls, it killed them as well. I think the corrections will continue for a while before we reach a tradable bottom.

Confusion should be Europe's other name. Here we have an empty box supposedly full of magic money. The magic number of EUR 1 trillion is a figure made of imagination and half-truths. The Europeans never had that money to start with; it is a figment of the EU politicians' imagination. Why else travel to China and beg for money if they have it? The Italian bond yield continues to soar, and that by itself undermines all the efforts to stop the contagion. The pink elephant is well and truly in the room. There was nothing in last week's EU plan that would make European sovereigns or European banks attractive to investors. If the 50% haircut was "voluntary," can you imagine what the coming 90% haircut will be? The hair salon is now open; who is the next customer? This is not going to end well for Greece or for Europe. And now G-Pap has come up with the bright idea of a referendum on the freshly proposed bail-out package! Talk about brinkmanship!

Closer to home, MF Global has now gone bust and has taken customers' money down with it. Millions of dollars have gone missing. Will anyone go to jail for what appears to be a clear case of fraud? Will anyone be held responsible for this mess, or will the person in charge be rewarded with a golden handshake of $12-13 million? Judging by what has happened since the sub-prime crisis of 2008, after which nobody was held responsible for the massive losses and the collapse of the financial system, the chances of anyone going to jail this time around appear remote. But for the next few days at least, it is all "Risk-Off." At this point, S&P futures are down about 25 handles and European markets are down almost 4%. The euro has given up most of last Friday's gains. With the lack of clarity in Europe and the absence of clear information about MF Global, there is bound to be some panic in the marketplace.

The prudent course of action is to take no action at all at this point in time. Too many factors are in play, and it seems that no one is in complete control of events. Even precious metals are down, and if gold closes below $1,700 it will see more selling pressure. That is why I am still waiting for a better entry point for gold. As far as US bonds are concerned, I do not think the selling pressure is over yet, and again, I have advised not to take any position.

We shall review the position by the end of the week, and I expect the corrections to be over by then. If not, we can kiss the year-end rally goodbye.
Till then, keep all your firepower dry.

Sunday, October 30, 2011

Sharing risk can increase risk

I have a column coming out in Bloomberg View sometime this evening (US time). It touches on the European debt crisis and the issue of outstanding credit default swaps. This post is intended to provide a few more technical details on the study by Stefano Battiston and colleagues, which I mention in the column, showing that more risk sharing between institutions can, in some cases, lead to greater systemic risk. [Note: this work was carried out as part of an ambitious European research project called Forecasting Financial Crises, which brings together economists, physicists, computer scientists and others in an effort to forge new insights into economic systems by exploiting ideas from other areas of science.]

The authors of this study start out by noting the obvious: that credit networks can both help institutions to pool resources to achieve things they couldn't on their own, and to diversify against the risks they face. At the same time, the linking together of institutions by contracts implies a greater chance for the propagation of financial stress from one place to another. The same thing applies to any network such as the electrical grid -- sharing demands among many generating stations makes for a more adaptive and efficient system, able to handle fluctuations in demand, yet also means that failures can spread across much of the network very quickly. New Orleans can be blacked out in a few seconds because a tree fell in Cleveland.

In banking, the authors note, Allen and Gale (references given in the paper) did some pioneering work on the properties of credit networks:
... in their pioneering contribution Allen and Gale reach the conclusion that if the credit network of the interbank market is a credit chain – in which each agent is linked only to one neighbor along a ring – the probability of a collapse of each and every agent (a bankruptcy avalanche) in case a node is hit by a shock is equal to one. As the number of partners of each agent increases, i.e. as the network evolves toward completeness, the risk of a collapse of the agent hit by the shock goes asymptotically to zero, thanks to risk sharing. The larger the pool of connected neighbors whom the agent can share the shock with, the smaller the risk of a collapse of the agent and therefore of the network, i.e. the higher network resilience. Systemic risk is at a minimum when the credit network is complete, i.e. when agents fully diversify individual risks. In other words, there is a monotonically decreasing relationship between the probability of individual failure/systemic risk and the degree of connectivity of the credit network.
This is essentially the positive story of risk sharing which is taken as the norm in much thinking about risk management. More sharing is better; the probability of individual failure always decreases as the density of risk-sharing links grows.
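The Allen and Gale logic is easy to check with a toy Monte Carlo sketch. This is my own illustration, not their model: each bank holds one unit of capital, is hit by a random exponential shock, and shares that shock evenly among k partners, with no channel for contagion at all. Under those assumptions, failure probability can only fall as k grows.

```python
import random

def failure_prob(k, capital=1.0, shock_scale=2.0, trials=20000, seed=0):
    """Probability that a bank fails when a single exponential shock
    (mean `shock_scale`) is split evenly among k risk-sharing partners.
    Pure risk sharing: there is no contagion channel here."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        shock = rng.expovariate(1.0 / shock_scale)
        if shock / k > capital:       # each partner absorbs 1/k of the shock
            failures += 1
    return failures / trials

probs = [failure_prob(k) for k in (1, 2, 4, 8)]
# failure probability falls monotonically as connectivity k rises
```

With the same seed for every k, the shocks are identical across runs, so the monotone decline is guaranteed, not just likely: the set of shocks exceeding k shrinks as k grows.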

This is not what Battiston and colleagues find under slightly more general assumptions of how the network is put together and how institutions interact. I'll give a brief outline of what is different in their model in a moment; what comes out of it is the very different conclusion that...
The larger the number of connected neighbors, the smaller the risk of an individual collapse but the higher systemic risk may be and therefore the lower network resilience. In other words, in our paper, the relationship between connectivity and systemic risk is not monotonically decreasing as in Allen and Gale, but hump shaped, i.e. decreasing for relatively low degree of connectivity and increasing afterwards.
Note that they are making a distinction between two kinds of risk: 1. individual risk, arising from factors specific to one bank's business, which can make it go bankrupt; and 2. systemic risk, arising from the propagation of financial distress through the system. As in Allen and Gale, they find that individual risk DOES decrease with increasing connectivity -- banks become more resistant to shocks coming from their own business -- but systemic risk DOES NOT. The latter increases with higher connectivity, and can win out in determining the overall chance that a bank goes bankrupt. In effect, the effort on the part of many banks to manage their own risks can end up creating a new systemic risk that is worse than the risk they have reduced through risk sharing.

There are two principal elements in the credit network model they study. First is the obvious fact that the resilience of an institution in such a network depends on the resilience of those with whom it shares risks. Buying CDS against the potential default of your Greek bonds is all well and good as long as the bank from which you purchased the CDS remains solvent. In the 2008 crisis, Goldman Sachs and other banks had purchased CDS from A.I.G. to cover their exposure to securitized mortgages, but those CDS would have been more or less worthless had the US government not stepped in to bail out A.I.G.

The second element of the model is very important, and it's something I didn't have space to mention in the Bloomberg essay. This is the notion that financial distress tends to have an inherently nonlinear aspect to it -- some trouble or distress tends to bring more in its wake. Battiston and colleagues call this "trend reinforcement," and describe it as follows:
... trend reinforcement is also quite a general mechanism in credit networks. It can occur in at least two situations. In the first one (see e.g. in (Morris and Shin, 2008)), consider an agent A that is hit by a shock due a loss in value of some securities among her assets. If such shock is large enough, so that some of A’s creditors claim their funds back, A is forced to fire-sell some of the securities in order to pay the debt. If the securities are sold below the market price, the asset side of the balance sheet is decreasing more than the liability side and the leverage of A is unintentionally increased. This situation can lead to a spiral of losses and decreasing robustness (Brunnermeier, 2008; Brunnermeier and Pederson, 2009). A second situation is the one in which when the agent A is hit by a shock, her creditor B makes condition to credit harder in the next period. Indeed it is well documented that lenders ask a higher external finance premium when the borrowers’ financial conditions worsen (Bernanke et al., 1999). This can be seen as a cost from the point of view of A and thus as an additional shock hitting A in the next period. In both situations, a decrease in robustness at period t increases the chance of a decrease in robustness at period t + 1.
It is the interplay of such positive feedback with the propagation of distress in a dense network which causes the overall increase in systemic risk at high connectivity.
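The fire-sale arithmetic in the first mechanism above is worth making concrete. Here is a minimal sketch with assumed numbers (assets of 100, liabilities of 90, a 20% fire-sale discount; none of these figures come from the paper): repaying debt by selling assets below book value shrinks the asset side faster than the liability side, so leverage rises even as the bank deleverages in cash terms.

```python
def fire_sale(assets, liabilities, repaid, discount):
    """Repay `repaid` of debt by fire-selling securities at a fractional
    `discount` below book value; returns the new (assets, liabilities)."""
    book_value_sold = repaid / (1.0 - discount)  # must sell more than the cash raised
    return assets - book_value_sold, liabilities - repaid

A0, L0 = 100.0, 90.0
lev0 = A0 / (A0 - L0)                            # leverage = assets / equity = 10.0
A1, L1 = fire_sale(A0, L0, repaid=10.0, discount=0.2)
lev1 = A1 / (A1 - L1)                            # about 11.7 -- leverage went UP
```

Selling 12.5 of book value to raise 10.0 of cash leaves equity at 7.5 against assets of 87.5, which is exactly the "unintentionally increased" leverage the quoted passage describes.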

I'm not going to wade into the detailed mathematics. Roughly speaking, the authors develop some stochastic equations to follow the evolution of a bank's "robustness" R -- considered to be a number between 0 and 1, with 1 being fully robust. A bankruptcy event is marked by R passing through 0. This is a standard approach in the finance literature on modeling corporate bankruptcies. The equations they derive incorporate their assumptions about the positive influences of risk sharing and the negative influences of distress propagation and trend reinforcement.

The key result shows up clearly in the figure (below), which shows the overall probability (per unit time) that a bank in the network goes bankrupt versus the amount of risk-sharing connectivity in the network (here given by k, the number of partners with which each bank shares risks). It may not be easy to see, but the figure shows a dashed line (labeled 'baseline') which reflects the classical result on risk sharing in the absence of trend reinforcement: more connectivity is always good. But the red curve shows the more realistic result with trend reinforcement -- the positive feedback associated with financial distress -- taken into account. Now adding connectivity is only good for a while, and eventually becomes positively harmful. There is a middle range of optimal connectivity, beyond which more connections only serve to put banks in greater danger.
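For readers who want to play with the idea, here is a self-contained toy simulation in the same spirit. To be clear, this is my own drastic simplification, not the authors' equations: Gaussian idiosyncratic shocks shrunk by sqrt(k), a fixed robustness hit per newly bankrupt partner, and a trend-reinforcement term that carries last period's drop forward; every parameter value is invented. Depending on the parameters, the bankruptcy frequency it produces can stop falling, or rise, at high k instead of declining monotonically.

```python
import random

def bankruptcy_rate(k, n=30, steps=30, sigma=0.3, hit=0.3, feedback=0.5,
                    trials=100, seed=1):
    """Fraction of banks bankrupt after `steps` periods, averaged over
    `trials` random networks in which each bank shares risk with k partners.
    Sharing shrinks idiosyncratic shocks by sqrt(k); each partner that went
    bankrupt last period knocks `hit` off robustness (distress propagation);
    `feedback` reinforces last period's drop (trend reinforcement)."""
    rng = random.Random(seed)
    dead_total = 0
    for _ in range(trials):
        R = [1.0] * n                  # robustness; bankruptcy when R <= 0
        last_drop = [0.0] * n
        alive = [True] * n
        nbrs = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
        just_died = set()
        for _ in range(steps):
            dying = set()
            for i in range(n):
                if not alive[i]:
                    continue
                dR = (rng.gauss(0.0, sigma / k ** 0.5)
                      - hit * sum(1 for j in nbrs[i] if j in just_died)
                      + feedback * last_drop[i])
                last_drop[i] = min(dR, 0.0)   # only distress is reinforced
                R[i] += dR
                if R[i] <= 0.0:
                    dying.add(i)
            for i in dying:
                alive[i] = False
            just_died = dying
        dead_total += n - sum(alive)
    return dead_total / (trials * n)

rates = [bankruptcy_rate(k) for k in (1, 2, 4, 8)]
```

Setting `hit` and `feedback` to zero recovers the baseline case, where more connectivity only helps.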

Finally, the authors of this paper make very interesting observations about the potential relevance of this model to globalization, which has been an experiment in risk sharing on a global scale, with an outcome -- at the moment -- which appears not entirely positive:
In a broader perspective, this conceptual framework may have far reaching implications also for the assessment of the costs and benefits of globalization. Since some credit relations involve agents located in different countries, national credit networks are connected in a world wide web of credit relationships. The increasing interlinkage of credit networks – one of the main features of globalization – allows for international risk sharing but it also makes room for the propagation of financial distress across borders. The recent, and still ongoing, financial crisis is a case in point.

International risk sharing may prevail in the early stage of globalization, i.e. when connectivity is relatively ”low”. An increase in connectivity at this stage therefore may be beneficial. On the other hand, if connectivity is already high, i.e. in the mature stage of globalization, an increase in connectivity may bring to the fore the internationalization of financial distress. An increase in connectivity, in other words, may increase the likelihood of financial crises worldwide.
Which is, in part, why we're not yet out of the European debt crisis woods.

Why The Melt Up?

Last week I was travelling through parts of interior India, and one thing lacking was a decent internet connection. As a result I could not post my regular market comments for almost 10 days. I should have purchased and carried a portable internet connection from Mumbai, but silly me. Anyway, lesson learned.

Last week we saw a melt-up, and the SPX almost touched the 1300 level. Is this the beginning of a new bull market? I have been waiting on the sidelines for many months now, trying to get a sense of direction. We were close to 1050 on the SPX, and while many were expecting a re-run of 2008, I said many times that we were not going to fall through the cracks just yet. We might see one more wave of selling pressure next week before we can have a tradable bottom.

So what is the reason for this melt-up? Has Europe solved its problems for good? Are the PIIGS really flying? We must be delusional to think so for even a moment. The European Union faces a tsunami of fiscal and banking crises, and the EFSF cannot solve this problem. The problem is the solvency of Euro-zone financial institutions and banks, and the ECB and the governments are trying to cure it by injecting liquidity. Of course, if you give free money to insolvent banks it might keep them alive for a while longer, but in the process you create zombie banks which will drag the economy down forever. If you don't believe me, ask Japan!
The Europeans agreed to EUR 1 trillion for the EFSF, but where this money will come from is not explained. And they will use leverage to reach this magic number. Even that number is not sufficient to shore up the fortunes of the PIIGS countries. We are told that the banks have voluntarily agreed to a 50% haircut on their Greek bond holdings, and yet there could be no greater lie, because team Merkozy told them that the alternative was a 100% haircut. There is nothing voluntary about this agreement. The banks have been forced to accept this 50% write-down with a great deal of arm-twisting, and the next question is whether they will get protection from the CDS they purchased. If the haircut is voluntary, there is no compensation from the CDS, and that will effectively kill the CDS market. So what happens when Ireland comes calling next for its own 50% write-down, or when Portugal comes with its share? Will those be voluntary events as well? What happens when Spain or Italy goes down? Italian bond yields are already close to 6%, vs. 2.5% for German bonds.

"Data released by the European Central Bank show that real M1 deposits in Portugal have fallen at an annualized rate of 21pc over the past six months, buckling violently in September.
" ‘Portugal appears to have entered a Grecian vortex and monetary trends have deteriorated sharply in Spain, with a decline of 8.4pc,' said Simon Ward, from Henderson Global Investors.

Given this situation, when things are far from normal, why this massive stock market melt-up? What gives? Because they changed the rules of the game. As simple as that. Because the haircut was voluntary, it did not trigger the CDS payouts. That took away the risk for the financials who wrote the CDS. With that risk gone, it was time to cover for those who were short the markets. The HFT algos saw the short covering and joined the trade. So did the momentum traders. And the run-up continues. This is a very simplified explanation of the run-up.
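The trigger mechanics can be seen in a two-line toy payoff calculation. This is only a sketch of the trigger logic, not real ISDA settlement mechanics: a hedged bondholder is made whole when the write-down counts as a credit event, and eats the whole loss when it is labeled "voluntary."

```python
def hedged_payoff(face, haircut, cds_triggered):
    """Recovery to a bondholder who bought CDS on `face` of debt, after a
    `haircut` write-down; the CDS pays the loss only if the write-down is
    declared a credit event (toy trigger logic, not real ISDA settlement)."""
    bond = face * (1.0 - haircut)
    cds = face * haircut if cds_triggered else 0.0
    return bond + cds

involuntary = hedged_payoff(100.0, 0.5, cds_triggered=True)   # made whole
voluntary = hedged_payoff(100.0, 0.5, cds_triggered=False)    # hedge pays nothing
```

If buyers of protection learn that a 50% loss can be imposed without ever triggering the contract, the hedge is worth little, which is exactly why the "voluntary" label threatens the CDS market itself.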

Just remember that nothing has changed, and when the other PIIGS come calling, there will not be enough money in the world for voluntary haircuts. Not even China can save Europe.

Coming back to the stock market, what can we expect next? Because everything is so overbought, we can expect a pullback next week. For traders, that will be an opportunity to go long. I think it will be safer now to buy the dips until December. For investors, it will be another opportunity to liquidate and get out of equities.

For precious metals, the bull run will continue for the next six to eight months. I plan to go long gold within the next two weeks, but I am hoping for a lower entry point. If gold closes below $1,700, that would mean the correction in gold is not over yet.

The year 2011 will possibly end in positive territory, but I am very skeptical about 2012. On top of the solvency problem in the Euro zone, we shall be facing a very nasty presidential election in the USA. The end of the debt super-cycle is upon us. We are just missing the forest for the trees. The risk is actually increasing.

Friday, October 28, 2011

Central corporate control revealed by mathematics

If you haven't already heard about this new study on the network of corporate control, do have a look. The idea behind it was to use network analysis of who owns whom in the corporate world (established through stock ownership) to tease out centrality of control. New Scientist magazine offers a nice account, which starts as follows:
AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters' worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

The study's assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York's Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world's transnational corporations (TNCs).
But also have a look at the web site of the project behind the study, the European project Forecasting Financial Crises, where the authors have tried to clear up several common misinterpretations of just what the study shows.

Indeed, I know the members of this group quite well. They're great scientists and this is a beautiful piece of work. If you know a little about natural complex networks, then the structures found here actually aren't terrifically surprising. However, they are interesting, and it's very important to have the structure documented in detail. Moreover, just because the structure observed here is very common in real-world complex networks doesn't mean it's something that is good for society.
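The core calculation behind such studies can be sketched in a few lines. If W[i][j] is the fraction of firm j held directly by firm i, the total (direct plus indirect) ownership matrix C satisfies C = W + WC, which can be solved by fixed-point iteration. The four firms and stakes below are invented for illustration, and the Zurich study's actual method is more sophisticated (it also corrects for cycles of cross-shareholding), but the propagation idea is the same.

```python
# W[i][j] = fraction of firm j's shares held directly by firm i.
# Total (direct + indirect) ownership C satisfies C = W + W C.
n = 4
W = [[0.0, 0.6, 0.5, 0.0],   # firm 0 holds 60% of firm 1 and 50% of firm 2
     [0.0, 0.0, 0.0, 0.7],   # firm 1 holds 70% of firm 3
     [0.0, 0.0, 0.0, 0.2],   # firm 2 holds 20% of firm 3
     [0.0, 0.0, 0.0, 0.0]]   # firm 3 holds nothing

def matmul(A, B):
    return [[sum(A[i][q] * B[q][j] for q in range(n)) for j in range(n)]
            for i in range(n)]

C = [row[:] for row in W]
for _ in range(50):           # fixed-point iteration for C = W + W C
    WC = matmul(W, C)
    C = [[W[i][j] + WC[i][j] for j in range(n)] for i in range(n)]

# firm 0's total stake in firm 3 runs entirely through firms 1 and 2:
# 0.6 * 0.7 + 0.5 * 0.2 = 0.52
```

Firm 0 owns no shares of firm 3 directly, yet controls a majority of it indirectly; scaled up to tens of thousands of firms, this is how a small core can come to dominate the whole ownership network.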

Hating bankers and the "Unholy Alliance" -- the long history

An excellent if brief article gives some useful historical context to the current animosity toward bankers -- it's nothing new. Several interesting quotes from key figures of the past:
“Behind the ostensible government sits enthroned an invisible government owing no allegiance and acknowledging no responsibility to the people. To destroy this invisible government, to befoul this unholy alliance between corrupt business and corrupt politics is the first task of statesmanship.”

Theodore Roosevelt, 1912

“We have in this country one of the most corrupt institutions the world has ever known. I refer to the Federal Reserve Board and the Federal Reserve Banks. The Federal Reserve Board, a Government board, has cheated the Government of the United States and the people of the United States out of enough money to pay the national debt. The depredations and the iniquities of the Federal Reserve Board and the Federal Reserve banks acting together have cost this country enough money to pay the national debt several times over…

“Some people think the Federal Reserve Banks are United States Government institutions. They are not Government institutions. They are private credit monopolies, which prey upon the people of the United States for the benefit of themselves and their foreign customers, foreign and domestic speculators and swindlers, and rich and predatory money lenders.”

Louis McFadden, chairman of the House Committee on Banking and Currency, 1932
I should have known this, but didn't -- the Federal Reserve Banks are not United States Government institutions. They are indeed owned by the private banks themselves, even though the Fed has control over taxpayer funds. This seems dubious in the extreme to me, although I'm sure there are many arguments to consider. I recall reading arguments about the required independence of the central bank, but independence is of course not the same as "control by the private banks." Maybe we need to change the governance of the Fed and install oversight with real power from a non-banking, non-governmental element.

And my favourite:
“Banks are an almost irresistible attraction for that element of our society which seeks unearned money.”
FBI head J. Edgar Hoover, 1955.

In recent years, the attraction has been very strong indeed.

This is why knowing history is so important. Many battles have been fought before.

Thursday, October 27, 2011

Matt Taibbi on OWS

Don't miss this post by Matt Taibbi on the Occupy Wall St. movement and its roots as an anti-corruption movement:
People aren't jealous and they don’t want privileges. They just want a level playing field, and they want Wall Street to give up its cheat codes, things like:
FREE MONEY. Ordinary people have to borrow their money at market rates. Lloyd Blankfein and Jamie Dimon get billions of dollars for free, from the Federal Reserve. They borrow at zero and lend the same money back to the government at two or three percent, a valuable public service otherwise known as "standing in the middle and taking a gigantic cut when the government decides to lend money to itself."

Or the banks borrow billions at zero and lend mortgages to us at four percent, or credit cards at twenty or twenty-five percent. This is essentially an official government license to be rich, handed out at the expense of prudent ordinary citizens, who now no longer receive much interest on their CDs or other saved income. It is virtually impossible to not make money in banking when you have unlimited access to free money, especially when the government keeps buying its own cash back from you at market rates.

Your average chimpanzee couldn't fuck up that business plan, which makes it all the more incredible that most of the too-big-to-fail banks are nonetheless still functionally insolvent, and dependent upon bailouts and phony accounting to stay above water. Where do the protesters go to sign up for their interest-free billion-dollar loans?

CREDIT AMNESTY. If you or I miss a $7 payment on a Gap card or, heaven forbid, a mortgage payment, you can forget about the great computer in the sky ever overlooking your mistake. But serial financial fuckups like Citigroup and Bank of America overextended themselves by the hundreds of billions and pumped trillions of dollars of deadly leverage into the system -- and got rewarded with things like the Temporary Liquidity Guarantee Program, an FDIC plan that allowed irresponsible banks to borrow against the government's credit rating.

This is equivalent to a trust fund teenager who trashes six consecutive off-campus apartments and gets rewarded by having Daddy co-sign his next lease. The banks needed programs like TLGP because without them, the market rightly would have started charging more to lend to these idiots. Apparently, though, we can’t trust the free market when it comes to Bank of America, Goldman, Sachs, Citigroup, etc.

In a larger sense, the TBTF banks all have the implicit guarantee of the federal government, so investors know it's relatively safe to lend to them -- which means it's now cheaper for them to borrow money than it is for, say, a responsible regional bank that didn't jack its debt-to-equity levels above 35-1 before the crash and didn't dabble in toxic mortgages. In other words, the TBTF banks got better credit for being less responsible. Click on to see if you got the same deal.

STUPIDITY INSURANCE. Defenders of the banks like to talk a lot about how we shouldn't feel sorry for people who've been foreclosed upon, because it's their own fault for borrowing more than they can pay back, buying more house than they can afford, etc. And critics of OWS have assailed protesters for complaining about things like foreclosure by claiming these folks want “something for nothing.”

This is ironic because, as one of the Rolling Stone editors put it last week, “something for nothing is Wall Street’s official policy.” In fact, getting bailed out for bad investment decisions has been de rigueur on Wall Street not just since 2008, but for decades.

Time after time, when big banks screw up and make irresponsible bets that blow up in their faces, they've scored bailouts. It doesn't matter whether it was the Mexican currency bailout of 1994 (when the state bailed out speculators who gambled on the peso) or the IMF/World Bank bailout of Russia in 1998 (a bailout of speculators in the "emerging markets") or the Long-Term Capital Management Bailout of the same year (in which the rescue of investors in a harebrained hedge-fund trading scheme was deemed a matter of international urgency by the Federal Reserve), Wall Street has long grown accustomed to getting bailed out for its mistakes.

The 2008 crash, of course, birthed a whole generation of new bailout schemes. Banks placed billions in bets with AIG and should have lost their shirts when the firm went under -- AIG went under, after all, in large part because of all the huge mortgage bets the banks laid with the firm -- but instead got the state to pony up $180 billion or so to rescue the banks from their own bad decisions.

This sort of thing seems to happen every time the banks do something dumb with their money...
 More at the link.

Abolish banks? Maybe, maybe not...

I have little time to post this week as I have to meet several writing deadlines, but I wanted to briefly mention this wonderful and extremely insightful speech by Adair Turner from last year (there's a link to the video of the speech here). Turner offers so many valuable perspectives that the speech is worth reading and re-reading; here are a few short highlights that caught my attention.

First, Turner mentions that the conventional wisdom about the wonderful self-regulating efficiency of markets is really a caricature of the real economic theory of markets, which notes many possible shortcomings (asymmetric information, incomplete markets, etc.). However, he also notes that this conventional wisdom is still what has been most influential in policy circles:
.. why, we might ask, do we need new economic thinking when old economic thinking has been so varied and fertile? ... Well, we need it because the fact remains that while academic economics included many strains, in the translation of ideas into ideology, and ideology into policy and business practice, it was one oversimplified strain which dominated in the pre-crisis years.
What was that "oversimplified strain"? Turner summarizes it as follows:
For over half a century the dominant strain of academic economics has been concerned with exploring, through complex mathematics, how economically rational human beings interact in markets. And the conclusions reached have appeared optimistic, indeed at times panglossian. Kenneth Arrow and Gerard Debreu illustrated that a competitive market economy with a fully complete set of markets was Pareto efficient. New classical macroeconomists such as Robert Lucas illustrated that if human beings are not only rational in their preferences and choices but also in their expectations, then the macro economy will have a strong tendency towards equilibrium, with sustained involuntary unemployment a non-problem. And tests of the efficient market hypothesis appeared to illustrate that liquid financial markets are not driven by the patterns of chartist fantasy, but by the efficient processing of all available information, making the actual price of a security a good estimate of its intrinsic value.

As a result, a set of policy prescriptions appeared to follow:

· Macroeconomic policy – fiscal and monetary – was best left to simple, constant and clearly communicated rules, with no role for discretionary stabilisation.

· Deregulation was in general beneficial because it completed more markets and created better incentives.

· Financial innovation was beneficial because it completed more markets, and speculative trading was beneficial because it ensured efficient price discovery, offsetting any temporary divergences from rational equilibrium values.

· And complex and active financial markets, and increased financial intensity, not only improved efficiency but also system stability, since rationally self-interested agents would disperse risk into the hands of those best placed to absorb and manage it.
In other words, all the nuances of the economic theories showing the many limitations of markets seem to have made little progress in getting into the minds of policy makers, thwarted by ideology and the very simple story espoused by the conventional wisdom. Insidiously, the vision of efficient markets so transfixed people that it was assumed that the correct policy prescriptions must be those which would take the system closer to the theoretical ideal (even if that ideal was quite possibly a theorist's fantasy having little to do with real markets), rather than further away from it:
What the dominant conventional wisdom of policymakers therefore reflected was not a belief that the market economy was actually at an Arrow-Debreu nirvana – but the belief that the only legitimate interventions were those which sought to identify and correct the very specific market imperfections preventing the attainment of that nirvana. Transparency to reduce the costs of information gathering was essential: but recognising that information imperfections might be so deep as to be unfixable, and that some forms of trading activity might be socially useless, however transparent, was beyond the ideology...
Turner goes on to argue that the more nuanced views of markets as very fallible systems didn't have much influence, mostly because of ideology and, in short, the power interests of Wall St., corporations and others benefiting from deregulation and similar policies. I think it is also fair to say that economists as a whole haven't done a very good job of shouting loudly that markets cannot be trusted to know best, or that they will only give good outcomes in a restricted set of circumstances. Why haven't there been 10 or so books by prominent economists with titles like "Markets Are Often Over-rated"?

But perhaps the most important point he makes is that we shouldn't expect a "theory of everything" to emerge from efforts to go beyond the old conventional wisdom of market efficiency: ... one of the key messages we need to get across is that while good economics can help address specific problems and avoid specific risks, and can help us think through appropriate responses to continually changing problems, good economics is never going to provide the apparently certain, simple and complete answers which the pre-crisis conventional wisdom appeared to. But that message is itself valuable, because it will guard against the danger that in the future, as in the recent past, we sweep aside common sense worries about emerging risks with assurances that a theory proves that everything is OK.
That is indeed a very important message.

The speech goes on to touch on many other topics, all with a fresh and imaginative perspective. Abolish banks? That sounds fairly radical, but it's important to realise that things we take for granted aren't fixed in stone, and may well be the source of problems. And abolishing banks as we know them has been suggested before by prominent people:
Larry Kotlikoff indeed, echoing Irving Fisher, believes that a system of leveraged fractional reserve banks is so inherently unstable that we should abolish banks and instead extend credit to the economy via mutual loan funds, which are essentially banks with 100% equity capital requirements. For reasons I have set out elsewhere, I’m not convinced by that extremity of radicalism. ... But we do need to ensure that debates on capital and liquidity requirements address the fundamental issues rather than simply choices at the margin. And that requires
economic thinking which goes back to basics and which recognises the importance of specific evolved institutional structures (such as fractional reserve banking), rather than treating existing institutional structures either as neutral pass-throughs in economic models or as facts of life which cannot be changed.

Tuesday, October 25, 2011

The European debt crisis in a picture

From the New York Times (by way of Simon Johnson), a beautiful (and scary) picture of the various debt connections among European nations. (Best to right click and download and then open so you can easily zoom in and out as the picture is mighty big.)

My question is - what happens if the Euro does collapse? Do European nations have well-planned emergency measures to restore the Franc, Deutschmark, Lira and other European currencies quickly? Somehow I'm not feeling reassured.

Monday, October 24, 2011

Studies confirm: bankers are mostly non-human at the cellular level

This is no joke. Studies show that if you examine the genetic material of your typical banker, you'll find that only about 10% of it takes human form. The other 90% is much more slimy and has been proven to be of bacterial origin. That's 9 genes out of 10: bankers are mostly bacteria. Especially Lloyd Blankfein. This is all based on detailed state-of-the-art genetic science, as you can read in this new article in Nature.

OK, I am of course joking. The science shows that we're all like this, not only the bankers. Still, the title of this post is not false. It just leaves something out. Probably not unlike the sales documentation or presentations greasing the wheels of the infamous Goldman Sachs Abacus deals.

Saturday, October 22, 2011

Tell Us About Faculty Interviews That Went Bad

At the FMA, I talked to several candidates on the market (of course, their opening line was the traditional mating call of the new candidate: "are you hiring this year?"). Since by now they're through the experience, I thought they could use a bit of comic relief.

I was out with a number of my friends in Denver. The topic turned to "interviews gone bad". Most of them had been in the field for at least a half-dozen years (and in most cases, twice or more that many). So we've all either been on an interview that (as Terry Pratchett would say) "went pear-shaped" or have heard of one - and in some cases we know stories from either side of the table. After hearing a few stories that made me laugh so hard that I almost wet my trousers, I thought this would make a great topic for a post.

To get the ball rolling, I'll share two of my favorites (I wasn't personally involved in either, but heard them from one or more of the participants):

1) Sleeping Walrus University: My friend John (the name has been changed to protect the guilty) likes to (over)partake of the fruit of the vine. One night, he overdid it in a major way. His school was interviewing, and unfortunately, they were holding interviews in the room he was sharing with another faculty member. The next morning came around, and he was hung-over, probably still mostly soused, and completely dead to the world (absent dynamite or a crane, he was not to be roused or moved). So, when the first interviewee of the day came in, the other two faculty members made the best of the situation, and conducted the interview with John asleep in the bed, covered up completely by a mound of blankets.

John is not a slender man (he's somewhere in the Chris Christie weight and body-shape class), so the pile of blankets looked like someone had buried a walrus (or maybe a sea lion) under there. And to boot, John was snoring at rock-concert decibel level. So, every few minutes, an interviewer's question (or the candidate's response) would be punctuated by a loud "SNNNZZZZPPPPFT". I think the candidate might have gotten a campus visit out of it, but ended up taking a position elsewhere.

2) Yes, we believe in full disclosure: An older faculty member I know came on the job market in the late 1970s. His most memorable interview was conducted in a poorly-lit hotel room. I know that it's important for the interviewer to feel comfortable, but this guy didn't quite get the concept. For some reason, he felt no need to wear pants, and conducted the entire interview wearing a t-shirt and his underwear (and no, my friend didn't remember if they were boxers or briefs - he focused on making only eye contact). Sometimes less is NOT more, dude.

If you have other stories, feel free to put them in the comments. Please pass this along to your friends, because almost everyone either has a story of their own or knows of one. By all means, don't use your real name, and try to disguise or change enough details so that they can't be traced back to the parties in question. I'll periodically promote the best ones from the comments up to the main post (note: I may make a few editorial changes for reasons of spelling, punctuation, extremely poor taste, anonymity, or comic license).

So give me your best (or worst), and let's have some "inside baseball" fun.

Back From The FMA

I just got back from the annual FMA (Financial Management Association) meeting in Denver. I presented my paper, commented on a few others, set up some possibilities for collaboration (and possibly making some money teaching overseas), and spent a lot of time with old and dear friends (and made a few new ones).

In particular, it seems like the Christian Finance Faculty Association is getting off the ground. We had a good meeting on Friday with some stimulating discussion and a chance to meet new friends (some of whom we've known for years but didn't realize were Christian).

We're in discussions about starting a blog, and when it's up, I'll pass it along.

Friday, October 21, 2011

Break up the big banks...

It's encouraging to see that the president of the Federal Reserve Bank of Kansas City has come out arguing that "too big to fail" banks are "fundamentally inconsistent with capitalism." See the speech of Thomas Hoenig. One excerpt:
“How can one firm of relatively small global significance merit a government bailout? How can a single investment bank on Wall Street bring the world to the brink of financial collapse? How can a single insurance company require billions of dollars of public funds to stay solvent and yet continue to operate as a private institution? How can a relatively small country such as Greece hold Europe financially hostage? These are the questions for which I have found no satisfactory answers. That’s because there are none. It is not acceptable to say that these events occurred because they involved systemically important financial institutions.

Because there are no satisfactory answers to these questions, I suggest that the problem with SIFIs is they are fundamentally inconsistent with capitalism. They are inherently destabilizing to global markets and detrimental to world growth. So long as the concept of a SIFI exists, and there are institutions so powerful and considered so important that they require special support and different rules, the future of capitalism is at risk and our market economy is in peril.”

Thursday, October 20, 2011

Federal Reserve Corruption

Take a look at this on the transparency of the Federal Reserve (from Financeaddict) compared to other large nations' central banks. Then watch this, where Timothy Geithner tries very hard to slip sleazily away from any mention of the $13 Billion that went directly from AIG to politically well-connected Goldman Sachs. "Did you have conversations with the AIG counterparties?" Response -- waffle, evade, waffle, stare, mumble. After that, try to tell me that the US is not neck deep in serious political corruption.

And they wonder what Occupy Wall Street is all about!

Private information and jumps in the market

Following my second recent post on what moves the markets, two readers posted interesting and noteworthy comments and I'd like to explore them a little. I had presented evidence in the post that many large market movements do not appear to be linked to the sudden arrival of public information in the form of news. Both comments noted that this may leave out of the picture another source of information -- private information brought into the market through the actions of traders:
Anonymous said...
I don't see any mention of what might be called "trading" news, e.g. a large institutional investor or hedge fund reducing significantly its position in a given stock for reasons unrelated to the stock itself - or at least not synchronized with actual news on the underlying. The move can be linked to internal policy, or just a long-term call on the company which timing has little to do with market news, or lags them quite a bit (like an accumulation of bad news leading to a lagged reaction, for instance). These shocks are frequent even on fairly large cap stocks. They also tend to have lingering effect because the exact size of the move is never disclosed by the investor and can spread over long periods of time (i.e. days), which would explain the smaller beta. Yet this would be a case of "quantum correction", both in terms of timing and agent size, rather than a breakdown of the information hypothesis.
DR said...
Seconding the previous comment, asset price information comes in a lot more forms than simply "news stories about company X." All market actions contains information. Every time a trade occurs there's some finite probability that it's the action of an informed trader. Every time the S&P moves its a piece of information on single stock with non-zero beta. Every time the price of related companies changes it contains new information.
Both of these comments note the possibility that every single trade taking place in the market (or at least many of them) may be revealing some fragment of private information on the part of whoever makes the trade. In principle, it might be such private information hitting the market which causes large movements (the s-jumps described in the work of Joulin and colleagues).  

I think there are several things to note in this regard. The first is that, while this is a sensible and plausible idea, it shouldn't be stretched too far. Obviously, if you simply assume that all trades carry information about fundamentals, then the EMH -- interpreted in the sense that "prices move in response to new information about fundamentals" -- essentially becomes true by definition. After all, everyone agrees that trading drives markets. If all trading is assumed to reveal information, then we've simply assumed the truth of the EMH. It's a tautology.

More useful is to treat the idea as a hypothesis requiring further examination. Certainly some trades do reveal private information, as when a hedge fund suddenly buys X and sells Y, reflecting a belief based on research that Y is temporarily overvalued relative to X. Equally, some trades (as mentioned in the first comment) may reveal no information, simply being carried out for reasons having nothing to do with the value of the underlying stock. As there's no independent way -- that I know of -- to determine if a trade reveals new information or not, we're stuck with a hypothesis we cannot test.

But some research has tried to examine the matter from another angle. Again, consider large price movements -- those in the fat-tailed end of the return distribution. One proposal looking to private information as a cause holds that large price movements are caused primarily by large-volume trades by big players such as hedge funds, mutual funds and the like. Some such trades might reveal new information, and some might not, but let's assume for now that most do. In a paper in Nature in 2003, Xavier Gabaix and colleagues argued that you can explain the precise form of the power law tail for the distribution of market returns -- it has an exponent very close to 3 -- from data showing that the size distribution of mutual funds follows a similar power law with an exponent of 1.05. A key assumption in their analysis is that the price impact Δp generated by a trade of volume V grows as the square root of that volume: Δp = kV^(1/2).
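The exponent arithmetic behind this argument is easy to check with a toy simulation (this is not the Gabaix et al. analysis itself; the volume tail exponent of 1.5 and the Hill-estimator check below are illustrative assumptions): draw volumes from a Pareto tail and push them through the square-root impact, and the tail exponent of the resulting "returns" doubles.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pareto-tailed trade volumes: P(V > v) ~ v^(-1.5) (illustrative exponent)
alpha_V = 1.5
u = 1.0 - rng.random(200_000)          # uniform on (0, 1]
V = u ** (-1.0 / alpha_V)              # inverse-CDF sampling of a Pareto tail

# Square-root impact Δp = k V^(1/2): a power law with exponent alpha,
# passed through a square root, comes out with exponent 2 * alpha
R = np.sqrt(V)

def hill_estimate(x, k=2000):
    """Hill estimator of the tail exponent from the top k order statistics."""
    top = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(top / top[0]))

print(hill_estimate(V))  # near 1.5
print(hill_estimate(R))  # near 3, the exponent observed for market returns
```

The sketch only illustrates the mechanics: a heavy-tailed driver plus concave impact reproduces the observed return exponent, which is the heart of the Gabaix et al. claim.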

This point of view seems to support the idea that the arrival of new private information, expressed in large trades, might account for the no-news s-jumps noted in the Joulin study. (It seems less plausible that such revealed information might account for anything as violent as the 1987 crash, or the general meltdown of 2008.) But taken at face value, these arguments at least seem to be consistent with the EMH view that even many large market movements reflect changes in fundamentals. But again, this assumes that all or at least most large-volume trades are driven by private information on fundamentals, which may not be the case. The authors of this study themselves don't make any claim about whether large-volume trades really reflect fundamental information. Rather, they note that...
Such a theory where large individual participants move the market is consistent with the evidence that stock market movements are difficult to explain with changes in fundamental values... 
But more recent research (here and here, for example) suggests that this explanation doesn't quite hang together, because the assumed relationship between large returns and large-volume trades isn't correct. This analysis is fairly technical, but is based on the study of minute-by-minute NASDAQ trading and shows that, if you consider only extreme returns or extreme volumes, there is no general correlation between returns and volumes. The correlation assumed in the earlier study may be roughly correct on average, but it is not true for extreme events. "Large jumps," the authors conclude, "are not induced by large trading volumes."

Indeed, as the authors of these latter studies point out, people who have valuable private information don't want it to be revealed immediately in one large lump because of the adverse market impact this entails (forcing prices to move against them). A well-known paper by Albert Kyle from 1985 showed how an informed trader with valuable private information, trading optimally, can hide his or her trading in the background of noisy, uninformed trading, supposing it exists. That may be rather too much to believe in practice, but large trades do routinely get broken up and executed as many small trades precisely to minimize impact. 
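The order-splitting practice mentioned above is simple enough to sketch. Here is a minimal TWAP-style slicer (the function name and the numbers are hypothetical, not any broker's actual algorithm):

```python
def twap_slices(total_qty, n_slices):
    """Split a parent order into near-equal child orders, to be traded over
    time, so that no single print reveals the full size of the position change."""
    base, extra = divmod(total_qty, n_slices)
    return [base + 1 if i < extra else base for i in range(n_slices)]

# A 100,000-share parent order becomes 26 modest child orders
children = twap_slices(100_000, 26)
print(len(children), sum(children), max(children))  # → 26 100000 3847
```

Real execution algorithms also randomize timing and sizes precisely so that the child orders are hard to distinguish from the background "noise" trading Kyle's model assumes.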

All in all, then, it seems we're left with the conclusion that public or private news does account for some large price movements, but cannot plausibly account for all of them. There are other factors. The important thing, again, is to consider what this means for the most meaningful sense of the EMH, which I take to be the view that market prices reflect fundamental values fairly accurately (because they have absorbed all relevant information and processed it correctly). The evidence suggests that prices often move quite dramatically on the basis of no new information, and that prices may be driven as a result quite far from fundamental values.

The latter papers do propose another mechanism as the driver of routine large market movements. This is a more mechanical process centering on the natural dynamics of orders in the order book. I'll explore this in detail some other time. For now, just a taster from this paper, which describes the key idea:
So what is left to explain the seemingly spontaneous large price jumps? We believe that the explanation comes from the fact that markets, even when they are ‘liquid’, operate in a regime of vanishing liquidity, and therefore are in a self-organized critical state [31]. On electronic markets, the total volume available in the order book is, at any instant of time, a tiny fraction of the stock capitalisation, say 10⁻⁵–10⁻⁴ (see e.g. [15]). Liquidity providers take the risk of being “picked off”, i.e. selling just before a big upwards move or vice versa, and therefore place limit orders quite cautiously, and tend to cancel these orders as soon as uncertainty signals appear. Such signals may simply be due to natural fluctuations in the order flow, which may lead, in some cases, to a catastrophic decay in liquidity, and therefore price jumps. There is indeed evidence that large price jumps are due to local liquidity dry outs.
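A toy illustration of the mechanism (a fabricated one-sided book with invented prices and sizes; real books are far richer): the same market order produces a small move against normal depth, and a jump after liquidity providers cancel most of their resting orders.

```python
def market_buy(ask_book, qty):
    """Walk a list of (price, size) ask levels; return the last price touched."""
    for price, size in ask_book:
        qty -= min(size, qty)
        if qty == 0:
            return price
    return ask_book[-1][0]  # book exhausted: the move is maximal

# Normal depth: 50 ask levels, a penny apart, 500 shares each (invented numbers)
deep = [(100.0 + 0.01 * i, 500) for i in range(50)]

# After an "uncertainty signal": providers cancel 90% of resting depth
thin = [(price, size // 10) for price, size in deep]

print(market_buy(deep, 2000))  # walks 4 levels: a 3-cent move
print(market_buy(thin, 2000))  # walks 40 levels: a 39-cent jump
```

Nothing about the incoming order changed between the two cases; the jump comes entirely from the evaporation of liquidity.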

Tuesday, October 18, 2011

Markets are rational even if they're irrational

I promise very soon to stop beating on the dead carcass of the efficient markets hypothesis (EMH). It's a generally discredited and ill-defined idea which has done a great deal, in my opinion, to prevent clear thinking in finance. But I happened recently on a defense of the EMH by a prominent finance theorist that is simply a wonder to behold -- its logic a true testament to the powers of human rationalization. It also illustrates the borderline Orwellian techniques to which diehard EMH-ers will resort in order to cling to their favourite idea.

The paper was written in 2000 by Mark Rubinstein, a finance professor at the University of California, Berkeley, and is entitled "Rational Markets: Yes or No? The Affirmative Case." It is Rubinstein's attempt to explain away all the evidence against the EMH, from excess volatility to anomalous predictable patterns in price movements and the existence of massive crashes such as the crash of 1987. I'm not going to get into too much detail, but will limit myself to three rather remarkable arguments put forth in the paper. They reveal, it seems to me, the mind of the true believer at work:

1. Rubinstein asserts that his thinking follows from what he calls The Prime Directive. This commitment is itself interesting:
When I went to financial economist training school, I was taught The Prime Directive. That is, as a trained financial economist, with the special knowledge about financial markets and statistics that I had learned, enhanced with the new high-tech computers, databases and software, I would have to be careful how I used this power. Whatever else I would do, I should follow The Prime Directive:

Explain asset prices by rational models. Only if all attempts fail, resort to irrational investor behavior.

One has the feeling from the burgeoning behavioralist literature that it has lost all the constraints of this directive – that whatever anomalies are discovered, illusory or not, behavioralists will come up with an explanation grounded in systematic irrational investor behavior.
Rubinstein here is at least being very honest. He's going to jump through intellectual hoops to preserve his prior belief that people are rational, even though (as he readily admits elsewhere in the text) we know that people are not rational. Hence, he's going to approach reality by assuming something that is definitely not true and seeing what its consequences are. Only if all his effort and imagination fails to come up with a suitable scheme will he actually consider paying attention to the messy details of real human behaviour.

What's amazing is that, having made this admission, he then goes on to criticize behavioural economists for having found out that human behaviour is indeed messy and complicated:
The behavioral cure may be worse than the disease. Here is a litany of cures drawn from the burgeoning and clearly undisciplined and unparsimonious behavioral literature:

Reference points and loss aversion (not necessarily inconsistent with rationality):
Endowment effect: what you start with matters
Status quo bias: more to lose than to gain by departing from current situation
House money effect: nouveau riche are not very risk averse

Overconfidence about the precision of private information
Biased self-attribution (perhaps leading to overconfidence)
Illusion of knowledge: overconfidence arising from being given partial information
Disposition effect: want to hold losers but sell winners
Illusion of control: unfounded belief of being able to influence events

Statistical errors:
Gambler’s fallacy: need to see patterns when in fact there are none
Very rare events assigned probabilities much too high or too low
Ellsberg Paradox: perceiving differences between risk and uncertainty
Extrapolation bias: failure to correct for regression to the mean and sample size
Excessive weight given to personal or anecdotal experiences over large sample statistics
Overreaction: excessive weight placed on recent over historical evidence
Failure to adjust probabilities for hindsight and selection bias

Miscellaneous errors in reasoning:
Violations of basic Savage axioms: sure-thing principle, dominance, transitivity
Sunk costs influence decisions
Preferences not independent of elicitation methods
Compartmentalization and mental accounting
“Magical” thinking: believing you can influence the outcome when you can’t
Dynamic inconsistency: negative discount rates, “debt aversion”
Tendency to gamble and take on unnecessary risks
Overpricing long-shots
Selective attention and herding (as evidenced by fads and fashions)
Poor self-control
Selective recall
Anchoring and framing biases
Cognitive dissonance and minimizing regret (“confirmation trap”)
Disjunction effect: wait for information even if not important to decision
Tendency of experts to overweight the results of models and theories
Conjunction fallacy: probability of two co-occurring more probable than a single one

Many of these errors in human reasoning are no doubt systematic across individuals and time, just as behavioralists argue. But, for many reasons, as I shall argue, they are unlikely to aggregate up to affect market prices. It is too soon to fall back to what should be the last line of defense, market irrationality, to explain asset prices. With patience, the anomalies that appear puzzling today will either be shown to be empirical illusions or explained by further model generalization in the context of rationality.
Now, there's sense in the idea that, for various reasons, individual behavioural patterns might not be reflected at the aggregate level. Rubinstein's further arguments on this point aren't very convincing, but at least it's a fair argument. What I find more remarkable is the a priori decision that an explanation based on rational behaviour is taken to be inherently superior to any other kind of explanation, even though we know that people are not empirically rational. Surely an explanation based on a realistic view of human behaviour is more convincing and more likely to be correct than one based on unrealistic assumptions (Milton Friedman's fantasies notwithstanding). Even if you could somehow show that market outcomes are what you would expect if people acted as if they were rational (a dubious proposition), I fail to see why that would be superior to an explanation which assumes that people act as if they were real human beings with realistic behavioural quirks, which they are.

But that's not how Rubinstein sees it. Explanations based on a commitment to taking real human behaviour into account, in his view, have "too much of a flavor of being concocted to explain ex-post observations – much like the medievalists used to suppose there were a different angel providing the motive power for each planet." The people making a commitment to realism in their theories, in other words, are like the medievalists adding epicycles to epicycles. The comparison would seem more plausibly applied to Rubinstein's own rational approach.

2. Rubinstein also relies on the wisdom of crowds idea, but doesn't at all consider the many paths by which a crowd's average assessment of something can go very much awry because individuals are often strongly influenced in their decisions and views by what they see others doing. We've known this at least since the famous 1950s experiments of Solomon Asch on group conformity. Rubinstein pays no attention to that, and simply asserts that we can trust that the market will aggregate information effectively and get at the truth, because this is what group behaviour does in lots of cases:
The securities market is not the only example for which the aggregation of information across different individuals leads to the truth. At 3:15 p.m. on May 27, 1968, the submarine USS Scorpion was officially declared missing with all 99 men aboard. She was somewhere within a 20-mile-wide circle in the Atlantic, far below implosion depth. Five months later, after extensive search efforts, her location within that circle was still undetermined. John Craven, the Navy’s top deep-water scientist, had all but given up. As a last gasp, he asked a group of submarine and salvage experts to bet on the probabilities of different scenarios that could have occurred. Averaging their responses, he pinpointed the exact location (within 220 yards) where the missing sub was found. 

Now I don't doubt the veracity of this account, or that crowds, when people make decisions independently and without shared biases, can be a source of wisdom. But it's hardly fair to cite one example where the wisdom of the crowd worked out, without acknowledging the at least equally numerous examples where crowd behaviour leads to very poor outcomes. It's highly ironic that Rubinstein wrote this paper just as the dot-com bubble was collapsing. How could rational markets have made such mistaken valuations of Internet companies? It's clear that many people judge values at least in part by looking to see how others are valuing them, and when that happens you can forget the wisdom of crowds.
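A toy simulation makes the distinction concrete (all numbers invented; a fixed shared bias stands in for herding): averaging kills independent private errors, but it cannot touch an error the whole crowd shares.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0  # the quantity the crowd is trying to estimate
n = 1000       # number of guessers

# Independent guessers: each error is private, so errors cancel in the mean
independent = truth + rng.normal(0.0, 20.0, n)

# Herded guessers: everyone shares one common error (say +15) plus small
# private noise -- averaging cannot remove the shared component
shared_bias = 15.0
herded = truth + shared_bias + rng.normal(0.0, 2.0, n)

print(abs(independent.mean() - truth))  # small: the Scorpion case
print(abs(herded.mean() - truth))       # stuck near the shared bias of 15
```

The Scorpion story lives in the first regime; a bubble, where valuations are copied from neighbours, lives in the second.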

Obviously I can't fault Rubinstein for not citing these experiments from earlier this year which illustrate just how fragile the conditions are under which crowds make collectively wise decisions, but such experiments only document more carefully what has been obvious for decades. You can't appeal to the wisdom of crowds to proclaim the wisdom of markets without also acknowledging the frequent stupidity of crowds and hence the associated stupidity of markets.

3. Just one further point. I've pointed out before that defenders of the EMH in their arguments often switch between two meanings of the idea. One is that the markets are unpredictable and hard to beat, the other is that markets do a good job of valuing assets and therefore lead to efficient resource allocations. The trick often employed is to present evidence for the first meaning -- markets are hard to predict -- and then take this in support of the second meaning, that markets do a great job valuing assets. Rubinstein follows this pattern as well, although in a slightly modified way. At the outset, he begins making various definitions of the "rational market":
I will say markets are maximally rational if all investors are rational.
This, he readily admits, isn't true:
Although most academic models in finance are based on this assumption, I don’t think financial economists really take it seriously. Indeed, they need only talk to their spouses or to their brokers.
But he then offers a weaker version:
... what is in contention is whether or not markets are simply rational, that is, asset prices are set as if all investors are rational.
In such a market, investors may not be rational, they may trade too much or fail to diversify properly, but still the market overall may reflect fairly rational behaviour:
In these cases, I would like to say that although markets are not perfectly rational, they are at least minimally rational: although prices are not set as if all investors are rational, there are still no abnormal profit opportunities for the investors that are rational.
This is the version of "rational markets" he then tries to defend throughout the paper. Note what has happened: the definition of the rational market has now been weakened to only say that markets move unpredictably and give no easy way to make a profit. This really has nothing whatsoever to do with the market being rational, and the definition would be improved if the word "rational" were removed entirely. But I suppose readers would wonder why he was bothering if he said "I'm going to defend the hypothesis that markets are very hard to predict and hard to beat" -- does anyone not believe that? Indeed, this idea of a "minimally rational" market is equally consistent with a "maximally irrational" market. If investors simply flipped coins to make their decisions, then there would also be no easy profit opportunities, as you'd have a truly random market.
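The coin-flip point is easy to check in a few lines: a market of pure coin flips is maximally irrational, yet it offers no abnormal profits to anyone (the naive momentum rule below is just one arbitrary strategy standing in for "the rational investors"):

```python
import numpy as np

rng = np.random.default_rng(7)

# A "maximally irrational" market: returns are literally coin flips
R = rng.choice([-1.0, 1.0], size=100_000)

# Naive momentum: hold the asset only in the minute after an up move
pnl = R[1:] * (R[:-1] > 0)

print(pnl.mean())  # indistinguishable from zero: unpredictable, not rational
```

So "no abnormal profit opportunities" is satisfied by the most irrational market imaginable, which is exactly why Rubinstein's "minimally rational" label is doing no work.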

Why not just call it the "markets are hard to predict" hypothesis? The reason, I suspect, is that this idea isn't very surprising and, more importantly, doesn't imply anything about markets being good or accurate or efficient. And that's really what EMH people want to conclude -- leave the markets alone because they are wonderful information processors and allocate resources efficiently. Trouble is, you can't conclude that just from the fact that markets are hard to beat. Trying to do so with various redefinitions of the hypothesis is like trying to prove that 2 = 1. Watching the effort is, to quote physicist John Bell in another context, "like watching a snake trying to eat itself from the tail. It becomes embarrassing for the spectator long before it becomes painful for the snake."

Monday, October 17, 2011

What moves the markets? Part II

High frequency trading makes for markets that produce enormous volumes of data. Such data make it possible to test some of the old chestnuts of market theory -- the efficient markets hypothesis, in particular -- more carefully than ever before. Studies in the past few years show quite clearly, it seems to me, that the EMH is very seriously misleading and isn't really even a good first approximation.

Let me give a little more detail. In a recent post I began a somewhat leisurely exploration of considerable evidence which contradicts the efficient markets idea. The efficient markets hypothesis (in its "semi-strong" form, at least) claims that market prices fully reflect all publicly available information. When new information becomes available, prices respond. In the absence of new information, prices should remain more or less fixed.

Striking evidence against this view comes from studies (some now ten or twenty years old) showing that markets often make quite dramatic movements even in the absence of any news. I looked at some older studies along these lines in the last post, but stronger evidence comes from studies using electronic news feeds and high-frequency stock data. Are sudden jumps in prices in high-frequency markets linked to the arrival of new information, as the EMH says? In a word -- no!

The idea in these studies is to look for big price movements which, in a sense, "stand out" from what is typical, and then see if such movements might have been caused by some "news". A good example is this study by Armand Joulin and colleagues from 2008. Here's how they proceeded. Suppose R(t) is the minute by minute return for some stock. You might take the absolute value of these returns, average them over a couple hours and use this as a crude measure -- call it σ -- of the "typical size" of one-minute stock movements over this interval. An unusually big jump over any minute-long interval will be one for which the magnitude of R is much bigger than σ. 

To make this more specific, Joulin and colleagues defined "s jumps" as jumps for which the ratio |R/σ| > s. The value of s can be 2 or 10 or anything you like. You can look at the data for different values of s, and the first thing the data shows -- and this isn't surprising -- is a distinctive pattern for the probability of observing jumps of size s. It falls off with increasing s, meaning that larger jumps are less likely, and the mathematical form is very simple -- a power law, with P(s) proportional to s^(-4), especially as s becomes large (from 2 up to 10 and beyond). This is shown in the figure below (the upper curve):

This pattern reflects the well known "fat tailed" distribution of market returns, with large returns being much more likely than they would be if the statistics followed a Gaussian curve. Translating the numbers into daily events, s jumps of size s = 4 turn out to happen about 8 times each day, while larger jumps of s = 8 occur about once every day and one-half (this is true for each stock).
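The jump-counting recipe is simple enough to sketch in a few lines. Here is a toy illustration (not the authors' code; the "returns" are simulated Student-t draws standing in for fat-tailed high-frequency data, and the two-hour window is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated one-minute returns with fat tails (Student-t, 3 degrees of
# freedom), standing in for real high-frequency data.
R = rng.standard_t(df=3, size=100_000)

# Crude local volatility sigma: the average of |R| over a 120-minute
# window, as in the rough recipe described above.
window = 120
sigma = np.convolve(np.abs(R), np.ones(window) / window, mode="same")

def count_jumps(R, sigma, s):
    """Count minutes where the jump ratio |R/sigma| exceeds s."""
    return int(np.sum(np.abs(R / sigma) > s))

# Larger thresholds s catch fewer jumps, falling off roughly as a power law.
for s in (2, 4, 8):
    print(s, count_jumps(R, sigma, s))
```

For real data one would of course use actual minute-bar returns, and would exclude the jump minute itself from the volatility estimate; the point here is only the mechanics of the |R/σ| > s filter.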

Now the question is -- are these jumps linked to the announcement of some new information? To test this idea, Joulin and colleagues looked at various news feeds including feeds from Dow Jones and Reuters covering about 900 stocks. These can be automatically scanned for mention of any specific company, and then compared to price movements for that company. The first thing they found is that, on average, a new piece of news arrives for a company about once every 3 days. Given that a stock on average experiences one jump every day and one-half, this immediately implies an imbalance between the number of stock movements and the number of news items. There's not enough news to cause the jumps observed. Stocks move -- indeed, jump -- too frequently.

Conclusion: News sometimes but not always causes market movements, and significant market movements are sometimes but not always caused by news. The EMH is wrong, unless you want to make further excuses that there could have been news that caused the movement, and we just don't recognize it or haven't yet figured out what it is. But that seems like simply positing the existence of further epicycles.

But another part of the Joulin et al. study is even more interesting. Having found a way to divide price jumps into two categories, (A) those caused by news (clearly linked to some item in a news feed) and (B) those unrelated to any news, one can then look for systematic differences in the way the market settles down after such a jump. The data show that the volatility of prices, just after a jump, becomes quite high; it then relaxes over time back to the average volatility before the jump. But the relaxation works differently depending on whether the jump was of type A or B: caused by news or not caused by news. The figure below shows how the volatility relaxes back to the norm, first for jumps linked to news, and second for jumps not linked to news. The latter shows a much slower relaxation:

As the authors comment on this figure,
In both cases, we find (Figure 5) that the relaxation of the excess-volatility follows a power-law in time, σ(t) − σ(∞) ∝ t^(−β) (see also [22, 23]). The exponent of the decay is, however, markedly different in the two cases: for news jumps, we find β ≈ 1, whereas for endogenous jumps one has β ≈ 1/2. Our results are compatible with those of [22], who find β ≈ 0.35.
Of course, β ≈ 1/2 implies a much slower relaxation back to the norm (whatever that is!) than does β ≈ 1. Hence, it seems that the market takes a longer time to get back to normal after a no-news jump, whereas it goes back to normal quite quickly after a news-related jump.
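Estimating such a relaxation exponent is just a straight-line fit in log-log coordinates, since a power law becomes a straight line there. A minimal sketch, using synthetic data with a known β rather than real post-jump volatility profiles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic post-jump excess volatility decaying as t^(-beta), with a
# little multiplicative noise -- a stand-in for the averaged volatility
# profiles measured in the study.
t = np.arange(1, 501, dtype=float)      # minutes after the jump
beta_true = 0.5                         # the "no-news" value, beta ~ 1/2
excess_vol = t ** (-beta_true) * np.exp(0.05 * rng.standard_normal(t.size))

# On log-log axes the power law is linear, so beta is just minus the
# slope of log(excess volatility) regressed against log(t).
slope, intercept = np.polyfit(np.log(t), np.log(excess_vol), 1)
beta_hat = -slope
print(beta_hat)
```

With real data the hard part is upstream: averaging volatility profiles over many comparable jumps and estimating the baseline σ(∞); the fit itself is the easy step.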

No one knows why this should be, but Joulin and colleagues made the quite sensible speculation that a jump clearly related to news is not really surprising, and certainly not unnerving. It's understandable, and traders and investors can decide what they think it means and get on with their usual business. In contrast, a no-news event -- think of the Flash Crash, for example -- is very different. It is a real shock and presents a lingering unexplained mystery. It is unnerving and makes investors uneasy. The resulting uncertainty registers in high volatility.

What I've written here only scratches the surface of this study. For example, one might object that lots of news isn't just linked to the fate of one company, but pertains to larger macroeconomic factors. It may not even mention a specific company but point to a likely rise in the prices of oil or semiconductors, changes influencing whole sectors of the economy and many stocks all at once. Joulin and colleagues tried to take this into account by looking for correlated jumps in the prices of multiple stocks, and indeed changes driven by this kind of news do show up quite frequently. But even accounting for this more broad-based kind of news, they still found that a large fraction of the price movements of individual stocks do not appear to be linked to anything coming in through news feeds. As they concluded in the paper:
Our main result is indeed that most large jumps... are not related to any broadcasted news, even if we extend the notion of ‘news’ to a (possibly endogenous) collective market or sector jump. We find that the volatility pattern around jumps and around news is quite different, confirming that these are distinct market phenomena [17]. We also provide direct evidence that large transaction volumes are not responsible for large price jumps, as also shown in [30]. We conjecture that most price jumps are in fact due to endogenous liquidity micro-crises [19], induced by order flow fluctuations in a situation close to vanishing outstanding liquidity.
Their suggestion in the final sentence is intriguing and may suggest the roots of a theory going far beyond the EMH. I've touched before on early work developing this theory, but there is much more to be said. In any event, however, data emerging from high-frequency markets backs up everything found before -- markets often make violent movements which have no link to news. Markets do not just respond to new information. Like the weather, they have a rich -- and as yet mostly unstudied -- internal dynamics.

Friday, October 14, 2011

Learning in macroeconomics...

I've posted before on macroeconomic models that try to go beyond the "rational expectations" framework by assuming that the agents in an economy are different (they have heterogeneous expectations) and are also not necessarily rational. This approach seems wholly more realistic and believable to me.

In a recent comment, however, ivansml pointed me to this very interesting paper from 2009, which I've enjoyed reading. What the paper does is explore what happens in some of the common rational expectations models if you suppose that agents' expectations aren't formed rationally but rather on the basis of some learning algorithm. The paper shows that learning algorithms of a certain kind lead to the same equilibrium outcome as the rational expectations viewpoint. This IS interesting and seems very impressive. However, I'm not sure it's as interesting as it seems at first.

The reason is that the learning algorithm is indeed of a rather special kind. Most of the models studied in the paper, if I understand correctly, suppose that agents in the market already know the right mathematical form they should use to form expectations about prices in the future. All they lack is knowledge of the values of some parameters in the equation. This is a little like assuming that people who start out trying to learn the equations for, say, electricity and magnetism, already know the right form of Maxwell's equations, with all the right space and time derivatives, though they are ignorant of the correct coefficients. The paper shows that, given this assumption in which the form of the expectations equation is already known, agents soon evolve to the correct rational expectations solution. In this sense, rational expectations emerges from adaptive behaviour.
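To see concretely what this kind of "learning" amounts to, here is a toy sketch (my own illustration, with made-up parameters, not a model taken from the paper): agents know the correct form of the price equation and are ignorant only of one constant, which they update by recursive averaging. With the feedback in the stable range, the belief converges to the rational expectations value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy self-referential price process: p_t = mu + alpha * E_{t-1}[p_t] + noise.
# Agents believe prices fluctuate around some constant `a`; they already
# know this is the right functional form -- only the value of `a` is
# unknown, mirroring the assumption discussed above.
mu, alpha = 2.0, 0.5
a = 0.0                                  # initial, badly wrong, belief
for t in range(1, 20_001):
    price = mu + alpha * a + 0.1 * rng.standard_normal()
    a += (price - a) / t                 # decreasing-gain (least-squares) update

# The rational expectations equilibrium solves a* = mu + alpha * a*.
ree = mu / (1.0 - alpha)
print(a, ree)
```

Note how much is built in: the agents never have to discover that a constant-plus-feedback equation governs prices; they only tune one number within it.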

I don't find this very convincing as it makes the problem far too easy. More plausible, it seems to me, would be to assume that people start out with not much knowledge at all of how future prices will most likely be linked by inflation to current prices, make guesses with all kinds of crazy ideas, and learn by trial and error. Given the difficulty of this problem, and the lack even among economists themselves of great predictive success, this would seem more reasonable. However, it is also likely to lead to far more complexity in the economy itself, because a broader class of expectations will lead to a broader class of dynamics for future prices. In this sense, the models in this paper assume away any kind of complexity from a diversity of views.

To be fair to the authors of the paper, they do spell out their assumptions clearly. They state in fact that they assume that people in their economy form views on likely future prices in the same way modern econometricians do (i.e. using the very same mathematical models). So the gist seems to be that in a world in which all people think like economists and use the equations of modern econometrics to form their expectations, then, even if they start out with some of the coefficients "mis-specified," their ability to learn to use the right coefficients can drive the economy to a rational expectations equilibrium. Does this tell us much?

I'd be very interested in others' reactions to this. I do not claim to know much of anything about macroeconomics. Indeed, one of the nice things about this paper is its clear introduction to some of the standard models. This in itself is quite illuminating. I hadn't realized that the standard models are not any more complex than linear first-order time difference equations (if I have this right) with some terms including expectations. I had seen these equations before and always thought they must be toy models just meant to illustrate the far more complex and detailed models used in real calculations and located in some deep economic book I haven't yet seen, but now I'm not so sure.

Difficulties with learning...

I just finished reading this wonderful short review of game theory (many thanks to ivansml for pointing this out to me) and its applications and limitations by Martin Shubik. It's a little old -- it appeared in the journal Complexity in 1998 -- but offers a very broad perspective which I think still holds today. Game theory in the pure sense generally views agents as coming to their strategies through rational calculation; this perspective has had huge influence in economics, especially in the context of relatively simple games with few players and not too many possible strategies. This part of game theory is well developed, although Shubik suggests there are probably many surprises left to learn.

Where the article really comes alive, however, is in considering the limitations to this strictly rational approach in games of greater complexity. In physics, the problem of two rigid bodies in gravitational interaction can be solved exactly (ignoring radiation, of course), but you get generic chaos as soon as you have three bodies or more. The same is true, Shubik argues, in game theory. Extend the number of players above three and as the number of possible permutations of strategies proliferates it is no longer plausible to assume that agents act rationally. The decision problems become too complex. One might still try to search for optimal N player solutions as a guide to what might be possible, but the rational agent approach isn't likely to be profitable as a guide to the likely behaviour and dynamics in such complex games. I highly recommend Shubik's short article to anyone interested in game theory, and especially its application to real world problems where people (or other agents) really can't hope to act on the basis of rational calculation, but instead have to use heuristics, follow hunches, and learn adaptively as they go.

Some of the points Shubik raises find perfect illustration in a recent study (I posted on it here) of typical dynamics in two-player games when the number of possible strategies gets large. Choose the structure of the games at random and the most likely outcome is a rich ongoing evolution of strategic behaviour which never settles down into any equilibrium. But these games do seem to show characteristic dynamical behaviour such as "punctuated equilibrium" -- long periods of relative quiescence which get broken apart sporadically by episodes of tumultuous change -- and clustered volatility -- the natural clustering together of periods of high variability. These qualitative aspects appear to be generic features of the non-equilibrium dynamics of complex games. Interesting that they show up generically in markets as well.

When problems are too complex -- which is typically the case -- we try to learn and adapt rather than "solving" the problem in any sense. Our learning itself may also never settle down into any stable form, but continually change as we find something that works well for a time, and then suddenly find it fails and we need to learn again.

Thursday, October 13, 2011


I'm probably beating a quickly dying horse, but I couldn't resist. The other day, I was talking with a colleague about the Occupy Wall Street issue, and came down on the side of the protesters, saying that the distribution of wealth in our country wasn't "fair" (actually, they said "equitable", but they pretty much meant the same thing). So, I brought up another colleague's Business Law class, where if the students used the word "fair" in an answer, they automatically lost points.

Fair is one of those words that seems to mean so many different things to different people that it's practically useless in conversation except as a rhetorical tool. When the Unknown Daughter was seven, we decided to expunge the use of the phrase "it's not fair". The Unknown Wife and I told her that we didn't want to hear it, and whenever she uttered the phrase, she'd just have to "put it in THE BOOK". THE BOOK was a little journal with her name on it and the title "It's Not Fair". Whenever she used the forbidden phrase, she had to write it down as "It's not fair that______". She looked at the book, thought a minute, smiled at me, and wrote one (and only one) entry in the book: "It's not fair that they're my parents". She's pretty much never used the phrase since (yes, I have a remarkable daughter).

To close, let me give you two sites to peruse. In the first, We are The 99 Percent, the Occupy Wall Street Crowd posts their grievances, and in the second, We are the 53 Percent, some others post their responses. Feel free to chime in on either side.

Wednesday, October 12, 2011

Not Out Of The Woods Yet.

Futures are up. European markets are up, and yet it does not feel like we have reached the bottom.
Let us consider the following news items:
Aluminium company Alcoa opened the US corporate earnings season with disappointing results for the third quarter, reporting profits below consensus expectations, the FT reports. Earnings per share were 15 cents for the third quarter.

European authorities plan to set a higher than expected capital threshold for the region’s banks and give them six to nine months to achieve that level or face government recapitalisations under the auspices of the eurozone’s €440bn rescue fund.

A bill that aims to punish Beijing for holding down its currency passed the Senate on Tuesday despite a warning from China that the legislation could plunge the global economy into a 1930s-like depression.

Slovakia’s government became the first in the eurozone to fall over opposition to expanding the European financial stability fund, when just 55 of the parliament’s 150 MPs voted in favour of the measure.

Paulson & Co, the giant US hedge fund run by billionaire investor John Paulson, has warned that in a “worst case” scenario it could suffer redemptions equivalent to between a fifth and a quarter of its assets by the end of the year.

I think until Uncle Ben comes up with QE3, or whatever number it will be, we shall see renewed weakness. The time is not right to invest. I think the rally is just a bounce and the bears are not done yet.

Tuesday, October 11, 2011

Crazy economic models


In a recent post I commented on the "fetish of rationality" present in a great deal of mathematical economic theory. Agents in the theories are often assumed to have super-human reasoning abilities and to determine their behaviour and expectations solely through completely rational calculation. In comments, Relja suggested that maybe I'd gone too far and that economists' version of rationality isn't all that extreme:
I think critiques like this about rationality in economics miss the point. The rationality assumed in economics is concerned with general trends; generally people pursue pleasure, not pain (according to their own utility functions), they prefer more money to less (an expanded budget constraint leaves them on a higher indifference curve, thus better off), they have consistent preferences (when they're in the mood for chocolate, they're not going to choose vanilla). Correspondingly, firms have the goal of profit maximization - they produce products that somebody will want to buy or they go out of business. Taking the rationality assumption to its "umpteenth" iteration is really quite irrational in itself. A consumer knows that spending 6 years to calculate the mathematically optimal choice of ice-cream is irrational. An economist accordingly knows the same thing. And although assumptions are required for modelling economic scenarios (micro or macro), I seriously doubt that any serious economist would make assumptions that infer such irrationality. :).
I think Relja expressed a well-balanced perspective, has learned some economics in detail, and has taken away from it some conclusions that are, all in all, pretty sound. Indeed, people are goal oriented, don't (usually) prefer pain, and businesses do try to make profits (although whether they try to 'maximize' is an open question). If economists were really just following these reasonable ideas, I would have no problem.

But I also think the problem is worse than Relja may realize. The use of rationality assumptions is more extreme than this, and also decisive in some of the most important areas of economic theory, especially in macroeconomics. A few days ago, John Kay offered this very long and critical essay on the form of modern economic theory. It's worth a read all the way through, but in essence, Kay argues that economics is excessively based on logical deduction of theories from a set of axioms, one of which (usually) is the complete rationality of economic agents:
Rigour and consistency are the two most powerful words in economics today.... They have undeniable virtues, but for economists they have particular interpretations.  Consistency means that any statement about the world must be made in the light of a comprehensive descriptive theory of the world.  Rigour means that the only valid claims are logical deductions from specified assumptions.  Consistency is therefore an invitation to ideology, rigour an invitation to mathematics.  This curious combination of ideology and mathematics is the hallmark of what is often called ‘freshwater economics’ – the name reflecting the proximity of Chicago, and other centres such as Minneapolis and Rochester, to the Great Lakes.

Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms.
Kay isn't quite as explicit as he might have been, but economist Michael Woodford, in a comment on Kay's argument, goes further in spelling out what Kay finds most objectionable -- the so-called rational expectations framework, championed by Robert Lucas, which forms the foundation of today's DSGE (dynamic stochastic general equilibrium) models. A core assumption of such models is that all individuals in the economy have rational expectations about the future, and that such expectations affect their current behaviour.

Now, if this meant something like Relja's comment suggests it might -- that people are simply forward looking, as we know they are -- this would be fine. But it's not. The form this assumption ultimately takes in these models is to assume that everyone in the economy has fully rational expectations, in that they form their expectations in accordance with the conceivably best and most accurate economic models, even if solving those models might require considerable mathematics and computation (and knowledge of everyones' expectations). As Woodford puts it in his comment,
It has been standard for at least the past three decades to use models in which not only does the model give a complete description of a hypothetical world, and not only is this description one in which outcomes follow from rational behavior on the part of the decision makers in the model, but the decision makers in the model are assumed to understand the world in exactly the way it is represented in the model. More precisely, in making predictions about the consequences of their actions (a necessary component of an accounting for their behavior in terms of rational choice), they are assumed to make exactly the predictions that the model implies are correct (conditional on the information available to them in their personal situation).
This postulate of “rational expectations,” as it is commonly though rather misleadingly known, is the crucial theoretical assumption behind such doctrines as “efficient markets” in asset pricing theory and “Ricardian equivalence” in macroeconomics.  
It is precisely here that modern economics takes the assumption of rationality much too far, merely for the sake of mathematical and theoretical rigour. Do economists really believe people form their expectations in this way? It's hard to imagine they could, as they live the rest of their lives among people who do not do this. But the important question isn't what they really believe; it's what they base their theories on, theories which then get used by governments in policy making. Sadly, these unrealistic assumptions remain in the key models, and they really have zero plausibility. Woodford again,
[The rational expectations assumption] is often presented as if it were a simple consequence of an aspiration to internal consistency in one’s model and/or explanation of people’s choices in terms of individual rationality, but in fact it is not a necessary implication of these methodological commitments. It does not follow from the fact that one believes in the validity of one’s own model and that one believes that people can be assumed to make rational choices that they must be assumed to make the choices that would be seen to be correct by someone who (like the economist) believes in the validity of the predictions of that model. Still less would it follow, if the economist herself accepts the necessity of entertaining the possibility of a variety of possible models, that the only models that she should consider are ones in each of which everyone in the economy is assumed to understand the correctness of that particular model, rather than entertaining beliefs that might (for example) be consistent with one of the other models in the set that she herself regards as possibly correct.
This is the sense in which hyper-rationality really does enter into economic theories. It's still pervasive, and still indefensible. It would be infinitely preferable if macro-economists such as Lucas and his followers (one of whom, Thomas Sargent, was just awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel) moved beyond these assumptions.


Blogger sometimes doesn't seem to register comments. Email alerted me to a sharp criticism by ivansml of some of the points I made, but the comment isn't, at least for my browser, yet showing up. Just so it doesn't get lost, ivansml said:
Every assumption is false when understood literally, including rational expectations. The important thing is whether people behave as if they had rational expectations - and answer to that will fortunately depend on particular model and data, not on emotional arguments and expressive vocabulary.

By the way, if you reject RE but accept that expectations matter and should be forward-looking, how do you actually propose to model them? One possible alternative is to have agents who estimate laws of motion from past data and continuously update their estimates, which is something that macroeconomists have actually investigated before. And guess what - this process will often converge to rational expectations equilibrium.

Finally, the comment about Nobel Prize (yeah, it's not real Nobel, whatever) for Sargent is a sign of ignorance. Sargent has published a lot on generalizations or relaxations of RE, including the learning literature mentioned above, literature on robustness (where agents distrust their model and choose actions which are robust to model misspecifications) and even agent-based models. In addition to that, the prize citation focuses on his empirical contributions (i.e. testing theories against data). This does not seem like someone who is religiously devoted to "hyper-rationality" and ideology.
Two points in response:

1. Yes, the point is precisely to include expectations but to model their formation in some more behaviourally realistic way, through learning algorithms as suggested. I am aware of such work and think it is very important. Indeed, the latter portion of this post from earlier this year looked precisely at this and considered a recent review of work in this area by Cars Hommes and others. The idea is not to assume that everyone forms their expectations identically, but to recognize that learning is important, that there may be systematic biases and so on. As ivansml notes, there are circumstances in which the model may settle into a rational expectations equilibrium. But there are also many in which it does not. My hunch -- not backed by any evidence that I can point to readily -- is that the rational expectations equilibrium will be increasingly unlikely as the decisions faced by agents in the model become increasingly complex. Very possibly the system won't settle into any equilibrium at all.

But I thank ivansml for pointing this out. It is certainly the case that expectations matter, and they should be brought into theory in some plausible and convincing way. Just to finish on this point, here is a quote from the Hommes review article, suggesting that the RE equilibrium doesn't come up very often:
Learning to forecast experiments are tailor-made to test the expectations hypothesis, with all other model assumptions computerized and under control of the experimenter. Different types of aggregate behavior have been observed in different market settings. To our best knowledge, no homogeneous expectations model [rational or irrational] fits the experimental data across different market settings. Quick convergence to the RE-benchmark only occurs in stable (i.e. stable under naive expectations) cobweb markets with negative expectations feedback, as in Muth's (1961) seminal rational expectations paper. In all other market settings persistent deviations from the RE fundamental benchmark seem to be the rule rather than the exception.
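The cobweb case Hommes mentions is easy to reproduce in miniature. A sketch with made-up parameters (linear demand and supply, producers naively expecting last period's price): with weak feedback the price spirals into the equilibrium, while with strong feedback the very same naive rule makes the oscillations grow.

```python
# Cobweb model with naive expectations, E[p_t] = p_{t-1}.
# Demand: p = A - B * q ; supply: q = C + D * E[p]. Parameters illustrative.
def cobweb_path(A, B, C, D, p0, steps):
    prices = [p0]
    for _ in range(steps):
        q = C + D * prices[-1]            # producers respond to last price
        prices.append(A - B * q)          # demand then clears the market
    return prices

# The equilibrium price solves p* = A - B * (C + D * p*).
def equilibrium(A, B, C, D):
    return (A - B * C) / (1.0 + B * D)

# Stable case, |B * D| < 1: quick convergence to the benchmark.
stable = cobweb_path(A=10.0, B=1.0, C=0.0, D=0.5, p0=1.0, steps=50)
# Unstable case, |B * D| > 1: the same naive rule produces growing swings.
unstable = cobweb_path(A=10.0, B=1.0, C=0.0, D=1.5, p0=6.0, steps=50)

print(stable[-1], equilibrium(10.0, 1.0, 0.0, 0.5))
```

The experiments go far beyond this deterministic toy, of course, but it shows why the sign and strength of the expectations feedback matter so much for whether the RE benchmark is ever reached.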

2. On his second point about Thomas Sargent, I plead guilty. ivansml is right -- his work is not as one dimensional as my comments made it seem. Indeed, I had been looking into his work over the past weekend for different reasons and had noticed that his work has been fairly wide ranging, and he does deserve credit for trying to relax RE assumptions. (Although he did seem a little snide in one interview I read, suggesting that mainstream macro-economists were not at all surprised by the recent financial crisis.)

So thanks also to ivansml for setting me straight. I've changed the offending text above.