Friday, September 30, 2011

Lobbying pays off handsomely -- visual proof

From an article in The Economist, a graph showing the performance of the "Lobbying Index" -- an average over the 50 firms within the S&P 500 that lobby most intensively -- versus the S&P 500 itself over the past decade. It's pretty clear that lobbying -- a rather less than honourable profession in my book -- pays off:

The Fetish of Rationality

I'm currently reading Jonathan Aldred's book The Skeptical Economist. It's a brilliant exploration of how economic theory is run through at every level with hidden value judgments which often go a long way to determining its character. For example, the theory generally assumes that more choice always has to be better. This follows more or less automatically from the view that people are rational "utility maximizers" (a phrase that should really be banned for ugliness alone). After all, more available choices can only give a "consumer" the ability to meet their desires more effectively, and can never have negative consequences. Add extra choices and the consumer can always simply ignore them.

As Aldred points out, however, this just isn't how people work. One of the problems is that more choice means more thinking and struggling to decide what to do. As a result, adding more options often has the effect of inhibiting people from choosing anything. In one study he cites, doctors were presented with the case history of a man suffering from osteoarthritis and asked if they would A. refer him to a specialist or B. prescribe a new experimental medicine. Other doctors were presented with the same choice, except they could choose between two experimental medicines. Doctors in the second group made twice as many referrals to a specialist, apparently shying away from the psychological burden of having to deal with the extra choice between medicines.

I'm sure everyone can think of similar examples from their own lives in which too much choice becomes annihilating. Several years ago my wife and I were traveling in Nevada and stopped in for an ice cream at a place offering 200+ flavours and a variety of extra toppings, etc. There were an astronomical number of potential combinations. After thinking for ten minutes, and letting lots of people pass by us in the line, I finally just ordered a mint chocolate chip cone -- to end the suffering, as it were. My wife decided it was all too overwhelming and in the end didn't want anything! If there had only been vanilla and chocolate we'd have ordered in 5 seconds and been very happy with the result.

In discussing this problem of choice, Aldred refers to a beautiful paper I read a few years ago by economist John Conlisk entitled Why Bounded Rationality? The paper gives many reasons why economic theory would be greatly improved if it modeled individuals as having finite rather than infinite mental capacities. But one of the things he considers is a paradox at the very heart of the notion of rational behaviour. A rational person facing any problem will work out the optimal way to solve that problem. However, there are costs associated with deliberation and calculation. The optimal solution to the ice cream choice problem isn't to stand in the shop for 6 years while calculating how to maximize expected utility over all the possible choices. Faced with a difficult problem, therefore, a rational person first has to solve another problem -- for how long should I deliberate before it becomes advantageous to just take a guess?

This is a preliminary problem -- call it P1 -- which has to be solved before the real deliberation over the choice can begin. But, Conlisk pointed out, P1 is itself a difficult problem, and a rational individual doesn't want to waste resources thinking about that one for too long either. Hence, before working on P1, the rational person first has to decide the optimal amount of time to spend on solving P1. This is another problem, P2, which is also hard. Of course, it never ends. Take rationality to its logical conclusion and it ends up destroying itself -- it's simply an inconsistent idea.
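To see how quickly the regress turns self-defeating, here's a toy numerical sketch. It is my own construction, not a model from Conlisk's paper: suppose getting the decision right is worth V, and each meta-level of deliberation (P1, P2, ...) adds a fixed cost c. The values of V and c are arbitrary.

```python
# Toy sketch of the deliberation regress (my construction, not Conlisk's
# model): deciding well is worth V, and each meta-level of deliberation
# -- P1 (how long to think?), P2 (how long to think about P1?), ... --
# adds a fixed cost c.
V = 10.0   # value of getting the choice right
c = 0.25   # cost of each extra level of meta-deliberation

def net_payoff(levels):
    """Value of the decision minus the cost of `levels` of meta-deliberation."""
    return V - c * levels

# The regress has no natural stopping point; an agent who kept going
# would eventually destroy the value of deciding at all.
print(net_payoff(4))    # a few levels: still worthwhile
print(net_payoff(100))  # deep in the regress: worse than not deciding
```

Any finite cut-off to this regress is unjustified by the theory's own lights -- which is exactly the sort of rough-and-ready stopping rule that bounded rationality takes as its starting point.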

Anyone who is not an economist might be quite amazed by Conlisk's paper. It's a great read, but it will dawn on the reader that in a sane world it simply wouldn't be necessary. It's arguing for the obvious and is only required because economic theory has made such a fetish of rationality. The assumption of rationality may in some cases have made it possible to prove theorems by turning the consideration of human behaviour into a mathematical problem. But it has tied the hands of economic theorists in a thousand ways.

Thursday, September 29, 2011

Economists on the way to being a "religious cult"...

A short seven-minute video produced by the Institute for New Economic Thinking offers the views (very briefly) of a number of economists on modeling and its purposes (h/t to Moneyscience). Two things of note:

1. Along the way, Brad DeLong mentions Milton Friedman's famous claim that a model is better the more unrealistic its assumptions, and that the sole measure of a theory is whether it makes accurate predictions. I'd really like to know what DeLong thinks of this, but his views aren't in the interview. He mentions Friedman's idea without defending or attacking it -- just a nod, I guess, to one of the most influential ideas on this topic, which shows how much Friedman's view is still in play.

In my view (some not very well organized thoughts here) the core problem with Friedman's argument is that a theory with perfect predictions and perfectly unrealistic assumptions simply doesn't teach you anything -- you're left just as mystified by how the model can possibly work (give the right predictions) as you were with the original phenomena you set out to explain. It's like a miracle. Such a model might of course be valuable as a starting point, and in stimulating the invention of further models with more realistic assumptions which then -- if they give the same predictions -- may indeed teach you something about how certain kinds of interactions, behaviours, etc (in the assumptions) can lead to observed consequences.

But then -- it's the models with the more realistic assumptions that are superior. (It's worth remembering that Friedman liked to say provocative things even if he didn't quite believe them.)

2. An interesting quote from economist James Galbraith, with which I couldn't agree more:
Modeling is not the end-all and the be-all of economics... The notion that the qualities of an economist should be defined by the modeling style that they adopt [is a disaster]. There is a group of people who say that if you're not doing dynamic stochastic general equilibrium modeling then you're not really a modern economist... that's a preposterous position which is going to lead to the reduction of economics to the equivalent of a small religious cult working on issues of interest to no one else in the world.

Basel III -- Taking away Jamie Dimon's Toys

Most people have by now heard the ridiculous claim by Jamie Dimon, CEO of JPMorgan Chase, that the new Basel III rules are "anti-American." The New York Times has an interesting set of contributions by various people on whether Dimon's claim has any merit. You'll all be shocked to learn that Steve Bartlett, president of the Financial Services Roundtable -- we can assume he's not biased, right? -- thinks that Dimon is largely correct. Personally, I tend to agree more with the views of Russell Roberts of George Mason University:

Who really writes the latest financial regulations, where the devil is in the details? Who has a bigger incentive to pay attention to their content — financial insiders such as the executives of large financial institutions or you and me, the outsiders? Why would you ever think that the regulations that emerge would be designed to promote international stability and growth rather than the naked self-interest of the financial community?

I do not believe it’s a coincidence that Basel I and II blew up in a way that enriched insiders at the expense of outsiders. To expect Basel III to yield a better result (now that we've supposedly learned so much) is to ignore the way the financial game is played. Until public policy stops subsidizing leverage (bailouts going back to 1984 make it easier for large financial institutions to fund each other’s activities using debt), it is just a matter of time before any financial system is gamed by the insiders.

Jamie Dimon is a crony capitalist. Don’t confuse that with the real kind. If he says Basel III is bad for America, you can bet that he means "bad for JPMorgan Chase." Either way, he’ll have a slightly larger say in the ultimate outcome than the wisest economist or outsider looking in.
Sadly, this is the truth, even though many people still cling to the hope that there are good people out there somewhere looking after the welfare of the overall system. Ultimately, I think, the cause of financial crises isn't to be found in the science of finance or economics, but in politics. There is no way to prevent them as long as powerful individuals can game the system to their own advantage, privatizing the gains, as they say, and socializing the losses.

But not everyone is convinced of this by a long shot. Just after the crisis I wrote a feature article for Nature looking at new thinking about modeling economic systems and financial markets in particular. Researching the article, I came across lots of good new thinking about ways to model markets and go beyond the standard framework of economics. That all went into the article. I also suggested to my editor that we had to at least raise at the end of the article the nexus of influence between Wall St and the political system, and I proposed in particular to write a little about the famous paper by Romer and Akerlof, Looting: The Economic Underworld of Bankruptcy for Profit, which gives a simple and convincing argument in essence about how corporate managers (not only in finance) can engineer vast personal profits by running companies into the ground. Oddly, my editor in effect said "No, we can't include that because it's not science."

But that doesn't mean it's not important.

But back to Basel III. I had an article exploring this in some detail in Physics World in August. It is not available online. As a demonstration of my still lagging Blogger skills, I've captured images of the 4 pages and put them below. Not the best picture quality, I'm afraid.

Wednesday, September 28, 2011

Financial Times numeracy check

This article from the Financial Times is unfortunately quite typical of the financial press (and yes, not only the financial press). Just ponder the plausibility of what is reported in the following paragraph, commenting on a proposal by José Manuel Barroso, European Commission president, to put a tax on financial transactions:
Mr Barroso did not release details of his plan, except to say it could raise some €55bn a year. However, a study carried out by the Commission has found that the tax could also dent long-term economic growth in the region by between 0.53 per cent and 1.76 per cent of gross domestic product.
The article doesn't mention who did the study, or give a link to it. But it gets worse. If reported accurately, it seems the European Commission's economists -- or whoever they had do the study mentioned -- actually think that the "3" in 0.53 and the "6" in 1.76 mean something. That's quite impressive accuracy when talking about economic growth in a time of great uncertainty.

I would bet a great deal that a more accurate statement of the confidence of their results would be, say, between 0 and 2 percent crudely, or maybe even -1 and 3. But that would be admitting that no one has any certainty about what's coming next, and that's not part of the usual practice.
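To illustrate the point numerically (the function and the assumed error bar below are mine, purely for illustration), here's how one might report an estimate only to the precision its uncertainty actually supports:

```python
import math

def round_to_uncertainty(value, uncertainty):
    """Round `value` so its last digit matches the scale of `uncertainty`
    -- quoting digits beyond that is false precision."""
    digits = -int(math.floor(math.log10(uncertainty)))
    return round(value, digits)

# If the honest error bar on a growth estimate is ~1 percentage point,
# "0.53" and "1.76" should be reported as roughly 1 and 2:
print(round_to_uncertainty(0.53, 1.0))
print(round_to_uncertainty(1.76, 1.0))
```

The digits beyond the error bar carry no information; reporting them only lends the study an air of accuracy it cannot have.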

High-frequency trading: taming the chaos

I have an opinion piece that will be published in the next few days in Bloomberg View. It is really just my attempt to bring attention to some very good points made in a recent speech by Andrew Haldane of the Bank of England. For anyone interested in further details, you can read the original speech (I highly recommend this) or two brief discussions I've given here looking at the first third of the speech and the second third of the speech.

I may not get around to writing a detailed analysis of the third part, which focuses on possible regulatory measures to lessen the chance of catastrophic Flash Crash-type events in the future. But the ideas raised in this part are fairly standard -- a speed limit on trading, rules which would force market makers to participate even in volatile times (as was formerly required of them) and so on. I think the most interesting part by far is the analysis of the recent increase in the frequency of abrupt market jumps (fat-tail events) over very short times, and of the risks facing market makers and how they respond as volatility increases. This should all help to frame the debate over HFT -- a debate which seems extremely volatile itself -- in somewhat more scientific terms.

I also suggest that anyone who finds any of this interesting should go to the Bank of England website and read some of Andrew Haldane's other speeches. Every one is brilliant and highly illuminating.

Monday, September 26, 2011

What Will Cause The Collapse?

Bear markets only end when prices reach a gut-wrenching low -- when speculators have been beaten down so badly that the mere mention of their favorite stock or commodity induces nausea and they do not want to touch it in their lifetime, ever. By that token, the bear market in US stocks will reach its nadir when the S&P is near 400 and the Dow near 3000. At those levels, 2008 will surely look like a trailer, and there will be no safe haven except cash. That is the ultimate collapse the mega-bears are hoping for.

Will that happen? And if so, when? The global economy seems to be lurching from one crisis to another, and still the S&P has not breached 1000. Is what we are seeing a normal correction in a bull market, or the beginning of a bear market?

Western civilization has modeled itself on Keynesian theory, in which higher government spending creates an illusion of growth. In reality, there has been no wage growth in the USA in the last 20 years. The macro story is the same in the USA, Greece or Japan. The only difference is that some countries can print their own money and some cannot. But printing money is no solution to a problem of solvency. If you do not believe that, ask Robert Mugabe of Zimbabwe. With a 200% debt-to-GDP ratio, Japan has been trying for the last 20 years to bring back prosperity, and yet the Nikkei is down from 38,000 to 9,000.

The ZIRP and easy-money policies being followed by the Fed and the BoJ indicate that the central bankers of the world are worried about deflation, not inflation. With balance sheet contraction upon us, all the central banks are trying to inject as much liquidity into the financial system as possible. Their only hope of survival is to re-float the financial markets.

What QE1 and QE2 did was just that: they helped re-inflate the collapsed bubble. There was no growth then, and there is no growth now. So why are the markets not collapsing, given that there is no QE3, there are sovereign default threats, and GDP is barely moving? What is holding up the support level?

The answer can possibly be found in the last G20 meeting. With the world's economies so interconnected, it is a kind of MAD world (Mutually Assured Destruction). China wants to keep the markets going in the USA and in Europe so that it can keep its millions of rural poor employed and keep a lid on social unrest. The USA provides dollar swap lines to virtually dead European banks, because letting them die would destabilize the world economy big time. The SNB pegs its currency to the sick euro to keep the Swiss franc low. Every country is doing what it can to keep the music going.

Under such conditions the world economy will lurch from one crisis to another, only to be propped up by more money. Volatility will be high: even safe havens like gold swing between 2% and 7% in one market session. But TPTB (The Powers That Be) will not let the bottom fall out. The world markets are pushing on a string.

The only thing that can destroy this equilibrium is something beyond the control of the central banks and TPTB: a true black swan event. I do not think the coming black swan will be financial. Even if Greece defaults, it will not cause the system to collapse; it has been almost two years, and the default has been priced in. The only black swan event I can imagine is geopolitical -- for example, an Israel-Iran conflict.

When the markets know that the governments are watching their backs, they keep gambling -- and that is exactly what they are doing. Today the US markets went up by 2% on no good news; nothing fundamentally changed from last week. It is as if, yes, they can. Right now it is money moving from the pocket of one hedge fund or bank to another; ordinary investors are not a part of it. The gamblers will carry on until some external shock crashes the system. What will trigger that is anybody's guess. Till then, keep the good times rolling.

I will be travelling for the next two weeks and the posts will be somewhat irregular. But if I come across something interesting, I will do my best to bring it to your attention.

Hope Is Eternal and Futures Are Up

The futures are up in the morning and we will possibly see a gap-up opening. But I do not think we should jump back into the markets yet. Most likely this rally will be short-lived -- more a dead cat bounce after a big sell-off week. The EU has not yet come up with any definitive solutions, and we can expect the volatility to continue well into October. I think the SPX will have trouble going past 1150.

The corrections in gold and silver continue. Silver can reach around the $22 level or below. I am not sure about gold and hence am staying away from it for now. The funny thing is that, historically, gold has performed better in a deflationary environment. If anyone cares to remember, gold was in a 20-year bear market while the stock markets were going up and inflation was much higher than it is today. So if we see a year-end rally from the end of October, we might see a sharp sell-off in gold.

For now, it is better to be in cash and wait for a good low in October before any buying opportunity comes up.

P.S.: The markets opened gap up, but now some are in the red and some are struggling to keep the opening gain. In the meantime, I hear Cramer on CNBC saying he is buying gold. Hmmmm. If the snake oil salesman is now pushing gold, maybe it is time to seriously short gold.
This one is from Kitco at 10:45 AM on 26th Sept. 2011:

Did gold really go down $62.90? Yes -- and the stronger US dollar was responsible for $3.50 of that drop:

Gold price change due to strengthening of USD: -$3.50
Gold price change due to predominant sellers: -$59.40
Gold price, total change: -$62.90

Overconfidence is adaptive?

A fascinating paper in Nature from last week suggests that overconfidence may actually be an adaptive trait. This is interesting as it strikes at one of the most pervasive assumptions in all of economics -- the idea of human rationality, and the conviction that being rational must always be more adaptive than being irrational. Quite possibly not:

Humans show many psychological biases, but one of the most consistent, powerful and widespread is overconfidence. Most people show a bias towards exaggerated personal qualities and capabilities, an illusion of control over events, and invulnerability to risk (three phenomena collectively known as ‘positive illusions’) [2-4, 14]. Overconfidence amounts to an ‘error’ of judgement or decision-making, because it leads to overestimating one’s capabilities and/or underestimating an opponent, the difficulty of a task, or possible risks. It is therefore no surprise that overconfidence has been blamed throughout history for high-profile disasters such as the First World War, the Vietnam war, the war in Iraq, the 2008 financial crisis and the ill-preparedness for environmental phenomena such as Hurricane Katrina and climate change [9, 12, 13, 15, 16].

If overconfidence is both a widespread feature of human psychology and causes costly mistakes, we are faced with an evolutionary puzzle as to why humans should have evolved or maintained such an apparently damaging bias. One possible solution is that overconfidence can actually be advantageous on average (even if costly at times), because it boosts ambition, morale, resolve, persistence or the credibility of bluffing. If such features increased net payoffs in competition or conflict over the course of human evolutionary history, then overconfidence may have been favoured by natural selection [5-8].

However, it is unclear whether such a bias can evolve in realistic competition with alternative strategies. The null hypothesis is that biases would die out, because they lead to faulty assessments and suboptimal behaviour. In fact, a large class of economic models depend on the assumption that biases in beliefs do not exist [17]. Underlying this assumption is the idea that there must be some evolutionary or learning process that causes individuals with correct beliefs to be rewarded (and thus to spread at the expense of individuals with incorrect beliefs). However, unbiased decisions are not necessarily the best strategy for maximizing benefits over costs, especially under conditions of competition, uncertainty and asymmetric costs of different types of error [8, 18-21]. Whereas economists tend to posit the notion of human brains as general-purpose utility maximizing machines that evaluate the costs, benefits and probabilities of different options on a case-by-case basis, natural selection may have favoured the development of simple heuristic biases (such as overconfidence) in a given domain because they were more economical, available or faster.
 The paper studies this question in a simple analytical model of an evolutionary environment in which individuals compete for resources. If the resources are sufficiently valuable, the authors find, overconfidence can indeed be adaptive:
Here we present a model showing that, under plausible conditions for the value of rewards, the cost of conflict, and uncertainty about the capability of competitors, there can be material rewards for holding incorrect beliefs about one’s own capability. These adaptive advantages of overconfidence may explain its emergence and spread in humans, other animals or indeed any interacting entities, whether by a process of trial and error, imitation, learning or selection. The situation we model—a competition for resources—is simple but general, thereby capturing the essence of a broad range of competitive interactions including animal conflict, strategic decision-making, market competition, litigation, finance and war.
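To get a flavour of how a model like this works, here is a much-simplified Monte Carlo sketch in the same spirit. The contest rules and all parameter values below are my own assumptions for illustration, not the ones in the paper: two agents decide whether to claim a resource on the basis of a noisy (and possibly biased) assessment of relative capability, and conflict is costly for both sides.

```python
import numpy as np

def mean_payoff(bias, r=10.0, c=2.0, noise=0.2, n=50_000, seed=42):
    """Average payoff of a focal agent with self-assessment `bias`
    competing against unbiased opponents for a resource worth `r`,
    with conflict costing each claimant `c`. Toy model, not the paper's."""
    rng = np.random.default_rng(seed)
    own = rng.uniform(0, 1, n)   # focal agent's true capability
    opp = rng.uniform(0, 1, n)   # opponent's true capability
    # each side sees the other's capability only through noise
    own_view_of_opp = opp + rng.normal(0, noise, n)
    opp_view_of_own = own + rng.normal(0, noise, n)
    own_claims = (own + bias) > own_view_of_opp  # bias inflates self-image
    opp_claims = opp > opp_view_of_own
    payoff = np.zeros(n)
    payoff[own_claims & ~opp_claims] = r         # unchallenged claim
    both = own_claims & opp_claims               # costly fight, stronger wins
    payoff[both] = np.where(own[both] > opp[both], r, 0.0) - c
    return payoff.mean()

# When the resource is worth several times the cost of conflict,
# the overconfident strategy outperforms the unbiased one on average:
print(mean_payoff(bias=0.3) > mean_payoff(bias=0.0))
```

With the resource worth five times the cost of conflict, the biased strategy comes out ahead, echoing the paper's qualitative conclusion that overconfidence pays when rewards are sufficiently valuable relative to the cost of fighting.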
Very interesting. But I just had a thought -- perhaps this may also explain why many economists seem to exhibit such irrational exuberance over the value of neo-classical theory itself?

High-frequency trading, the downside -- Part II

In this post I'm going to look a little further at Andrew Haldane's recent Bank of England speech on high-frequency trading. In Part I of this post I explored the first third of the speech, which looked at evidence that HFT has indeed lowered bid-ask spreads over the past decade, but also seems to have brought about an increase in volatility. Not surprisingly, that one measure doesn't even begin to tell the story of how HFT is changing the markets. Haldane explores this further in the second part of the speech, and also considers in a little more detail where this volatility comes from.

In a well-known study back in 1999, physicist Parameswaran Gopikrishnan and colleagues (from Gene Stanley's group in Boston) undertook what was then the most detailed look at market fluctuations (using data from the S&P index in this case) over periods ranging from 1 minute up to 1 month. This early study established a finding which (I believe) has now been replicated across many markets -- market returns over timescales from 1 minute up to about 4 days all follow a fat-tailed power-law distribution with exponent α close to 3, with the distribution becoming more Gaussian for times longer than about 4 days. Hence, there seems to be rich self-similarity and fractal structure in market returns on times down to around 1 second.
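To make "exponent α close to 3" concrete, here's a hedged sketch of the standard Hill estimator for a tail exponent. It uses simulated Student-t returns, whose tail exponent is exactly 3, as a stand-in for real S&P data (this is purely illustrative, not the method of the 1999 study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Student-t returns with 3 degrees of freedom have a power-law tail
# with exponent 3 -- a synthetic stand-in for market returns.
returns = rng.standard_t(df=3, size=100_000)

# Hill estimator: fit the tail exponent alpha from the k largest |returns|
x = np.sort(np.abs(returns))[::-1]   # order statistics, largest first
k = 1000
alpha = 1.0 / np.mean(np.log(x[:k] / x[k]))
print(alpha)  # should land roughly near 3 for these tails
```

The estimate depends on the cut-off k, which is the usual practical headache in fitting fat tails to real market data.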

What about shorter times? I hadn't followed this story for a few years. It turns out that in 2007, Eisler and Kertesz looked at a different set of data -- total transactions on the NYSE between 2000 and 2002 -- and found that the behaviour at short times (less than 60 minutes) was more Gaussian. This is reflected in the so-called Hurst exponent H having an estimated value close to 0.5. Roughly speaking, the Hurst exponent describes -- based on empirical estimates -- how rapidly a time series tends to wander away from its current value with increasing time. Calculate the root mean square deviation over a time interval T: for a Gaussian random walk (Brownian motion) this grows in proportion to T to the power H, with H = 1/2. A Hurst exponent higher than 1/2 indicates some kind of interesting persistent correlation in movements.
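The scaling definition just given translates directly into code. Here's a short sketch (purely illustrative, using a simulated random walk rather than market data) that estimates H from the growth of the root mean square deviation and recovers H ≈ 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(200_000))  # simulated Brownian-type walk

# Root-mean-square displacement over lag T should scale as T**H;
# fit H as the slope on log-log axes.
lags = np.array([10, 30, 100, 300, 1000, 3000])
rms = np.array([np.sqrt(np.mean((walk[T:] - walk[:-T]) ** 2)) for T in lags])
H = np.polyfit(np.log(lags), np.log(rms), 1)[0]
print(round(H, 2))  # close to 1/2 for an uncorrelated walk
```

Running the same fit on a persistent (H > 1/2) series would show the faster spreading that underlies the violent excursions discussed below.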

However, as Haldane notes, Reginald Smith showed last year that stock movements over short times have, since around 2005, begun showing more fat-tailed behaviour with H above 0.5. Smith's paper presents a number of figures in which H rises gradually over the period 2002-2009 from 0.5 to around 0.6 (with considerable fluctuation on top of the trend). This rise means that the market on short times has increasingly violent excursions, as Haldane's chart 11 below illustrates with several simulated time series having different Hurst exponents:

The increasing wildness of market movements has direct implications for the risks facing HFT market makers, and hence, the size of the bid-ask spread reflecting the premium they charge. As Haldane notes, the risk a market maker faces -- in holding stocks which may lose value or in encountering counterparties with superior information about true prices -- grows with the likely size of price excursions over any time period. And this size is directly linked to the Hurst exponent.
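The speech's "equation (1)" isn't reproduced in the excerpts here, but the relationship the text describes can be written down. This is my reconstruction of the scaling logic, not Haldane's exact formula: a market maker quoting a spread s over a holding horizon T must cover the typical price excursion, so

```latex
% My reconstruction, not Haldane's exact equation (1): the half-spread s
% must cover the typical price excursion over the holding horizon T,
% which for a fractal price series with volatility \sigma scales as
s \;\gtrsim\; \sigma \, T^{H}
```

With H above 1/2, the factor T^H grows faster with the holding horizon, so any spike in the volatility σ is amplified more strongly: spreads that would merely widen in a Gaussian world can balloon in a fractal one.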

Hence, in increasingly volatile markets, HFTs become less able to provide liquidity to the market precisely because they have to protect themselves:
This has implications for the dynamics of bid-ask spreads, and hence liquidity, among HFT firms. During a market crash, the volatility of prices (σ) is likely to spike. From equation (1), fractality heightens the risk sensitivity of HFT bid-ask spreads to such a volatility event. In other words, liquidity under stress is likely to prove less resilient. This is because one extreme event, one flood or drought on the Nile, is more likely to be followed by a second, a third and a fourth. Recognising that greater risk, market makers’ insurance premium will rise accordingly.

This is the HFT inventory problem. But the information problem for HFT market-makers in situations of stress is in many ways even more acute. Price dynamics are the fruits of trader interaction or, more accurately, algorithmic interaction. These interactions will be close to impossible for an individual trader to observe or understand. This algorithmic risk is not new. In 2003, a US trading firm became insolvent in 16 seconds when an employee inadvertently turned an algorithm on. It took the company 47 minutes to realise it had gone bust.

Since then, things have stepped up several gears. For a 14-second period during the Flash Crash, algorithmic interactions caused 27,000 contracts of the S&P 500 E-mini futures contracts to change hands. Yet, in net terms, only 200 contracts were purchased. HFT algorithms were automatically offloading contracts in a frenetic, and in net terms fruitless, game of pass-the-parcel. The result was a magnification of the fat tail in stock prices due to fire-sale forced machine selling.

These algorithmic interactions, and the uncertainty they create, will magnify the effect on spreads of a market event. Pricing becomes near-impossible and with it the making of markets. During the Flash Crash, Accenture shares traded at 1 cent, and Sotheby’s at $99,999.99, because these were the lowest and highest quotes admissible by HFT market-makers consistent with fulfilling their obligations. Bid-ask spreads did not just widen, they ballooned. Liquidity entered a void. That trades were executed at these “stub quotes” demonstrated algorithms were running on autopilot with liquidity spent. Prices were not just information inefficient; they were dislocated to the point where they had no information content whatsoever.
This simply follows from the natural dynamics of the market, and from the situation market makers find themselves in. If they want to profit, if they want to survive, they need to manage their risks, and these risks grow rapidly in times of high volatility. Their response is quite understandable -- to leave the market, or at least to charge much more for their services.

Individually this is all quite rational, yet the systemic effects aren't likely to benefit anyone. The situation, Haldane notes, resembles a Tragedy of the Commons in which individually rational actions lead to a collective disaster, fantasies about the Invisible Hand notwithstanding:
If the way to make money is to make markets, and the way to make markets is to make haste, the result is likely to be a race – an arms race to zero latency. Competitive forces will generate incentives to break the speed barrier, as this is the passport to lower spreads which is in turn the passport to making markets. This arms race to zero is precisely what has played out in financial markets over the past few years.

Arms races rarely have a winner. This one may be no exception. In the trading sphere, there is a risk the individually optimising actions of participants generate an outcome for the system which benefits no-one – a latter-day “tragedy of the commons”. How so? Because speed increases the risk of feasts and famines in market liquidity. HFT contribute to the feast through lower bid-ask spreads. But they also contribute to the famine if their liquidity provision is fickle in situations of stress.
Haldane then goes on to explore what might be done to counter these trends. I'll finish with a third post on this part of the speech very soon. 

But what is perhaps most interesting in all this is how much of Haldane's speech refers to recent work done by physicists -- Janos Kertesz, Jean-Philippe Bouchaud, Gene Stanley, Doyne Farmer and others -- rather than studies more in the style of neo-classical efficiency theory. It's encouraging to see that at least one very senior banking authority is taking this stuff seriously.

Saturday, September 24, 2011

Correction in Gold & Silver And Coming Year End Rally.

In my last post I wrote: "I am not sure if the corrections are over or we shall see renewed weakness again. ... The weakness in the market may continue till October." That was after the markets had rallied for a week, people were hoping for a tradeable bottom, and there was talk of a new bull run. However, the euro did not live up to the hype and came down a couple of hundred pips. I said then that the big money is leaving Europe and wants to keep that flight low-key and orderly, and that the correction in precious metals may have just started.

Right on cue, the price of gold came down by over 5% in the week. Everyone is speculating about why gold and silver fell. The easy answer being talked about is that the CME hiked the margin. But that is not the complete answer. Yes, a margin hike deters additional position-taking and sometimes forces weak speculators to liquidate. But the more compelling reason was position liquidation and margin calls at the mutual funds and hedge funds. I think most funds were caught by surprise by the violent plunge in the stock market. So when the sell orders came in, they held on to their losing positions and liquidated their precious-metal positions, which were in profit. Moreover, there was widespread disappointment with the Fed action. Again, in the last post I said that the rally was "in anticipation of Santa Claus coming early on September 21st. There may be some disappointments and we might see some selling the news."

I would like to show two charts. First the gold chart.

If gold falls below $1,550 by next week, the next support is around $1,050/$1,100. But even that kind of correction would keep gold in a long-term bull run. Whether that will happen is anybody’s guess, but the chances of continued correction are high.

The second chart is of silver.

Silver has already broken the trend line in a big way, and the next support is around $25. I think even that support will be broken. By the same token, gold may also suffer. My favourite chartist, Chris Kimble, has this Eiffel Tower chart for gold. So draw your own conclusions.

Coming back to the stock markets, I expect the weakness to continue into October, until the Europeans show some guts and the political will to throw more good money after bad. The Obama administration is pushing Germany hard and wants a trillion-euro rescue fund. The problem is that Chancellor Merkel is losing political capital and the German population is becoming sick of supporting Greece. But the cost of not supporting the euro is very high, and Germany may just want to buy more time. That Greece will default is a given. The European banks, particularly the French banks, are going to suffer the most. The most important question is: is Greece going to be the only sovereign default? The Euro Zone may be able to handle a Greek default, but if Ireland, Portugal and Spain are added to the equation, then it is a different ball game.

I do not expect the markets to fall much further from here. My short-term downside target is around 1125 in the SPX. If that is broken, the next support level is around 1050. A close below that would mean big trouble. However, I do not expect the bottom to fall out yet, and most likely we will see a bounce off the lows.

The central bankers and the world's governments are missing the wood for the trees. For them everything is a liquidity problem, when in fact the problem facing the world is a solvency problem. The only way they are trying to solve the crisis is by pumping more money into the banking system. But it is a giant black hole sucking the life out of the world economy. The balance-sheet contraction is upon us, however much the central bankers and governments may resist it. Remember QE2? After $600 billion, the US stock markets are now below the level at which QE2 began. The desperate fight is on to save the system, and for a while the PTB (Powers That Be) will succeed. I am expecting a year-end rally to start from the end of October, which may well take the stock markets to a new high in 2012 before the game is over.

For now, let us see how low gold goes.

Friday, September 23, 2011

Brouwer's fixed point theorem...why mathematics is fun


I'm not going to post very frequently on Brouwer's fixed point theorem, but I had to look into it a little today. A version of it was famously used by Ken Arrow and Gerard Debreu in their 1954 proof that general equilibrium models in economics (models of a certain kind which require about 13 assumptions to define) do indeed have an equilibrium set of prices which makes supply equal demand for all goods. There's a nice review article on that here for anyone who cares.

Brouwer's theorem essentially says that when you take a convex set (a disk, say, including both the interior and the boundary) and map it into itself in some smooth and continuous way, there has to be at least one point which stays fixed, i.e. is mapped to itself. This has some interesting and counter-intuitive implications, as a contributor to Wikipedia has pointed out:
The theorem has several "real world" illustrations. For example: take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it.

Similarly: Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.


In comments, "computers can be gamed" rightly points out that the theorem only works if one considers a smooth mapping of a set into itself. This is very important.

Indeed, go to the Wikipedia page for Brouwer's theorem and in addition to the examples I mentioned above, they also give a three dimensional example -- the liquid in a cup. Stir that liquid, they suggest, and -- since the initial volume of liquid simply gets mapped into the same volume, with elements rearranged -- there must be one point somewhere which has not moved. But this is a mistake unless you carry out the stirring with extreme care -- or use a high viscosity liquid such as oil or glycerine.

Ordinary stirring of water creates fluid turbulence -- disorganized flow in which eddies create smaller eddies and you quickly get discontinuities in the flow down to the smallest molecular scales. In this case -- the ordinary case -- the mapping from the liquid's initial position to its later position is NOT smooth, and the theorem doesn't apply. 
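In one dimension, by the way, Brouwer's theorem reduces to the intermediate value theorem: a continuous map f of [0,1] into itself satisfies f(0) ≥ 0 and f(1) ≤ 1, so f(x) - x changes sign and must vanish somewhere. That also makes the fixed point easy to find numerically by bisection. Here's a minimal sketch in Python (my illustration; the map cos(x), which happens to send [0,1] into itself, is just an arbitrary continuous example):

```python
import math

def fixed_point(f, a=0.0, b=1.0, tol=1e-12):
    """Find x in [a, b] with f(x) = x by bisection on g(x) = f(x) - x.

    Assumes f maps [a, b] into itself, so g(a) >= 0 and g(b) <= 0,
    guaranteeing a sign change -- the one-dimensional Brouwer argument.
    """
    g = lambda x: f(x) - x
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid   # f(mid) > mid: the crossing lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = fixed_point(math.cos)           # cos maps [0, 1] into itself
print(x)                            # the fixed point of cos, about 0.739
print(abs(math.cos(x) - x) < 1e-9)  # check: x is (numerically) fixed
```

Continuity is doing all the work here -- exactly the property that turbulent stirring destroys.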

Class warfare and public goods

I think this is about the best short description I've heard yet of why wealth isn't created by heroic individuals (a la Ayn Rand's most potent fantasies). I just wish Elizabeth Warren had been appointed head of the new Consumer Financial Protection Bureau. Based on the words below, I can see why there was intense opposition from Wall St. She's obviously not a Randroid:

I hear all this, you know, “Well, this is class warfare, this is whatever.”—No!

There is nobody in this country who got rich on his own. Nobody.

You built a factory out there—good for you! But I want to be clear.

You moved your goods to market on the roads the rest of us paid for.

You hired workers the rest of us paid to educate.

You were safe in your factory because of police forces and fire forces that the
rest of us paid for.

You didn’t have to worry that marauding bands would come and seize everything at your factory, and hire someone to protect against this, because of the work the rest of us did.

Now look, you built a factory and it turned into something terrific, or a great idea—God bless. Keep a big hunk of it.

But part of the underlying social contract is you take a hunk of that and pay forward for the next kid who comes along.

Thinking about thinking

Psychologist Daniel Kahneman has a book coming out in November, Thinking, Fast and Slow. It's all about mental heuristics and the two different functional levels of the brain -- the fast, instinctive part, which is effortless but prone to errors, and the slow, rational part, which takes effort to use but which can (in some cases) correct the errors of the first. His Nobel Prize lecture from 2002 is a fascinating read, so I'm looking forward to the book.

Meanwhile, videos and text of a series of very informal talks Kahneman recently gave are available online. These give some fascinating insight into the origins of some of his thinking on decision theory, prospect theory (why we value gains and losses relative to our own current position, rather than judging outcomes in terms of total wealth), why corporations make bad decisions and don't work very hard to improve their ability to make better ones, and so on. Here's one nice example of many:

The question I'd like to raise is something that I'm deeply curious about, which is what should organizations do to improve the quality of their decision-making? And I'll tell you what it looks like, from my point of view.

I have never tried very hard, but I am in a way surprised by the ambivalence about it that you encounter in organizations. My sense is that by and large there isn't a huge wish to improve decision-making—there is a lot of talk about doing so, but it is a topic that is considered dangerous by the people in the organization and by the leadership of the organization. I'll give you a couple of examples. I taught a seminar to the top executives of a very large corporation that I cannot name and asked them, would you invest one percent of your annual profits into improving your decision-making? They looked at me as if I was crazy; it was too much.

I'll give you another example. There is an intelligence agency, and the CIA, and a lot of activity, and there are academics involved, and there is a CIA university. I was approached by someone there who said, will you come and help us out, we need help to improve our analysis. I said, I will come, but on one condition, and I know it will not be met. The condition is: if you can get a workshop where you get one of the ten top people in the organization to spend an entire day, I will come. If you can't, I won't. I never heard from them again.

What you can do is have them organize a conference where some really important people will come for three-quarters of an hour and give a talk about how important it is to improve the analysis. But when it comes to, are you willing to invest time in doing this, the seriousness just vanishes. That's been my experience, and I'm puzzled by it.

Wednesday, September 21, 2011

High-frequency trading -- the downside, Part I

Andrew Haldane of the Bank of England has given a stream of recent speeches -- more like detailed research reports -- offering deep insight into various pressing issues in finance. One of his most recent speeches looks at high-frequency trading (HFT), noting its positive aspects as well as its potential negative consequences. Importantly, he has tried to do this in non-ideological fashion, always looking to the data to back up any perspective.

The speech is wide-ranging and I want to explore its points in some detail, so I'm going to break this post into three (I think) parts looking at different aspects of his argument. This is number one; the others will arrive shortly.

To begin with, Haldane notes that in the last decade, as HFT has become prominent, trading volumes have soared, and, as they have, the time for which stocks are held before being traded again has fallen:
... at the end of the Second World War, the average US share was held by the average investor for around four years. By the start of this century, that had fallen to around eight months. And by 2008, it had fallen to around two months.
It was about a decade ago that trade execution times on some electronic trading platforms fell below the one-second barrier. But the steady march to ever-faster trading goes on:
As recently as a few years ago, trade execution times reached “blink speed” – as fast as the blink of an eye. At the time that seemed eye-watering, at around 300-400 milli-seconds or less than a third of a second. But more recently the speed limit has shifted from milli-seconds to micro-seconds – millionths of a second. Several trading platforms now offer trade execution measured in micro-seconds (Table 1).

As of today, the lower limit for trade execution appears to be around 10 micro-seconds. This means it would in principle be possible to execute around 40,000 back-to-back trades in the blink of an eye. If supermarkets ran HFT programmes, the average household could complete its shopping for a lifetime in under a second.

It is clear from these trends that trading technologists are involved in an arms race. And it is far from over. The new trading frontier is nano-seconds – billionths of a second. And the twinkle in technologists’(unblinking) eye is pico-seconds – trillionths of a second. HFT firms talk of a “race to zero”.
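Haldane's blink-of-an-eye comparison is easy to check: at roughly 400 milliseconds per blink and 10 microseconds per trade, the ratio is indeed 40,000 trades per blink:

```python
blink = 0.4        # duration of an eye blink in seconds (roughly 300-400 ms)
execution = 10e-6  # apparent lower limit for trade execution: 10 microseconds

trades_per_blink = round(blink / execution)
print(trades_per_blink)  # 40000
```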
Haldane then goes on to consider what effect this trend has had so far on the nature of trading, looking in particular at market makers.

First, he offers a useful clarification of why the bid-ask spread is normally taken as a useful measure of market liquidity (or more correctly, the inverse of market liquidity). As he points out, the profits market makers earn from the bid-ask spread represent a fee they require for taking risks that grow more serious with lower liquidity:
The market-maker faces two types of problem. One is an inventory-management problem – how much stock to hold and at what price to buy and sell. The market-maker earns a bid-ask spread in return for solving this problem since they bear the risk that their inventory loses value. ...Market-makers face a second, information-management problem. This arises from the possibility of trading with someone better informed about true prices than themselves – an adverse selection risk. Again, the market-maker earns a bid-ask spread to protect against this informational risk.

The bid-ask spread, then, is the market-makers’ insurance premium. It provides protection against risks from a depreciating or mis-priced inventory. As such, it also proxies the “liquidity” of the market – that is, its ability to absorb buy and sell orders and execute them without an impact on price. A wider bid-ask spread implies greater risk in the sense of the market’s ability to absorb volume without affecting prices.
The above offers no new insights, but it explains the relationship very clearly.
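The adverse-selection component of the spread can be made concrete with a toy calculation in the spirit of the classic Glosten-Milgrom model (my illustration, with made-up numbers, not something from Haldane's speech): the zero-profit bid and ask are the expected values of the asset conditional on the direction of the incoming order, and the resulting spread widens in direct proportion to the fraction of informed traders.

```python
def breakeven_spread(alpha, v_low=90.0, v_high=110.0):
    """Zero-profit quotes for a market maker facing adverse selection.

    Toy Glosten-Milgrom setup (illustrative numbers): the asset is worth
    v_low or v_high with equal probability. A fraction `alpha` of traders
    is informed (buys only when the value is high, sells only when low);
    the rest buy or sell at random. The break-even ask is E[value | buy].
    """
    p_buy_if_high = alpha + (1 - alpha) * 0.5  # informed + half the noise traders
    p_buy_if_low = (1 - alpha) * 0.5           # only noise traders buy
    p_high_given_buy = p_buy_if_high / (p_buy_if_high + p_buy_if_low)
    ask = p_high_given_buy * v_high + (1 - p_high_given_buy) * v_low
    bid = (v_low + v_high) - ask               # by symmetry of the setup
    return bid, ask

# The spread comes out as alpha * (v_high - v_low): a pure insurance
# premium against trading with someone better informed.
for alpha in (0.0, 0.2, 0.5):
    bid, ask = breakeven_spread(alpha)
    print(alpha, bid, ask, round(ask - bid, 2))
```

With no informed traders the market maker can quote a zero spread; as their share rises, the quotes must widen just to break even -- which is why the spread serves as a proxy for (the inverse of) liquidity.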

Next comes the question of whether HFT has made markets work more efficiently, and here things become more interesting. First, there is a great deal of evidence (some I've written about here earlier) showing that the rise of HFT has caused a decrease in bid-ask spreads, and hence an improvement in market liquidity. Haldane cites several studies:
For example, Brogaard (2010) analyses the effects of HFT on 26 NASDAQ-listed stocks. HFT is estimated to have reduced the price impact of a 100-share trade by $0.022. For a 1000-share trade, the price impact is reduced by $0.083. In other words, HFT boosts the market’s absorptive capacity. Consistent with that, Hendershott et al (2010) and Hasbrouck and Saar (2011) find evidence of algorithmic trading and HFT having narrowed bid-ask spreads.
His Chart 8 (reproduced below) shows a measure of bid-ask spreads on UK equities over the past decade, the data having been normalised by a measure of market volatility to "strip out volatility spikes."

It's hard to be precise, but the figure shows something like a ten-fold reduction in bid-ask spreads over the past decade. Hence, by this metric, HFT really does appear to have "greased the wheels of modern finance."

But there's also more to the story. Even if bid-ask spreads may have generally fallen, it's possible that other measures of market function have also changed, and not in a good way. Haldane moves on to another set of data, his Chart 9 (below), which shows data on volatility vs correlation for components of the S&P 500 since 1990. This chart indicates that there has been a general link between volatility and correlation -- in times of high market volatility, stock movements tend to be more correlated. Importantly, the link has grown increasingly strong in the latter period 2005-2010.

What this implies, Haldane suggests, is that HFT has driven this increasing link, with consequences.
Two things have happened since 2005, coincident with the emergence of trading platform fragmentation and HFT. First, both volatility and correlation have been somewhat higher. Volatility is around 10 percentage points higher than in the earlier sample, while correlation is around 8 percentage points higher. Second, the slope of the volatility / correlation curve is steeper. Any rise in volatility now has a more pronounced cross-market effect than in the past.... Taken together, this evidence points towards market volatility being both higher and propagating further than in the past.
This interpretation is as interesting as it is perhaps obvious in retrospect. Markets have calmer periods and stormier periods. HFT seems to have reduced bid-ask spreads in the calmer times, making markets work more smoothly. But it appears to have done just the opposite in stormy times:
Far from solving the liquidity problem in situations of stress, HFT firms appear to have added to it. And far from mitigating market stress, HFT appears to have amplified it. HFT liquidity, evident in sharply lower peacetime bid-ask spreads, may be illusory. In wartime, it disappears. This disappearing act, and the resulting liquidity void, is widely believed to have amplified the price discontinuities evident during the Flash Crash. HFT liquidity proved fickle under stress, as flood turned to drought.
This is an interesting point, and it shows how easy it is to jump to comforting but possibly incorrect conclusions by looking at just one measure of market function, or by focusing on "normal" times as opposed to the non-normal times which are nevertheless a real part of market history.

As I said, the speech goes on to explore some other related arguments touching on other deep aspects of market behaviour. I hope to explore these in some detail soon.

Tuesday, September 20, 2011

A bleak perspective... but probably true

I try not to say too much about our global economic and environmental future as I have zero claim to any special insight. I do have a fairly pessimistic view, which is reinforced every year or so when I read in Nature or Science the latest bleak assessment of the rapid and likely irreversible decline of marine ecosystems. I simply can't see humans on a global scale changing their ways very significantly until some truly dreadful catastrophes strike.

Combine environmental issues with dwindling resources and the global economic crisis, and the near-term future really doesn't look so rosy. On this topic, I have been enjoying an interview at Naked Capitalism with Satyajit Das (part 1, part 2, with part 3 I think still to come), who has worked for more than 30 years in the finance industry. I'm looking forward to reading his new book "Extreme Money: Masters of the Universe and the Cult of Risk." Here's an excerpt from the interview which, as much as any analysis I've read, seems like a plausible picture for our world over the next few decades:
There are problems to which there are no answers, no easy solutions. Human beings are not all powerful creatures. There are limits to our powers, our knowledge and our understanding.

The modern world has been built on an ethos of growth, improving living standards and growing prosperity. Growth has been our answer to everything. This is what drove us to the world of ‘extreme money’ and financialisation in the first place. Now three things are coming together to bring that period of history to a conclusion – the end of financialisation, environmental concerns and limits to certain essential natural resources like oil and water. Environmental advocate Edward Abbey put it bluntly: “Growth for the sake of growth is the ideology of a cancer cell.”

We are reaching the end of a period of growth, expansion and, maybe, optimism. Increased government spending or income redistribution, even if it is implemented (which I doubt), may not necessarily work. Living standards will have to fall. Competition between countries for growth will trigger currency and trade wars – we are seeing that already with the Swiss intervening to lower their currency and emerging markets putting in place capital controls. All this will further crimp growth. Social cohesion and order may break down. Extreme political views might become popular and powerful. Xenophobia and nationalism will become more prominent as people look for scapegoats.

People draw comparisons to what happened in Japan. But Japan had significant advantages – the world’s largest savings pool, global growth which allowed its exporters to prosper, a homogeneous, stoic population who were willing to bear the pain of the adjustment. Do those conditions exist everywhere?

We will be caught in the ruins of this collapsed Ponzi scheme for a long time, while we try to rediscover more traditional sources of growth like innovation and productivity improvements – real engineering rather than financial engineering. But we will still have to pay for the cost of our past mistakes which will complicate the process.

Fyodor Dostoevsky wrote in The Possessed: “It is hard to change gods.” It seems to me that that’s what we are trying to do. It may be possible but it won’t be simple or easy. It will also take a long, long time and entail a lot of pain.

Friday, September 16, 2011

Milton Friedman's grand illusion

Three years ago I wrote an Op-Ed for the New York Times on the need for radical change in the way economists model whole economies. Today's General Equilibrium models -- and their slightly more sophisticated cousins, Dynamic Stochastic General Equilibrium models -- make assumptions with no basis in reality. For example, there is no financial sector in these model economies. They generally assume that the diversity of behaviour of all an economy's many firms and consumers can be ignored and simply included as the average behaviour of a few "representative" agents.

I argued then that it was about time economists started using far more sophisticated modeling tools, including agent-based models, in which the diversity of interactions among economic agents can be included, along with a financial sector. The idea is to model the simpler behaviours of agents as well as you can and let the macro-scale complex behaviour of the economy emerge naturally out of them, without making any restrictive assumptions about what kinds of things can or cannot happen in the larger economy. This kind of work is going forward rapidly. For some detail, I recommend this talk earlier this month by Doyne Farmer.

After that Op-Ed I received quite a number of emails from economists defending the General Equilibrium approach. Several of them mentioned Milton Friedman in their defense, saying that he had shown long ago that one shouldn't worry about the realism of the assumptions in a theory, but only about the accuracy of its predictions. I eventually found the paper to which they were referring, a classic in economic history which has exerted a huge influence on economists over the past half century. I recently re-read the paper and wanted to make a few comments on Friedman's main argument. It rests entirely, I think, on a devious or slippery use of words which makes it possible to give a sensible-sounding argument for what is actually a ridiculous proposition.

The paper is entitled The Methodology of Positive Economics and was first published in 1953. It's an interesting paper and enjoyable to read. Essentially, it seems, Friedman's aim is to argue for scientific standards for economics akin to those used in physics. He begins by making a clear definition of what he means by "positive economics," which aims to be free from any particular ethical position or normative judgments. As he wrote, positive economics deals with...
"what is," not with "what ought to be." Its task is to provide a system of generalizations that can be used to make correct predictions about the consequences of any change in circumstances. Its performance is to be judged by the precision, scope, and conformity with experience of the predictions it yields.
Friedman then asks how one should judge the validity of a hypothesis, and asserts that...
...the only relevant test of the validity of a hypothesis is comparison of its predictions with experience. The hypothesis is rejected if its predictions are contradicted ("frequently" or more often than predictions from an alternative hypothesis); it is accepted if its predictions are not contradicted; great confidence is attached to it if it has survived many opportunities for contradiction. Factual evidence can never "prove" a hypothesis; it can only fail to disprove it, which is what we generally mean when we say, somewhat inexactly, that the hypothesis has been "confirmed" by experience."

So far so good. I think most scientists would see the above as conforming fairly closely to their own conception of how science should work (and of course this view is closely linked to views made famous by Karl Popper).

Next step: Friedman goes on to ask how one chooses between several hypotheses if they are all equally consistent with the available evidence. Here too his initial observations seem quite sensible:
...there is general agreement that relevant considerations are suggested by the criteria "simplicity" and "fruitfulness," themselves notions that defy completely objective specification. A theory is "simpler" the less the initial knowledge needed to make a prediction within a given field of phenomena; it is more "fruitful" the more precise the resulting prediction, the wider the area within which the theory yields predictions, and the more additional lines for further research it suggests.
Again, right in tune I think with the practice and views of most scientists. I especially like the final point that part of the value of a hypothesis also comes from how well it stimulates creative thinking about further hypotheses and theories. This point is often overlooked.

Friedman's essay then shifts direction. He argues that the processes and practices involved in the initial formation of a hypothesis, and in the testing of that hypothesis, are not as distinct as people often think. Indeed, this is obviously so. Many scientists form a hypothesis and try to test it, then adjust the hypothesis slightly in view of the data. There's an ongoing evolution of the hypothesis in correspondence with the data and the kinds of experiments or observations which seem interesting.

To this point, Friedman's essay says nothing that wouldn't fit into any standard discussion of the generally accepted philosophy of science of the 1950s. But this is where it suddenly veers off wildly and attempts to support a view that is indeed quite radical. Friedman mentions the difficulty in the social sciences of getting new evidence with which to test a hypothesis by looking at its implications. This difficulty, he suggests,
... makes it tempting to suppose that other, more readily available, evidence is equally relevant to the validity of the hypothesis -- to suppose that hypotheses have not only "implications" but also "assumptions" and that the conformity of these "assumptions" to "reality" is a test of the validity of the hypothesis different from or additional to the test by implications. This widely held view is fundamentally wrong and productive of much mischief.
Having raised this idea that assumptions are not part of what should be tested, Friedman then goes on to attack very strongly the idea that a theory should strive at all to have realistic assumptions. Indeed, he suggests, a theory is actually superior insofar as its assumptions are unrealistic:
In so far as a theory can be said to have "assumptions" at all, and in so far as their "realism" can be judged independently of the validity of predictions, the relation between the significance of a theory and the "realism" of its "assumptions" is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions... The reason is simple. A hypothesis is important if it "explains" much by little,...   To be important, therefore, a hypothesis must be descriptively false in its assumptions...
This is the statement that the economists who wrote to me used to defend unrealistic assumptions in General Equilibrium theories. Their point was that having unrealistic assumptions isn't just not a problem, but is a positive strength for a theory. The more unrealistic the better, as Friedman argued (and apparently proved, in the eyes of some economists).

Now, what is wrong with Friedman's argument, if anything?  I think the key issue is his use of the provocative terms such as "unrealistic" and "false" and "inaccurate" in places where he actually means "simplified," "approximate" or "incomplete."  He switches without warning between these two different meanings in order to make the conclusion seem unavoidable, and profound, when in fact it is simply not true, or something we already believe and hardly profound at all.

To see the problem, take a simple example from physics. Newtonian dynamics describes the motions of the planets quite accurately (in many cases) even if the planets are treated as point masses having no extension, no rotation, no oceans and tides, no mountains or trees and so on. The great triumph of Newtonian dynamics (including his law of gravitational attraction) is its simplicity -- it asserts that, of all the many details that could conceivably influence planetary motion, two (mass and distance) matter most by far. The atmosphere of a planet doesn't matter much, nor does the amount of sunlight it reflects. The theory of course goes further to describe how other details do matter if one considers planetary motion in more detail -- rotation does matter, for example, because it generates tides which dissipate energy, taking energy slowly away from orbital motion.

But I don't think anyone would be tempted to say that Newtonian dynamics is a powerful theory because it is descriptively false in its assumptions. Its assumptions are actually descriptively simple -- that the planets and the Sun have mass, and that a force acts between any two masses in proportion to the product of their masses and in inverse proportion to the square of the distance between them. From these assumptions one can work out predictions for the details of planetary motion, and those details turn out to be close to what we see. The assumptions are simple and plausible, and this is what makes the theory so powerful when it turns out to make accurate predictions.
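The simplicity is quantitative as well as descriptive. From the inverse-square force law alone, a circular orbit must satisfy GMm/r² = mv²/r, which yields Kepler's third law, T² = 4π²r³/GM, with no reference to atmospheres, tides or reflected sunlight. A quick numerical sanity check for the Earth's orbit (using standard textbook values for G, the solar mass and the Earth-Sun distance):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the Sun, kg
r = 1.496e11       # mean Earth-Sun distance, m

# Circular orbit: gravity supplies the centripetal force,
#   G*M*m/r^2 = m*v^2/r  =>  v = sqrt(G*M/r),  T = 2*pi*r/v
v = math.sqrt(G * M_sun / r)
T = 2 * math.pi * r / v

print(T / 86400)   # orbital period in days -- comes out near 365
```

Two numbers in, one year out: that is "explaining much by little" without any of the assumptions being descriptively false.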

Indeed, if those same predictions came out of a theory with obviously false assumptions -- all planets are perfect cubes, etc. -- it would be far less powerful, because it would be less believable. Its ability to make predictions would be as big a mystery as the original phenomenon of planetary motion itself: how can a theory so obviously out of tune with reality still make such accurate predictions?

So whenever Friedman says "descriptively false," I think you can instead write "descriptively simple," and clarify the meaning by adding a phrase of the sort "which identifies the key factors that matter most." Make that replacement in Friedman's most provocative phrase from above and you have something far more sensible:
A hypothesis is important if it "explains" much by little,...   To be important, therefore, a hypothesis must be descriptively simple in its assumptions. It must identify the key factors which matter most...

That's not quite so bold, however, and it doesn't create a license for theorists to make any assumptions they want without being criticized if those assumptions stray very far from reality.

Of course, there is a place in science for unrealistic assumptions. A theory capturing core realities of a problem may simply be too difficult to work with, making it impossible to draw out testable predictions. Scientists often simplify the assumptions well past the point of plausibility just to be able to calculate something (quantum gravity in one dimension, for example), hoping that insight gained in the process may make it possible to step back toward a more realistic theory. But false assumptions in this case are merely tools for getting to a theory that doesn't have to make false assumptions.

Finally, there is another matter which Friedman skipped over entirely in his essay. He suggested that economic theories should be judged solely on the precision of their predictions, not the plausibility of their assumptions. But he never once in the essay gave a single example of an economic theory with unrealistic or descriptively false assumptions which makes impressively accurate predictions. A curious omission.

Thursday, September 15, 2011

Corrections In Gold?

Markets are playing exactly according to the script. Stocks rallied for a fourth straight day on news of a European fix. I have been writing for a while that the CBs (the central banks of the world) have learned their lesson from the 2008 crisis and would prevent such a crisis from developing again. So we see that the ECB, BOE, SNB, BOJ, BOC and of course the Fed have come together to provide liquidity to the market. Once again, I have been writing that the only thing the CBs know is how to pump money into the system, and although such an action will only delay the inevitable balance-sheet contraction, it will provide some much-needed time for the speculators and too-big-to-fail banks to make some more money.

The CBs have set up US dollar swap lines for the “Zombie European Banks,” which survive on leverage and are actually dead men walking. That has resulted in a short squeeze in the world stock markets. This should lead to a further decline in the US dollar and a rally in equities.

However, I am not sure whether the corrections are over or we shall see renewed weakness again. It seems that the “Big Banks” are buying all that their clients are selling, again in anticipation of Santa Claus coming early on September 21st. There may be some disappointments, and we might see some selling on the news. The weakness in the market may continue till October. But the bottom line is that the mega bears are going to be disappointed that the world has not ended as predicted, and Zero Hedge will go ballistic with different conspiracy theories. For the next three months, we will see these doomsday soothsayers go rabid, frothing at the mouth. They may still be vindicated at a later date sometime in 2012, but for entirely different reasons. More on that later.

The new trade now is “Buy Euro, Sell Gold”. In fact, the correction in precious metals may have just started. Let me show you the chart from Chris Kimble.

Both precious metals have broken multiple support lines, and from a technical perspective it may be time to short them with a tight stop.

From a fundamental perspective, once again, the big money is leaving Europe, but it wants that flight to be low-key and orderly. So on the surface there will be a show of strength in the euro and, with the help of the CBs, selling of gold. Precious metals should be considered insurance against inflation and a breakdown of fiat currency. I do not see the US dollar going out of fashion any time soon. In fact, with a balance-sheet contraction looming over the global economy, the break-up of the euro zone and a possible geopolitical clash in the Middle East, the flight to safety will be towards the US dollar.

It is going to be an interesting time.

The long history of options

I just finished reading Niall Ferguson's book The Ascent of Money, which I strongly recommend to anyone interested in the history of economics and especially finance. Some readers of this blog may suspect that I am at times anti-finance, but this isn't really true. Ferguson makes a very convincing argument that finance is a technology -- a rich and evolving set of techniques for solving problems -- which has been as important to human well-being as knowledge of mechanics, chemistry and fire. I don't think that's at all overstated -- finance is a technology for sharing and cooperating in our management of wealth, savings and risk in the face of uncertainty. It's among the most basic and valuable technologies we possess.

Having said that, I am critical of finance when I think it is A) based on bad science, or B) used dishonestly as a tool by some people to take advantage of others. Naturally, because finance is complicated and difficult to understand there are many instances of both A and B. And of course one often finds concepts from category A aiding acts of category B.

But one thing I found particularly interesting in Ferguson's history is the early origins of options contracts and other derivatives. The use of derivatives has of course exploded since the work of Black and Scholes in the 1970s provided a more or less sensible way to price some of them. It's easy to forget that options have been around at least since the mid 1500s (in Dutch and French commodities markets). They were in heavy use by the late 1600s in the coffee houses of London, where shareholders traded stocks of the East India Company and roughly 100 other joint-stock companies.
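For readers curious what that 1970s pricing breakthrough actually looks like, here is a minimal sketch of the standard Black-Scholes formula for a European call option. The numbers are purely illustrative (they come from me, not from Ferguson's book), and real traders use far more elaborate machinery, but the core formula fits in a few lines:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.

    S: current stock price, K: strike price, r: risk-free rate,
    sigma: annualized volatility, T: time to expiry in years.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call on a $100 stock, 20% volatility, 5% rate:
print(round(black_scholes_call(100, 100, 0.05, 0.20, 1.0), 2))  # about 10.45
```

The formula's elegance is part of why derivatives trading exploded after 1973: it reduced a once-mysterious valuation problem to a calculation any trader could perform, however questionable its assumptions (constant volatility, log-normal prices) turned out to be.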

Looking a little further, I came across this excellent review article on the early history of options by Geoffrey Poitras of Simon Fraser University. This article goes into much greater detail than Ferguson on the history of options. As Poitras notes, early use in commodities markets arose quite naturally to meet key needs of the time (as any new technology does):

The evolution of trading in free standing option contracts revolved around two important elements: enhanced securitization of the transactions; and the emergence of speculative trading. Both these developments are closely connected with the concentration of commercial activity, initially at the large medieval market fairs and, later, on the bourses. Though it is difficult to attach specific dates to the process, considerable progress was made by the Champagne fairs with the formalization of the lettre de foire and the bill of exchange, e.g., Munro (2000). The sophisticated settlement process used to settle accounts at the Champagne fairs was a precursor of the clearing methods later adopted for exchange trading of securities and commodities. Over time, the medieval market fairs came to be surpassed by trade in urban centres such as Bruges (de Roover 1948; van Houtte 1966) and, later, in Antwerp and Lyons. Of these two centres, Antwerp was initially most important for trade in commodities while Lyons for trade in bills. Fully developed bourse trading in commodities emerged in Antwerp during the second half of the 16th century (Tawney 1925, p.62-5; Gelderblom and Jonker 2005). The development of the Antwerp commodity market provided sufficient liquidity to support the development of trading in ‘to arrive’ contracts. Due to the rapid expansion of seaborne trade during the period, speculative transactions in ‘to arrive’ grain that was still at sea were particularly active. Trade in whale oil, herring and salt was also important (Gelderblom and Jonker 2005; Barbour 1950; Emery 1895). Over time, these contracts came to be actively traded by speculators either directly or indirectly involved in trading that commodity but not in need of either taking or making delivery of the specific shipment.
Another interesting point is the wide use in the 1500s of trading instruments which were essentially flat-out gambles, not so unlike the credit default swaps of our time (ostensibly used to manage risk, but often used to make outright bets). As Poitras writes,
The concentration of liquidity on the Antwerp Exchange furthered speculative trading centered around the important merchants and large merchant houses that controlled either financial activities or the goods trade. The milieu for such trading was closely tied to medieval traditions of gambling (Van der Wee 1977): “Wagers, often connected with the conclusion of commercial and financial transactions, were entered into on the safe return of ships, on the possibility of Philip II visiting the Netherlands, on the sex of children as yet unborn etc. Lotteries, both private and public, were also extremely popular, and were submitted as early as 1524 to imperial approval to prevent abuse.”
One other interesting point (among many) is the advice of observers of the 17th-century options markets to use easy credit to fund speculative activity. A man named Josef de la Vega in 1688 wrote a book on the markets called Confusion de Confusiones (still an apt title), and offered some fairly reckless advice to speculators:
De la Vega (p.155) goes on to describe an even more naive trading strategy: “If you are [consistently] unfortunate in all your operations and people begin to think that you are shaky, try to compensate for this defect by [outright] gambling in the premium business, [i.e., by borrowing the amount of the premiums]. Since this procedure has become general practice, you will be able to find someone who will give you credit (and support you in difficult situations, so you may win without dishonor).”

The possibility that the losses may continue is left unrecognized.

Or, of course, perhaps the possibility of continuing losses was recognized, and it was also recognized that these losses would in effect belong to someone else -- the person from whom the funds were borrowed.

These points aren't especially important, but they do bring home the point that almost everything we've seen in the past 20 years and in the recent financial crisis has precursors stretching back centuries. We're largely listening to an old tune being replayed with modern instruments.

Tuesday, September 13, 2011

European Union. Win of Hope Over Commonsense

In continuation of the analysis of the crisis in Europe, here is a beautiful and well-researched article from Stratfor. Stratfor provides some of the best geopolitical analysis around and is an invaluable tool for understanding global macroeconomics. It cuts through the noise and presents a clear picture. Read on:

Before 1492, Europe was a backwater of small nationalities struggling over a relatively small piece of cold, rainy land. But one technological change made Europe the center of the international system: deep-water navigation.

The ability to engage in long-range shipping safely allowed businesses on the Continent’s various navigable rivers to interact easily with each other, magnifying the rivers’ capital-generation capacity. Deep-water navigation also allowed many of the European nations to conquer vast extra-European empires. And the close proximity of those nations combined with ever more wealth allowed for technological innovation and advancement at a pace theretofore unheard of anywhere on the planet. As a whole, Europe became very rich, became engaged in very far-flung empire-building that redefined the human condition and became very good at making war. In short order, Europe went from being a cultural and economic backwater to being the engine of the world.

At home, Europe’s growing economic development was exceeded only by the growing ferocity of its conflicts. Abroad, Europe had achieved the ability to apply military force to achieve economic aims — and vice versa. The brutal exploitation of wealth from some places (South America in particular) and the thorough subjugation and imposed trading systems in others (East and South Asia in particular) created the foundation of the modern order. Such alternations of traditional systems increased the wealth of Europe dramatically.
But “engine” does not mean “united,” and Europe’s wealth was not spread evenly. Whichever country was benefitting had a decided advantage in that it had greater resources to devote to military power and could incentivize other countries to ally with it. The result ought to have been that the leading global empire would unite Europe under its flag. It never happened, although it was attempted repeatedly. Europe remained divided and at war with itself at the same time it was dominating and reshaping the world.

……………..The tensions underlying Europe were brought to a head by German unification in 1871 and the need to accommodate Germany in the European system, of which Germany was both an integral and indigestible part. The result was two catastrophic general wars in Europe that began in 1914 and ended in 1945 with the occupation of Europe by the United States and the Soviet Union and the collapse of the European imperial system. Its economy shattered and its public plunged into a crisis of morale and a lack of confidence in the elites, Europe had neither the interest in nor appetite for empire.
Europe was exhausted not only by war but also by the internal psychosis of two of its major components. Hitler’s Germany and Stalin’s Soviet Union might well have externally behaved according to predictable laws of geopolitics. Internally, these two countries went mad, slaughtering both their own citizens and citizens of countries they occupied for reasons that were barely comprehensible, let alone rationally explicable. From my point of view, the pressure and slaughter inflicted by two world wars on both countries created a collective mental breakdown.

………………Paradoxically, it was the United States that gave the first shape to Europe’s future, beginning with Western Europe. World War II’s outcome brought the United States and Soviet Union to the center of Germany, dividing it. A new war was possible, and the reality and risks of the Cold War were obvious. The United States needed a united Western Europe to contain the Soviets. It created NATO to integrate Europe and the United States politically and militarily. This created the principle of transnational organizations integrating Europe. The United States also encouraged economic cooperation both within Europe and between North America and Europe — in stark contrast to the mercantilist imperiums of recent history — giving rise to the European Union’s precursors. Over the decades of the Cold War, the Europeans committed themselves to a transnational project to create a united Europe of some sort in a way not fully defined.

………………..The European Union was designed not simply to be a useful economic tool but also to be a means of European redemption. The focus on economics was essential. It did not want to be a military alliance, since such alliances were the foundation of Europe’s tragedy. By focusing on economic matters while allowing military affairs to be linked to NATO and the United States, and by not creating a meaningful joint-European force, the Europeans avoided the part of their history that terrified them while pursuing the part that enticed them: economic prosperity. The idea was that free trade regulated by a central bureaucracy would suppress nationalism and create prosperity without abolishing national identity. The common currency — the euro — is the ultimate expression of this hope. The Europeans hoped that the existence of some Pan-European structure could grant wealth without surrendering the core of what it means to be French or Dutch or Italian.

Yet even during the post-World War II era of security and prosperity, some Europeans recoiled from the idea of a transfer of sovereignty. The consensus that many in the long line of supporters of European unification believed existed simply didn’t. And today’s euro crisis is the first serious crisis that Europe has faced in the years since, with nationalism beginning to re-emerge in full force.

In the end, Germans are Germans and Greeks are Greeks. Germany and Greece are different countries in different places with different value systems and interests. The idea of sacrificing for each other is a dubious concept. The idea of sacrificing for the European Union is a meaningless concept. The European Union has no moral claim on Europe beyond promising prosperity and offering a path to avoid conflict. These are not insignificant goals, but when the prosperity stops, a large part of the justification evaporates and the aversion to conflict (at least political discord) begins to dissolve.

Germany and Greece each have explanations for why the other is responsible for what has happened. For the Germans, it was the irresponsibility of the Greek government in buying political power with money it didn’t have to the point of falsifying economic data to obtain eurozone membership. For the Greeks, the problem is the hijacking of Europe by the Germans. Germany controls the eurozone’s monetary policy and has built a regulatory system that provides unfair privileges, so the Greeks believe, for Germany’s exports, economic structure and financial system. Each nation believes the other is taking advantage of the situation.

Read more: The Crisis of Europe and European Nationalism | STRATFOR 
Republished with permission of STRATFOR.