Friday, October 14, 2011

Difficulties with learning...

I just finished reading this wonderful short review of game theory, its applications and its limitations by Martin Shubik (many thanks to ivansml for pointing it out to me). It's a little old -- it appeared in the journal Complexity in 1998 -- but it offers a very broad perspective which I think still holds today. Game theory in the pure sense generally views agents as arriving at their strategies through rational calculation; this perspective has had huge influence in economics, especially in the context of relatively simple games with few players and not too many possible strategies. This part of game theory is well developed, although Shubik suggests there are probably many surprises left to learn.

Where the article really comes alive, however, is in considering the limitations of this strictly rational approach in games of greater complexity. In physics, the problem of two bodies in gravitational interaction can be solved exactly (ignoring radiation, of course), but you get generic chaos as soon as you have three bodies or more. The same is true, Shubik argues, in game theory. Extend the number of players above three, and, as the possible combinations of strategies proliferate, it is no longer plausible to assume that agents act rationally: the decision problems simply become too complex. One might still search for optimal N-player solutions as a benchmark for what is possible, but the rational-agent approach isn't likely to be a profitable guide to the actual behaviour and dynamics of such complex games. I highly recommend Shubik's short article to anyone interested in game theory, and especially in its application to real-world problems where people (or other agents) really can't hope to act on the basis of rational calculation, but instead have to use heuristics, follow hunches, and learn adaptively as they go.

Some of the points Shubik raises find perfect illustration in a recent study (I posted on it here) of the typical dynamics in two-player games when the number of possible strategies gets large. Choose the structure of the games at random, and the most likely outcome is a rich ongoing evolution of strategic behaviour which never settles down into any equilibrium. But these games do seem to show characteristic dynamical behaviour such as "punctuated equilibrium" (long periods of relative quiescence broken apart sporadically by episodes of tumultuous change) and clustered volatility (the natural clustering together of periods of high variability). These qualitative aspects appear to be generic features of the non-equilibrium dynamics of complex games. Interesting that they show up generically in markets as well.
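To make these dynamics a little more concrete, here is a minimal sketch of the kind of simulation involved: two agents repeatedly play a randomly drawn two-player game with many strategies, each updating a mixed strategy with a simple discounted softmax reinforcement rule. This is not meant to reproduce the study's exact model; the learning rule is a generic one, and the parameter names and values (N_STRATEGIES, LEARNING_RATE, MEMORY_LOSS) are purely illustrative. Tracking how much the mixed strategies drift from round to round gives a crude way to check whether the learning ever settles down.

```python
# Minimal sketch, not the model from the study: two agents learn in a randomly
# drawn two-player game with many strategies, using discounted softmax
# reinforcement. All parameter names and values here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

N_STRATEGIES = 50      # large strategy space
T = 20000              # number of rounds
LEARNING_RATE = 0.1    # softmax intensity of choice
MEMORY_LOSS = 0.01     # how quickly old payoff experience is forgotten

# Random game: payoff_A[i, j] is player A's payoff when A plays i and B plays j;
# payoff_B[i, j] is B's payoff in the same situation.
payoff_A = rng.normal(size=(N_STRATEGIES, N_STRATEGIES))
payoff_B = rng.normal(size=(N_STRATEGIES, N_STRATEGIES))

# Accumulated, discounted payoff score for each pure strategy.
Q_A = np.zeros(N_STRATEGIES)
Q_B = np.zeros(N_STRATEGIES)

def softmax(q, beta):
    z = beta * (q - q.max())   # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

drift = []   # how much each player's mixed strategy moves per round
prev_x = softmax(Q_A, LEARNING_RATE)
prev_y = softmax(Q_B, LEARNING_RATE)

for t in range(T):
    x = softmax(Q_A, LEARNING_RATE)   # A's current mixed strategy
    y = softmax(Q_B, LEARNING_RATE)   # B's current mixed strategy

    # Expected payoff of each pure strategy against the opponent's current mix.
    u_A = payoff_A @ y
    u_B = payoff_B.T @ x

    # Discounted reinforcement: forget a little, then add the current payoffs.
    Q_A = (1 - MEMORY_LOSS) * Q_A + u_A
    Q_B = (1 - MEMORY_LOSS) * Q_B + u_B

    drift.append(np.abs(x - prev_x).sum() + np.abs(y - prev_y).sum())
    prev_x, prev_y = x, y

# If the learning converged to an equilibrium, this drift would shrink toward
# zero; for many random draws and parameter choices it can keep fluctuating.
print("mean strategy drift over the last 10% of rounds:",
      np.mean(drift[-T // 10:]))
```

Plotting the drift series over time, rather than just averaging its tail, is the natural way to look for the punctuated, clustered bursts of change described above.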

When problems are too complex -- which is typically the case -- we try to learn and adapt rather than "solving" the problem in any sense. Our learning itself may also never settle into any stable form, but may instead change continually as we find something that works well for a time, then suddenly find that it fails and we have to learn all over again.