Is irrationality necessary?

Much has been made lately of the anti-science stance of a large segment of the US population (see, for example, Chris Mooney’s book). Acceptance of anthropogenic climate change or the theory of evolution is starkly divided along political lines. However, as I have argued in the past, seemingly irrational behavior can actually make sense from an evolutionary perspective. As I have posted before, one of the best ways to find an optimal solution to a problem is to search randomly, the Markov Chain Monte Carlo method being the quintessential example. Randomness is useful for searching places you wouldn’t normally go and for overcoming unwanted correlations, to which I recently attributed many of our current problems (see here). Thus, we may have been evolutionarily selected to have diverse viewpoints and degrees of rational thinking. For any given situation there is only one rationally optimal response, and under incomplete information, which is almost always the case, it could be wrong. Thus, when a group of individuals is presented with a challenge, it may be better for the group if multiple strategies, including irrational ones, are tried rather than putting all the eggs into one rational basket. I truly doubt that Australia could have been discovered 60 thousand years ago without some irrationally risky decisions. Even within science, people pursue ideas based on tenuous hunches all the time. Many great discoveries were made because people ignored conventional rational wisdom and did something irrational. Many have failed as a result as well. However, society as a whole is arguably better off, since success generally goes global while failure stays local.
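
As a toy illustration of why random search beats greedy optimization on hard problems, here is a minimal sketch of a Metropolis-style search in Python; the rugged test function, step size, and temperature are illustrative choices of mine, not anything from the post:

```python
import math
import random

def metropolis_search(f, x0, steps=10000, step_size=0.5, temperature=1.0):
    """Minimize f by a Metropolis-style random walk.

    Uphill moves are occasionally accepted, which lets the search
    escape local minima that a greedy (purely 'rational') descent
    would get stuck in.
    """
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        x_new = x + random.gauss(0.0, step_size)   # random proposal
        f_new = f(x_new)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(f_new - fx) / temperature).
        if f_new < fx or random.random() < math.exp((fx - f_new) / temperature):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A rugged function: greedy descent started at x = 4 typically stalls
# in a nearby local minimum, while the random search usually finds
# the global minimum near the origin.
rugged = lambda x: x**2 + 3.0 * math.sin(5.0 * x)
print(metropolis_search(rugged, x0=4.0))
```

The occasional “irrational” uphill step is exactly what keeps the walker from putting all its eggs in one basin.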

It is not even necessary to have great differences in cognitive abilities to produce a wide range in rationality. One only needs a reward system that is stimulated by a wide range of signals. So while some children are strongly rewarded for finding self-consistent explanations to questions, others are rewarded for acting rashly. Small initial differences would then amplify over time as the children seek out environments that maximize their rewards, a point Sam Wang and Sandra Aamodt cover in their book, Welcome to Your Brain. You would thus end up with a society spanning a wide range of rationality.
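
A minimal sketch of that amplification, in the spirit of a Pólya urn; the reinforcement rule and parameters are my own illustrative assumptions, not from Wang and Aamodt’s book:

```python
import random

def amplify(initial_bias, rounds=1000, reinforcement=1.0):
    """Pólya-urn-style reinforcement: a child samples one of two
    environments ('rational' vs. 'rash') in proportion to accumulated
    reward, and each visit adds reward, feeding back on the next choice."""
    weights = [1.0 + initial_bias, 1.0 - initial_bias]  # tiny initial difference
    for _ in range(rounds):
        p_rational = weights[0] / sum(weights)
        choice = 0 if random.random() < p_rational else 1
        weights[choice] += reinforcement  # reward the environment just visited
    return weights[0] / sum(weights)  # final preference for 'rational'

# Children who start almost identical can end up far apart.
print([round(amplify(0.01), 2) for _ in range(5)])
```

Each run converges to a different final preference, so nearly identical starting points yield very different dispositions.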

Talk today at Johns Hopkins

I’m giving a computational neuroscience lunch seminar today at Johns Hopkins. I will be talking about my work with Michael Buice, now at the Allen Institute, on how to go beyond mean field theory in neural networks. Technically, I will present our recent work on systematically computing correlations in a network of coupled neurons using a controlled perturbation expansion in the inverse network size. The method uses ideas from kinetic theory, with a path integral construction borrowed and adapted by Michael from nonequilibrium statistical mechanics. The talk is similar to the one I gave at MBI in October. Our paper on this topic will appear soon in PLoS Computational Biology. The slides can be found here.

Von Neumann’s response

Here’s Von Neumann’s response to straying from pure mathematics:

“[M]athematical ideas originate in empirics, although the genealogy is sometimes long and obscure. But, once they are so conceived, the subject begins to live a peculiar life of its own and is better compared to a creative one, governed by almost entirely aesthetic considerations, than to anything else, and, in particular, to an empirical science. There is, however, a further point which, I believe, needs stressing. As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticising, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganised mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration.”

Thanks to James Lee for pointing this out.

Complexity is the narrowing of possibilities

Complexity is often described as a situation where the whole is greater than the sum of its parts. While this description is true on the surface, it misses the point: complexity is really about the whole being much less than the sum of its parts. Let me explain. Consider a television screen with 100 pixels, each of which can be either black or white. The number of possible images the screen can show is 2^{100}, a really big number. Most of those images would look like random white noise. However, a small set of them would look like things you recognize, like dogs and trees and salmon tartare coronets. This narrowing of possibilities, or reduction in entropy to be more technical, increases information content and complexity. However, too much reduction of entropy, such as restricting the screen to be entirely black or white, would also yield low complexity. Hence, what we call complexity arises when the possibilities are restricted but not completely restricted.
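
To put rough numbers on that narrowing, here is a back-of-the-envelope calculation; the size of the “recognizable” subset is a made-up figure purely for illustration:

```python
import math

n_pixels = 100
total_images = 2 ** n_pixels          # every possible black/white screen
print(math.log2(total_images))        # 100 bits: maximum entropy, pure noise

# Suppose only a tiny subset of screens look like recognizable scenes
# (the count below is an arbitrary number chosen for illustration).
recognizable = 2 ** 30
print(math.log2(recognizable))        # 30 bits: restricted, but far from trivial

# The two fully restricted extremes carry almost no entropy at all:
all_black_or_white = 2
print(math.log2(all_black_or_white))  # 1 bit: too restricted to be complex
```

Complexity, in this picture, lives between the 100-bit and 1-bit extremes.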

Another way to think about it is to consider a very high dimensional system, like a billion particles moving around. A complex system would be one in which the attractor of this six-billion-dimensional system (three dimensions for the position and three for the velocity of each particle) is a lower dimensional surface or manifold. The flow of the particles would then be constrained to this attractor. The important thing to understand about the system would then not be the individual motions of the particles but the shape and structure of the attractor. In fact, if I gave you a list of the positions and velocities of each particle as a function of time, you would be hard pressed to discover that there even was a low dimensional attractor. Suppose the particles lived in a box and moved according to Newton’s laws, interacting only through brief elastic collisions. This is an ideal gas, and what would happen is that the positions of the particles would become uniformly distributed throughout the box while the velocities would obey a normal distribution, called the Maxwell-Boltzmann distribution in physics. The variance of this distribution is proportional to the temperature. The pressure, volume, particle number and temperature are related by the ideal gas law, PV = NkT, where the Boltzmann constant k is set by Nature. An ideal gas at equilibrium would not be considered complex because the attractor is a simple fixed point. However, it would be really difficult to discover the ideal gas law, or even the notion of temperature, if one only focused on the individual particles. The ideal gas law and all of thermodynamics were discovered empirically and only later justified microscopically through statistical mechanics and kinetic theory. However, knowledge of thermodynamics is sufficient for most engineering applications, like designing a refrigerator. If you make the interactions longer range you can turn the ideal gas into a liquid, and if you start to stir the liquid you can end up with turbulence, which is a paradigm of complexity in applied mathematics. The main difference between an ideal gas and turbulent flow is the dimension of the attractor, but in both cases the attractor dimension is still much smaller than the full range of possibilities.
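
A quick numerical sanity check of that picture; the gas parameters below (helium at room temperature in a one-liter box, with a million sampled particles) are arbitrary illustrative choices:

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6e-27          # mass of a helium atom, kg (illustrative choice)
T = 300.0            # temperature, K
N = 1_000_000        # number of sampled particles
V = 1e-3             # volume, m^3 (one liter)

# At equilibrium each velocity component of an ideal gas is Gaussian
# (the Maxwell-Boltzmann distribution) with variance kT/m.
rng = np.random.default_rng(0)
vx = rng.normal(0.0, np.sqrt(k_B * T / m), size=N)

print(vx.var() * m / k_B)   # recovers T ~ 300 K from the sample variance
print(N * k_B * T / V)      # pressure from the ideal gas law P = NkT/V, in Pa
```

The point is that a single macroscopic number, T, summarizes the variance of six billion microscopic coordinates; the attractor is low dimensional even though the raw data are not.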

The crucial point is that focusing on the individual motions can make you miss the big picture: you will miss the forest for the trees. What is interesting and important about a complex system is not what the individual constituents are doing but how they are related to each other. The restriction to a lower dimensional attractor is manifested in the subtle correlations of the entire system. The dynamics on the attractor can also often be represented by an “effective theory”. Here the word “effective” does not mean that the theory works, but rather that the underlying microscopic theory is superseded by a macroscopic one. Thermodynamics is an effective theory of the interaction of many particles. The recent trend in biology and economics has been to focus on the detailed microscopic interactions (there is push back in economics in what has been dubbed the macro wars). As I will relate in future posts, it is sometimes much more effective (in the “works better” sense) to use an effective (in the macroscopic sense) theory rather than a detailed microscopic one. In other words, there is no single “theory” of a given system, but rather sets of effective theories to be selected based on the questions being asked.

Von Neumann

Steve Hsu has a link to a fascinating documentary on John Von Neumann. It’s definitely worth watching. Von Neumann was probably the last great polymath. Mathematician Paul Halmos laments that Von Neumann may have wasted his mathematical gifts by spreading himself too thin, and worries that Von Neumann will be considered only a minor figure in pure mathematics several hundred years hence. Edward Teller believes that Von Neumann simply enjoyed thinking above all else.