Information content of the brain revisited

My post, The Gigabit Machine, was recently reposted on the web aggregator site reddit.com.  Aside from increasing traffic to my blog tenfold for a few days, the comments on reddit made me realize that I wasn’t completely clear in my post.  The original post was a naive calculation of the information content of the brain and how it dwarfs the information content of the genome.  Here, I use the term information in the information-theoretic sense, which concerns how many bits must be specified to define a system.  A single light switch that turns on and off carries one bit of information, while ten light switches carry ten bits.  If we suppose that the brain has about 10^{11} neurons, with about 10^4 connections each, then there are 10^{15} total connections.  If we make the very gross assumption that each connection can be either “on” or “off”, then we arrive at 10^{15} bits.  This is a lower bound on the amount of information required to specify the brain, and it is already a huge number.  The genome has 3 billion bases, and each base can be one of four types, or two bits, so this gives a total of 6 billion bits.  Hence, the information contained in the genome is just rounding noise compared to the potential information contained in the brain.  I then argued that education and training are insufficient to make up this shortfall, and that most of the brain must be specified by uncontrolled events.
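
As a sanity check, the arithmetic above fits in a few lines of Python.  This is a minimal sketch using the post’s round numbers (10^{11} neurons, 10^4 connections each, one bit per connection, 3 billion bases at two bits each), which are assumptions, not measurements:

```python
# Back-of-envelope bit counts, using the round numbers from the post.
neurons = 10**11
connections_per_neuron = 10**4
brain_bits = neurons * connections_per_neuron      # one on/off bit per connection

bases = 3 * 10**9
genome_bits = 2 * bases                            # four possible bases = 2 bits each

print(f"brain:  {brain_bits:.1e} bits")            # 1.0e+15
print(f"genome: {genome_bits:.1e} bits")           # 6.0e+09
print(f"ratio:  {brain_bits / genome_bits:.1e}")   # ~1.7e+05
```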

The criticism I received in the comments on reddit was that this does not imply that the genome did not specify the brain.  An example that was brought up was the Mandelbrot set, where highly complex patterns arise from a very simple dynamical system.  I thought this was a bad example because it takes a countably infinite amount of information to specify the Mandelbrot set, but I understood the point, which is that a dynamical system could easily generate complexity that appears to have higher information content.  I even used such an argument to dispel the notion that the brain must be simpler than the universe in this post.  However, the key point is that the high information content is only apparent; the actual information content of a given state is no larger than that contained in the original dynamical system and its initial conditions.  What this would mean for the brain is that the genome alone could in principle set all the connections in the brain, but these connections would not be independent.  There would be correlations or other higher-order statistical relationships between them.  Another way to say this is that while in principle there are 2^{10^{15}} possible brains, the genome can only specify 2^{6\times10^{9}} of them, which is still a large number.  Hence, I believe that the conclusions of my original post still hold: the connections in the brain are either set mostly by random events or they are highly correlated (statistically related).
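
The counting point, that a small specification deterministically picks out one of a tiny subset of possible states, can be illustrated with a seeded pseudorandom generator.  This is a toy sketch, not a model of development; the bit sizes are scaled-down stand-ins for the numbers above:

```python
import random

# A short "genome" (the seed) deterministically expands into a long
# "connectivity" bit string. The expanded string looks complex, but its
# true information content is bounded by the seed plus the generating
# rule, so its bits cannot all be independent.
GENOME_BITS = 32         # toy stand-in for the ~6 x 10^9 genome bits
BRAIN_BITS = 10**6       # toy stand-in for the ~10^15 connection bits

seed = random.getrandbits(GENOME_BITS)
connections = random.Random(seed).getrandbits(BRAIN_BITS)

# Only 2^GENOME_BITS of the 2^BRAIN_BITS possible strings are reachable,
# and the same seed always yields the same "brain":
assert random.Random(seed).getrandbits(BRAIN_BITS) == connections
```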


5 thoughts on “Information content of the brain revisited”

  1. I disagree with your statement on the Mandelbrot set. See Kolmogorov complexity, which effectively picks up where information theory leaves off. According to that, the Mandelbrot set has low complexity.


  2. I just saw your Boltzmann brain write-up, so it is clear you know of the concept. I therefore do not understand what your point about the infinite bits is. Perhaps the argument is too subtle for me to grasp; too many bits to encode in my current energy state.


  3. The Mandelbrot set is the set of points in the complex plane that remain bounded when iterated by the quadratic map z^2 + C. Hence, in order to compute it you must iterate every point in the complex plane, so its Kolmogorov complexity is infinite. Blum, Cucker, Shub, and Smale proved that the Mandelbrot set is not computable; knowing for sure that a given point stays bounded is akin to solving the halting problem. A better example of a simple map generating complexity is a Julia set, which consists of the points in the orbit of an iterated complex map. A simpler example still is the logistic map, which generates a chaotic orbit. In these cases the Kolmogorov complexity is low even though the orbit is highly complex. However, the points are not independent. It is like a random number generator: there is structure in the values it produces, even though reverse engineering the algorithm that generated them is hard to do.
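
To illustrate the logistic-map point, here is a minimal Python sketch; the parameter r = 4 (the fully chaotic regime) and the iteration count are just illustrative choices:

```python
# A one-line rule with low Kolmogorov complexity generates an orbit
# that looks complex, yet each point is fully determined by the last.
r, x = 4.0, 0.2
orbit = []
for _ in range(20):
    x = r * x * (1.0 - x)   # logistic map: x_{n+1} = r * x_n * (1 - x_n)
    orbit.append(x)
print(orbit)
```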

