Big blue brain

Appearing in this week’s edition of Cell is a paper summarizing the current status of Henry Markram’s Blue Brain Project. You can download the paper for free until Oct 22 here. The paper reports a statistically accurate reconstruction of the morphology and electrophysiology of a rat somatosensory cortex. I think it is a pretty impressive piece of work. They first did a survey of the cortex (14 thousand recorded and labeled neurons) to get probability distributions for the various types of neurons and their connectivities. The neurons are classified according to their morphology (55 m-types), electrophysiology (11 e-types), and synaptic dynamics (6 s-types). The neurons are connected according to an algorithm, outlined in a companion paper in Frontiers in Computational Neuroscience, that reproduces the measured connectivity distribution. They then created a massive computer simulation of the reconstructed circuit and showed that it has interesting dynamics and can reproduce some experimentally observed behaviour.
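
To make the flavor of the approach concrete, here is a minimal sketch of a statistical reconstruction of this kind: draw each neuron’s type from measured distributions, then connect pairs with type-dependent probabilities. The labels and numbers below are invented placeholders, and the actual algorithm (described in the Frontiers companion paper) is considerably more involved.

```python
import random

# Hypothetical type fractions and connection probabilities. These labels and
# numbers are placeholders for illustration only, not the measured Blue Brain
# statistics or their actual wiring algorithm.
M_TYPES = {"L23_PC": 0.45, "L5_TTPC": 0.30, "MC": 0.15, "BC": 0.10}
E_TYPES = {"cADpyr": 0.60, "cNAC": 0.25, "bNAC": 0.15}
CONN_PROB = {("L23_PC", "L5_TTPC"): 0.10, ("MC", "L5_TTPC"): 0.06}  # by m-type pair

def sample_type(dist):
    """Draw one type label according to its probability."""
    r, cum = random.random(), 0.0
    for label, p in dist.items():
        cum += p
        if r < cum:
            return label
    return label  # guard against floating-point rounding

def reconstruct(n_neurons=200, default_p=0.02):
    """Assign each neuron an (m-type, e-type), then wire pairs probabilistically."""
    neurons = [(sample_type(M_TYPES), sample_type(E_TYPES)) for _ in range(n_neurons)]
    synapses = []
    for i, (m_pre, _) in enumerate(neurons):
        for j, (m_post, _) in enumerate(neurons):
            if i != j and random.random() < CONN_PROB.get((m_pre, m_post), default_p):
                synapses.append((i, j))
    return neurons, synapses

neurons, synapses = reconstruct()
print(f"{len(neurons)} neurons, {len(synapses)} synapses")
```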

Although much of the computational neuroscience community has not really rallied behind Markram’s mission, I’m actually more sanguine about it now. Whether the next project to do the same for the human brain is worth a billion dollars, especially if this is a zero-sum game, is another question. However, it is definitely a worthwhile pursuit to systematically catalogue and assess what we know now. Even though IBM’s Watson did not really invent any new algorithms per se, it clearly changed how we perceive machine learning by showing what can be done when enough resources are put into it. One particularly nice thing the project has done is to provide a complete set of calibrated models for all types of cortical neurons. I will certainly be going to their database to get the equations for spiking neurons in all of my future models. I think one criticism they will face is that their model basically produced what they put in, but to me that is a feature, not a bug. A true complete description of the brain would be a joint probability distribution for everything in the brain. This is impossible to compute in the near future no matter what scale you choose to coarse-grain over. No one really believes that we need all of this information, and thus the place to start is to assume that the distribution completely factorizes into a product of independent distributions. We should at least see if this is sufficient, and this work is a step in that direction.
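
In symbols (my notation, not anything taken from the paper), the factorization assumption replaces the full joint distribution over all the variables describing the brain with a product of independent marginals:

```latex
% Full description: joint distribution over all brain variables x_1, ..., x_N.
% Starting assumption: complete factorization into independent marginals.
\[
  p(x_1, x_2, \ldots, x_N) \;\approx\; \prod_{i=1}^{N} p(x_i)
\]
```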

However, the one glaring omission in the current rendition of this project is any attempt to incorporate genetic and developmental information. A major constraint on how much information is needed to characterize the brain is how much is contained in the genome. How much of what determines a neuron type and its location is genetically coded, determined by external inputs, or is just random? When you see great diversity in something, there are two possible answers: 1) the details matter a lot, or 2) the details do not matter at all. I would want to know the answer to this question first before I tried to reproduce the brain.

3 thoughts on “Big blue brain”

  1. I’ve barely heard of the Blue Brain Project.
    Interesting post—the last paragraph seems most apt. I remember people were trying to assign a number to the amount of information in the genomes of various organisms (just googled it—Lewis Wolpert, J Theor Biol 1969, free online—shows I’m not quite senile yet, though I did have to spend 10 minutes trying to find my wallet this morning; thought someone stole it).

    I don’t really know how one would quantify the information in a genome, brain, or even a computer program or logical system—there seem to be many ways, possibly coarse grainings—see Richard Muller on the ‘illusion of randomness’ or Slutsky’s 1937 Econometrica theorem (the Slutsky-Yule effect). I like taking the default position (Boltzmann’s equal a priori probabilities—‘no prior’), which is that actually everything in the universe is really just due to measurement error—spurious patterns which make people confused. Houdini studied this.

    I guess Chaitin and the complexity people can assign algorithms to hierarchies of various sorts. (See the blog ‘Gödel’s Lost Letter and P=NP’; I like the Kleene ‘degrees of complexity’ view though I don’t understand it—one person who studied this sort of thing was Lars Svenonius of U Md—I met him once to discuss grad school; his son Ian is a well-known veteran of the DC ‘anarcho-punk’ rock scene; he just collaborated with The Cornel West Theory, a DC hip hop band.)

    The notion of ‘positional information’ as generating patterns such as cell differences—i.e., the idea that actually all cells are initially identical but the pattern of their interactions ‘breaks their symmetry’ (e.g., are criminals or geniuses born that way, or made?)—seems to be a very general theme. S. Kauffman (‘edge of chaos’, Santa Fe Institute) had his own theory—Boolean nets and NK models (a sort of spin glass)—which also seemed to take the default view (all cells are the same, just connected in different ways—one has to choose the right social network).

    Assuming one can factor a joint probability distribution seems to be another version of the default position—‘statistical independence’. (I think Hubbell’s neutral theory of biodiversity also more or less does this—the world is a normal distribution, though perhaps through various coarse grainings it may be uniform, exponential, or any other kind of error, etc.) Boltzmann used this as the ‘molecular chaos’ assumption (which people tried to prove via ergodic theory—you basically get out what you put in—axioms = theorems, just rearranged; this is where the arrow of time comes from): http://www.arxiv.org/abs/0812.0240 (G. F. R. Ellis). (It seemed to me that all of statistical mechanics from 1900 to even now was devoted to ways of relaxing the factorization assumption—add some perturbations…) I also wonder what use knowing how to model the brain has, but I guess economic theory suggests a lot of these things will eventually be found useful. Automation and/or superintelligence may not replace jobs (even possibly bad ones). (The same issue goes for Cantor’s transfinite set theory—Shelah, large cardinal axioms, Feferman, etc.—will it ever be used in modeling nature? I guess some philosophers such as Alain Badiou say it’s useful for politics—I think he’s some kind of Trotskyist; Trotsky’s aide in Mexico, van Heijenoort—where Trotsky was assassinated on Stalin’s orders—had similar interests; see the book ‘From Trotsky to Gödel’ (and Mexico to Harvard).)

  2. I don’t think any of their data can change the way we think about (neo)cortex! There is no new information there (only a lot of new reconstructions of cortical neurons). Sadly, it does not provide a hint about which phenomena we should work on at a theoretical or even an experimental level!

    Basically, it is a presentation of a big and very detailed model with the exact same problems as all detailed models over the history of computational neuroscience. Also, they claim that they do not fit any parameters, but this is not true (if one digs into the details of the network model that they use for simulations). The same results can be achieved by hugely downscaling their massive simulations (this has already been done many, many times)! Interestingly, they recover exactly the results that purely theoretical works have predicted (getting into AS or SN) for cortical population dynamics, in the exact same arbitrary fashion as those theoretical models!

    It is an expensive review and a data set of neuronal reconstructions that will be overwritten by the network dynamics!
