Proving I’m me

I have an extremely difficult time remembering the answers to my security questions for restoring forgotten passwords. I don't have an invariant favourite movie, or book, or colour. I have many best friends from childhood, and their names come in various permutations. Did I use their first name, nickname, or full name? Even my mother's maiden name can be problematic because there are various ways to transliterate Chinese names and I don't always remember which I used. The city where I met my wife is ambiguous. Did I use the specific town per se or the major city it is next to? Did I include the model of my first car or just the make? Before I can work my way through all the permutations, I'm usually locked out of my account forever.

As much as I appreciate and rely on computers, software, and the internet, objectively they all still suck. My iPhone is perhaps better than the alternative, but it sucks. My laptop sucks. Apple makes awful products. Google, Amazon, Uber, and the rest are not so great either. I've lost count of the number of times Google Maps has steered me wrong. The tech landscape may be saturated, but there is definitely room for something better.

Commentary on the Blue Brain Project

Definitely read Christof Koch and Michael Buice's commentary on the Blue Brain Project paper in Cell. They nicely summarize all the important points of the paper and propose a Turing Test for models: the performance of a model can be assessed by how long it would take an experimenter to figure out whether the data from proposed neurophysiological experiments were coming from a model or the real thing. I think this is a nice idea, but there is one big difference between the Turing Test for artificial intelligence and one for brain simulations: everyone has an innate sense of what it means to be human, but no one knows what a real brain should be doing. In that sense, it is not really a Turing Test per se but rather the replication of experiments in a more systematic way than is done now. You do an experiment on a real brain, repeat it on the model, and see whether the results are comparable.
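To make the idea concrete, here is a toy sketch of what such a test could look like in practice. This is my own illustration, not a procedure from the commentary: the features, the classifier, and the synthetic data are all placeholder assumptions. The point is just that the harder it is for a simple discriminator to beat chance, the longer it would take an experimenter to tell model from brain.

```python
# Toy sketch of a "Turing Test for models": can a discriminator tell
# model-generated data from real recordings? All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for summary features (e.g. firing rates, spike-train statistics)
# extracted from the same stimulation protocol run on tissue and on the model.
real_features = rng.normal(loc=1.0, scale=1.0, size=(200, 5))
model_features = rng.normal(loc=1.2, scale=1.0, size=(200, 5))

X = np.vstack([real_features, model_features])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = model

# Accuracy near chance (0.5) means the experiments in hand cannot
# distinguish the model from the real thing.
accuracy = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"discriminator accuracy: {accuracy:.2f} (chance = 0.50)")
```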

Big blue brain

Appearing in this week's edition of Cell is a paper summarizing the current status of Henry Markram's Blue Brain Project. You can download the paper for free until Oct 22 here. The paper reports a statistically accurate morphological and electrophysiological reconstruction of rat somatosensory cortex. I think it is a pretty impressive piece of work. They first did a survey of cortex (14 thousand recorded and labeled neurons) to get probability distributions for the various types of neurons and their connectivities. The neurons are classified according to their morphology (55 m-types), electrophysiology (11 e-types), and synaptic dynamics (6 s-types). The neurons are connected according to an algorithm, outlined in a companion paper in Frontiers in Computational Neuroscience, that reproduces the measured connectivity distribution. They then created a massive computer simulation of the reconstructed circuit and showed that it has interesting dynamics and can reproduce some experimentally observed behaviour.
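For a rough sense of what a statistical reconstruction of this kind involves, here is a toy sketch. It is not the algorithm from the Frontiers paper; the type names, frequencies, and connection probabilities below are invented purely for illustration of the general recipe: draw each neuron's type from measured frequencies, then wire pairs with type-dependent probabilities.

```python
# Toy statistical reconstruction: sample neuron types from (hypothetical)
# measured frequencies, then connect pairs with type-dependent probabilities.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical m-type frequencies, as if estimated from a survey of labeled neurons.
m_types = ["L23_PC", "L4_SS", "L5_TTPC", "MC", "BC"]
m_type_prob = [0.30, 0.20, 0.30, 0.10, 0.10]

# Hypothetical pairwise connection probabilities between m-types.
p_connect = {(pre, post): 0.05 for pre in m_types for post in m_types}
p_connect[("L23_PC", "L5_TTPC")] = 0.15  # e.g. one more strongly connected pathway

n_neurons = 1000
types = rng.choice(m_types, size=n_neurons, p=m_type_prob)

# An edge (i, j) exists with the probability assigned to its pre/post type pair.
edges = [(i, j)
         for i in range(n_neurons)
         for j in range(n_neurons)
         if i != j and rng.random() < p_connect[(types[i], types[j])]]

print(f"{n_neurons} neurons, {len(edges)} connections")
```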

Although much of the computational neuroscience community has not really rallied behind Markram's mission, I'm actually more sanguine about it now. Whether the next project, to do the same for the human brain, is worth a billion dollars, especially if this is a zero-sum game, is another question. However, it is definitely a worthwhile pursuit to systematically catalogue and assess what we know now. IBM's Watson did not really invent any new algorithms per se, but it clearly changed how we perceive machine learning by showing what can be done if enough resources are put into it. One particularly nice thing the project has done is to provide a complete set of calibrated models for all types of cortical neurons. I will certainly be going to their database to get the equations for spiking neurons in all of my future models. I think one criticism they will face is that their model basically produced what they put in, but to me that is a feature, not a bug. A truly complete description of the brain would be a joint probability distribution for everything in the brain. This is impossible to compute in the near future no matter what scale you choose to coarse-grain over. No one really believes that we need all of this information, and thus the place to start is to assume that the distribution completely factorizes into a product of independent distributions. We should at least see if this is sufficient, and this work is a step in that direction.
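To spell out that factorization assumption in symbols (the notation is mine, not the paper's, with x_i standing for whatever variables survive the coarse graining):

```latex
% Full description: the joint distribution over all coarse-grained variables.
% Factorized ansatz: treat the variables as independent and keep only the marginals.
P(x_1, x_2, \ldots, x_N) \;\approx\; \prod_{i=1}^{N} P_i(x_i)
```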

However, the one glaring omission in the current rendition of this project is any attempt to incorporate genetic and developmental information. A major constraint on how much information is needed to characterize the brain is how much is contained in the genome. How much of what determines a neuron's type and location is genetically coded, determined by external inputs, or just random? When you see great diversity in something, there are two possible explanations: 1) the details matter a lot, or 2) the details do not matter at all. I would want to know the answer to this question before I tried to reproduce the brain.