Brain emulation

The podcast EconTalk featured eclectic economist and blogger Robin Hanson last week. The discussion was on the Singularity, which I posted about here. Hanson’s take on the Singularity is less mystical than usual. He notes that the global economic growth rate, which for most of history he equates with the human population growth rate, has had a few punctuated events that can be considered singularities. The first was the arrival of humans, the second was the advent of agriculture, and the third was the industrial revolution. Currently, we are doubling economic activity every 15 years. Historically, the growth rate has increased by roughly a factor of 200 at each singularity. From this he predicts that after the next singularity the economic doubling time will drop to around two weeks. So if you put a penny in the bank, you would be close to a millionaire within a year. He examines what sorts of technological advance could produce growth rates like this and concludes that artificial intelligence (AI) that can improve itself is the only plausible source. This would actually be a second-order effect, since humans already possess a biological intelligence that can improve itself.
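
To make the arithmetic concrete, here is a small Python sketch (my own illustration, not anything from the podcast) of what those quoted numbers imply:

```python
# Current doubling time of economic activity, as quoted above
current_doubling_years = 15.0

# Hanson's claim: each singularity boosts the growth rate by roughly a factor of 200.
# Growth rate scales as 1/doubling time, so the new doubling time is ~200x shorter.
new_doubling_years = current_doubling_years / 200.0
print(f"New doubling time: {new_doubling_years * 52:.1f} weeks")  # ~3.9 weeks, same order as the two weeks quoted above

# Compound a penny for one year at a two-week doubling time
penny = 0.01
doublings_per_year = 52 / 2
print(f"Penny after one year: ${penny * 2 ** doublings_per_year:,.0f}")  # roughly $670,000
```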

Hanson then asks what sort of scientific or technological advance would be required to create self-improving AI. He gives three possibilities. The first is that we follow traditional AI and develop algorithms that are intelligent or can become intelligent. The second is that we come to better understand how biological brains work and then use that knowledge to build an AI. The third is that we develop the technology to do a full-scale emulation of a biological brain. He then posits that the third approach is the most likely, as it doesn’t require that we understand how our brains actually work, only that we can faithfully copy one.

Personally, I’m not so sure that the third approach will be any easier than the second. For one, it is not clear how much detail is required to emulate a brain. Hanson assumes that we just need computational models of all the different types of neurons (e.g. Hodgkin-Huxley type equations) combined with a complete map of the connections between them. The fact that people can come in and out of comas and recover makes it plausible that the brain flows to an attractor, so that it is not overly sensitive to initial conditions. Thus, we may not need to specify all the neuron potentials, and we can accept his hypothesis as theoretically plausible.
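
For readers who have not seen them, Hodgkin-Huxley type equations are just a set of coupled differential equations for a neuron’s membrane potential and its ion-channel gating variables. Below is a minimal, self-contained Python sketch of a single Hodgkin-Huxley neuron with standard textbook squid-axon parameters and a simple forward-Euler integrator (my own illustration). An emulation of the kind Hanson envisions would need equations like these for every neuron type, coupled together according to the measured connectome.

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2)
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

# Forward-Euler integration of one neuron driven by a constant current
dt, T, I_ext = 0.01, 50.0, 10.0          # time step (ms), duration (ms), drive (uA/cm^2)
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
spikes, above = 0, False
for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
    I_K  = g_K * n**4 * (V - E_K)        # potassium current
    I_L  = g_L * (V - E_L)               # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    if V > 0.0 and not above:            # count upward crossings of 0 mV as spikes
        spikes += 1
    above = V > 0.0

print(f"{spikes} spikes in {T:.0f} ms")  # this level of drive produces repetitive firing
```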

Hanson makes the claim that the mouse brain has already been mapped by taking two-dimensional slices, imaging them with electron microscopy, and then computationally reconstructing the entire brain. While this may have been done for the long-range connections between brain regions, it has not yet been done at the detailed local level. This is something that two of my colleagues, Mitya Chklovskii at Janelia Farm and Sebastian Seung at MIT, are working towards.

However, just knowing the neuron types and connections may not be sufficient to create a fully functioning brain. For example, the full architecture of the nervous system of the nematode worm C. elegans (302 neurons and about 7000 connections) has been known for quite some time, but no one has been able to emulate it yet. This is because we don’t know the strengths of the connections. A brute-force search for a working set of parameters would be intractable with any possible computer technology: even 2^{7000}, which allows only two choices per connection, is vastly larger than the number of atoms in the observable universe (roughly 10^{80}). So we can’t just try out different parameters until we find a set that works.
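
A quick back-of-envelope calculation makes the point (the two-level discretization here is mine, and is the crudest imaginable):

```python
import math

connections = 7000   # approximate number of C. elegans connections, as quoted above
levels = 2           # crudest possible discretization: two strength levels per connection

# Number of distinct weight assignments, expressed as a power of ten
log10_settings = connections * math.log10(levels)
print(f"Search space: about 10^{log10_settings:.0f} parameter settings")  # ~10^2107

# For comparison: roughly 10^80 atoms in the observable universe, and an exaflop
# machine running for 10 billion years performs only about 10^35 operations.
ops = 1e18 * 3600 * 24 * 365 * 1e10
print(f"Operations from an exaflop machine over 10 billion years: about 10^{math.log10(ops):.0f}")
```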

The technology to measure the connection strengths will face significant hurdles. A synaptic strength is specified by the change in electrochemical potential of the cell membrane when neurotransmitter is received, which in turn is determined by the relevant molecular networks and ion channel states at each synapse. We are nowhere close to being able to measure this in C. elegans, much less in a human brain with 10^{11} neurons and 10^{14} synapses. An attempt to measure the synaptic strengths electrophysiologically while the brain is active would be equally if not more difficult, since the measurement would need to be done without significantly perturbing the system. Thus, it is not clear which will come first: understanding enough principles of how the brain works to engineer one, or copying a brain in sufficient detail to reconstruct one. Perhaps it will be a combination of both, because a deeper understanding of the brain will tell us what minimal set of details needs to be copied.
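
Just to convey the scale of the measurement problem (the instrument rate and storage size below are purely assumed for illustration; only the synapse count comes from the estimate above), even a wildly optimistic future technology runs into years of continuous measurement:

```python
synapses = 1e14            # synapses in a human brain, as quoted above

# Suppose a hypothetical instrument could characterize one million synapses per second
rate_per_second = 1e6
seconds = synapses / rate_per_second
print(f"Measurement time: {seconds / (3600 * 24 * 365):.1f} years of continuous operation")  # ~3.2 years

# Storing a single 4-byte strength value per synapse, ignoring all molecular detail
print(f"Storage for the strengths alone: {synapses * 4 / 1e12:.0f} TB")  # 400 TB
```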

10 thoughts on “Brain emulation”

  1. I also heard that rather strange episode. One aspect I was wondering about, and maybe you can give some insight, is this: if all the memories etc. are supposed to be captured by the scan, would that scan not necessarily have some sort of Heisenberg effect that would change the scanned brain’s configuration?

  2. It would depend on how the memories are stored. If they are stored as classical information, then there is no theoretical constraint on scanning the state. However, if the molecular states involved in memory turn out to carry quantum information, then the uncertainty principle would come into play. Right now there is no evidence that quantum information is involved, although there has been recent work showing that some biological processes, such as photosynthesis, do involve quantum states, so it is not as far-fetched as it sounds.

  3. You may not need to perform herculean feats of electrophysiology to get an accurate map of C. elegans’ synaptic strengths. Using genetically encoded calcium and voltage-sensitive indicators, along with a suitable microscope slaved to track a worm through space, should enable you to gather this data as the worm behaves in a natural way.

  4. Hi Karl,

    That is a good point. There is always a chance of a technological innovation that could make emulation possible. However, I doubt that this method, which would work in small animals, would scale to mammals or humans, but I could be wrong.

  5. Could you elaborate on emulating C. elegans?

    My understanding is that people just haven’t tried. Around 10 years ago some people thought it was doable and announced the “Nematode Upload Project.” I believe they gave up after failing to get funding. Maybe someone convinced them to give up because they didn’t know the connection strengths, but it would be nice to know if that is the case. Otherwise, I take this as strong evidence that no particular obstacles have been identified, though it is easy to list things that *might* go wrong.

  6. There have been isolated attempts to model aspects of it, but you are right that there has never been a full-on effort. I believe Mitya Chklovskii at Janelia is mounting a challenge. Part of the evidence for the difficulty of the task is that the anatomy of the lobster gastric mill has been known for some time, and a concerted effort by a number of strong labs, like Eve Marder’s at Brandeis, has not fully cracked it. It is computationally intractable to search a 300-dimensional parameter space, so if we do crack the nematode, it will take some cleverness.
