I gave the Bodian Seminar at the Zanvyl Krieger Mind/Brain Institute of Johns Hopkins today. I talked about cortical dynamics in the presence of conflicting stimuli. My slides are here. A summary of part of my talk can be found here. Other pertinent papers can be found here and here.
The biggest news for neuroscientists in President Obama's State of the Union Address was the announcement of the Brain Activity Map (BAM) project (e.g. see here and here). The goal of this project, as outlined in this Neuron paper, is to develop the technological capability to measure the spiking activity of every single neuron in the brain simultaneously. I used to fantasize about such a project a decade ago but now I'm more ambivalent. Although the details of the project have not been announced, people involved are hoping for 300 million dollars per year for ten years. I do believe that a lot will be learned in pursuing such a project, but it may also divert neuroscience resources towards this one goal. Given that the project is mostly technological, it may also mostly bring new engineers and physicists into neuroscience rather than fund current labs. It could be a huge boon for computational neuroscience because the amount of data that will be recorded will be enormous. It will take a lot of effort just to curate this data, much less analyze and make sense of it. Finally, on a cautionary note, much of the data could turn out to be superfluous. After all, we understand how gases behave (at least enough to design refrigerators and airplanes, etc.) without measuring the positions and velocities of every molecule in a room. I'm not sure we would have figured out the ideal gas law, the Carnot cycle, or the three laws of thermodynamics if we had just relied on an "Air Activity Map Project" a century ago. There is probably a lot of compression going on in the brain. If we knew how this compression worked, we could then just measure the nonredundant information. That would certainly make the BAM project a whole lot easier.
The NAND (Not AND) gate is all you need to build a universal computer. In other words, any computation that can be done by your desktop computer can be accomplished by some combination of NAND gates. If you believe the brain is computable (i.e. can be simulated by a computer), then in principle this is all you need to construct a brain. There are multiple ways to build a NAND gate out of neuro-wetware. A simple example takes just two neurons. A single neuron can act as an AND gate by having a spiking threshold high enough that two simultaneous synaptic events are required for it to fire. This AND neuron then inhibits a second neuron that is otherwise always active, so the second neuron fires except when the first neuron receives two simultaneous inputs. A network of these NAND circuits can do any computation a brain can do. In this sense, we already have all the elementary components necessary to construct a brain. What we do not know is how to put these circuits together. We do not know how to do this by hand, nor do we know a learning rule that would let a network of neurons wire itself. However, it could be that currently known neural plasticity mechanisms like spike-timing dependent plasticity are sufficient to create a functioning brain. Such a brain may be very different from our brains but it would be a brain nonetheless.
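To make the two-neuron construction concrete, here is a minimal sketch in Python using binary threshold units as stand-ins for spiking neurons; the function names and thresholds are my own choices for illustration, not anything from neurophysiology.

```python
# A minimal sketch of the two-neuron NAND circuit described above,
# using binary threshold units as stand-ins for neurons.

def and_neuron(a, b, threshold=2):
    """Fires only if both synaptic inputs arrive (threshold requires two events)."""
    return int(a + b >= threshold)

def output_neuron(tonic_drive, inhibition, threshold=1):
    """Tonically active neuron that is silenced by inhibition from the AND neuron."""
    return int(tonic_drive - inhibition >= threshold)

def nand(a, b):
    return output_neuron(tonic_drive=1, inhibition=and_neuron(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b))   # prints 1, 1, 1, 0 (a NAND truth table)
```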
The fact that there are an infinite number of ways to create a NAND gate out of neuro-wetware implies that there are an infinite number of ways of creating a brain. You could take two neural networks with the same set of neurons and learning rules, expose them to the same set of stimuli, and end up with completely different brains. They could have the same capabilities but be wired differently. The brain could be highly sensitive to initial conditions and noise, so any minor perturbation would lead to an exponential divergence in outcomes. There might be some regularities (like scaling laws) in the connections that could be deduced but the exact connections would be different. If this were true then the connections would be everything and nothing. They would be so intricately correlated that only if taken together would they make sense. Knowing just some of the connections would be useless. The real brain is probably not this extreme since we can sustain severe injuries to the brain and still function. However, the total number of hard-wired conserved connections cannot exceed the number of bits in the genome. The other connections (which is almost all of them) are either learned or random. We do not know which is which.
To clarify my position on the Hopfield Hypothesis, I think we may already know enough to create a brain but we do not know enough to understand our brain. This distinction is crucial. What my lab has been interested in lately is understanding and discovering new treatments for cognitive disorders like autism (e.g. see here). This means that we need to know how perturbations at the cellular and molecular levels affect the behavioural level, which is obviously a daunting task. Our hypothesis is that the bridge between these two extremes is the canonical cortical circuit consisting of recurrent excitation and lateral inhibition. We and others have shown that such a simple circuit can explain the neural firing dynamics in diverse tasks such as working memory and binocular rivalry (e.g. see here). The hope is that we can connect the genetic and molecular perturbations to the circuit dynamics and then connect the circuit dynamics to behavior. In this sense, we can circumvent the really hard problem of how the canonical circuits are connected to each other. This may not lead to a complete understanding of the brain or the ability to treat all disorders, but it may give insights into how genes and medication act on cognitive function.
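As a rough illustration of what such a canonical circuit can do, here is a minimal sketch (my own parameter choices, not taken from the papers referenced above) of a two-population firing-rate model with recurrent excitation and lateral inhibition. With sufficiently strong recurrence it acts as a winner-take-all with a persistent, working-memory-like state:

```python
# Two-population rate model: each population excites itself and inhibits the other.
# A brief input selects one population, which then stays active after the input is
# removed. Exact behavior depends on the parameters chosen here.

import numpy as np

def f(x):
    """Sigmoidal firing-rate function with a soft threshold at 2."""
    return 1.0 / (1.0 + np.exp(-(x - 2.0)))

def simulate(I1, I2, w_exc=6.0, w_inh=4.0, tau=10.0, T=500.0, dt=0.1):
    r1 = r2 = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        # transient external input for the first 100 ms only
        inp1 = I1 if t < 100.0 else 0.0
        inp2 = I2 if t < 100.0 else 0.0
        dr1 = (-r1 + f(w_exc * r1 - w_inh * r2 + inp1)) / tau
        dr2 = (-r2 + f(w_exc * r2 - w_inh * r1 + inp2)) / tau
        r1 += dt * dr1
        r2 += dt * dr2
    return r1, r2

print(simulate(I1=3.0, I2=1.0))  # population 1 wins and persists after the input ends
print(simulate(I1=1.0, I2=3.0))  # population 2 wins and persists
```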
SPAUN (Semantic Pointer Architecture Unified Network) is a model of a functioning brain from Chris Eliasmith's group at the University of Waterloo. I first met Chris almost 15 years ago when I visited Charlie Anderson at Washington University, where Chris was a graduate student. He was actually in the philosophy department (and still is), with a decidedly mathematical inclination. SPAUN is described in Chris's paper in Science (available here) and in a forthcoming book. SPAUN can perform 8 fairly diverse and challenging cognitive tasks using 2.5 million neurons with an architecture inspired by the brain. It takes input through visual images and responds by "drawing" with a simulated arm. It decodes images, extracts and compresses features, stores them in memory, computes with them, and then translates the output into a motor action. It can count, copy, memorize, and do a Raven's Progressive Matrices task. While it can't learn novel tasks, it is pretty impressive.
However, what is most impressive to me about SPAUN is not how well it works but that it mostly implements known concepts from neuroscience and machine learning. The main novelty was putting it all together. This harkens back to what I called the Hopfield Hypothesis, which is that we already know all the elementary pieces for neural functioning. What we don't know is how they fit and work together. I think one of the problems in computational neuroscience is that we're too timid. I first realized this many years ago when I saw a talk by roboticist Rodney Brooks. He showed us robots with very impressive capabilities (this was when he was still at MIT) that were just implementing well-known machine learning rules like back-propagation. I recall thinking that robotics was way ahead of us and that reverse engineering may be harder than engineering. I also think that we will likely construct a fully functioning brain before we understand it. It could be that if you connect enough neurons together that incorporate a set of necessary mechanisms and then expose the network to the world, it would start to develop and learn cognitive capabilities. However, it would be as difficult to reverse engineer exactly what this constructed brain was doing as it is to reverse engineer a real brain. It may also be computationally undecidable or intractable to determine a priori the essential set of necessary mechanisms or the number of neurons you need. You might just have to cobble something together and try it out. A saving grace may be that these elements need not be unique. There could be a large family of mechanisms that you could draw from to create a thinking brain.
The 2012 Nobel Prize in Physiology or Medicine went to John Gurdon and Shinya Yamanaka for turning mature cells into stem cells. Yamanaka shook the world just six years ago in a Cell paper (it can be obtained here) that showed how to reprogram adult fibroblast cells into pluripotent stem cells (iPS cells) by simply inducing four genes – Oct3/4, Sox2, c-Myc, and Klf4. Although he may not frame it this way, Yamanaka arrived at these four genes by applying a simple theorem of formal logic, which is that a set of AND conditions is equivalent to negations of OR conditions. For example, the statement A AND B is True is the same as Not A OR Not B is False. In formal logic notation you would write A ∧ B ≡ ¬(¬A ∨ ¬B). The problem then is: given that we have about 20,000 genes, which subset of them will turn an adult cell into an embryonic-like stem cell? Yamanaka first chose 24 genes that are known to be expressed in stem cells and inserted them into an adult cell. He found that this made the cell pluripotent. He then wanted to find a smaller subset that would do the same. This is where knowing a little formal logic goes a long way. There are 2^24 (over 16 million) possible subsets that can be made out of 24 genes, so trying all combinations is impossible. What he did instead was to run 24 experiments where each gene was removed in turn, checking to see which removals failed to produce pluripotent cells. These would identify the necessary genes for pluripotency. He found that pluripotent stem cells never arose when any of Oct3/4, Sox2, c-Myc, or Klf4 was missing. Hence, a pluripotent cell needed all four genes, and when he induced just them, it worked. It was a positively brilliant idea and although I have spoken out against the Nobel Prize (see here), this one is surely deserved.
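To make the combinatorics concrete, here is a hypothetical sketch of the leave-one-out logic. The gene names and the assay function are invented stand-ins, and the toy model assumes there is no redundancy among the factors:

```python
# Leave-one-out screening: 24 experiments instead of testing all 2**24 subsets.
# The assay and gene names below are toy stand-ins, not real data.

CANDIDATES = [f"gene{i}" for i in range(1, 25)]
NECESSARY = {"gene1", "gene2", "gene3", "gene4"}  # plays the role of Oct3/4, Sox2, c-Myc, Klf4

def becomes_pluripotent(gene_set):
    """Toy assay: reprogramming succeeds only if all necessary genes are present."""
    return NECESSARY.issubset(gene_set)

# Drop each gene in turn; the ones whose removal abolishes reprogramming are necessary.
inferred = {g for g in CANDIDATES
            if not becomes_pluripotent(set(CANDIDATES) - {g})}

print(sorted(inferred))              # ['gene1', 'gene2', 'gene3', 'gene4']
print(becomes_pluripotent(inferred)) # True: the four together suffice in this toy model
```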
Jonah Lehrer, staff writer for the New Yorker and a best-selling science author, resigned in disgrace today. He admitted to fabricating quotes from Bob Dylan in his most recent book:
New York Times: An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. Only last month, Mr. Lehrer had publicly apologized for taking some of his previous work from The Wall Street Journal, Wired and other publications and recycling it in blog posts for The New Yorker, acts of recycling that his editor called “a mistake.”
…Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.
Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”
Lehrer was a regular on Radiolab and he always seemed to really know his science. I have linked to his articles in the past (see here). His publisher is withdrawing his book and giving refunds to anyone returning it. I haven't read the book, but from the excerpts and his interviews on it, I think the science is probably accurate. I don't really know what he was thinking, but my guess is that he was just trying to spice up the book and imagined a quote that Dylan might say. The fabricated quote above is pretty innocuous. He probably didn't think anyone would notice. Maybe he felt pressure to write a best seller. Maybe he was overconfident. In any case, he definitely shouldn't have done it. It is unfortunate because he was a gifted writer and a boon to neuroscience and science in general.
A new paper by Steve Gotts, myself, and Alex Martin has officially been published in the journal Cognitive Neuroscience:
Stephen J. Gotts, Carson C. Chow & Alex Martin (2012): Repetition priming and repetition suppression: Multiple mechanisms in need of testing, Cognitive Neuroscience, 3:3-4, 250-259 [PDF]
This paper is a review of the topic but is partially based on the PhD thesis work of Steve Gotts from when we were both in Pittsburgh over a decade ago. Steve was a CNBC graduate student at Carnegie Mellon University and came to visit me one day to tell me about his research project to reconcile the psychological phenomenon of repetition priming with a neurophysiological phenomenon called repetition suppression. It is well known that performance improves when you repeat a task. For example, you will respond faster to words on a random list if you have seen the word before. This is called repetition priming. The priming effect can occur over time scales ranging from a few seconds to your lifetime. Steve was focused on the short time scale effect. A naive explanation for priming is that the pool of neurons coding for the word becomes slightly more active, so when the word reappears they fire more readily. This hypothesis could only be tested when electrophysiological recordings of cells in awake behaving monkeys and functional magnetic resonance imaging data in humans finally became available in the mid-nineties. As is often the case in science, the opposite was observed. Neural responses actually decreased, and this was called repetition suppression. So an interesting question arose: how do you get priming with suppression? Steve had a hypothesis and it involved work I had done, so he came to see if I wanted to collaborate.
I joined the math department at Pitt in the fall of 1998 (the webpage has a nice picture of Bard Ermentrout, Rodica Curtu and Pranay Goel standing at a white board). I had just come from doing a post doc with Nancy Kopell at BU. At that time, the computational neuroscience community was interested in how a population of spiking neurons becomes synchronous. The history of synchrony and coupled oscillators is long with many threads, but I got into the game because of the weekly meetings Nancy organized at BU, which we dubbed "N-group". People from all over the Boston area would participate. It was quite exciting at that time. One day Xiao-Jing Wang, who was at Brandeis at the time, came to give a seminar on his joint work with Gyorgy Buzsaki on gamma oscillations in the hippocampus, which resulted in this highly cited paper. What the paper was really about was how inhibition could induce synchrony in a network with heterogeneous connections. It had already been shown by a number of people that inhibitory synapses could synchronize a network of spiking neurons. This was somewhat counterintuitive because the conventional wisdom was that inhibition would lead to anti-synchrony. The key ingredient was that the inhibition had to be slow. Xiao-Jing argued from his simulations that the hippocampus had a sweet spot for synchronization in the gamma band (i.e. frequencies around 40 Hz). I was highly intrigued by his result and spent the next two years trying to understand the simulations mathematically. This resulted in four papers:
C.C. Chow, J.A. White, J. Ritt, and N. Kopell, `Frequency control in synchronized networks of inhibitory neurons’, J. Comp. Neurosci. 5, 407-420 (1998). [PDF]
J.A. White, C.C. Chow, J. Ritt, C. Soto-Trevino, and N. Kopell, `Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons’, J. Comp. Neurosci. 5, 5-16 (1998). [PDF]
C.C. Chow, `Phase-locking in weakly heterogeneous neuronal networks’, Physica D 118, 343-370 (1998). [PDF]
C.C. Chow and N. Kopell, `Dynamics of spiking neurons with electrical coupling’, Neural Comp. 12, 1643-1678 (2000). [PDF]
In a nutshell, these papers showed that in a heterogeneous network, neurons will tend to synchronize at a period around the time scale of the synaptic inhibition, which in the case of the inhibitory neurotransmitter receptor GABA_A is around 25 ms, or 40 Hz. When the firing frequency is too high the neurons tend to fire asynchronously, and when the frequency is too slow, neurons tend to stop firing altogether.
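For readers who want to see the effect, below is a minimal sketch (my own parameters, not taken from the papers above) of two leaky integrate-and-fire neurons coupled by mutual inhibition with a GABA_A-like 25 ms decay. In the right parameter regime the pair drifts into near-synchrony, with a network period set largely by the synaptic decay time:

```python
# Two identical leaky integrate-and-fire neurons with slow mutual inhibition.
# Whether and how tightly they synchronize depends on the parameters chosen here.

import numpy as np

dt, T = 0.1, 1000.0              # time step and duration, ms
tau_m, tau_s = 10.0, 25.0        # membrane and (slow) synaptic time constants, ms
I_drive = 1.5                    # constant suprathreshold drive (threshold = 1)
g_inh, E_inh = 1.0, -0.5         # inhibitory conductance and reversal potential
v = np.array([0.0, 0.3])         # start the two neurons out of phase
s = np.array([0.0, 0.0])         # synaptic gating variables
spikes = [[], []]

for step in range(int(T / dt)):
    t = step * dt
    s_other = s[::-1]                       # each neuron feels the *other* neuron's synapse
    dv = (-v + I_drive) / tau_m + g_inh * s_other * (E_inh - v)
    v = v + dt * dv
    s = s - dt * s / tau_s                  # inhibition decays with the slow time constant
    for i in range(2):
        if v[i] >= 1.0:                     # spike: reset voltage, kick the synapse
            v[i] = 0.0
            s[i] += 1.0
            spikes[i].append(t)

print(spikes[0][-3:])   # compare the late spike times of the two neurons:
print(spikes[1][-3:])   # how closely they align indicates the degree of synchrony
```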
Steve read my papers (and practically everything else) and thought that this might be the resolution of his question. Now, it had also been known for a while that when neurons fire they tend to slow down. This is due to both spike-frequency adaptation and synaptic depression, so repetition suppression is not entirely surprising, since when neurons are stimulated they will tend to fire more slowly. What is surprising is that slowing down makes you respond faster. Steve thought that maybe suppression synchronized neurons and made them more effective at getting downstream neurons to fire. In essence, what he needed was a mechanism that increases the gain of a neuron for a decrease in input, and synchrony was a solution. I helped him work out some technical details and he wrote a very nice thesis showing how this could work and match the data. He then went on to work with Bob Desimone and Alex Martin at NIH. However, we never wrote the theoretical paper from his thesis because of a critique that we never got around to answering. The issue was that if a lowering of network frequency can elicit priming, then why does a reduction in contrast of the primed stimulus, which also reduces network frequency, not do the same? This came up after Steve had left and I had turned my attention to other things. The answer is probably that not all frequency reductions are equal. A reduction in contrast lowers the total input to the early part of the visual system, while synaptic depression has its largest effect on the most active neurons. The ensuing dynamics will likely be different, but we never had the time to fully flesh this out. Although I always wanted to get back to this, the project sat idle for me for about eight years until Steve sent me an email one day saying that he was writing a review with Alex on the topic and wanted to know if I wanted to be included. I was delighted. The paper covers all the current theories for priming and suppression and is accompanied by commentaries from many of the key players in the field. I've covered only a small part of the many interesting issues brought up in the review.
It is not at all clear what technology will attain "human-level" intelligence first. Robin Hanson proposes brain emulation (e.g. see here). I've been skeptical of emulation and am leaning towards machine learning (e.g. see here). However, given the recent technological advances in connectomics and 3D printing, brain emulation, or rather replication, might not be as distant as I thought. 3D printing is a technology to manufacture any three-dimensional object by sequentially depositing two-dimensional layers. You can find out more about it here, including how to build your own 3D printer. People now regularly use open source software to take any object they want, slice it into two-dimensional layers, and then print it. The technology has reached the point where you can print with any material that can be squirted, including biological material (see video here). People in the field are currently gearing up to print complete organs like kidneys and the liver. It is not overly far-fetched that they could print out an entire brain in the future. Recent progress in connectomics can be tracked here. The current state of the art involves taking electron microscope images of thin slices of neural tissue. The hard part is to reassemble these 2D slices back into a 3D brain, the reverse of 3D printing. However, perhaps what we can do is 3D print the images first to obtain a faithful 3D reconstruction of the brain and then use the model to assist in the software reconstruction. If you had molecular level image resolution, you could even try to print out a functioning brain, complete with docked synaptic vesicles ready to be released!
In my post on panpsychism, a commenter, Matt Sigl, made a valiant defense of the ideas of Koch and Tononi about consciousness. I claimed in my post that panpsychism, where some or all of the constituents of a system possess some elementary form of consciousness, is no different from dualism, which says that mind and body are separate entities. Our discussion, which can be found in the comment thread, made me think more about what it means for a theory to be monistic and understandable. I have now revised my claim to be that panpsychism is either dualist or superfluous. Tononi's idea of integrated information may be completely correct, but panpsychism would not add anything to it. In my view, a monistic theory is one where all the properties of a system can be explained by the fundamental governing rules. Most importantly, there can only be a finite set of rules. A system with an infinite set of rules is not understandable since every situation has its own consequence. There would be no predictability; there would be no science. There would only be history, where we could write down each rule whenever we observed it.
Consider a system of identical particles that can move around in a three dimensional space and interact with each other in a pairwise fashion. Let the motion of these particles obey Newton's laws, where their acceleration is determined by a force that is given by an interaction rule or potential. The proportionality constant between acceleration and force is the mass, which is assigned to each particle. The particles are then given an initial position and velocity. All of these rules can be specified in absolutely precise terms mathematically. Space can be discrete, so that the particles can only occupy a finite or countably infinite number of points, or continuous, where the particles can occupy an uncountably infinite number of points.
Depending on how I define the interactions, select the masses, and specify the initial conditions, various things could happen. For example, I could have an attractive interaction, start all the particles with no velocity at the same point, and they would stay clumped together. This clumped state is a fixed point of the system. If I move one of the particles slightly away from the point and it falls back to the clump, then the fixed point is stable. However, even a stable fixed point doesn't mean all initial conditions will end up clumped. For example, if I have an inverse square law attraction like gravity, then particles can orbit one another or scatter off of each other. For many initial conditions, the particles could just bounce around indefinitely and never settle into a fixed point. For more than two particles, the fate of all initial conditions is generally impossible to predict. However, I claim that the configuration of the system at any given time is explainable or understandable because I could in principle simulate the system from a given specific initial condition and determine its trajectory for any amount of time. For a continuous system, where positions require an infinite amount of information to specify, an understandable system would be one where one could prove that there is always an initial condition that can be specified with a finite amount of information that remains close to any arbitrary initial condition.
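Here is a minimal sketch of the kind of simulation I have in mind: a handful of identical particles with a pairwise inverse square attraction, integrated forward from one specific initial condition. The parameters, the softening term, and the simple Euler scheme are illustrative choices, not a claim about any particular physical system:

```python
# Identical particles obeying Newton's laws with a pairwise inverse-square attraction.
# A small softening term keeps the force finite when particles pass close together.

import numpy as np

rng = np.random.default_rng(0)
n, dim = 5, 3
G, mass, soften = 1.0, 1.0, 0.1
dt, steps = 0.001, 10000

pos = rng.normal(size=(n, dim))        # a specific initial condition
vel = np.zeros((n, dim))               # start at rest

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(n):
        diff = pos - pos[i]                                   # vectors from particle i to the others
        dist3 = (np.sum(diff**2, axis=1) + soften**2) ** 1.5
        acc[i] = G * mass * np.sum(diff / dist3[:, None], axis=0)
    return acc

for _ in range(steps):
    vel += dt * accelerations(pos)
    pos += dt * vel

print(pos)   # the trajectory is fully determined by the rules and the initial condition
```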
If I make the dynamics sufficiently complex then there could be some form of basic chemistry and even biology. This need not be fully quantum mechanical; Bohr-like atoms may be enough. If the system can form sufficiently complex molecules then evolution could take over and generate multi-cellular life forms. At some point, animals with brains could arise. These animals could possess memory and enough computational capability to strategize and plan for the future. There could be an entire ecosystem of plants and animals at multiple scales interacting in highly complex ways. All of this could be understandable in the sense that all of the observed dynamics could be simulated on a big enough computer if you knew the rules and the initial conditions. You may even be lucky enough that almost all initial conditions will lead to complex life.
At this point, all the properties of the system can be completely specified by an outside observer. Understandable means that all of these properties can be shown to arise from a finite set of rules and initial conditions. Now, suppose that some of the animals are also conscious in the sense that they have a subjective experience. The panpsychic hypothesis is that consciousness is a property of some or all of the particles. However, proponents must then explain why even the biggest rock does not seem conscious or why human consciousness disappears when we are in deep sleep. Tononi and Koch try to finesse this problem by saying that it is only when a system has enough integrated information that one notices the effect of the accumulated consciousness. However, bringing in this secondary criterion obviates the panpsychic hypothesis because there is now a systematic way to identify consciousness that is completely consistent with an emergent theory of consciousness. This doesn't dispel the mystery of "the hard problem" of consciousness, namely what exactly happens when the threshold is crossed to give subjective experience. However, the resolution is either that consciousness can be described by the finite set of rules of the constituent particles or there is a dualistic explanation where the brain "taps" into some other system that generates consciousness. Panpsychism does not help in resolving this dilemma. Finally, it might be that the question of whether or not a system has sufficient integrated information to exhibit noticeable consciousness is undecidable, in which case there would be no algorithm to test for consciousness. The best that one could do is to point to specific cases. If this were true then panpsychism does not solve any problem at all. We would never have a theory of consciousness. We would only have examples.
I have a memory of being in a bookstore and picking up a book with the title "The Philosophy of Star Trek". I am not sure how long ago this was or even what city it was in. However, I cannot seem to find any evidence of this book on the web. There is a book entitled "Star Trek and Philosophy: The Wrath of Kant", but that is not the one I recall. I bring this up because in this book that may or may not exist, I remember reading a chapter on the philosophy of transporters. For those who have never watched the television show Star Trek, a transporter is a machine that can dematerialize something and rematerialize it somewhere else, presumably at the speed of light. Supposedly, the writers of the original show invented the device so that they could move people to planet surfaces from the starship without having to use a shuttle craft, for which they did not have the budget to build the required sets.
What the author was wondering was whether the particles of a transported person are the same particles as in the pre-transported person or whether people are reassembled from stock particles lying around at the new location. The implication is that this would illuminate the question of whether what constitutes "you" depends on your constituent particles or just on the information on how to organize the particles. I remember thinking that this is a perfect example of how physics can render questions of philosophy obsolete. What we know from quantum mechanics is that particles are indistinguishable. This means that it makes no sense to ask whether a particle in one location is the same as a particle at a different location or time. A particle is only specified by its quantum properties like its mass, charge, and spin. All electrons are identical. All protons are identical, and so forth. Now, they could be in different quantum states, so a more valid question is whether a transporter transports all the quantum information of a person or just the classical information, which is much smaller. However, this question is really only relevant for the brain since we know we can transplant all the other organs from one person to another. The neuroscience enterprise, Roger Penrose notwithstanding, implicitly operates on the principle that classical information is sufficient to characterize a brain.
Andrew Huxley died last week at the age of 94. Huxley, with his research advisor Alan Hodgkin, proved that the mechanism of action potential propagation in nerve cells was the passage of ions through voltage-gated ion channels in the cell membrane. In the course of their work, they developed the Hodgkin-Huxley model of the neuron and launched the field of computational neuroscience. While their work was a monumental achievement and deserved a Nobel Prize, it still built upon the work of many others. I recommend reading the Nobel addresses of both Huxley (see here) and Hodgkin (see here) for the story of their discovery.
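Their model is compact enough to write down and simulate in a few lines. Below is a minimal sketch of the Hodgkin-Huxley equations with standard textbook squid-axon parameters and a crude Euler integration; it is for illustration only and is not taken from the obituary or the Nobel addresses:

```python
# Hodgkin-Huxley model of a current-driven patch of squid axon membrane,
# standard parameters in modern units, integrated with a simple Euler scheme.

import numpy as np

C = 1.0                                  # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3           # maximal conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_app = 0.01, 50.0, 10.0          # time step (ms), duration (ms), injected current (uA/cm^2)
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
trace = []

for _ in range(int(T / dt)):
    INa = gNa * m**3 * h * (V - ENa)     # sodium current through the m^3 h gated channel
    IK = gK * n**4 * (V - EK)            # potassium current through the n^4 gated channel
    IL = gL * (V - EL)                   # leak current
    V += dt * (I_app - INa - IK - IL) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

print(max(trace))   # with this drive the membrane fires repetitive action potentials
```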
I have noticed that panpsychism, which is the idea that some or all elements of matter possess some form of consciousness, subjective experience, mental awareness, or whatever you would like to call it, seems to be gaining favour these days. Noted neuroscientist Christof Koch has recently suggested that consciousness may be a property of matter like mass or charge. I was just listening to a Philosophy Bites podcast where philosopher Galen Strawson (listen here) was forcefully arguing that panpsychism or micropsychism is in fact the most plausible prior if one is a physicalist or monist (i.e. someone who believes that everything is made of the same stuff). He argued that it is much more plausible for electrons to possess some tiny amount of consciousness than for consciousness to emerge from the interactions of a large number of neurons.
What I want to point out is that panpsychism is a closeted form of dualism (i.e. mind is different from matter). I believe philosopher David Chalmers, who coined the term "the hard problem of consciousness", would agree. Unlike consciousness, mass and charge can be measured and obey well-defined rules. If I were to make a computer simulation of the universe, I could incorporate mass and charge into the physical laws, be they Newton's laws and Maxwell's equations, the Standard Model of particle physics, string theory, or whatever will replace them. However, I have no idea how to incorporate consciousness into any simulation. Deeming consciousness to be a property of matter is no different from Cartesian dualism. Both off-load the problem to a separate realm. You can be a monist or a panpsychist but you cannot be both.
I’m currently at the New Jersey Institute of Technology for the ninth annual Frontiers in Applied and Computational Mathematics conference. Here are the slides for my talk. It’s on computational neuroscience and has nothing to do with obesity. Also, it only seems like lots of slides because of the animations.
I attended a conference on Criticality in Neural Systems at NIH this week. I thought I would write a pedagogical post on the history of critical phenomena and phase transitions since it is a long and somewhat convoluted line of thought to link criticality as it was originally defined in physics to neuroscience. Some of this is a recapitulation of a previous post.
Criticality is about phase transitions, which are changes in the state of matter, such as between gas and liquid. The classic paradigm of phase transitions and critical phenomena is the Ising model of magnetization. In this model, a bunch of spins that can be either up or down (north or south) sit on lattice points. The lattice is said to be magnetized if all the spins are aligned and unmagnetized or disordered if they are randomly oriented. This is a simplification of a magnet where each atom has a magnetic moment that is aligned with a spin degree of freedom of the atom. Bulk magnetism arises when the spins are all aligned. The lowest energy state of the Ising model is for all the spins to be aligned and hence magnetized. If the only thing the spins had to deal with was the interaction energy, then we would be done. What makes the Ising model, and for that matter all of statistical mechanics, interesting is that the spins are also coupled to a heat bath. This means that the spins are subjected to random noise, and the size of this noise is given by the temperature. The noise wants to randomize the spins. The presence of randomness is why there is the word "statistical" in statistical mechanics. What this means is that we can never say for certain what the configuration of a system is, but can only assign probabilities and compute moments of the probability distribution. Statistical mechanics really should have been called probabilistic mechanics.
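For concreteness, here is a minimal sketch of the two-dimensional Ising model with Metropolis Monte Carlo dynamics, showing the competition between the aligning interaction and the randomizing heat bath (lattice size, temperature, and sweep count are arbitrary choices for illustration):

```python
# 2D Ising model with Metropolis dynamics, in units where J = kB = 1.
# The 2D critical temperature is Tc = 2/ln(1+sqrt(2)) ~ 2.27 in these units.

import numpy as np

rng = np.random.default_rng(1)
L, T, sweeps = 16, 2.0, 2000            # lattice size, temperature, Monte Carlo sweeps
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    # sum of the four nearest neighbours with periodic boundaries
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2.0 * spins[i, j] * nb         # energy cost of flipping spin (i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

m = abs(spins.mean())
print(m)   # below Tc this magnetization is near 1; above Tc it drops towards 0
```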
Two of my old colleagues were interviewed on the CBC radio science show Quirks and Quarks recently. This is the show I used to listen to in my youth in Canada. In March, astrophysicist Arif Babul, a classmate at the University of Toronto, talked about recent work he had done on abnormal clumping of dark matter in a collision site between clusters of galaxies. Here is the link. Neuroscientist Sebastian Seung, whom I’ve known since graduate school, talked about his recent book Connectome. Link here. I was impressed by how well both were able to explain their work in clear and simple terms. Their use of metaphors was particularly good. I think these are two very good examples of how to talk about science to the general public.
My post, The gigabit machine, was reposted on the web aggregator site reddit.com recently. Aside from increasing traffic to my blog by tenfold for a few days, the comments on reddit made me realize that I wasn't completely clear in my post. The original post was about a naive calculation of the information content in the brain and how it dwarfed the information content of the genome. Here, I use the term information in the information theoretical sense, which is about how many bits must be specified to define a system. So a single light switch that turns on and off has one bit of information while ten light switches have 10 bits. If we suppose that the brain has about 10^11 neurons, with about 10^4 connections each, then there are about 10^15 total connections. If we make the very gross assumption that each connection can be either "on" or "off", then we arrive at about 10^15 bits. This would be a lower bound on the amount of information required to specify the brain and it is already a really huge number. The genome has 3 billion bases and each base can be one of four types or two bits, so this gives a total of 6 billion bits. Hence, the information contained in the genome is just rounding noise compared to the potential information contained in the brain. I then argued that education and training were insufficient to make up this shortfall and that most of the brain must be specified by uncontrolled events.
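Spelled out as a back-of-the-envelope calculation (the neuron and synapse counts are the standard rough estimates used above, not measurements):

```python
# Rough estimate of brain wiring information versus genome information.
neurons = 1e11                 # ~10^11 neurons in the brain
synapses_per_neuron = 1e4      # ~10^4 connections each
connection_bits = neurons * synapses_per_neuron   # ~10^15 bits if each connection is just on/off

bases = 3e9                    # genome length in base pairs
genome_bits = bases * 2        # 2 bits per base (4 possible bases)

print(connection_bits / genome_bits)   # ~1.7e5: the brain's wiring dwarfs the genome
```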
The criticism I received in the comments on reddit was that this doesn't imply that the genome did not specify the brain. An example that was brought up was the Mandelbrot set, where highly complex patterns can arise from a very simple dynamical system. I thought this was a bad example because it takes a countably infinite amount of information to specify the Mandelbrot set, but I understood the point, which is that a dynamical system could easily generate complexity that appears to have higher information content. I even used such an argument to dispel the notion that the brain must be simpler than the universe in this post. However, the key point is that the high information content is only apparent; the actual information content of a given state is no larger than that contained in the original dynamical system and initial conditions. What this would mean for the brain is that the genome alone could in principle set all the connections in the brain, but these connections would not be independent. There would be correlations or other high order statistical relationships between them. Another way to say this is that while in principle there are 2^(10^15) possible brains, the genome can only specify 2^(6 x 10^9) of them, which is still a large number. Hence, I believe that the conclusions of my original post still hold – the connections in the brain are either set mostly by random events or they are highly correlated (statistically related).
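For readers who have not seen it, the entire rule behind the Mandelbrot set fits in a few lines; this sketch (resolution and iteration count are arbitrary choices) is just to illustrate the commenters' point that a tiny program can generate structure that looks far more information-rich than it is:

```python
# The whole "program" behind the Mandelbrot set: iterate z -> z^2 + c and ask
# whether z stays bounded.

def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

# crude ASCII rendering of the set over the region [-2, 1] x [-1.2, 1.2]
for y in range(21):
    row = ""
    for x in range(61):
        c = complex(-2.0 + 3.0 * x / 60, -1.2 + 2.4 * y / 20)
        row += "#" if in_mandelbrot(c) else " "
    print(row)
```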
The Aspen Center for Physics will be hosting a 3-week long workshop on Physics of Behavior between May 27 and June 16, 2012, with an application deadline of January 31, 2012. The idea of the workshop stems from the understanding that the role of physics in biology is broad, as physical constraints define the strategies and the biological machinery that living systems use to shape their behavior in the dynamic, noisy, and resource-limited physical world. To date, such a holistic, physics-driven picture of behavior has been achieved, arguably, only for bacterial chemotaxis. Can a similar understanding emerge for other, more complex living systems? To begin answering this, we would like to use the Aspen Center workshop to bring together a diverse group of scientists, from field biologists to theoretical physicists, broadly interested in animal behavior. We would like to broaden the horizons of physicists by inviting experts who quantify the behavior of a wide range of model organisms, from molecular circuits to mammals. We would like to explore behavior as possibly optimal responses given the physical and statistical structure of the environment. Our topics will include, in particular, navigation and foraging, active sensing, locomotion and rhythmic behavior, and learning, memory, and adaptive behaviors.
As workshop organizers, we encourage you to apply. We would also like you to encourage other people who are active in this field to apply. We do need to be clear, however, that we cannot guarantee admission to the workshop. Admission to the workshop is granted not by the workshop organizers but by the Admissions Committee of the Center (with some input from the workshop organizers). The Admissions Committee will endeavor to accommodate as many applicants to the workshop as possible, but because of the constraints imposed by the rest of the Aspen Center for Physics program, they may not be able to admit everyone who applies.
We encourage you to visit the web site of the workshop here, and of the Center, http://www.aspenphys.org/, for more information and for application instructions. For those of you unfamiliar with the Center, it is located in lively and beautiful Aspen, CO. It's a great place to work, to enjoy the mountains, and to bring family. The Center partially subsidizes lodging for admitted participants. The Center requires that theorists commit to a minimum stay of two weeks, and a three week stay is preferred. Shorter durations are possible for experimentalists.
We hope you will choose to apply. Please don’t hesitate to contact us if you have questions.
Ila Fiete, UT Austin
Ilya Nemenman, Emory U
Leslie Osborne, U Chicago
William Ryu, U Toronto
Greg Stephens, Princeton U
A follow-up to our PNAS paper on a new theory of steroid-mediated gene induction is now available on PLoS One here. The title and abstract are below. In the first paper, we proposed a general mathematical framework to compute how much protein will be produced from a steroid-mediated gene. It had been noted in the past that the dose response curve of product as a function of steroid amount follows a Michaelis-Menten curve or first order Hill function (e.g. Product = Amax [S]/(EC50 + [S]), where [S] is the added steroid concentration). In our previous work, we exploited this fact and showed that a complete closed form expression for the dose response curve could be written down for an arbitrary number of linked reactions. The formula also indicates how added cofactors could increase or decrease the Amax or EC50. What we do in this paper is show how this expression can be used to predict the mechanism and the position in the reaction sequence at which a given cofactor acts, by analyzing how two cofactors affect the Amax and EC50.
Deducing the Temporal Order of Cofactor Function in Ligand-Regulated Gene Transcription: Theory and Experimental Verification
Edward J. Dougherty, Chunhua Guo, S. Stoney Simons Jr, Carson C. Chow
Abstract: Cofactors are intimately involved in steroid-regulated gene expression. Two critical questions are (1) the steps at which cofactors exert their biological activities and (2) the nature of that activity. Here we show that a new mathematical theory of steroid hormone action can be used to deduce the kinetic properties and reaction sequence position for the functioning of any two cofactors relative to a concentration limiting step (CLS) and to each other. The predictions of the theory, which can be applied using graphical methods similar to those of enzyme kinetics, are validated by obtaining internally consistent data for pair-wise analyses of three cofactors (TIF2, sSMRT, and NCoR) in U2OS cells. The analysis of TIF2 and sSMRT actions on GR-induction of an endogenous gene gave results identical to those with an exogenous reporter. Thus new tools to determine previously unobtainable information about the nature and position of cofactor action in any process displaying first-order Hill plot kinetics are now available.
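To make the dose-response form concrete, here is a minimal sketch of the first order Hill function referred to above and how a shift in EC50 (e.g. from a cofactor) changes the curve; the numerical values are invented for illustration and are not from the paper:

```python
# First-order Hill (Michaelis-Menten) dose-response curve:
#   Product = Amax * [S] / (EC50 + [S])

import numpy as np

def dose_response(S, Amax, EC50):
    return Amax * S / (EC50 + S)

S = np.logspace(-2, 2, 9)          # steroid concentrations (arbitrary units)
baseline = dose_response(S, Amax=100.0, EC50=1.0)
with_cofactor = dose_response(S, Amax=100.0, EC50=0.3)   # e.g. a cofactor that lowers EC50

for s, b, w in zip(S, baseline, with_cofactor):
    print(f"[S]={s:8.2f}  baseline={b:6.1f}  with cofactor={w:6.1f}")
```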