Michael Buice and I have finally published our paper entitled “Dynamic finite size effects in spiking neural networks” in PLoS Computational Biology (link here). Finishing this paper seemed like a Sisyphean ordeal and it is only the first of a series of papers that we hope to eventually publish. This paper outlines a systematic perturbative formalism to compute fluctuations and correlations in a coupled network of a finite but large number of spiking neurons. The formalism borrows heavily from the kinetic theory of plasmas and statistical field theory and is similar to what we used in our previous work on the Kuramoto model (see here and here) and the “Spike model” (see here). Our heuristic paper on path integral methods is here. Some recent talks and summaries can be found here and here.
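The paper develops a systematic field-theoretic expansion, but the basic phenomenon it addresses is easy to see numerically. Here is a minimal sketch, entirely my own toy example and not the formalism in the paper, showing that fluctuations of the population-averaged firing rate of N independent Poisson neurons shrink like 1/sqrt(N); this is the lowest-order kind of finite size effect that the perturbative formalism quantifies systematically for coupled networks.

```python
# Toy illustration of finite-size fluctuations (not the paper's formalism):
# the population-averaged rate of N independent Poisson neurons has a standard
# deviation that shrinks like 1/sqrt(N), so std * sqrt(N) stays roughly constant.
import numpy as np

rng = np.random.default_rng(0)
rate = 10.0      # firing rate of each neuron (Hz)
T = 1.0          # counting window (s)
trials = 2000    # number of independent windows

for N in (10, 100, 1000, 10000):
    counts = rng.poisson(rate * T, size=(trials, N))   # spike counts, one row per window
    pop_rate = counts.mean(axis=1) / T                 # population-averaged rate per window
    print(f"N={N:6d}  mean={pop_rate.mean():6.2f} Hz  "
          f"std={pop_rate.std():.3f}  std*sqrt(N)={pop_rate.std() * np.sqrt(N):.2f}")
```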
Creating vs treating a brain
The NAND (Not AND) gate is all you need to build a universal computer. In other words, any computation that can be done by your desktop computer can be accomplished by some combination of NAND gates. If you believe the brain is computable (i.e. can be simulated by a computer), then in principle this is all you need to construct a brain. There are multiple ways to build a NAND gate out of neuro-wetware. A simple example takes just two neurons. A single neuron can act as an AND gate by having a spiking threshold high enough that two simultaneous synaptic events are required for it to fire. This neuron then inhibits a second, tonically active neuron, which therefore falls silent only when the first neuron receives two simultaneous inputs and fires. A network of these NAND circuits can do any computation a brain can do. In this sense, we already have all the elementary components necessary to construct a brain. What we do not know is how to put these circuits together. We do not know how to do this by hand, nor do we know a learning rule that would let a network of neurons wire itself. However, it could be that currently known neural plasticity mechanisms like spike-timing dependent plasticity are sufficient to create a functioning brain. Such a brain may be very different from our brains, but it would be a brain nonetheless.
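Here is a minimal sketch of that two-neuron construction using binary threshold units rather than any biophysical model; the function names and thresholds are purely illustrative.

```python
# A NAND gate from two binary threshold "neurons", following the construction above.
# This is a toy logical sketch, not a biophysical simulation.

def and_neuron(x1, x2):
    # Fires only when both synaptic inputs arrive together (threshold of 2).
    return 1 if x1 + x2 >= 2 else 0

def nand_circuit(x1, x2):
    # The second neuron is tonically active and is silenced by inhibition
    # from the AND neuron, so its output is NOT(x1 AND x2) = NAND.
    tonic_drive = 1
    inhibition = and_neuron(x1, x2)
    return 1 if tonic_drive - inhibition >= 1 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_circuit(a, b))   # prints the NAND truth table
```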
The fact that there are an infinite number of ways of creating a NAND gate out of neuro-wetware implies that there are an infinite number of ways of creating a brain. You could take two neural networks with the same set of neurons and learning rules, expose them to the same set of stimuli, and end up with completely different brains. They could have the same capabilities but be wired differently. The brain could be highly sensitive to initial conditions and noise, so any minor perturbation would lead to an exponential divergence in outcomes. There might be some regularities (like scaling laws) in the connections that could be deduced, but the exact connections would be different. If this were true then the connections would be everything and nothing. They would be so intricately correlated that only if taken together would they make sense. Knowing some of the connections would be useless. The real brain is probably not this extreme since we can sustain severe injuries to the brain and still function. However, the total number of hard-wired conserved connections cannot exceed the number of bits in the genome. The other connections (which are almost all of them) are either learned or are random. We do not know which is which.
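A toy illustration of the same-rule, same-data, different-wiring point (my own sketch, not a brain model): two perceptrons trained with the identical learning rule on identical data, but from different random initial weights, end up with different weights yet the same capability.

```python
# Two "networks" with the same learning rule and the same stimuli but different
# random initial wiring: equal performance, different connection weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
keep = np.abs(X[:, 0] + X[:, 1]) > 0.3          # keep a margin so learning converges
X = X[keep]
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)      # label = which side of the line x0 + x1 = 0

def train(seed, epochs=100):
    r = np.random.default_rng(seed)
    w = r.normal(size=2)                         # different random initial wiring
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:           # misclassified: apply the perceptron rule
                w, b = w + yi * xi, b + yi
    accuracy = np.mean(np.sign(X @ w + b) == y)
    return w, b, accuracy

for seed in (2, 3):
    w, b, acc = train(seed)
    print(f"seed {seed}: weights {np.round(w, 2)}, bias {b:.1f}, accuracy {acc:.2f}")
```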
To clarify my position on the Hopfield Hypothesis, I think we may already know enough to create a brain but we do not know enough to understand our brain. This distinction is crucial. What my lab has been interested in lately is understanding and discovering new treatments for cognitive disorders like autism (e.g. see here). This implies that we need to know how perturbations at the cellular and molecular levels affect the behavioral level. This is obviously a daunting task. Our hypothesis is that the bridge between these two extremes is the canonical cortical circuit consisting of recurrent excitation and lateral inhibition. We and others have shown that such a simple circuit can explain the neural firing dynamics in diverse tasks such as working memory and binocular rivalry (e.g. see here). The hope is that we can connect the genetic and molecular perturbations to the circuit dynamics and then connect the circuit dynamics to behavior. In this sense, we can circumvent the really hard problem of how the canonical circuits are connected to each other. This may not lead to a complete understanding of the brain or the ability to treat all disorders, but it may give insights into how genes and medication act on cognitive function.
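Here is a minimal sketch of what I mean by the canonical circuit, using a generic firing-rate model with made-up parameters rather than any of our published models: two populations with recurrent self-excitation and lateral inhibition, where a brief input switches the circuit into a persistent, winner-take-all state reminiscent of working memory.

```python
# Generic two-population rate model: recurrent excitation + lateral inhibition.
# A brief stimulus to population 1 flips the circuit into a persistent high state.
import numpy as np

def f(x):
    # sigmoidal firing-rate function (arbitrary units)
    return 1.0 / (1.0 + np.exp(-(x - 1.0) / 0.2))

w_exc, w_inh = 2.0, 1.5     # recurrent excitation and lateral inhibition (made-up values)
tau, dt, T = 10.0, 0.1, 300.0
r = np.array([0.0, 0.0])    # firing rates of the two excitatory populations

for step in range(int(T / dt)):
    t = step * dt
    # brief stimulus to population 1 between t = 20 and t = 40
    stim = np.array([1.0 if 20.0 < t < 40.0 else 0.0, 0.0])
    drive = w_exc * r - w_inh * r[::-1] + stim
    r = r + (dt / tau) * (-r + f(drive))
    if step % 500 == 0:
        print(f"t = {t:5.1f}   r1 = {r[0]:.3f}   r2 = {r[1]:.3f}")
```

The persistence comes from the recurrent excitation (the high state sustains itself once the input pushes it past threshold), while the lateral inhibition keeps the competing population silent; the parameter values here are purely illustrative.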
A meal for a day
The Center for Science in the Public Interest has some examples of meals in restaurants that contain the caloric requirements for a whole day. And you doubted the push hypothesis for the obesity epidemic.
The spawn of SPAUN
SPAUN (Semantic Pointer Architecture Unified Network) is a model of a functioning brain out of Chris Eliasmith’s group at the University of Waterloo. I first met Chris almost 15 years ago when I visited Charlie Anderson at Washington University, where Chris was a graduate student. He was actually in the philosophy department (and still is) with a decidedly mathematical inclination. SPAUN is described in Chris’s paper in Science (obtain here) and in a forthcoming book. SPAUN can perform 8 fairly diverse and challenging cognitive tasks using 2.5 million neurons with an architecture inspired by the brain. It takes input through visual images and responds by “drawing” with a simulated arm. It decodes images, extracts features and compresses them, stores them in memory, computes with them, and then translates the output into a motor action. It can count, copy, memorize, and do a Raven’s Progressive Matrices task. While it can’t learn novel tasks, it is pretty impressive.
However, what is most impressive to me about SPAUN is not how well it works but that it mostly implements known concepts from neuroscience and machine learning. The main novelty was putting it all together. This harkens back to what I called the Hopfield Hypothesis, which is that we already know all the elementary pieces for neural functioning. What we don’t know is how they fit and work together. I think one of the problems in computational neuroscience is that we’re too timid. I first realized this many years ago when I saw a talk by roboticist Rodney Brooks. He showed us robots with very impressive capabilities (this was when he was still at MIT) that were just implementing well-known machine learning rules like back-propagation. I recall thinking that robotics was way ahead of us and that reverse engineering may be harder than engineering. I also think that we will likely construct a fully functioning brain before we understand it. It could be that if you connected enough neurons incorporating a set of necessary mechanisms and then exposed the network to the world, it would start to develop and learn cognitive capabilities. However, it would be as difficult to reverse engineer exactly what this constructed brain was doing as it is to reverse engineer a real brain. It may also be computationally undecidable or intractable to determine a priori the essential set of necessary mechanisms or the number of neurons you need. You might just have to cobble something together and try it out. A saving grace may be that these elements may not be unique. There could be a large family of mechanisms that you could draw from to create a thinking brain.
Saving large animals
One story in the news lately is the dramatic increase in the poaching of African elephants (e.g. New York Times). Elephant numbers have plunged in the past few years and their outlook is not good. This is basically true of most large animals like whales, pandas, rhinos, bluefin tuna, whooping cranes, manatees, sturgeon, etc. However, one large animal has done extremely well while the others have languished. In the US it had a population of zero 500 years ago and now it’s probably around 100 million. That animal, as you have probably guessed, is the cow. While wild animals are being hunted to extinction or dying due to habitat loss and climate change, domestic animals are thriving. We have no shortage of cows, pigs, horses, dogs, and cats.
Given that current conservation efforts are struggling to save the animals we love, we may need to try a new strategy. A complete ban on ivory has not stopped the ivory trade, just as a ban on illicit drugs has not stopped drug use. Prohibition does not seem to be a sure way to curb demand. It may be that starting some type of elephant farming is the only way to save the elephants. It could raise revenue to help protect wild elephants and could drop the price of ivory enough to make poaching less profitable. It could also backfire and increase the demand for ivory.
Another counterintuitive strategy may be to sanction limited hunting of some animals. The reintroduction of wolves into Yellowstone National Park has been a resounding ecological success, but it has also angered some people like ranchers and deer hunters. The backlash against the wolf has already begun. One ironic way to save wolves could be to legalize hunting them. This would give hunters an incentive to save and conserve wolves. Given that the sets of hunters and ranchers often have a significant intersection, this could dampen the backlash. There is a big difference in attitudes towards conservation between people who hunt for a living and those who hunt for sport. When it’s a job, we tend to hunt species to extinction, as with buffalo, cod, elephants, and bluefin tuna. However, when it’s for sport, people want to ensure the species thrives. While I realize that this is controversial and many people have a great disdain for hunting, I would suggest that hunting is no less humane, and perhaps more humane, than factory abattoirs.
Two New Papers
I have two new papers in the Journal of Biological Chemistry:
Z Zhang, Y Sun, YW Cho, CC Chow, SS Simons. PA1 Protein, a New Competitive Decelerator Acting at More than One Step to Impede Glucocorticoid Receptor-mediated Transactivation. J Biol Chem:42-58 (2012). [PDF]
JA Blackford, C Guo, R Zhu, EJ Dougherty, CC Chow, SS Simons. Identification of Location and Kinetically Defined Mechanism of Cofactors and Reporter Genes in the Cascade of Steroid-regulated Transactivation. J Biol Chem:40982-95 (2012). [PDF]
Both are applications of our theory for steroid-mediated gene induction. The theory is applicable to any biochemical system where the dose-response curve strictly follows a Michaelis-Menten curve. A summary of the theory can be found here and here. Slides for talks on the topic can be found here. In Zhang et al, we use the theory to predict a mechanism for a protein called PA1. In Blackford et al, we show that DNA is the effective rate-limiting step for gene transcription at steady state, which we dub the concentration limiting step, since there are really no rates at steady state.
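For concreteness, here is a minimal sketch of what “strictly follows a Michaelis-Menten curve” means operationally. This is generic curve fitting with hypothetical numbers, not the gene induction theory itself: the dose-response data should be well fit by response = Amax · dose / (EC50 + dose).

```python
# Fit a hypothetical dose-response data set to a Michaelis-Menten form.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(dose, amax, ec50):
    return amax * dose / (ec50 + dose)

# hypothetical dose-response measurements (arbitrary units)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
response = np.array([0.9, 2.4, 6.1, 11.0, 15.2, 17.6, 18.9])

params, _ = curve_fit(michaelis_menten, dose, response, p0=[20.0, 5.0])
amax, ec50 = params
residuals = response - michaelis_menten(dose, amax, ec50)
print(f"Amax = {amax:.1f}, EC50 = {ec50:.1f}, max residual = {np.abs(residuals).max():.2f}")
```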
Misconceptions about the Debt
The US National Debt is currently slightly above the US GDP, which has the pundit class up in arms about how we will be imposing a huge cost on our children and grandchildren, or that we are at risk of interest rates skyrocketing and ending up like Greece. Another worry is that the US will end up printing lots of money to service the debt and head into hyperinflation like Zimbabwe or Weimar Germany. However, much of this hand-wringing is misplaced. None of these outcomes necessarily applies to a nation that issues its own currency. Japan’s debt is well over twice its GDP, yet it still has low interest rates and is even slightly deflationary.
The first misconception is that the US government just prints money when it wants to. Actually, the Federal Reserve creates money by buying US Treasury bonds or other debt and issuing money in return. Currently, it is buying a lot of mortgage-backed securities in an attempt to increase the money supply and boost the economy. However, you will notice that inflation isn’t particularly high, and that is because even though there is more public money there is much less private money. Companies aren’t investing because they see no demand in a depressed economy. If and when the US economy starts to grow again, this extra money becomes a threat for inflation. The Federal Reserve could then sell the bonds it owns and simply destroy the money it receives to reduce the money supply. However, remember that true inflation is a situation where prices and wages both increase. In such a case, the big losers are the creditors. If you have a big mortgage, nothing is better than inflation. A situation where prices increase but your wages don’t is not inflation. That is your standard of living going down.
The second misconception is that foreigners like China and Japan hold all of our debt. In actuality, the US public and government entities hold about two thirds of our debt, and foreigners hold the other third, or about five trillion dollars. So we really are mostly in debt to ourselves. Our situation is not like a Dickens novel where a child has to go to debtor’s prison because of the father’s debts. Rather, it is more like a family where some of the children owe money to the other children. A US default on the domestically held part of the debt would just result in a massive redistribution of money within the US. So, if you don’t own a lot of US Treasury bonds, you really shouldn’t worry about a default. Of course, a default could cause other economic problems, but that is a side effect.
The third misconception is that the US is in massive debt because government spending has increased drastically. Actually, the major reason the debt and deficit are so high is that the economy is depressed. This reduces tax revenue and increases automatic outlays like unemployment insurance and food stamps. Additionally, the Bush tax cuts and the two unfunded wars of the past decade have cost about four trillion dollars. The fiscal cliff that we just partially averted was not a debt crisis. It would have been a massive hit to GDP through a tax increase and a spending reduction; the danger was reducing the debt too fast and inducing another recession. What we need now above all is to increase economic growth. We have a massive unemployment problem, not a debt crisis. Right now, interest rates are extremely low. We should be borrowing as much money as we can to invest in our infrastructure and promote growth.
A corollary of the too-much-spending misconception is the claim that we need entitlement reform. The truth is that we need healthcare reform. Social Security is actually in fairly decent shape. Yes, the retiring baby boomer population will be a large burden, but that can be solved with some minor tweaks. The real problem is that Medicare and Medicaid costs are increasing much faster than GDP growth, although I have argued before that spending 80% of our GDP on healthcare may not be so unreasonable (see here). Also, discretionary spending like scientific research, national parks, infrastructure projects and so forth is a minuscule part of the federal budget. We could double it and it would just be rounding noise.
The fourth misconception is that people will suddenly stop buying US bonds and interest rates will increase, leading to a Greek-like situation. This cannot happen because the US controls its own currency. The Federal Reserve can always buy enough bonds to keep interest rates low. What would happen instead is that the US dollar would go down in value compared to other currencies, which would only help exports. Greece is stuck with the euro, and most of its debt is owned by foreign banks. The forced austerity has also caused its economy to shrink, which makes servicing the debt even more difficult. If Greece had its own currency, it could simply devalue it or inflate its way out of the debt. The solution for Greece is either to default, to have the European Central Bank buy its debt (an effective default), or for the European Central Bank to induce inflation.
The Land Sub Experiment
Gary Taubes penned a column in Nature last month arguing for a rigorous test of the energy balance hypothesis versus what he calls the hormonal hypothesis for the cause of obesity. Taubes writes:
Before the Second World War, European investigators believed that obesity was a hormonal or regulatory disorder. Gustav von Bergmann, a German authority on internal medicine, proposed this hypothesis in the early 1900s.
The theory evaporated with the war. After the lingua franca of science switched from German to English, the German-language literature on obesity was rarely cited. (Imagine the world today if physicists had chosen to ignore the thinking that emerged from Germany and Austria before the war.)
Instead, physicians embraced the ideas of the University of Michigan physician Louis Newburgh, who argued that obese individuals had a “perverted appetite” that failed to match the calories that they consumed with their bodies’ metabolic needs. “All obese persons are alike in one fundamental respect,” Newburgh insisted, “they literally overeat.” This paradigm of energy balance/overeating/gluttony/sloth became the conventional, unquestioned explanation for why we get fat. It is, as Bernard would say, the fixed idea.
This history would be no more than an interesting footnote in obesity science if there were not compelling reason to believe that the overeating hypothesis has failed. In the United States, and elsewhere, obesity and diabetes rates have climbed to crisis levels in the time that Newburgh’s energy-balance idea has held sway, despite the ubiquity of the advice based on it: if we want to lose fat, we have to eat less and/or move more. Yet rather than blame the advice, we have taken to blaming individuals for not following it ‘properly’.
The alternative hypothesis — that obesity is a hormonal, regulatory defect — leads to a different prescription. In this paradigm, it is not excess calories that cause obesity, but the quantity and quality of carbohydrates consumed. The carbohydrate content of the diet must be rectified to restore health.
As I have argued before (see here and here), these two hypotheses are not conflicting. The question of whether or not carbs make you fat is not an either-or issue but a quantitative one. I also agree that we don’t yet know the answer and a definitive carefully controlled experiment is required. I call this the “Land Sub Experiment” because what we need to do is to completely sequester individuals from the outside world for up to a year or more so that we can precisely measure everything they eat and how much energy they expend. We can then compare a group that consumes mostly carbs to one that doesn’t. The NIH will actually be involved in the NuSi study that Taubes describes and Kevin Hall is directly involved in the planning. I anxiously await the outcome. On a side note, a recent meta-analysis (see here) reports that being overweight actually lowers your mortality rate.
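On the quantitative point, here is a minimal sketch of the kind of toy energy balance model that frames the question. It is my own illustration with assumed parameter values, not our published models or the NuSi protocol: in this toy model a sustained 100 kcal/day difference in intake shifts steady-state body weight by only a few kilograms and takes years to play out, which is one reason loosely controlled diet studies struggle to distinguish the hypotheses and why a sequestered “land sub” study is needed.

```python
# Toy one-compartment energy balance model with assumed parameters:
# weight change = (intake - expenditure) / energy density of tissue change.
rho = 7700.0      # assumed energy density of body-weight change (kcal per kg)
eps = 22.0        # assumed weight-dependent energy expenditure (kcal per kg per day)
K = 500.0         # assumed weight-independent expenditure (kcal per day)

def simulate(intake, w0=70.0, days=2000):
    w = w0
    for _ in range(days):
        expenditure = K + eps * w
        w += (intake - expenditure) / rho   # one-day Euler step
    return w

print(f"intake 2100 kcal/day -> {simulate(2100.0):.1f} kg")
print(f"intake 2200 kcal/day -> {simulate(2200.0):.1f} kg")
```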