The Hopfield Hypothesis

In 2000, John Hopfield and Carlos Brody put out an interesting challenge to the neuroscience community. They came up with a neural network, constructed out of simple, well-known neural elements, that could do a simple speech recognition task. The network was robust to noise and to the speed at which the sentences were spoken. They conducted some numerical experiments on the network and provided the “data” to anyone interested. People were encouraged to submit solutions for how the network worked, and Jeff Hawkins of Palm Pilot fame kicked in a small prize for the best answer. The initial challenge with the mock data and the implementation details were published separately in PNAS. Our computational neuroscience journal club at Pitt worked on the problem for a few weeks. We came pretty close to getting the correct answer but missed one crucial element.

Hopfield presented the model as a challenge to serve as an example that sometimes more data won’t help you understand a problem. I’ve extrapolated this thought into the statement that perhaps we already know all the neurophysiology we need to understand the brain but just haven’t put the pieces together in the right way yet. I call this the Hopfield Hypothesis. I think many neuroscientists believe that there are still many unknown physiological mechanisms to be discovered, and so what we need are not more theories but more experiments and data. Even some theorists hold this view. I personally know one very prominent computational neuroscientist who believes that there may be some mechanism, not yet discovered, that is essential for understanding the brain.

Currently, I’m a proponent of the Hopfield Hypothesis. That is not to say there aren’t mechanisms, and important ones at that, yet to be discovered. I’m sure there are, but I do think that much of how the brain functions could be understood with what we already know: the brain is composed of populations of excitatory and inhibitory neurons, with connections that obey synaptic plasticity rules such as long-term potentiation and spike-timing-dependent plasticity, and with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike-frequency adaptation that operate on multiple time scales. Thus far, using these mechanisms we can construct models of working memory, synchronous neural firing, perceptual rivalry, decision making, and so forth. However, we still don’t have the big picture. My sense is that neural systems are highly scale dependent, so as we begin to analyze and simulate larger and more complex networks, we will find new, unexpected properties and get closer to figuring out the brain.
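To make one of these ingredients concrete, here is a minimal sketch of a pair-based spike-timing-dependent plasticity rule. The exponential window is the standard textbook form; the particular parameter values are illustrative, not taken from the post or any specific study:

```python
import math

# Illustrative pair-based STDP parameters (not from the post).
A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau_plus)
    return -A_minus * math.exp(dt / tau_minus)

print(f"causal pairing (dt=+10 ms): {stdp_dw(10.0):+.4f}")
print(f"anti-causal pairing (dt=-10 ms): {stdp_dw(-10.0):+.4f}")
```

The asymmetry of the window (causal pairings strengthen, anti-causal ones weaken) is what makes the rule sensitive to temporal order rather than mere correlation.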


2 thoughts on “The Hopfield Hypothesis”

  1. I am afraid this will be a very long comment.

    To summarize: there are areas where we almost completely lack data and have insufficient means to predict mechanisms, so I would not be at all surprised to one day see something truly different from the long list of things we know so far.

    So let’s see. What are the known physiological mechanisms? And what kinds of physiological mechanisms might still be unknown in neuroscience?

    To my mind, we face three problems in neuroscience (if we talk about theory; to my mind any mechanism must have a theoretical, i.e., dynamical, foundation):

    (1) to construct mathematical (or computational) models at various scales, from the molecular to the membrane to the cell to circuits to tissue to neural nuclei to the whole organ, and then

    (2) to develop the proper mathematical (or computational) techniques to integrate these models, so that processes at one scale affect those at another.

    One example is the Hodgkin–Huxley (HH) model, which integrates molecular (ion channel) dynamics with membrane phenomena. Let us summarize all these levels and their interactions (1)–(2) as the “integrated neural circuitry”.
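    As a reminder of what that integration looks like in practice, here is a minimal sketch of the classic HH equations for the squid giant axon, integrated with forward Euler. The parameters are the standard textbook values; the constant injected current is illustrative:

```python
import math

# Classic Hodgkin-Huxley parameters for the squid giant axon
# (capacitance in uF/cm^2, conductances in mS/cm^2, potentials in mV).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate functions for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration under a constant injected current (uA/cm^2)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
        I_K = g_K * n**4 * (V - E_K)         # potassium current
        I_L = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

    The four coupled equations show the point being made: channel kinetics (the rate functions) and the membrane equation live at different scales yet must be solved together.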

    And let us stop here for a moment.

    It is already with respect to the latter, the second problem, that I see difficulties. We do not even know what we would mean by “unknown physiological mechanisms” at that level of integration, do we? I mean, we do not have sufficient experimental or clinical data to propose and test mechanisms.

    Of course, we can try to theoretically reconstruct the brain piece by piece — assuming we have solved problem (1) — and build a virtual brain in a supercomputer to predict the unknown mechanisms. In fact, that is what some do. I’m more than skeptical about such an approach, mainly because of the next problem.

    (3) to include the environment.

    The functioning brain critically depends on closing various feedback loops via the external environment (motor – external action – sensory data – processing – motor). If I interpret the available data correctly — and I may not — mental functioning does not persist in completely locked-in patients, who have lost the ability to interact with their environment through any behavioral means. All attempts to contact these patients with brain–machine interfaces after they were already completely locked in for some time have failed.

    Simply put, the brain cannot be treated as an isolated system. This means some mechanisms must depend on this interaction: even maintaining the brain, and certainly developing it.

    Of course, we know this already. Yet, I think, this might go one step beyond the argument that the brain is a dissipative structure. Of course, only as an open system can we keep the brain “alive”. This is necessary, though it is probably not sufficient for functioning; that is at least a hypothesis. (And hence we may have to rethink the definition of being alive, but let’s not touch this.)

    So even if — and here I certainly agree — we know enough today about how the brain’s elements function in isolation with all their constituents and internal feedback loops, i.e., as a whole organ, namely “being composed of populations of excitatory and inhibitory neurons with connections that obey synaptic plasticity rules such as long-term potentiation and spike-timing-dependent plasticity with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike-frequency adaptation that operate on multiple time scales”, we do not know well the physiological mechanisms that include the external world, except for some behavioral animal studies and very crude imaging experiments in humans.

    We may call this “epineuroscience”. Just as epigenetics is the study of heritable changes other than changes in the underlying DNA, we would need to look at neural functioning shaped by mechanisms other than the underlying integrated neural circuitry (1)–(2). Still, the question is: do we know enough to tackle this epineuroscience? I doubt it, because we have hardly any data so far.

    There could be some new, that is, completely different learning rule that involves meaningful external feedback. So: two kinds of non-locality: memory and environment.

    Do we have the creativity (and the mathematics at hand) to at least propose the missing new mechanisms, for instance this new non-local learning rule? This, in fact, would be great, because mathematical models are there to predict new mechanisms. If it were only the external feedback, I would not see a major problem. But it must be meaningful feedback, and thus involve from the start mental function in the brain, the memory, while we are still looking for models that explain memory formation (and facilitation).
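    For what it’s worth, one already-proposed candidate for a rule that couples local spike timing to external feedback is reward-modulated STDP, a “three-factor” rule: spike pairings leave a decaying eligibility trace at the synapse, and the weight changes only when a global feedback signal from the environment arrives. A minimal sketch (all parameter values and the `update` function are illustrative, not from the comment):

```python
import math

# Illustrative three-factor (reward-modulated STDP) parameters.
A_plus = 0.01     # pairing amplitude
tau_pair = 20.0   # STDP pairing window (ms)
tau_e = 500.0     # eligibility-trace decay time constant (ms)

def update(w, eligibility, dt_pair, reward, dt_step):
    """One update step: a causal pre->post pairing adds to an eligibility
    trace, the trace decays, and only an external reward signal converts
    the trace into an actual weight change."""
    if dt_pair is not None and dt_pair > 0:        # causal pairing occurred
        eligibility += A_plus * math.exp(-dt_pair / tau_pair)
    eligibility *= math.exp(-dt_step / tau_e)      # trace decays over time
    w += reward * eligibility                      # feedback gates learning
    return w, eligibility

# A pairing alone leaves the weight untouched; a later reward
# converts the (partially decayed) trace into potentiation.
w, e = update(0.5, 0.0, 10.0, reward=0.0, dt_step=100.0)
w2, e2 = update(w, e, None, reward=1.0, dt_step=100.0)
print(f"after pairing only: w = {w}")
print(f"after delayed reward: w = {w2:.5f}")
```

    The non-locality the comment asks for is explicit here: the third factor (reward) originates outside the synapse, and in a closed sensorimotor loop outside the brain entirely.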

    This is how I arrive at the conclusion summarized above.

