# Systematic fluctuation expansion for neural networks

A new paper, “Systematic fluctuation expansion for neural network activity equations”, by Michael Buice, Jack Cowan, and myself has just been uploaded to the q-bio arXiv. The paper arose from a confluence of my desire to adapt moment-hierarchy approaches from kinetic theory to the study of fluctuations in neural networks and Michael and Jack's field-theory formulation of stochastic neural dynamics (see here). In this paper, we show that the two approaches are identical and give a systematic scheme for deriving the equations. As an example, we derive self-consistent equations for the first two moments.

Classically, neural networks have been described either by rate equations, such as the Wilson-Cowan equation of the form $\dot{a}_i = -r a_i + f(\sum_{j} w_{ij} a_j + S_i)$ (and its continuum version), or by networks of (more biophysical) spiking neurons. Although rate equations average over neural spikes, they have been extremely successful in describing many neural phenomena. Wilson and Cowan, Grossberg, Amari, Hopfield, Ermentrout, and many others have used these types of equations to describe phenomena as diverse as associative memory, working memory, persistent activity, hallucinations, orientation tuning, and neural activity waves. In fact, the term “neural network” has essentially been co-opted to imply a network of rate equations (i.e., a multi-layer perceptron) with a back-propagation learning rule for the weights to perform supervised learning.
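To make the rate equation concrete, here is a minimal sketch of Euler-integrating the Wilson-Cowan equation above for a small all-to-all network. The sigmoidal gain function, random weights, and parameter values are illustrative placeholders, not those of any particular study.

```python
import numpy as np

def f(x):
    # sigmoidal gain function (a common, but not unique, choice)
    return 1.0 / (1.0 + np.exp(-x))

def simulate_wilson_cowan(w, S, r=1.0, dt=0.01, T=10.0):
    """Euler-integrate da_i/dt = -r*a_i + f(sum_j w_ij a_j + S_i)."""
    n = w.shape[0]
    a = np.zeros(n)
    for _ in range(int(T / dt)):
        a = a + dt * (-r * a + f(w @ a + S))
    return a

rng = np.random.default_rng(0)
n = 5
w = rng.normal(0.0, 0.5, size=(n, n))   # illustrative random weights
S = rng.normal(0.0, 1.0, size=n)        # constant external input
print(simulate_wilson_cowan(w, S))      # steady-state activities in [0, 1]
```

With $r = 1$ and a sigmoidal $f$, the fixed points satisfy $a_i = f(\sum_j w_{ij} a_j + S_i)$, so the activities settle between 0 and 1.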

However, rate equations do not account for any correlation effects between neurons or regions. They thus cannot be used directly to model phenomena such as synchrony, spike-timing-dependent synaptic potentiation and depression, fluctuations due to finite network size, or any spike-timing-based neural code, all of which may or may not be important for cognition and behaviour. To study these phenomena, it is usually necessary to analyze or simulate networks of spiking neurons, which are computationally more expensive and analytically more difficult than the rate equations.

It would thus be convenient to have a set of equations that “augmented” the population rate equation. For example, it could be a coupled set of equations for the activity at location $x$ and time $t$, $a(x,t)$, and the correlations between two locations, $C(x_1,t_1;x_2,t_2)$. Higher-order correlations could also be included if desired. One can picture heuristically how such a system would behave by considering the effect of the arrival times of inputs on the firing rate or gain of a neuron. It is known that correlations between the inputs can affect the output of a neuron, and differences in the outputs of the neurons in a network can affect the correlations between the firing of different neurons.
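A toy numerical illustration of this last point (not from the paper): binary inputs with the same marginal firing probability, but different pairwise correlations, produce different threshold-crossing rates in a simple summing unit. The common-input correlation scheme and all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_trials, theta = 20, 100_000, 12.0

def output_rate(rho):
    """Fraction of trials where the summed binary input crosses threshold theta.
    Each input spikes with marginal probability p; a fraction rho of inputs
    copies a shared common source, inducing pairwise correlations."""
    p = 0.5
    common = rng.random(n_trials) < p                  # shared source
    private = rng.random((n_trials, n_inputs)) < p     # independent sources
    use_common = rng.random((n_trials, n_inputs)) < rho
    spikes = np.where(use_common, common[:, None], private)
    return np.mean(spikes.sum(axis=1) > theta)

# same marginal input rate, different correlations, different output rate
print(output_rate(0.0), output_rate(0.5))
```

Correlated inputs make large summed fluctuations more likely, so the threshold is crossed more often even though each input's mean rate is unchanged.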

Thus the goal is to begin with a network of spiking neurons (i.e., a microscopic model) and systematically average over the dynamics to generate a set of averaged equations for the moments (i.e., a macroscopic model). In our paper, we considered a Markov model that is consistent with the Wilson-Cowan rate equations as the microscopic theory and explicitly constructed a moment hierarchy. We showed that the equation for the first moment (i.e., the activity) depends on the correlations, the correlations depend on the third-order moments, and so forth. We then argued that a controlled truncation can be applied if we assume that the firing statistics are near Poisson.
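As a sketch of what such a microscopic Markov model looks like (a generic one-population birth-death process, not the model in the paper), here is a Gillespie simulation whose mean-field limit is a Wilson-Cowan-like rate equation. The gain function and parameters are placeholders.

```python
import numpy as np

def gillespie_rate_model(w_self=0.9, S=0.5, N=50, T=200.0, seed=2):
    """Gillespie simulation of a birth-death process for the count n:
      death: n -> n-1 at rate n            (decay with r = 1)
      birth: n -> n+1 at rate N*f(w*n/N+S) (gain-driven activation)
    The mean-field limit for a = n/N is da/dt = -a + f(w*a + S).
    Returns the sample mean and variance of the activity a."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))  # placeholder gain
    t, n, samples = 0.0, 0, []
    while t < T:
        birth = N * f(w_self * n / N + S)
        death = float(n)
        total = birth + death            # birth > 0, so total > 0 always
        t += rng.exponential(1.0 / total)
        if rng.random() < birth / total:
            n += 1
        else:
            n -= 1
        if t > T / 2:                    # discard the transient
            samples.append(n / N)
    a = np.array(samples)
    return a.mean(), a.var()

mean, var = gillespie_rate_model()
print(mean, var)   # finite N gives fluctuations around the mean-field value
```

Averaging such a process over realizations is exactly the step that generates the moment hierarchy: the mean activity couples to the variance, the variance to third-order moments, and so on.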

Previously, Michael and Jack showed that the generating functional for the Markov model can be expressed as a path integral over all possible time-dependent states of the model. The connection between stochastic dynamics and field theory has been exploited in statistical mechanics for decades. A simplistic definition of field theory is the application of asymptotic perturbation methods (such as the method of steepest descents) to integrals in infinite dimensions. We then showed that by applying a series of Legendre transformations and the method of steepest descents, which is called the loop expansion in field theory, we can derive exactly the same moment hierarchy that was obtained using the kinetic-theory approach. A nice feature of the field theory is that the truncation of the moment hierarchy is set by the order of the loop expansion. We derived the equations for the first two moments and showed some examples for all-to-all connectivity.
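As a heuristic for the loop expansion (and not the paper's actual computation), the idea is already visible in one dimension, where the method of steepest descents expands an integral around its saddle point:

$$\int_{-\infty}^{\infty} e^{-N S(x)}\, dx \;\approx\; e^{-N S(x_0)} \sqrt{\frac{2\pi}{N S''(x_0)}}\,\bigl[1 + O(1/N)\bigr], \qquad S'(x_0) = 0.$$

In the field-theoretic setting the integral runs over functions rather than a single variable: the saddle point gives the mean-field (tree-level) equations, and successive orders in the small parameter supply the fluctuation corrections, which is why truncating the moment hierarchy corresponds to truncating the loop expansion.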

## 7 thoughts on “Systematic fluctuation expansion for neural networks”

1. […] Systematic fluctuation expansion for neural networks slides By Carson Chow I gave a lecture on my recent work with Michael Buice and Jack Cowan on deriving generalized activity equations for neural networks.  My slides are here.  The talk is based on the paper we uploaded to the arXiv recently and I summarized here. […]


2. […] Activity Equations, Neural Comp. 22:377-426 (2010) is now in print.  The summary of the paper is here and a PDF can be obtained […]


3. […] This talk was mostly on the paper with Michael Buice and Jack Cowan that I summarized previously here.  However, I also contrasted our work with the recent work of Paul Bressloff who uses a system […]


5. […] previous work on the Kuramoto model (see here and  here) and the “Spike model” (see here).  Our heuristic paper on path integral methods is  here.  Some recent talks and summaries can […]


6. […] which is a phenomenological population activity equation for a set of neurons, which I summarized here. That paper built upon the work that Michael had started in his PhD thesis with Jack Cowan. The […]


7. […] a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here ). Although both formalisms utilize path integrals, they are actually conceptually quite […]
