# Systematic fluctuation expansion for neural networks

A new paper, "Systematic fluctuation expansion for neural network activity equations," by Michael Buice, Jack Cowan, and myself has just been uploaded to the q-bio arXiv. The paper arose from a confluence of my desire to adapt moment-hierarchy approaches from kinetic theory to study fluctuations in neural networks and Michael and Jack's field theory formulation of stochastic neural dynamics (see here). In this paper, we show that the two approaches are identical and give a systematic scheme for deriving the equations. As an example, we derive self-consistent equations for the first two moments.

Classically, neural networks have been described either by rate equations, such as the Wilson-Cowan equation of the form $\dot{a}_i = -r a_i + f(\sum_{j} w_{ij} a_j + S_i)$ (and its continuum version), or by networks of (more biophysical) spiking neurons. Although rate equations average over neural spikes, they have been extremely successful in describing many neural phenomena. Wilson and Cowan, Grossberg, Amari, Hopfield, Ermentrout, and many others have used these types of equations to describe phenomena as diverse as associative memory, working memory, persistent activity, hallucinations, orientation tuning, and neural activity waves. In fact, the term neural network has essentially been co-opted to mean a network of rate equations (e.g., a multi-layer perceptron) with a back-propagation learning rule for the weights to perform supervised learning.
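To make the rate equation concrete, here is a minimal sketch of forward-Euler integration of $\dot{a}_i = -r a_i + f(\sum_j w_{ij} a_j + S_i)$. The gain function $f$, the particular weights, and the inputs below are illustrative choices, not values from the paper; a logistic sigmoid is used for $f$ purely as a common example.

```python
import numpy as np

def simulate_rate_network(w, S, r=1.0, dt=0.01, steps=1000):
    """Euler-integrate da_i/dt = -r*a_i + f(sum_j w_ij a_j + S_i).

    w : (N, N) weight matrix
    S : (N,) constant external input
    The gain f is a logistic sigmoid here (an illustrative choice).
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    a = np.zeros(len(S))  # start from zero activity
    for _ in range(steps):
        a = a + dt * (-r * a + f(w @ a + S))
    return a

# Two mutually excitatory units with equal constant drive
w = np.array([[0.0, 0.5],
              [0.5, 0.0]])
S = np.array([0.2, 0.2])
rates = simulate_rate_network(w, S)
```

With $r = 1$ and a sigmoid gain, the activities settle toward a fixed point of $a = f(wa + S)$, so they remain between 0 and 1; by the symmetry of the weights and inputs, both units converge to the same rate.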