Whenever I think about the recession and the stimulus package, I’m reminded of the phenomenon of working memory and persistent activity in the brain, which I’ve worked on in the past. I’ll explain the connection at the end of the post. Working memory is the short-term memory we use when we remember a phone number just long enough to call someone and then forget it shortly afterward. Neuroscientists have found neurons in the prefrontal cortex of monkeys that are correlated with the memory. So when you present the monkey with a transient stimulus that it must remember, these neurons start firing and remain activated until the memory is no longer needed. This is a neural correlate of working memory. It implies that there must be bistability (or multistability) in the firing state: both the quiescent baseline and the elevated memory state must be stable.
Hence, either a single neuron is bistable or a network of neurons is bistable. Except for some special circumstances, bistability in single neurons has not been observed. Therefore, the network model is more likely in my opinion. So, how do you get network bistability? Well, that can occur through recurrent excitation. For example, if a group of neurons are connected to each other with excitatory connections, then it is possible that when they fire they excite each other and maintain firing. We can see this with a simple model. Let $r$ be the firing rate of a neural unit (this could be one neuron or a pool of neurons). The firing rate is determined by the input $u$ to the unit through what is called the gain function $f(u)$, which is usually some sigmoidal function. The firing rate obeys the equation $r = f(u)$. Now, recurrent activity means that the input to the network is due to the firing of the other neurons in the network, so we can set $u = w r$, where $w$ is the strength of the recurrent connections, and obtain the condition for a self-consistent activity state to be $r = f(w r)$. Graphically, this looks like this:
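The self-consistency condition can also be solved numerically. Here is a minimal sketch with my own illustrative parameters (a logistic gain $f(u) = 1/(1+e^{-\beta(u-\theta)})$ with made-up $\beta$, $\theta$, and $w$ — not the values behind the figure):

```python
import numpy as np

# Illustrative parameters (my own, not from the figure): logistic gain
# f(u) = 1/(1+exp(-beta*(u-theta))) with recurrent weight w.
beta, theta, w = 10.0, 0.5, 1.0

def f(u):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def g(r):
    """Residual of the self-consistency condition r = f(w*r)."""
    return f(w * r) - r

def bisect(a, b, tol=1e-12):
    """Bisection for a root of g bracketed by [a, b]."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Bracket roots by scanning for sign changes of g on a fine grid over [0, 1]
grid = np.linspace(0.0, 1.0, 2000)
vals = g(grid)
roots = [bisect(grid[i], grid[i + 1])
         for i in range(len(grid) - 1)
         if vals[i] * vals[i + 1] < 0]
print(roots)  # low, middle (r = 0.5), and high firing-rate solutions
```

For these parameters the scan finds three solutions: one near zero, one at $r = 0.5$, and one near one, matching the three intersections described below.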
In the figure, there are three possible solutions given by the intersections of the red and blue lines. Thus far there are no dynamics. We can add some dynamics by considering first-order kinetics via $\tau \dot{r} = -r + f(w r)$, whose fixed points are the self-consistent solutions above. For these dynamics, it is easy to show that the middle solution is unstable while the other two are stable, so we have a simple model of a two-state working memory. A transient stimulus can kick the state from one stable state to the other.
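The kick can be seen by integrating the rate equation directly. This is a hypothetical sketch with the same made-up logistic gain and parameters as above, where a transient input term $I(t)$ is added inside the gain:

```python
import math

# Illustrative parameters (my own, not from the post): logistic gain
# f(u) = 1/(1+exp(-beta*(u-theta))), recurrent weight w, time constant tau.
beta, theta, w, tau = 10.0, 0.5, 1.0, 1.0

def f(u):
    return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

def simulate(r0, stim=lambda t: 0.0, T=40.0, dt=0.01):
    """Euler-integrate tau*dr/dt = -r + f(w*r + I(t)) with external input I(t)."""
    r = r0
    for step in range(int(T / dt)):
        r += (dt / tau) * (-r + f(w * r + stim(step * dt)))
    return r

low = simulate(0.0)   # relaxes to the low-activity stable state
high = simulate(1.0)  # relaxes to the high-activity stable state
# A transient excitatory pulse (t in [5, 7)) kicks the low state to the high one
kicked = simulate(0.0, stim=lambda t: 1.0 if 5.0 <= t < 7.0 else 0.0)
print(low, high, kicked)
```

Starting from rest the network stays near zero, but a brief pulse leaves it parked in the high-activity state long after the input is gone, which is the working-memory mechanism in miniature.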
Now, this is for a single variable, but we can generalize to a network. For example, suppose that there is a network of neural units labeled by $i$, so that the firing rate of each unit is given by $r_i = f(u_i)$. Now if we assume that the input to the unit is due to a sum over the firing rates of the other neurons in the network through a weight function $w_{ij}$, then we can write $u_i = \sum_j w_{ij} r_j$, and we get the equation $r_i = f\left(\sum_j w_{ij} r_j\right)$. There is a nice symmetry between firing rate and input because this also implies that $u_i = \sum_j w_{ij} f(u_j)$, which is the form I like to use. So, now depending on the choice of the weight function you can again get persistent activity of various shapes. If $w$ is a “Mexican hat” function (e.g. a difference of exponentials, $w(x) = A e^{-a|x|} - B e^{-b|x|}$, with short-range excitation and longer-range inhibition), then you can get localized “bumps” of persistent activity. Adding dynamics and taking a continuum limit gives a “neural field” equation of the form

$\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x-y) f(u(y,t))\, dy$
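A discretized version of the field equation shows the bump directly. This is only my own toy illustration, not the model from the papers below: I assume a Mexican hat built from a difference of Gaussians and a hard-threshold (Heaviside) gain, with parameters chosen by hand so that a stable bump exists:

```python
import numpy as np

# Toy discretized neural field (illustrative parameters, not from the papers):
# Mexican-hat weight as a difference of Gaussians, Heaviside gain with
# threshold theta, Euler integration of tau*du/dt = -u + W @ H(u - theta).
L, N = 10.0, 401
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
theta, tau, dt, T = 0.2, 1.0, 0.05, 30.0

def w(d):
    """Mexican hat: short-range excitation minus broader inhibition."""
    return np.exp(-d**2) - 0.5 * np.exp(-d**2 / 4)

W = w(x[:, None] - x[None, :]) * dx  # connectivity matrix with quadrature weight

u = 0.5 * np.exp(-x**2)              # transient localized stimulus as initial state
for _ in range(int(T / dt)):
    u += (dt / tau) * (-u + W @ (u > theta).astype(float))

width = dx * np.sum(u > theta)       # width of the persistent bump
print(width, u.max())
```

The activity settles into a localized bump a couple of space units wide that persists indefinitely, while the rest of the field decays back toward zero.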
which has been analyzed by many people, including myself. Examples of equilibrium bump solutions of these equations are shown here (Guo and Chow, 2005):
Generally, you get a stable big bump and an unstable small bump. The no-activity, or $u = 0$, state is also a solution.
Now, thus far I’ve only described what are called population rate or mean field equations. These equations are interpreted as some sort of average over the spiking dynamics of actual neurons. I’ve been careful not to call the neural units neurons because they need not be. The rigorous transformation of a network of spiking neurons to a network of rate neurons that includes dynamics has yet to be done and is something that Michael Buice and I are actively pursuing. For example, there is nothing that says the dynamics should obey first-order kinetics. However, one can demonstrate that the stationary solutions of a network of spiking neurons can be represented by a rate equation where the gain function corresponds to the firing rate function (FI curve) of a single neuron, as long as the neurons fire asynchronously. Simulations also show that a network of interconnected spiking neurons can exhibit persistent activity. Below is an example of a raster plot of neural activity. Each point represents the time a neuron fires. The firing rate profile is below the raster.
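To make the FI-curve-as-gain-function idea concrete, here is a sketch (my own, not the calculation from the papers) for a leaky integrate-and-fire neuron, $\tau \dot{V} = -V + I$ with threshold $V_{th}$ and reset to zero, whose FI curve has a closed form that we can check against a direct simulation:

```python
import math

# Illustrative LIF neuron: tau*dV/dt = -V + I, spike and reset at Vth.
tau, Vth = 0.02, 1.0  # 20 ms membrane time constant (made-up value)

def fi_curve(I):
    """Closed-form firing rate in Hz for constant input I; zero below threshold."""
    if I <= Vth:
        return 0.0
    return 1.0 / (tau * math.log(I / (I - Vth)))

def simulate_rate(I, T=5.0, dt=1e-5):
    """Spike count per second of the Euler-integrated LIF neuron."""
    V, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        V += (dt / tau) * (-V + I)
        if V >= Vth:
            V, spikes = 0.0, spikes + 1
    return spikes / T

for I in (0.5, 1.5, 2.0):
    print(I, fi_curve(I), simulate_rate(I))
```

This `fi_curve` is exactly the kind of function that plays the role of the gain $f(u)$ in the rate equations above when the network fires asynchronously.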
One big difference between the mean field approximation and the spiking model is that the active state can shut off if the neurons become too synchronized, as seen in this example (Laing and Chow, 2001; Gutkin et al., 2001):
An excitatory stimulus starts the bump, which is then shut off by a second excitatory synchronizing pulse at time 70. Synchrony kills persistent activity because the input to a neuron arrives all at once, and then there is none left to make the neuron fire again after its refractory period is over. Asynchronous or uncorrelated activity is essential for persistent activity.
So now I can make the connection to the economy. I see the economy as a recurrently connected network. Instead of passing synaptic pulses to each other, we pass money. The circulation of money keeps the economy going. The economy also depends on positive feedback. If everyone suddenly decides to stop shopping, then stores and manufacturers will lay off workers, leading to more people with less money to shop. On the other hand, when people start consuming more, stores and manufacturers will start to hire again, leading to more shoppers. In the analogy to neural activity, the firing rate $r$ could represent the amount of money or income a person has, which is a function of the cash input to the person. The input in turn is given by a sum over the money of a subset of other people.
Thus, we can envision that there could be multiple possible stable states for economic activity, depending on the economic gain and weight (connectivity) functions, and that synchronous or correlated activity, like bubbles, can kill activity. We can also see how a stimulus package could knock the economy out of a recession and into a higher stable state. However, it is also clear that the timing of a stimulus is important. For a neural network, the same amount of input spread over a long time is not the same as all of it administered at once. A very brief stimulus is bad because it synchronizes all the neurons, which then cannot maintain persistent activity. However, a very slow stimulus never gives enough input to kick the network to the higher state. This may also be true for the economy. In the neural system, what determines fast or slow is the time constant of the decay of activity. Knowing the time constant of the economy would then seem to be important for timing a stimulus. Obviously, the economy is much, much more complicated than this toy model, but I think it captures some aspects of it.
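The slow-stimulus failure can be illustrated in the simple rate model (the economic reading is only an analogy, and the parameters are again my own). The same total input $A \times D$ is delivered over different durations $D$ with amplitude $A = \text{total}/D$:

```python
import math

# Toy illustration of stimulus timing in the rate model (illustrative
# parameters): same total input A*D, different durations D.
beta, theta, w, tau = 10.0, 0.5, 1.0, 1.0

def f(u):
    return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

def final_state(D, total=2.0, T=60.0, dt=0.01):
    """Integrate tau*dr/dt = -r + f(w*r + I(t)) for a pulse of duration D."""
    A = total / D                       # amplitude chosen so A*D is fixed
    r = 0.0                             # start in the low-activity state
    for step in range(int(T / dt)):
        I = A if step * dt < D else 0.0
        r += (dt / tau) * (-r + f(w * r + I))
    return r

print(final_state(2.0))   # strong pulse over 2 time constants: ends high
print(final_state(20.0))  # same total spread over 20: falls back to low
```

Note that this rate model only captures the too-slow failure mode; the other failure mode, where a too-brief stimulus synchronizes the neurons and kills the activity, does not appear at the mean field level and requires the spiking model.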
C.R. Laing and C.C. Chow, ‘Stationary bumps in networks of spiking neurons’, Neural Comp. 13, 1473-1494 (2001).
B.S. Gutkin, C.R. Laing, C.L. Colby, C.C. Chow, and G.B. Ermentrout, ‘Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity’, J. Comp. Neurosci. 11, 121-134 (2001).
C.C. Chow and S. Coombes, ‘Existence and wandering of bumps in a spiking neural network model’, SIAM Journal on Applied Dynamical Systems 5, 552-574 (2006).
All of these papers can be downloaded from here.