# Criticality

I attended a conference on Criticality in Neural Systems at NIH this week. I thought I would write a pedagogical post on the history of critical phenomena and phase transitions, since the line of thought linking criticality, as it was originally defined in physics, to neuroscience is long and somewhat convoluted. Some of this is a recapitulation of a previous post.

Criticality is about phase transitions, which are changes in the state of matter, such as between gas and liquid. The classic paradigm of phase transitions and critical phenomena is the Ising model of magnetization. In this model, a bunch of spins that can be either up or down (north or south) sit on lattice points. The lattice is said to be magnetized if all the spins are aligned and unmagnetized or disordered if they are randomly oriented. This is a simplification of a magnet, where each atom has a magnetic moment that is aligned with a spin degree of freedom of the atom. Bulk magnetism arises when the spins are all aligned. The lowest energy state of the Ising model is for all the spins to be aligned and hence magnetized. If the only thing the spins had to deal with was the interaction energy, then we would be done. What makes the Ising model interesting, and for that matter all of statistical mechanics, is that the spins are also coupled to a heat bath. This means that the spins are subjected to random noise, and the size of this noise is set by the temperature. The noise wants to randomize the spins. The presence of randomness is why there is the word “statistical” in statistical mechanics. What this means is that we can never say for certain what the configuration of a system is but can only assign probabilities and compute moments of the probability distribution. Statistical mechanics really should have been called probabilistic mechanics.
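To make this concrete, here is a minimal sketch (my own illustration, not from the post) of Metropolis Monte Carlo sampling of the 2D Ising model, assuming a coupling J = 1 and temperature measured in units of J/k_B. It implements exactly the battle described above: flips that lower the energy are always accepted, while flips that raise it are accepted with a Boltzmann probability set by the temperature.

```python
import numpy as np

def metropolis_ising(L=16, T=2.0, sweeps=800, seed=0):
    """Metropolis sampling of the 2D Ising model on an L x L periodic lattice
    with energy H = -sum over neighboring pairs of s_i * s_j (coupling J = 1)."""
    rng = np.random.default_rng(seed)
    # Start from the ordered state; at high T the noise quickly disorders it.
    spins = np.ones((L, L), dtype=int)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum over the four nearest neighbors with periodic boundaries.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb  # energy cost of flipping spin (i, j)
        # Accept the flip with Boltzmann probability min(1, e^{-dE/T}).
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

# |magnetization per spin|: near 1 well below the critical temperature
# (T_c ~ 2.27 for the 2D Ising model), near 0 well above it.
cold = abs(metropolis_ising(T=1.0).mean())
hot = abs(metropolis_ising(T=5.0).mean())
```

Well below the critical temperature the energy wins and the lattice stays magnetized; well above it the noise wins and the magnetization averages out to near zero.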

There is an epic battle between energy (H), which wants to order things, and noise, which wants to disorder things. More technically, disorder is measured by entropy (S), which in this case is the logarithm of the number of microstates (e.g. spin configurations) that are consistent with a given energy (i.e. macrostate). The temperature (T) is a control parameter that sets the scale between the two. This battle is quantified in a free energy F = H - TS that systems try to minimize. Each spin configuration corresponds to a free energy. Energy and entropy are directly antagonistic in the sense that minimizing energy (making spins more aligned) also minimizes entropy and vice versa. The free energy is a function of all the spins, so there is one configuration, or a set of configurations, that minimizes it. If there is also an external field, then the spins will want to align with that as well. I want to point out that the concept of phases and phase transitions arises because we assign spin configurations to a small number of equivalence classes, which are characterized by what is called an order parameter. Incidentally, this is what emergence is about. We say that magnetization is an emergent phenomenon only because we can classify spin configurations into two types. Emergence is about classification. Saying that something is more than the sum of its parts requires that you define that something. I will post more on this later.

This is not to say that classification schemes are arbitrary. Some order parameters have more salience than others. However, it also means that identifying the order parameter for a complex system is nontrivial and sometimes the most crucial step. In the Ising model, a particularly meaningful order parameter is the average spin alignment or magnetization. The order parameter is a real number that covers some finite range, say zero for disordered to one for maximally ordered. Now, it could have been that the order parameter just moved smoothly between one and zero as the temperature increased. However, that is not what happens. The order parameter is zero if the temperature is above what is called the critical temperature. The transition point between order and disorder is called the critical point. In dynamical systems language, the system undergoes a pitchfork bifurcation; the critical point is the bifurcation point.

Computing the free energy exactly by summing over all the configurations is an extremely difficult task. It is only possible for a few systems. One of the tour de force calculations of the twentieth century was Lars Onsager’s solution of the 2D Ising model (the Ising model on a planar lattice). The free energy is proportional to the logarithm of the partition function Z, which is given by a sum over Boltzmann weights, i.e. $Z = Tr e^{-\beta H}$, where Tr is the trace or sum over all configurations, $\beta = 1/k_B T$ is the inverse temperature (in energy units), and $H$ is the energy or Hamiltonian for a particular spin configuration, possibly in the presence of an external magnetic field $h$. The partition function is what people actually compute, and it contains all the information about the system. It is very similar to the generating function in probability theory. Thermodynamic quantities like the magnetic susceptibility are obtained by taking derivatives of Z, just like moments are obtained from derivatives of the generating function.
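For a tiny lattice the trace can be done literally, by enumerating every spin configuration. The sketch below (my illustration; periodic boundaries and coupling J = 1 are assumptions) computes $Z$ by brute force and recovers the magnetization from a derivative of $\log Z$, mirroring the generating function analogy. The cost grows as $2^{L^2}$, which is part of why Onsager’s closed-form solution was such a feat.

```python
import itertools
import numpy as np

def partition_function(L=3, beta=0.5, h=0.0):
    """Brute-force Z = sum over all 2^(L*L) spin configurations of e^{-beta*H},
    for a 2D Ising model with periodic boundaries and
    H = -sum over neighboring pairs of s_i*s_j - h * sum of s_i."""
    Z = 0.0
    for config in itertools.product([-1, 1], repeat=L * L):
        s = np.array(config).reshape(L, L)
        # np.roll pairs each site with one neighbor per axis,
        # so every nearest-neighbor bond is counted exactly once.
        E = -(s * np.roll(s, 1, axis=0)).sum() - (s * np.roll(s, 1, axis=1)).sum()
        E -= h * s.sum()
        Z += np.exp(-beta * E)
    return Z

# The mean magnetization follows from a derivative of log Z, just as moments
# follow from a generating function: <M> = (1/beta) d(log Z)/dh.
beta, eps = 0.5, 1e-4
m = (np.log(partition_function(h=eps, beta=beta))
     - np.log(partition_function(h=-eps, beta=beta))) / (2 * eps * beta)
```

At $h=0$ the up/down symmetry forces the mean magnetization to vanish, which the numerical derivative reproduces.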

Modern statistical mechanics and critical phenomena start with Landau, who made the brilliant conceptual leap that we could simplify phase transitions and critical phenomena by coarse graining the system and constructing the partition function based on symmetry considerations. The energy of a configuration (i.e. Hamiltonian) should be a function of the order parameter. For the Ising model this is the magnetization, which is a vector. In the absence of an external magnetic field, the orientation of the magnetization shouldn’t affect the energy. We can thus write the Hamiltonian as an expansion in terms of factors of the magnetization and external field that respect the symmetries in the system, like $m\cdot m$ and $h\cdot m$, which are rotationally invariant. The Hamiltonian must also have a finite lower bound, since you can’t have infinite negative energy. These considerations yield what is called the Landau-Ginzburg Hamiltonian:

$\beta H = \int d^d x [ \frac{t}{2} m^2(x) + u m^4(x) + \frac{K}{2}(\nabla m)^2 + \cdots - h\cdot m(x)]$

where $x$ is a coarse-grained spatial coordinate and $m(x)$ is the coarse-grained magnetization around $x$. This approach is the primary reason why physicists can be so infuriating to biologists and mathematicians. They will basically look at some complex system, write down some simple Hamiltonian (or set of equations) based on symmetry or what not, and claim that it describes everything that is important about the system. As I’ve argued before, this only works for systems where the constraints imply a unique model, which doesn’t usually hold in biology.
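As an illustration, the Landau-Ginzburg energy can be evaluated numerically on a discretized grid. The sketch below (my own, not from the post) uses a scalar order parameter in one dimension with $h=0$, approximates the gradient term by finite differences, and picks arbitrary coefficient values; it confirms that for $t<0$ a uniform ordered profile has lower energy than the disordered one.

```python
import numpy as np

def lg_energy(m, t, u, K, dx=1.0):
    """Discretized Landau-Ginzburg energy of a scalar field m(x) on a 1D grid
    (h = 0): beta*H = sum over x of [t/2 m^2 + u m^4 + K/2 (dm/dx)^2] dx."""
    grad = np.gradient(m, dx)  # finite-difference approximation of dm/dx
    return np.sum(0.5 * t * m**2 + u * m**4 + 0.5 * K * grad**2) * dx

x = np.linspace(0, 10, 101)
t, u, K = -1.0, 0.25, 1.0           # arbitrary illustrative coefficients
m_bar = np.sqrt(-t / (4 * u))       # uniform minimizer of t/2 m^2 + u m^4 for t < 0
dx = x[1] - x[0]
uniform = lg_energy(np.full_like(x, m_bar), t, u, K, dx)
zero = lg_energy(np.zeros_like(x), t, u, K, dx)
# For t < 0 the ordered uniform state beats m = 0, and any spatial variation
# only adds gradient energy, so the uniform ordered profile wins overall.
```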

The partition function is obtained by summing the Boltzmann weight over all possible spin configurations, which is equivalent to summing over all possible functional forms of $m(x)$. This is why the trace operation in the partition function is a functional or path integral. Doing these integrals is what field theory is about. The simplest approximation to the partition function is to keep only the function that maximizes the integrand. This is called the saddle point approximation. The maximum is obtained when the Hamiltonian is minimal, which is easily found if the magnetization is also uniform in space. The magnetization is then given by the minimum of

$\frac{t}{2} \bar{m}^2 + u \bar{m}^4 + \cdots - h\cdot \bar{m}$    (*)

This is called mean field theory since it has no spatial dependence. The effects of fluctuations are contained in the gradient terms that are ignored in mean field theory.

Consider the case of no external field ($h=0$). If $t$ is positive in (*) then there is a stable fixed point at $\bar{m}=0$. This is the disordered state. If $t$ is negative then $u$ must be positive for there to be a fixed point. $\beta H$ now has a double well shape (see figure) and there are two stable ordered fixed points, corresponding to either up or down magnetization. Spontaneous symmetry breaking will select one of them. $t=0$ is the critical point. This is a pitchfork bifurcation, where the number of stable solutions goes from one to two while the order parameter changes continuously. ($t$ is proportional to $T-T_C$ near the critical point.) In statistical mechanics, a pitchfork bifurcation is called a second order phase transition (because there is a discontinuity in the second derivative of the free energy, while the first derivative remains continuous). There is another type of phase transition for which there is no analogy in dynamical systems. Consider the case where $h$ is nonzero and aligned in the north direction, and $t<0$. What this amounts to is a tilt to the double well potential so that one of the minima is lower than the other; the lower minimum is the lowest energy state. Hence, as $h$ is changed from north to south, the minimum energy magnetization state will change discontinuously from north to south. This is called a first order phase transition because there is a discontinuity in the first derivative of the free energy: the magnetization, i.e. the location of the minimum, jumps.
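Both transitions can be seen by numerically locating the global minimum of (*) truncated at quartic order. In this sketch (mine; $m$ is treated as the scalar component along $h$, found by a simple grid search), lowering $t$ through zero moves the minimum continuously off zero, while flipping a small $h$ at fixed $t<0$ jumps the minimum between the two wells.

```python
import numpy as np

def mean_field_minimum(t, u, h=0.0):
    """Global minimizer of f(m) = t/2 m^2 + u m^4 - h m, found on a fine grid.
    The expansion is truncated at quartic order; m is a scalar."""
    m = np.linspace(-2, 2, 40001)
    f = 0.5 * t * m**2 + u * m**4 - h * m
    return m[np.argmin(f)]

u = 1.0
# Second order: at h = 0 the minimum moves continuously off zero as t drops
# below 0, from m = 0 to m = +/- sqrt(-t/4u) (= +/- 0.5 for t = -1, u = 1).
disordered = mean_field_minimum(t=+1.0, u=u)
ordered = abs(mean_field_minimum(t=-1.0, u=u))
# First order: at t < 0, flipping a small field h tilts the double well and
# flips the global minimum discontinuously between the two wells.
north = mean_field_minimum(t=-1.0, u=u, h=+0.01)
south = mean_field_minimum(t=-1.0, u=u, h=-0.01)
```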

[Figure: double well potential]

Criticality refers to what happens near the critical point. Because of the discontinuity in the derivatives of the free energy, there will be singularities in thermodynamic quantities like the magnetic susceptibility and correlation length. These quantities diverge as $|t|^{-a}$, where $a$ is a positive critical exponent. This power law behavior is a property of criticality. For example, near the critical point $\bar{m}\propto (T_C-T)^{1/2}$ below the critical temperature and $\bar{m}=0$ above. The exponent 1/2 is a critical exponent and is universal for all mean field systems with rotational symmetry. We can similarly show that the magnetic susceptibility and heat capacity also diverge at the critical point as a power law of $t$, each with a specific critical exponent. More importantly for neuroscience, it can be shown that near the critical point the correlation function $\langle m(x)m(x') \rangle - \langle m(x) \rangle^2$ is proportional to $\xi^{(3-d)/2}$ and the correlation length $\xi$ diverges at the critical point as $|t|^{-1/2}$. The apparent universality of critical exponents was discovered experimentally for a bunch of different systems, and Landau’s theory explained it. However, there were also other experiments that showed departures, which meant that mean field theory wasn’t applicable. This led to the renormalization group method for computing the exponents when the effects of fluctuations are important. I will post on universality and RG in the future.
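The exponent 1/2 can be extracted numerically: minimize the quartic mean field energy for a range of small negative $t$ and fit the slope of $\log \bar{m}$ against $\log|t|$. This sketch (mine, with an arbitrary $u$) recovers the mean field magnetization exponent.

```python
import numpy as np

def mean_field_m(t, u=1.0):
    """Magnetization minimizing f(m) = t/2 m^2 + u m^4, found on a fine grid."""
    m = np.linspace(0.0, 1.0, 1_000_001)  # by the up/down symmetry, search m >= 0
    f = 0.5 * t * m**2 + u * m**4
    return m[np.argmin(f)]

ts = -np.logspace(-3, -1, 10)  # small negative reduced temperatures, t < 0
m_bars = np.array([mean_field_m(t) for t in ts])
# On a log-log plot, m_bar versus |t| is a straight line whose slope is the
# mean field critical exponent 1/2.
slope, _ = np.polyfit(np.log(-ts), np.log(m_bars), 1)
```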

The divergence of the correlation length at the critical point is why criticality may be important for neuroscience. At criticality, every part of the system is correlated with every other part, and there are no characteristic length scales since the correlations decay as a power law. Spin clusters oriented the same way will exist at all sizes. This is akin to neuronal avalanches, where the sizes of neuronal assemblies obey a power law. The question of why we seem to see power law behavior everywhere led to the concept of self-organized criticality. The critical point in statistical mechanics requires tuning a control parameter, but these systems are able to self-tune. How this happens is an area of active research. In the future, I will post my thoughts on how criticality may or may not be important for the brain.