Archive for the ‘Kinetic Theory’ Category

Talk in Taiwan

November 1, 2013

I’m currently at the National Center for Theoretical Sciences, Math Division, on the campus of National Tsing Hua University in Hsinchu, for the 2013 Conference on Mathematical Physiology.  The NCTS is perhaps the best-run institution I’ve ever visited. They have made my stay extremely comfortable and convenient.

Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks.  Here is a list of references that go with the talk:

E. Hildebrand, M.A. Buice, and C.C. Chow, ‘Kinetic theory of coupled oscillators,’ Physical Review Letters 98, 054101 (2007). [PRL Online] [PDF]

M.A. Buice and C.C. Chow, ‘Correlations, fluctuations and stability of a finite-size network of coupled oscillators,’ Physical Review E 76, 031118 (2007). [PDF]

M.A. Buice, J.D. Cowan, and C.C. Chow, ‘Systematic fluctuation expansion for neural network activity equations,’ Neural Computation 22, 377-426 (2010). [PDF]

C.C. Chow and M.A. Buice, ‘Path integral methods for stochastic differential equations,’ arXiv:1009.5966 (2010).

M.A. Buice and C.C. Chow, ‘Effective stochastic behavior in dynamical systems with incomplete information,’ Physical Review E 84, 051120 (2011).

M.A. Buice and C.C. Chow, ‘Dynamic finite size effects in spiking neural networks,’ PLoS Computational Biology 9, e1002872 (2013).

M.A. Buice and C.C. Chow, ‘Generalized activity equations for spiking neural networks,’ Frontiers in Computational Neuroscience 7:162 (2013), doi: 10.3389/fncom.2013.00162; arXiv:1310.6934.

Here are links to relevant posts on the topic.

New paper on neural networks

October 28, 2013

Michael Buice and I have a new paper in Frontiers in Computational Neuroscience, which is also on the arXiv (the arXiv version has fewer typos at this point). This paper partially completes the series of papers Michael and I have written about developing generalized activity equations that include the effects of correlations for spiking neural networks. It combines two separate formalisms we have pursued over the past several years. The first was a way to compute finite size effects in a network of coupled deterministic oscillators (e.g. see here, here, here, and here).  The second was to derive a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here). Although both formalisms utilize path integrals, they are conceptually quite different. The first adapted the kinetic theory of plasmas to coupled dynamical systems. The second used ideas from field theory (i.e. a two-particle irreducible effective action) to compute self-consistent moment hierarchies for a stochastic system. This paper merges the two ideas to generate generalized activity equations for a set of deterministic spiking neurons.

New paper on neural networks

March 22, 2013

Michael Buice and I have just published a review paper of our work on how to go beyond mean field theory for systems of coupled neurons. The paper can be obtained here. Michael and I actually pursued two lines of thought on how to go beyond mean field theory, and we show how the two are related in this review. The first line started in trying to understand how to create a dynamic statistical theory of a high dimensional fully deterministic system. We first applied the method to the Kuramoto system of coupled oscillators, but the formalism could apply to any system. Our recent paper in PLoS Computational Biology was an application for a network of synaptically coupled spiking neurons. I’ve written about this work multiple times (e.g. here, here, and here). In this series of papers, we looked at how to compute fluctuations around the infinite system size limit, which defines mean field theory for the system, when the number of neurons is finite. We used the inverse number of neurons as a perturbative expansion parameter, but the formalism could be generalized to expand in any small parameter, such as the inverse of a slow time scale.

The second line of thought concerned how to generalize the Wilson-Cowan equation, a phenomenological population activity equation for a set of neurons, which I summarized here. That paper built upon the work that Michael had started in his PhD thesis with Jack Cowan. The Wilson-Cowan equation is a mean field theory of some system, but it does not specify what that system is. Michael considered the variable in the Wilson-Cowan equation to be the rate (stochastic intensity) of a Poisson process and prescribed a microscopic stochastic system, dubbed the spike model, that was consistent with the Wilson-Cowan equation. He then considered deviations away from pure Poisson statistics. The expansion parameter in this case was more obscure. Away from a bifurcation (i.e. a critical point) the firing statistics would be pure Poisson, but they would deviate near the critical point, so the small parameter was the inverse distance to criticality. Michael, Jack, and I then derived a self-consistent set of equations for the mean rate and rate correlations that generalized the Wilson-Cowan equation. A sketch of the equation itself is given below.
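
For orientation, one common form of the Wilson-Cowan equation for a population activity a(t) is (conventions differ across papers; this is just a representative choice)

\tau\dot{a} = -a + f(w a + I)

where f is a sigmoidal gain function, w a synaptic weight, and I an external input. The spike model supplies a microscopic stochastic process whose Poisson rate obeys an equation of this type.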

The unifying theme of both approaches is that these systems can be described either by a hierarchy of moment equations or equivalently by a functional or path integral. This all boils down to the fact that any stochastic system is equivalently described by its distribution function or by the moments of that distribution. Generally, it is impossible to compute these quantities explicitly, but one can apply perturbation theory to extract meaningful quantities. For a path integral, this involves using Laplace’s method, or the method of steepest descents, to approximate an integral; in the moment hierarchy method it involves finding ways to truncate or close the system. These methods are also directly related to the WKB expansion, but I’ll leave that connection to another post.
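
To make the steepest descents remark concrete, here is the one-variable schematic; the functional version replaces x by a path and the small parameter h by, say, the inverse system size:

\int e^{-S(x)/h}\,dx \approx e^{-S(x^*)/h}\sqrt{\frac{2\pi h}{S''(x^*)}}, \qquad S'(x^*) = 0

The leading term is mean field theory and the corrections come in powers of h.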

New paper on finite size effects in spiking neural networks

January 25, 2013

Michael Buice and I have finally published our paper entitled “Dynamic finite size effects in spiking neural networks” in PLoS Computational Biology (link here). Finishing this paper seemed like a Sisyphean ordeal and it is only the first of a series of papers that we hope to eventually publish. This paper outlines a systematic perturbative formalism to compute fluctuations and correlations in a coupled network of a finite but large number of spiking neurons. The formalism borrows heavily from the kinetic theory of plasmas and statistical field theory and is similar to what we used in our previous work on the Kuramoto model (see here and  here) and the “Spike model” (see here).  Our heuristic paper on path integral methods is  here.  Some recent talks and summaries can be found here and here.


Talk today at Johns Hopkins

December 12, 2012

I’m giving a computational neuroscience lunch seminar today at Johns Hopkins.  I will be talking about my work with Michael Buice, now at the Allen Institute, on how to go beyond mean field theory in neural networks. Technically, I will present our recent work on systematically computing correlations in a network of coupled neurons with a controlled perturbation expansion in the inverse network size. The method uses ideas from kinetic theory with a path integral construction borrowed and adapted by Michael from nonequilibrium statistical mechanics.  The talk is similar to the one I gave at MBI in October.  Our paper on this topic will appear soon in PLoS Computational Biology. The slides can be found here.

Complexity is the narrowing of possibilities

December 6, 2012

Complexity is often described as a situation where the whole is greater than the sum of its parts. While this description is true on the surface, it actually misses the whole point about complexity. Complexity is really about the whole being much less than the sum of its parts. Let me explain. Consider a television screen with 100 pixels that can be either black or white. The number of possible images the screen can show is 2^{100}. That’s a really big number. Most of those images would look like random white noise. However, a small set of them would look like things you recognize, like dogs and trees and salmon tartare coronets. This narrowing of possibilities, or a reduction in entropy to be more technical, increases information content and complexity. However, too much reduction of entropy, such as restricting the screen to be entirely black or white, would also be considered to have low complexity. Hence, what we call complexity is when the possibilities are restricted but not completely restricted.
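
A back-of-the-envelope check of these numbers (a minimal sketch; the count of 10^9 “recognizable” images is a made-up illustrative assumption):

```python
import math

n_pixels = 100
total_images = 2 ** n_pixels             # 2^100 possible black/white images
max_entropy = n_pixels                   # bits, if every image is equally likely

recognizable = 10 ** 9                   # hypothetical number of meaningful images
restricted_entropy = math.log2(recognizable)

print(f"total images:            {total_images:.3e}")
print(f"entropy, unrestricted:   {max_entropy} bits")
print(f"entropy, restricted set: {restricted_entropy:.1f} bits")
# An all-black screen is the fully restricted case: entropy 0 bits.
# Complexity lives between the extremes of 100 bits and 0 bits.
```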

Another way to think about it is to consider a very high dimensional system, like a billion particles moving around. The system would be complex if the attractor of this six billion dimensional system (3 dimensions for position and 3 for velocity of each particle) is a lower dimensional surface or manifold.  The flow of the particles would then be constrained to this attractor. The important thing to understand about the system would then not be the individual motions of the particles but the shape and structure of the attractor. In fact, if I gave you a list of the positions and velocities of each particle as a function of time, you would be hard pressed to discover that there even was a low dimensional attractor. Suppose the particles lived in a box, moved according to Newton’s laws, and interacted only through brief elastic collisions. This is an ideal gas, and what would happen is that the positions of the particles would be uniformly distributed throughout the box while the velocities would obey a normal distribution, called the Maxwell-Boltzmann distribution in physics. The variance of this distribution is proportional to the temperature. The pressure, volume, particle number, and temperature are related by the ideal gas law, PV=NkT, with the Boltzmann constant k set by Nature. An ideal gas at equilibrium would not be considered complex because the attractor is a simple fixed point. However, it would be really difficult to discover the ideal gas law or even the notion of temperature if one only focused on the individual particles. The ideal gas law and all of thermodynamics were discovered empirically and only later justified microscopically through statistical mechanics and kinetic theory. However, knowledge of thermodynamics is sufficient for most engineering applications, like designing a refrigerator. If you make the interactions longer range you can turn the ideal gas into a liquid, and if you start to stir the liquid you can end up with turbulence, which is a paradigm of complexity in applied mathematics. The main difference between an ideal gas and turbulent flow is the dimension of the attractor. In both cases, the attractor dimension is still much smaller than the full range of possibilities.
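
A quick numerical illustration of the equilibrium statement (a sketch in units where the particle mass and Boltzmann constant are 1, so a velocity component is just a Gaussian with variance equal to the temperature):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000          # particles
kT = 1.0               # temperature
V = 1.0                # unit box volume

x = rng.uniform(0.0, 1.0, size=(N, 3))         # uniform positions in the box
v = rng.normal(0.0, np.sqrt(kT), size=(N, 3))  # Maxwell-Boltzmann velocities

# Kinetic theory gives P = (N/V) * m * <v_x^2>, which should match P = N k T / V.
P_measured = (N / V) * v[:, 0].var()
P_ideal = N * kT / V
print(P_measured, P_ideal)                     # agree to ~0.1%
```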

The crucial point is that focusing on the individual motions can make you miss the big picture. You will literally miss the forest for the trees. What is interesting and important about a complex system is not what the individual constituents are doing but how they are related to each other. The restriction to a lower dimensional attractor is manifested in the subtle correlations of the entire system. The dynamics on the attractor can also often be represented by an “effective theory”. Here the use of the word “effective” does not mean that it works but rather that the underlying microscopic theory is superseded by a macroscopic one. Thermodynamics is an effective theory of the interaction of many particles. The recent trend in biology and economics has been to focus on the detailed microscopic interactions (there is push back in economics in what has been dubbed the macro wars). As I will relate in future posts, it is sometimes much more effective (in the “works better” sense) to consider the effective (in the macroscopic sense) theory than a detailed microscopic theory. In other words, there is no “theory” per se of a given system but rather sets of effective theories that are to be selected based on the questions being asked.

Two talks

December 8, 2011

Last week I gave a talk on obesity at Georgia State University in Atlanta, GA. Tomorrow, I will be giving a talk on the kinetic theory of coupled oscillators at George Mason University in Fairfax, VA. Both of these talks are variations of ones I have given before so instead of uploading my slides, I’ll just point to links to previous talks, papers, and posts on the topics.  For obesity, see here and for kinetic theory, see here, here and here.

New paper

November 17, 2011

A new paper in Physical Review E is now available online here.  In this paper Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements. This implies that, given measurements from a single neuron, one might be able to infer properties of the network that it lives in.  We hope to show this in the future. In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here), but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing, or integrating out, unwanted degrees of freedom.  The path integral formalism gives a nice platform on which to perform this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian, and multiplicative.  It is probably not something you would have guessed a priori.
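
One way to see what such an effective equation has to capture, short of the full path integral calculation, is to simulate the deterministic network and record the summed coupling felt by a single oscillator. A minimal sketch for the Kuramoto model (the network size, coupling, and time step are arbitrary assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 100, 1.0, 0.01, 2000
omega = rng.standard_normal(N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases

drive = np.empty(steps)                 # coupling input felt by oscillator 0
for t in range(steps):
    # Pairwise Kuramoto coupling: entry i is (K/N) sum_j sin(theta_j - theta_i).
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    drive[t] = coupling[0]
    theta += dt * (omega + coupling)

# In a single-oscillator Langevin description this drive plays the role of the
# noise; its autocorrelation decays over a finite time, i.e. it is not white.
drive -= drive.mean()
acf = np.correlate(drive, drive, mode="full")[steps - 1:] / steps
print(acf[:5] / acf[0])                 # normalized autocorrelation at small lags
```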

Michael A. Buice¹,² and Carson C. Chow¹
¹Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
²Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln

Talk in Marseille

October 8, 2011

I just returned from an excellent meeting in Marseille. I was quite impressed by the quality of talks, both in content and exposition. My talk may have been the least effective in that it provoked no questions. Although I don’t think it was a bad talk per se, I did fail to connect with the audience. I kind of made the classic mistake of not knowing my audience. My talk was about how to extend a previous formalism that much of the audience was unfamiliar with. Hence, they had no idea why it was interesting or useful. The workshop was on mean field methods in neuroscience and my talk was on how to make finite size corrections to classical mean field results. The problem is that many of the participants of the workshop don’t use or know these methods. The field has basically moved on.

In the classical view, the mean field limit is one where the discreteness of the system has been averaged away and thus there are no fluctuations or correlations. I have been struggling over the past decade trying to figure out how to estimate finite system size corrections to mean field. This led to my work on the Kuramoto model with Eric Hildebrand and particularly Michael Buice. Michael and I have now extended the method to synaptically coupled neuron models. However, to this audience, mean field pertains more to what is known as the “balanced state”. This is the idea put forth by Carl van Vreeswijk and Haim Sompolinsky to explain why the brain seems so noisy. In classical mean field theory, the interactions are scaled by 1/N, where N is the number of neurons, so in the limit of N going to infinity the effect of any single neuron on the population is zero. Thus, there are no fluctuations or correlations. However, in the balanced state the interactions are scaled by 1/\sqrt{N}, so in the mean field limit the fluctuations do not disappear. The brilliant stroke of insight by Carl and Haim was that a self-consistent solution to such a situation is one where the excitatory and inhibitory inputs balance exactly, so the net mean input in the network cancels but the fluctuations do not. In some sense, this is the inverse of the classical notion. Maybe it should have been called “variance field theory”. The nice thing about the balanced state is that it is a stable fixed point and no further tuning of parameters is required. Of course the scaling choice is still a form of tuning, but it is not detailed tuning.
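
A toy numerical contrast of the two scalings (a sketch, not the van Vreeswijk-Sompolinsky model itself: each trial just sums random excitatory and inhibitory inputs of equal mean):

```python
import numpy as np

rng = np.random.default_rng(2)

def net_input(N, scale):
    """Summed input to one neuron from N/2 excitatory and N/2 inhibitory
    presynaptic neurons with random activity, multiplied by the coupling scale."""
    exc = rng.random(N // 2).sum()
    inh = rng.random(N // 2).sum()
    return scale * (exc - inh)

for N in (100, 10_000, 1_000_000):
    classical = np.std([net_input(N, 1 / N) for _ in range(500)])
    balanced = np.std([net_input(N, 1 / np.sqrt(N)) for _ in range(500)])
    print(f"N={N:>9}: 1/N fluctuations {classical:.4f}, 1/sqrt(N) fluctuations {balanced:.4f}")

# With 1/N scaling the input fluctuations vanish as N grows (classical mean
# field); with 1/sqrt(N) scaling the mean cancels but fluctuations stay O(1).
```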

Hence, to the younger generation of theorists in the audience, mean field theory already has fluctuations, and finite size corrections don’t seem that important. This may actually indicate the success of the field: in the past, most computational neuroscientists were trained in either physics or mathematics, where mean field theory has the meaning it has in statistical mechanics. The current generation has been trained entirely in computational neuroscience, with its own canon of common knowledge. I should say that my talk wasn’t a complete failure. It did seem to stir up interest in learning the field theory methods we have developed, as people did recognize that they provide a very useful tool for the problems they are interested in.

Addendum 2011-11-11

Here are some links to previous posts that pertain to the comments above.

https://sciencehouse.wordpress.com/2009/06/03/talk-at-njit/

https://sciencehouse.wordpress.com/2009/03/22/path-integral-methods-for-stochastic-equations/

https://sciencehouse.wordpress.com/2009/01/17/kinetic-theory-of-coupled-oscillators/

https://sciencehouse.wordpress.com/2010/09/30/path-integral-methods-for-sdes/

https://sciencehouse.wordpress.com/2010/02/03/paper-now-in-print/

https://sciencehouse.wordpress.com/2009/02/27/systematic-fluctuation-expansion-for-neural-networks/

Marseille talk

October 4, 2011

I am currently at CIRM in Marseille, France for a workshop on mean field methods in neuroscience.  My slides are here.

Snowbird meeting

May 24, 2011

I’m currently at the biennial SIAM Dynamical Systems Meeting in Snowbird, Utah.  If a massive avalanche were to roll down the mountain and bury the hotel at the bottom, much of applied dynamical systems research in the world would cease to exist.  The meeting has been growing steadily for the past thirty years and has now maxed out the capacity of Snowbird.  The meeting will either eventually have to move to a new venue or restrict the number of speakers. My inclination is to move, but I don’t think that is the most popular sentiment.  Thus far, I have found the invited talks to be very interesting.  Climate change seems to be the big theme this year.  Chris Jones and Raymond Pierrehumbert both gave talks on that topic.  I chaired the session by noted endocrinologist and neuroscientist Stafford Lightman, who gave a very well received talk on the dynamics of hormone secretion. Chiara Daraio gave a very impressive talk on manipulating sound propagation with chains of ball bearings.  She’s basically creating the equivalent of nonlinear optics and electronics in acoustics.  My talk this afternoon is on finite size effects in spiking neural networks.  It is similar but not identical to the one I gave in New Orleans in January (see here).  The slides are here.

2011 JMM talk

January 10, 2011

I’m on my way back from the 2011 Joint Mathematics Meeting.  Yesterday I gave a pedagogical talk on the strategy that Michael Buice and I have employed to analyze finite size effects in networks of coupled spiking neurons.  My slides are here.  We’ve adapted the formalism we used to analyze the finite size effects of the Kuramoto system (see here for a summary) to a system of synaptically coupled phase oscillators.

Talk at Pitt

September 10, 2010

I visited the University of Pittsburgh today to give a colloquium.  I was supposed to have come in February but my plane was cancelled because of a snow storm.  This was not the really big snow storm that closed Washington, DC and Baltimore for a week but a smaller one that hit New England while sparing the DC area.  My flight was on Southwest, and I presume that they run such a tightly coupled flight system, where planes circulate around the country in a “just in time” fashion, that a disturbance in one part of the country affects the rest of it.  So while other airlines only had cancellations in New England, Southwest flights were cancelled for the day all across the US.  It seems that there is a trade-off between business efficiency and robustness.  I drove this time. My talk was on finite size effects in the Kuramoto model, which I’ve given several times already.  However, I have revised the slides on pedagogical grounds and they can be found here.

Slides for second SIAM talk

July 21, 2010

Here are the slides for my SIAM talk on generalizing the Wilson-Cowan equations to include correlations.  This talk was mostly on the paper with Michael Buice and Jack Cowan that I summarized here.  However, I also contrasted our work with the recent work of Paul Bressloff, who uses a system size expansion of the Markov process that Michael and Jack proposed as a microscopic model for the Wilson-Cowan equation in their 2007 paper.  The difference between the two approaches stems from the interpretation of what the Wilson-Cowan equation describes.  In our interpretation, the Wilson-Cowan equation describes the firing rate or stochastic intensity of a Poisson process.  A Poisson distribution is notable because all cumulants are equal to the mean.  Our expansion is in terms of factorial cumulants (we called them normal ordered cumulants in the paper because we didn’t know there was a name for them), which measure deviations from Poisson statistics.  Bressloff, on the other hand, considers the Wilson-Cowan equation to be the average population firing rate of a large population of neurons.  In the infinite size limit, there are no fluctuations.  His expansion is in terms of regular cumulants, with the inverse system size as the small parameter.  In our formulation, the expansion parameter is related to the distance to a critical point, where the expansion would break down.  In essence, we use a Bogoliubov hierarchy of time scales expansion, in which the higher order factorial cumulants decay to steady state much faster than the lower order ones.
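
The Poisson property that underlies our expansion is easy to check numerically: for a Poisson variable the variance (second ordinary cumulant) equals the mean, while the second factorial cumulant vanishes. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 4.0
n = rng.poisson(lam, size=2_000_000).astype(float)

mean = n.mean()
var = n.var()                             # 2nd ordinary cumulant: ~ lam
fac2 = (n * (n - 1)).mean() - mean ** 2   # 2nd factorial cumulant: ~ 0

print(f"mean = {mean:.3f}, variance = {var:.3f}, 2nd factorial cumulant = {fac2:.3f}")
# Nonzero factorial cumulants signal deviations from Poisson statistics,
# which is exactly what our expansion tracks near a critical point.
```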

Two talks at University of Toronto

December 2, 2009

I’m currently at the University of Toronto to give two talks in a series that is jointly hosted by the Physics department and the Fields Institute.  The Fields Institute is like the Canadian version of the Mathematical Sciences Research Institute in the US and is named in honour of Canadian mathematician J.C. Fields, who established the Fields Medal (considered to be the most prestigious prize in mathematics).  The abstracts for my talks are here.

The talk today was a variation on my kinetic theory of coupled oscillators talk.  The slides are here.  I tried to be more pedagogical in this version and because it was to be only 45 minutes long, I also shortened it quite a bit.   However, in many ways I felt that this talk was much less successful than the previous versions.  In simplifying the story, I left out much of the history behind the topic and thus the results probably seemed somewhat disembodied.  I didn’t really get across why a kinetic theory of coupled oscillators is interesting and useful.   Here is the post giving more of the backstory on the topic, which has a link to an older version of the talk as well.  Tomorrow, I’ll talk about my obesity work.

Talk at NJIT

June 3, 2009

I was at the FACM ’09 conference held at the New Jersey Institute of Technology the past two days.  I gave a talk on “Effective theories for neural networks”.  The slides are here.  This was an unsatisfying talk on two counts.  The first was that I didn’t internalize how soon this talk came after the Snowbird conference and so I didn’t have enough time to properly prepare.  I thus ended up giving a talk that provided enough information to be confusing and hopefully thought-provoking but not enough to be understood.  The second problem was that there is a flaw in what I presented.

I’ll give a brief backdrop to the talk for those unfamiliar with neuroscience.  The brain is composed of interconnected neurons and as a proxy for understanding the brain, computational neuroscientists try to understand what a collection of coupled neurons will do.   The state of a neuron is characterized by the voltage across its membrane and the state of its membrane ion channels.  When a neuron is given enough input,  there can be a  massive change of voltage and flow of ions called an action potential.  One of the ions that flows into the cell is calcium, which can trigger the release of neurotransmitter to influence other neurons.  Thus, neuroscientists are highly focused on how and when action potentials or spikes occur.

We can thus model a neural network at many levels.  At the bottom level, there is what I will call a microscopic description where we write down equations for the dynamics of the voltage and ion channels of each neuron.  These neuron models are sometimes called conductance-based neurons, and the Hodgkin-Huxley neuron is the first and most famous of them.  They usually consist of two to four differential equations but can easily involve many more.  On the other hand, if one is more interested in just the spiking rate, then there is a reduced description for that.  In fact, much of the early progress in mathematically understanding neural networks used rate equations, examples being Wilson and Cowan, Grossberg, Hopfield, and Amari.  The question that I have always had was: what is the precise connection between a microscopic description and a spike rate or activity description?  If I start with a network of conductance-based neurons, can I derive the appropriate activity based description?
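
For concreteness, the voltage equation of the Hodgkin-Huxley model, the canonical conductance-based neuron, has the form

C\dot{V} = -g_{Na} m^3 h (V - E_{Na}) - g_K n^4 (V - E_K) - g_L (V - E_L) + I

where the gating variables m, h, and n each obey their own first-order kinetics, giving four differential equations per neuron.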


Snowbird conference

May 20, 2009

I’m currently at the SIAM Dynamical Systems meeting in Snowbird, Utah.  I gave a short version of my talk on calculating finite size effects of the Kuramoto coupled oscillator model using kinetic theory and path integral approaches.  Here is the longer and more informative version of the talk.  I summarized the papers behind this talk here.

Systematic fluctuation expansion for neural networks slides

March 24, 2009

I gave a talk today at the Mathematical Neuroscience workshop on my recent work with Michael Buice and Jack Cowan on deriving generalized activity equations for neural networks.  My slides are here.  The talk is based on the paper we uploaded to the arXiv recently and I summarized here.

Path integral methods for stochastic equations

March 22, 2009

I’m currently in Edinburgh for a Mathematical Neuroscience workshop. I gave a tutorial today on using field theoretic methods to solve stochastic differential equations (SDEs). The slides are here.  The methods I presented have been around for decades but as far as I know they haven’t been collected into a pedagogical review for nonexperts.  Also, there is an entire community of theorists and mathematicians who are unaware of path integral methods.  In particular, I apply the response function formalism stemming from the work of Martin, Siggia, and Rose.  Field theory and diagrammatic methods are a nice way to organize perturbation expansions for nonlinear SDEs.  I plan to write a review paper on this topic in the next few months and will post it here.
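
Schematically, for an SDE \dot{x} = f(x) + g(x)\eta(t) with Gaussian white noise \eta, the response function construction expresses moments as a path integral over the variable x and a response field \tilde{x}, weighted by e^{-S} with action (in the Ito convention; sign and normalization conventions vary)

S[x,\tilde{x}] = \int dt \left[ \tilde{x}(\dot{x} - f(x)) - \frac{1}{2}\tilde{x}^2 g(x)^2 \right]

Perturbation theory then expands around the Gaussian part of the action, with Feynman diagrams organizing the terms.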

Addendum: Jan 20, 2011.  The review paper can be found here.

Kinetic theory of coupled oscillators

January 17, 2009

Last week, I gave a physics colloquium at the Catholic University of America about recent work on using kinetic theory and field theory approaches to analyze finite-size corrections to networks of coupled oscillators.  My slides are here, although they were converted from Keynote so the movies don’t work.  Coupled oscillators arise in contexts as diverse as the brain, the synchronized flashing of fireflies, coupled Josephson junctions, and the unstable modes of the Millennium Bridge in London.  Steve Strogatz’s book Sync gives a popular account of the field.  My talk considers the Kuramoto model

\dot{\theta}_i = \omega_i + \frac{K}{N}\sum_j \sin(\theta_j - \theta_i)   (1)

where the frequencies \omega_i are drawn from a fixed distribution g(\omega). The model describes the dynamics of the phases \theta_i of an all-to-all connected network of oscillators.  It can be considered the weak coupling limit of a set of nonlinear oscillators with different natural frequencies and a synchronizing phase response curve.
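
A minimal simulation of equation (1) using the mean field form of the coupling, tracking the standard synchrony order parameter r = |\frac{1}{N}\sum_j e^{i\theta_j}| (a sketch; the Gaussian frequency distribution and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, dt, steps = 1000, 3.0, 0.01, 3000
omega = rng.standard_normal(N)            # g(omega): standard normal here
theta = rng.uniform(0, 2 * np.pi, N)      # random initial phases

for _ in range(steps):
    # Mean field identity: (K/N) sum_j sin(theta_j - theta_i) = K r sin(psi - theta_i),
    # where r e^{i psi} = (1/N) sum_j e^{i theta_j} is the complex order parameter.
    z = np.exp(1j * theta).mean()
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")
# For a standard normal g, the critical coupling is K_c = 2/(pi g(0)) ≈ 1.60,
# so K = 3 gives a partially synchronized state with r well above zero.
```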


