## Archive for the ‘Physics’ Category

### Talk in Taiwan

November 1, 2013

I’m currently at the National Center for Theoretical Sciences, Math Division, on the campus of the National Tsing Hua University, Hsinchu for the 2013 Conference on Mathematical Physiology.  The NCTS is perhaps the best-run institution I’ve ever visited. They have made my stay extremely comfortable and convenient.

Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks.  Here is a list of references that go with the talk:

E. Hildebrand, M.A. Buice, and C.C. Chow, “Kinetic theory of coupled oscillators,” Physical Review Letters 98, 054101 (2007). [PRL Online] [PDF]

M.A. Buice and C.C. Chow, “Correlations, fluctuations and stability of a finite-size network of coupled oscillators,” Physical Review E 76, 031118 (2007). [PDF]

M.A. Buice, J.D. Cowan, and C.C. Chow, “Systematic fluctuation expansion for neural network activity equations,” Neural Computation 22, 377-426 (2010). [PDF]

C.C. Chow and M.A. Buice, “Path integral methods for stochastic differential equations,” arXiv:1009.5966 (2010).

M.A. Buice and C.C. Chow, “Effective stochastic behavior in dynamical systems with incomplete information,” Physical Review E 84, 051120 (2011).

M.A. Buice and C.C. Chow, “Dynamic finite size effects in spiking neural networks,” PLoS Computational Biology 9, e1002872 (2013).

M.A. Buice and C.C. Chow, “Generalized activity equations for spiking neural networks,” Frontiers in Computational Neuroscience 7:162 (2013), doi:10.3389/fncom.2013.00162, arXiv:1310.6934.

Here are links to relevant posts on the topic.

### New paper on neural networks

October 28, 2013

Michael Buice and I have a new paper in Frontiers in Computational Neuroscience, which is also available on the arXiv (the arXiv version has fewer typos at this point). This paper partially completes the series of papers Michael and I have written about developing generalized activity equations that include the effects of correlations for spiking neural networks. It combines two separate formalisms we have pursued over the past several years. The first was a way to compute finite size effects in a network of coupled deterministic oscillators (e.g. see here, here, here and here).  The second was to derive a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here). Although both formalisms utilize path integrals, they are conceptually quite different. The first formalism adapted the kinetic theory of plasmas to coupled dynamical systems. The second used ideas from field theory (i.e. a two-particle irreducible effective action) to compute self-consistent moment hierarchies for a stochastic system. This paper merges the two ideas to generate generalized activity equations for a set of deterministic spiking neurons.

### Richard Azuma, 1930 – 2013

September 20, 2013

I was saddened to learn that Richard “Dick” Azuma, who was a professor in the University of Toronto Physics department from 1961 to 1994 and emeritus after that, passed away yesterday. He was a nuclear physicist par excellence and chair of the department when I was there as an undergraduate in the early ’80s. I was in the Engineering Science (physics option) program, which was an enriched engineering program at UofT. I took a class in nuclear physics with Professor Azuma during my third year. He brought great energy and intuition to the topic. He was one of the few professors I would talk to outside of class, and one day I asked if he had any open summer jobs. He went out of his way to secure a position for me at the nuclear physics laboratory TRIUMF in Vancouver in 1984. That was the best summer of my life. The lab was full of students from all over Canada and I remain good friends with many of them today. I worked on a meson scattering experiment and although I wasn’t of much use to the experiment, I did get to see firsthand what happens in a lab. I wrote a fourth year thesis on some of the results from that experiment. I last saw Dick in 2010 when I went to Toronto to give a physics colloquium. He was still very energetic and as engaged in physics as ever. We will all miss him greatly.

### New paper on neural networks

March 22, 2013

Michael Buice and I have just published a review paper of our work on how to go beyond mean field theory for systems of coupled neurons. The paper can be obtained here. Michael and I actually pursued two lines of thought on how to go beyond mean field theory, and we show how the two are related in this review. The first line started in trying to understand how to create a dynamic statistical theory of a high dimensional fully deterministic system. We first applied the method to the Kuramoto system of coupled oscillators but the formalism could apply to any system. Our recent paper in PLoS Computational Biology was an application to a network of synaptically coupled spiking neurons. I’ve written about this work multiple times (e.g. here, here, and here). In this series of papers, we looked at how you can compute fluctuations around the infinite system size limit, which defines mean field theory for the system, when you have a finite number of neurons. We used the inverse number of neurons as a perturbative expansion parameter, but the formalism could be generalized to expand in any small parameter, such as the inverse of a slow time scale.

The second line of thought was with regard to the question of how to generalize the Wilson-Cowan equation, a phenomenological population activity equation for a set of neurons, which I summarized here. That work built upon what Michael had started in his PhD thesis with Jack Cowan. The Wilson-Cowan equation is a mean field theory of some system, but it does not specify what that system is. Michael considered the variable in the Wilson-Cowan equation to be the rate (stochastic intensity) of a Poisson process and prescribed a microscopic stochastic system, dubbed the spike model, that was consistent with the Wilson-Cowan equation. He then considered deviations away from pure Poisson statistics. The expansion parameter in this case was more obscure. Away from a bifurcation (i.e. a critical point) the firing statistics would be pure Poisson, but they would deviate near the critical point, so the small parameter was the inverse distance to criticality. Michael, Jack and I then derived a self-consistent set of equations for the mean rate and rate correlations that generalized the Wilson-Cowan equation.

The unifying theme of both approaches is that these systems can be described either by a hierarchy of moment equations or equivalently by a functional or path integral. This all boils down to the fact that any stochastic system is equivalently described by a distribution function or by the moments of that distribution. Generally, it is impossible to compute these quantities explicitly, but one can apply perturbation theory to extract meaningful quantities. For a path integral, this involves using Laplace’s method or the method of steepest descents to approximate an integral, while in the moment hierarchy method it involves finding ways to truncate or close the system. These methods are also directly related to the WKB expansion, but I’ll leave that connection to another post.
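To make the path integral side concrete, here is a minimal numerical sketch of Laplace’s method (a generic textbook example, not taken from the papers above): for large $N$, an integral of the form $\int e^{-N f(x)}\,dx$ is dominated by the minimum of $f$.

```python
import math

# Laplace's method: for large N, I(N) = \int exp(-N f(x)) dx is approximated by
# expanding f around its minimum x0:
#     I(N) ~ exp(-N f(x0)) * sqrt(2*pi / (N * f''(x0)))
def f(x):
    return math.cosh(x) - 1.0        # minimum at x0 = 0, with f(x0) = 0, f''(x0) = 1

N = 50
dx = 1e-3
# brute-force quadrature on [-5, 5] (the integrand is negligible outside)
quadrature = sum(math.exp(-N * f(-5 + i * dx)) * dx for i in range(10001))
laplace = math.sqrt(2 * math.pi / N)  # closed-form Laplace estimate

print(abs(quadrature - laplace) / quadrature)  # relative error ~ 1/(8N), well under 1%
```

The leftover relative error shrinks like $1/N$, which is exactly the kind of controlled small parameter the perturbative expansions above exploit.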

### Mass

February 10, 2013

Since the putative discovery of the Higgs boson this past summer, I have heard and read multiple attempts at explaining what exactly this discovery means. They usually go along the lines of “The Higgs mechanism gives mass to particles by acting like molasses in which particles move around …” More sophisticated accounts will then attempt to explain that the Higgs boson is an excitation in the Higgs field. However, most of the explanations I have encountered assume that most people already know what mass actually is and why particles need to be endowed with it. Given that my seventh grade science teacher didn’t really understand what mass was, I have a feeling that most nonphysicists don’t really have a full appreciation of mass.

To start out, there are actually two kinds of mass. There is inertial mass, which is the resistance to acceleration and is the mass that goes into Newton’s second law $F = m a$, and there is gravitational mass, which is like the “charge” of gravity. The more gravitational mass you have, the stronger the gravitational force. Although they didn’t need to be, these two masses happen to be the same.  The equivalence of inertial and gravitational mass is one of the deepest facts of the universe and is the reason that all objects fall at the same rate. Galileo’s apocryphal Leaning Tower of Pisa experiment was a proof that the two masses are the same. You can see this by noting that the gravitational force on an object is given by $F = G M m_g/r^2$, so Newton’s second law reads $m_i a = G M m_g/r^2$. If $m_g = m_i$, the mass cancels and the acceleration $a = G M/r^2$ is the same for every object, no matter how heavy.

### New paper on finite size effects in spiking neural networks

January 25, 2013

Michael Buice and I have finally published our paper entitled “Dynamic finite size effects in spiking neural networks” in PLoS Computational Biology (link here). Finishing this paper seemed like a Sisyphean ordeal and it is only the first of a series of papers that we hope to eventually publish. This paper outlines a systematic perturbative formalism to compute fluctuations and correlations in a coupled network of a finite but large number of spiking neurons. The formalism borrows heavily from the kinetic theory of plasmas and statistical field theory and is similar to what we used in our previous work on the Kuramoto model (see here and  here) and the “Spike model” (see here).  Our heuristic paper on path integral methods is  here.  Some recent talks and summaries can be found here and here.

### Talk today at Johns Hopkins

December 12, 2012

I’m giving a computational neuroscience lunch seminar today at Johns Hopkins.  I will be talking about my work with Michael Buice, now at the Allen Institute, on how to go beyond mean field theory in neural networks. Technically, I will present our recent work on systematically computing correlations in a network of coupled neurons with a controlled perturbation expansion in the inverse network size. The method uses ideas from kinetic theory with a path integral construction borrowed and adapted by Michael from nonequilibrium statistical mechanics.  The talk is similar to the one I gave at MBI in October.  Our paper on this topic will appear soon in PLoS Computational Biology. The slides can be found here.

### Complexity is the narrowing of possibilities

December 6, 2012

Complexity is often described as a situation where the whole is greater than the sum of its parts. While this description is true on the surface, it actually misses the whole point about complexity. Complexity is really about the whole being much less than the sum of its parts. Let me explain. Consider a television screen with 100 pixels that can be either black or white. The number of possible images the screen can show is $2^{100}$. That’s a really big number. Most of those images would look like random white noise. However, a small set of them would look like things you recognize, like dogs and trees and salmon tartare coronets. This narrowing of possibilities, or a reduction in entropy to be more technical, increases information content and complexity. However, too much reduction of entropy, such as restricting the screen to be entirely black or white, would also be considered to have low complexity. Hence, what we call complexity is when the possibilities are restricted but not completely restricted.
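The narrowing of possibilities can be made quantitative with a back-of-the-envelope calculation. In this sketch the count of $2^{30}$ “recognizable” images is a made-up number purely for illustration:

```python
import math

# Entropy (in bits) of a uniform distribution over the set of images
# a 100-pixel black/white screen is allowed to show.
def entropy_bits(num_allowed):
    return math.log2(num_allowed)

pixels = 100
print(entropy_bits(2 ** pixels))  # 100.0 bits: any pattern allowed -- pure noise
print(entropy_bits(2 ** 30))      # 30.0 bits: restricted but rich -- the complex regime
print(entropy_bits(1))            # 0.0 bits: one pattern allowed -- trivial again
```

Both extremes, maximal entropy and zero entropy, are simple; complexity lives in between.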

Another way to think about it is to consider a very high dimensional system, like a billion particles moving around. A complex system would be one in which the attractor of this six-billion-dimensional system (3 for position and 3 for velocity of each particle) is a lower dimensional surface or manifold.  The flow of the particles would then be constrained to this attractor. The important thing to understand about the system would then not be the individual motions of the particles but the shape and structure of the attractor. In fact, if I gave you a list of the positions and velocities of each particle as a function of time, you would be hard pressed to discover that there even was a low dimensional attractor. Suppose the particles lived in a box and they moved according to Newton’s laws and only interacted through brief elastic collisions. This is an ideal gas, and what would happen is that the positions of the particles would become uniformly distributed throughout the box while the velocities would obey a normal distribution, called a Maxwell-Boltzmann distribution in physics. The variance of this distribution is proportional to the temperature. The pressure, volume, particle number and temperature will be related by the ideal gas law, $PV = NkT$, with the Boltzmann constant $k$ set by Nature. An ideal gas at equilibrium would not be considered complex because the attractor is a simple fixed point. However, it would be really difficult to discover the ideal gas law or even the notion of temperature if one only focused on the individual particles. The ideal gas law and all of thermodynamics were discovered empirically and only later justified microscopically through statistical mechanics and kinetic theory. However, knowledge of thermodynamics is sufficient for most engineering applications, like designing a refrigerator.
If you make the interactions longer range, you can turn the ideal gas into a liquid, and if you start to stir the liquid, you can end up with turbulence, which is a paradigm of complexity in applied mathematics. However, the main difference between an ideal gas and turbulent flow is the dimension of the attractor. In both cases, the attractor dimension is still much smaller than the full range of possibilities.
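The ideal gas picture above can be sketched numerically. This is a sampling sketch, not a collision simulation: it draws velocities from the equilibrium distribution directly, in units chosen so $k = m = 1$, with all numbers illustrative:

```python
import random
import statistics

# At equilibrium the x-velocities of an ideal gas follow a normal
# (Maxwell-Boltzmann) distribution with variance kT/m, and kinetic theory
# gives the pressure as P = N m <v_x^2> / V, reproducing PV = NkT.
random.seed(0)
k, m, T, N, V = 1.0, 1.0, 2.5, 100_000, 10.0

vx = [random.gauss(0.0, (k * T / m) ** 0.5) for _ in range(N)]
var = statistics.pvariance(vx)   # temperature sets the variance: close to kT/m = 2.5
P = N * m * var / V              # kinetic-theory pressure
print(P * V / (N * k * T))       # close to 1, i.e. PV = NkT
```

The point of the sketch is that pressure and temperature are statistics of the whole distribution, not properties of any individual particle.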

The crucial point is that focusing on the individual motions can make you miss the big picture. You will literally miss the forest for the trees. What is interesting and important about a complex system is not what the individual constituents are doing but how they are related to each other. The restriction to a lower dimensional attractor is manifested in the subtle correlations of the entire system. The dynamics on the attractor can also often be represented by an “effective theory”. Here the use of the word “effective” does not mean that it works but rather that the underlying microscopic theory is superseded by a macroscopic one. Thermodynamics is an effective theory of the interaction of many particles. The recent trend in biology and economics has been to focus on the detailed microscopic interactions (there is push back in economics in what has been dubbed the macro-wars). As I will relate in future posts, it is sometimes much more effective (in the “works better” sense) to consider the effective (in the macroscopic sense) theory than a detailed microscopic theory. In other words, there is no “theory” per se of a given system but rather sets of effective theories that are selected based on the questions being asked.

### Von Neumann

December 4, 2012

Steve Hsu has a link to a fascinating documentary on John von Neumann. It’s definitely worth watching.  Von Neumann was probably the last great polymath. Mathematician Paul Halmos laments that von Neumann perhaps wasted his mathematical gifts by spreading himself too thin. He worries that von Neumann will be considered only a minor figure in pure mathematics several hundred years hence. Edward Teller believes that von Neumann simply enjoyed thinking above all else.

### Economic growth and reversible computing

November 12, 2012

In my previous post on the debt, a commenter made the important point that there are limits to economic growth. UCSD physicist Tom Murphy has some thoughtful posts on the topic (see here and here). If energy use scales with economic activity then there will be a limit to economic growth because at some point we will use so much energy that the earth will boil, to use Murphy’s metaphor. Even if we become more energy efficient, if the rate of increase in efficiency is slower than the rate of economic growth, then we will still end up boiling. While I agree that this is true given the current state of economic activity and for the near future, I do wish to point out that it is possible to have indefinite economic growth without using any more energy. As pointed out by Rick Bookstaber (e.g. see here), we are limited in how much we can consume because we are finite creatures. Thus, as we become richer, much of our excess wealth goes not towards increased consumption but towards the quality of that consumption. For example, the energy expenditure of an expensive meal prepared by a celebrity chef is not more than that of a meal from the local diner. A college education today is much more expensive than it was forty years ago without a concomitant increase in energy use. In some sense, much of modern real economic growth is effective inflation. Mobile phones have not gotten cheaper over the past decade because manufacturers keep adding more features to justify the price. We basically pay more for augmented versions of the same thing. So while energy use will increase for the foreseeable future, especially as the developing world catches up, it may not increase as fast as current trends.

However, the main reason why economic growth could possibly continue without energy growth is that our lives are becoming more virtual. One could conceivably imagine a future world in which we spend almost all of our day in an online virtual environment. In such a case, beyond a certain baseline of fulfilling basic physical needs of nutrition and shelter, all economic activity could be digital. Currently computers are quite inefficient. All the large internet firms like Google, Amazon, and Facebook require huge energy intensive server farms. However, there is nothing in principle to suggest that computers need to use energy at all. In fact, all computation can be done reversibly. This means that it is possible to build a computer that creates no entropy and uses no energy. If we lived completely or partially in a virtual world housed on a reversible computer then economic activity could increase indefinitely without using more energy. However, there could still be limits to this growth because computing power could be limited by other things such as storage capacity and relativistic effects. At some point the computer may need to be so large that information cannot be moved fast enough to keep up or the density of bits could be so high that it creates a black hole.
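The standard example behind the reversibility claim is the Toffoli gate, sketched below as a toy bit-level model in Python (my illustration, not a physical device):

```python
# The Toffoli (controlled-controlled-NOT) gate is universal for classical logic
# and is its own inverse, so circuits built from it can be run backwards --
# the basis for the claim that computation can in principle avoid creating entropy.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)    # flip the target bit only when both controls are 1

# The gate is a bijection on 3-bit states: applying it twice restores any input,
# so no information (and hence no entropy) needs to be discarded.
for state in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*state)) == state

# With the target initialized to 0, Toffoli computes AND without erasing its inputs.
print(toffoli(1, 1, 0))   # (1, 1, 1)
```

An ordinary AND gate destroys information (two input bits become one output bit); the reversible version keeps the inputs around, which is what lets the computation, in principle, run without dissipation.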

### Nobel dilemma

July 7, 2012

Now that the Higgs boson has been discovered, the question is who gets the Nobel Prize. This will be tricky because the discovery was made by two detector teams with hundreds of scientists using the CERN LHC accelerator, involving hundreds more, and although Higgs gets the eponymous credit for the prediction, there were actually three papers published almost simultaneously on the topic, with five of the authors still alive. In fact, we only call it the Higgs boson because of a citation error by Nobel Laureate Steven Weinberg (see here). One could even argue further that the Higgs boson is really just a variant of the Goldstone boson, discovered by Yoichiro Nambu (Nobel Laureate) and Jeffrey Goldstone (not a Laureate). This is a perfect example of my earlier argument (see here) that discoveries are rarely made by three or fewer people. Whatever they decide, there will be plenty of disappointed people.

### Indistinguishability and transporters

June 16, 2012

I have a memory of  being in a bookstore and picking up a book with the title “The philosophy of Star Trek”.  I am not sure of how long ago this was or even what city it was in. However, I cannot seem to find any evidence of this book on the web.  There is a book entitled “Star Trek and Philosophy: The wrath of Kant“, but that is not the one I recall.  I bring this up because in this book that may or may not exist, I remember reading a chapter on the philosophy of transporters.  For those who have never watched the television show Star Trek, a transporter is a machine that can dematerialize something and rematerialize it somewhere else, presumably at the speed of light.  Supposedly, the writers of the original show invented the device so that they could move people to planet surfaces from the starship without having to use a shuttle craft, for which they did not have the budget to build the required sets.

What the author was wondering was whether or not the particles of a transported person were the same particles as the ones in the pre-transported person or were people reassembled with stock particles lying around in the new location.  The implication being that this would then illuminate the question of whether what constitutes “you” depends on your constituent particles or just the information on how to organize the particles.  I remember thinking that this is a perfect example of how physics can render questions of philosophy obsolete. What we know from quantum mechanics is that particles are indistinguishable. This means that it makes no sense to ask whether a particle in one location is the same as a particle at a different location or time.  A particle is only specified by its quantum properties like its mass, charge, and spin.   All electrons are identical.  All protons are identical and so forth.  Now they could be in different quantum states, so a more valid question is whether a transporter transports all the quantum information of a person or just the classical information, which is much smaller.  However, this question is really only relevant for the brain since we know we can transplant all the other organs from one person to another.   The neuroscience enterprise, Roger Penrose notwithstanding, implicitly operates on the principle that classical information is sufficient to characterize a brain.

### Criticality

May 4, 2012

I attended a conference on Criticality in Neural Systems at NIH this week.  I thought I would write a pedagogical post on the history of critical phenomena and phase transitions since it is a long and somewhat convoluted line of thought to link criticality as it was originally defined in physics to neuroscience.  Some of this is a recapitulation of a previous post.

Criticality is about phase transitions, which is a change in the state of matter, such as between gas and liquid. The classic paradigm of phase transitions and critical phenomena is the Ising model of magnetization. In this model, a bunch of spins that can be either up or down (north or south) sit on lattice points. The lattice is said to be magnetized if all the spins are aligned and unmagnetized or disordered if they are randomly oriented. This is a simplification of a magnet where each atom has a magnetic moment which is aligned with a spin degree of freedom of the atom. Bulk magnetism arises when the spins are all aligned.  The lowest energy state of the Ising model is for all the spins to be aligned and hence magnetized. If the only thing that spins had to deal with was the interaction energy then we would be done.  What makes the Ising model interesting and for that matter all of statistical mechanics is that the spins are also coupled to a heat bath. This means that the spins are subjected to random noise and the size of this noise is given by the temperature. The noise wants to randomize the spins. The presence of randomness is why there is the word “statistical” in statistical mechanics. What this means is that we can never say for certain what the configuration of a system is but only assign probabilities and compute moments of the probability distribution. Statistical mechanics really should have been called probabilistic mechanics.
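A minimal Metropolis simulation of the 2D Ising model (a standard textbook sketch with illustrative, untuned parameters) shows the competition between the interaction energy and the heat bath:

```python
import math
import random

# Metropolis dynamics for the 2D Ising model on an L x L periodic lattice.
# Below the critical temperature the interaction wins and the lattice stays
# magnetized; above it the thermal noise wins and the magnetization washes out.
def magnetization(L, T, sweeps=400, seed=1):
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]                # start fully aligned
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb                      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]                     # accept the flip
    return abs(sum(map(sum, s))) / (L * L)         # magnetization per spin

print(magnetization(10, T=1.0))   # well below criticality: close to 1
print(magnetization(10, T=5.0))   # well above criticality: close to 0
```

The critical temperature sits in between (about 2.27 in these units), and it is near that point that the interesting critical fluctuations appear.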

### Proof by simulation

February 7, 2012

The process of science and mathematics involves developing ideas and then proving them true.   However, what is meant by a proof depends on what one is doing.  In science, a proof is empirical.  One starts with a hypothesis and then tests it experimentally or observationally.  In pure math, a proof means that a given statement is consistent with a set of rules and axioms.  There is a huge difference between these two approaches.  Mathematics is completely internal.  It simply strives for self-consistency.  Science is external.  It tries to impose some structure on an outside world.  This is why mathematicians sometimes can’t relate to scientists and especially physicists and vice versa.

Theoretical physicists don’t need to always follow rules.  They can make things up as they go along.  To make a music analogy: physics is like jazz.  There is a set of guidelines but one is free to improvise.  If in the middle of a calculation one is stuck on a complicated equation that cannot be solved, one can assume that something is small or big or slow or fast and replace the equation with a simpler one that can be solved.  One doesn’t need to know if any particular step is justified because all that matters is that, in the end, the prediction must match the data.

Math is more like composing western classical music.  There is a strict set of rules that must be followed.  All the notes must fall within the diatonic scale framework.  The rhythm and meter are tightly regulated.  There are a finite number of possible choices at each point in a musical piece, just like in a mathematical proof.  However, there are a countably infinite number of possible musical pieces, just as there are an infinite number of possible proofs. That doesn’t mean that rules can’t be broken, just that when they are broken a paradigm shift is required to maintain self-consistency in a new system.  Whole new fields of mathematics and genres of music arise when the rules are violated.

The invention of the computer introduced a third means of proof.  Prior to the computer, when making an approximation, one could either take the mathematics approach and try to justify the approximation by putting bounds on the error terms analytically, or take the physicist approach and compare the end result with actual data.  Now one can numerically solve the more complicated expression and compare it directly to the approximation. I would say that I have spent the bulk of my career doing just that. Although I don’t think there is anything intrinsically wrong with proof by simulation, I do find it to be unsatisfying at times. Sometimes it is nice to know that something is true by proving it in the mathematical sense, and other times it is gratifying to compare predictions directly with experiments. The most important thing is to always be aware of what mode of proof one is employing.  It is not always clear-cut.
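As a toy example of proof by simulation (my illustration, not from any particular paper), one can check the small-angle approximation for a pendulum by integrating both the full and the linearized equations and comparing the results:

```python
import math

# Integrate theta'' = -sin(theta) (full pendulum) and theta'' = -theta
# (small-angle approximation) from rest, and compare the endpoints.
def swing(theta0, linearized, dt=1e-4, t_end=5.0):
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        accel = -theta if linearized else -math.sin(theta)
        omega += accel * dt          # semi-implicit Euler: stable for oscillations
        theta += omega * dt
    return theta

for theta0 in (0.05, 1.0):
    gap = abs(swing(theta0, linearized=False) - swing(theta0, linearized=True))
    print(theta0, gap)               # the approximation degrades as amplitude grows
```

The simulation does not prove the approximation in the mathematical sense, but it quantifies exactly where the approximation starts to fail, which is often all one needs.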

### Two talks

December 8, 2011

Last week I gave a talk on obesity at Georgia State University in Atlanta, GA. Tomorrow, I will be giving a talk on the kinetic theory of coupled oscillators at George Mason University in Fairfax, VA. Both of these talks are variations of ones I have given before so instead of uploading my slides, I’ll just point to links to previous talks, papers, and posts on the topics.  For obesity, see here and for kinetic theory, see here, here and here.

### New paper

November 17, 2011

A new paper in Physical Review E is now available online here.  In this paper, Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements. This then implies that, given measurements from a single neuron, one might be able to infer properties of the network that it lives in.  We hope to show this in the future. In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here) but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing or integrating out unwanted degrees of freedom.  The path integral formalism gives a nice platform to perform this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian, and multiplicative.  It is probably not something you would have guessed a priori.

Michael A. Buice (1,2) and Carson C. Chow (1)
(1) Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
(2) Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln

### Talk in Marseille

October 8, 2011

I just returned from an excellent meeting in Marseille. I was quite impressed by the quality of talks, both in content and exposition. My talk may have been the least effective in that it provoked no questions. Although I don’t think it was a bad talk per se, I did fail to connect with the audience. I kind of made the classic mistake of not knowing my audience. My talk was about how to extend a previous formalism that much of the audience was unfamiliar with. Hence, they had no idea why it was interesting or useful. The workshop was on mean field methods in neuroscience and my talk was on how to make finite size corrections to classical mean field results. The problem is that many of the participants of the workshop don’t use or know these methods. The field has basically moved on.

In the classical view, the mean field limit is one where the discreteness of the system has been averaged away, and thus there are no fluctuations or correlations. I have been struggling over the past decade trying to figure out how to estimate finite system size corrections to mean field theory. This led to my work on the Kuramoto model with Eric Hildebrand and particularly Michael Buice. Michael and I have now extended the method to synaptically coupled neuron models. However, to this audience, mean field pertains more to what is known as the “balanced state”. This is the idea put forth by Carl van Vreeswijk and Haim Sompolinsky to explain why the brain seems so noisy. In classical mean field theory, the interactions are scaled by the number of neurons N, so in the limit of N going to infinity the effect of any single neuron on the population is zero. Thus, there are no fluctuations or correlations. However, in the balanced state the interactions are scaled by the square root of the number of neurons, so in the mean field limit the fluctuations do not disappear. The brilliant stroke of insight by Carl and Haim was that a self-consistent solution to such a situation is one where the excitatory and inhibitory neurons balance exactly, so the net mean activity in the network is zero but the fluctuations are not. In some sense, this is the inverse of the classical notion. Maybe it should have been called “variance field theory”. The nice thing about the balanced state is that it is a stable fixed point and no further tuning of parameters is required. Of course, the scaling choice is still a form of tuning, but it is not detailed tuning.
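The scaling argument can be sketched in a few lines. This is a cartoon of the input statistics only, with a made-up firing probability, not a spiking network simulation:

```python
import random
import statistics

# A neuron sums N excitatory and N inhibitory 0/1 inputs, each firing with
# probability r. Classical 1/N coupling makes the input fluctuations vanish
# as N grows; balanced 1/sqrt(N) coupling with E-I cancellation keeps the
# mean near zero while the fluctuations stay O(1).
random.seed(2)

def input_sd(N, scale, trials=500, r=0.5):
    net = [scale * (sum(random.random() < r for _ in range(N))
                    - sum(random.random() < r for _ in range(N)))
           for _ in range(trials)]
    return statistics.pstdev(net)

for N in (100, 2000):
    print(N, input_sd(N, 1.0 / N), input_sd(N, 1.0 / N ** 0.5))
    # classical sd falls like 1/sqrt(N); balanced sd stays near sqrt(2 r (1 - r))
```

With 1/N scaling the net input sharpens to its mean as N grows, which is the classical fluctuation-free mean field limit; with 1/sqrt(N) scaling the mean cancels but the standard deviation is independent of N.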

Hence, to the younger generation of theorists in the audience, mean field theory already has fluctuations, and finite size corrections don’t seem that important. This may actually indicate the success of the field: in the past, most computational neuroscientists were trained in either physics or mathematics, where mean field theory has the meaning it has in statistical mechanics, whereas the current generation has been trained entirely in computational neuroscience, with its own canon of common knowledge. I should say that my talk wasn’t a complete failure. It did seem to stir up interest in learning the field theory methods we have developed, as people recognized that they provide a very useful tool for the problems they are interested in.

Here are some links to previous posts that pertain to the comments above.

http://sciencehouse.wordpress.com/2009/06/03/talk-at-njit/

http://sciencehouse.wordpress.com/2009/03/22/path-integral-methods-for-stochastic-equations/

http://sciencehouse.wordpress.com/2009/01/17/kinetic-theory-of-coupled-oscillators/

http://sciencehouse.wordpress.com/2010/09/30/path-integral-methods-for-sdes/

http://sciencehouse.wordpress.com/2010/02/03/paper-now-in-print/

http://sciencehouse.wordpress.com/2009/02/27/systematic-fluctuation-expansion-for-neural-networks/

### Marseille talk

October 4, 2011

I am currently at CIRM in Marseille, France for a workshop on mean field methods in neuroscience.  My slides are here.

### Stochastic differential equations

June 5, 2011

One of the things I noticed at the recent Snowbird meeting was an increase in interest in stochastic differential equations (SDEs) or Langevin equations.  They arise wherever noise is involved in a dynamical process.  In some instances, an SDE comes about as the continuum approximation of a discrete stochastic process, like the price of a stock.  In other cases, they arise as a way to reintroduce stochastic effects to mean field differential equations originally obtained by averaging over a large number of stochastic molecules or neurons.  For example, the Hodgkin-Huxley equation describing action potential generation in neurons is a mean field approximation of the stochastic transport of ions (through ion channels) across the cell membrane, which can be modeled as a multi-state Markov process usually simulated with the Gillespie algorithm (for example see here).  This is computationally expensive so adding a noise term to the mean field equations is a more efficient way to account for stochasticity.
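A minimal sketch of the idea (a generic Ornstein-Uhlenbeck example integrated with the Euler-Maruyama scheme, not any particular neural model): the mean field equation $dx/dt = -x$ gains a noise term whose strength sets the stationary variance.

```python
import random
import statistics

# Euler-Maruyama integration of the Langevin equation
#     dx = -x dt + sigma dW,
# i.e. the deterministic mean field equation dx/dt = -x plus white noise.
# The stationary distribution is Gaussian with mean 0 and variance sigma**2 / 2.
random.seed(3)
dt, sigma, steps = 0.01, 1.0, 200_000
x, samples = 0.0, []
for i in range(steps):
    x += -x * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
    if i > steps // 10:              # discard the initial transient before sampling
        samples.append(x)

print(statistics.mean(samples))      # near 0
print(statistics.pvariance(samples)) # near sigma**2 / 2 = 0.5
```

The key point in the scheme is the $\sqrt{dt}$ scaling of the noise increment, which is what distinguishes integrating an SDE from adding noise to an ordinary ODE solver.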

### Snowbird meeting

May 24, 2011

I’m currently at the biennial SIAM Dynamical Systems Meeting in Snowbird, Utah.  If a massive avalanche were to roll down the mountain and bury the hotel at the bottom, much of applied dynamical systems research in the world would cease to exist.  The meeting has been growing steadily for the past thirty years and has now maxed out the capacity of Snowbird.  The meeting will either eventually have to move to a new venue or restrict the number of speakers. My inclination is to move, but I don’t think that is the most popular sentiment.  Thus far, I have found the invited talks to be very interesting.  Climate change seems to be the big theme this year.  Chris Jones and Raymond Pierrehumbert both gave talks on that topic.  I chaired the session by noted endocrinologist and neuroscientist Stafford Lightman, who gave a very well received talk on the dynamics of hormone secretion. Chiara Daraio gave a very impressive talk on manipulating sound propagation with chains of ball bearings.  She’s basically creating the equivalent of nonlinear optics and electronics in acoustics.  My talk this afternoon is on finite size effects in spiking neural networks.  It is similar but not identical to the one I gave in New Orleans in January (see here).  The slides are here.