Archive for the ‘Physics’ Category

New paper

November 17, 2011

A new paper in Physical Review E is now available online here.  In this paper Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements.  This implies that, given measurements from a single neuron, one might be able to infer properties of the network it lives in.  We hope to show this in the future.  In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here), but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing or integrating out unwanted degrees of freedom.  The path integral formalism gives a nice platform for performing this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian, and multiplicative.  It is probably not something you would have guessed a priori.

Michael A. Buice (1,2) and Carson C. Chow (1)
(1) Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
(2) Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln
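
For readers unfamiliar with the Kuramoto model, here is a minimal simulation sketch of the setup: a population of globally coupled phase oscillators, with the phase of one tagged oscillator recorded as it is jostled by the rest of the network.  The parameters, the Lorentzian frequency distribution, and the mean-field form of the update are illustrative choices on my part, not taken from the paper.

```python
import numpy as np

# Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# Illustrative parameters only; this is not the calculation in the paper.
rng = np.random.default_rng(0)
N, K, dt, steps = 1000, 3.0, 0.01, 2000      # oscillators, coupling, time step, steps
omega = rng.standard_cauchy(N)               # natural frequencies (Lorentzian distribution)
theta = rng.uniform(0, 2 * np.pi, N)         # random initial phases

tagged = np.zeros(steps)                     # phase history of a single tagged oscillator
for t in range(steps):
    z = np.mean(np.exp(1j * theta))          # complex order parameter r * exp(i * psi)
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))   # mean-field form of the coupling
    tagged[t] = theta[0]

print("final order parameter r =", np.abs(np.mean(np.exp(1j * theta))))
```

In the infinite N limit the tagged oscillator feels only the smooth mean field K r sin(psi - theta); the point of the paper is to characterize the stochastic correction it feels at finite N.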

Talk in Marseille

October 8, 2011

I just returned from an excellent meeting in Marseille. I was quite impressed by the quality of talks, both in content and exposition. My talk may have been the least effective in that it provoked no questions. Although I don’t think it was a bad talk per se, I did fail to connect with the audience. I kind of made the classic mistake of not knowing my audience. My talk was about how to extend a previous formalism that much of the audience was unfamiliar with. Hence, they had no idea why it was interesting or useful. The workshop was on mean field methods in neuroscience and my talk was on how to make finite size corrections to classical mean field results. The problem is that many of the participants of the workshop don’t use or know these methods. The field has basically moved on.

In the classical view, the mean field limit is one where the discreteness of the system has been averaged away and thus there are no fluctuations or correlations.  I have been struggling over the past decade trying to figure out how to estimate finite system size corrections to mean field.  This led to my work on the Kuramoto model with Eric Hildebrand and particularly Michael Buice.  Michael and I have now extended the method to synaptically coupled neuron models.  However, to this audience, mean field pertains more to what is known as the “balanced state”.  This is the idea put forth by Carl van Vreeswijk and Haim Sompolinsky to explain why the brain seems so noisy.  In classical mean field theory, the interactions are scaled by the number of neurons N, so in the limit of N going to infinity the effect of any single neuron on the population is zero.  Thus, there are no fluctuations or correlations.  However, in the balanced state the interactions are scaled by the square root of the number of neurons, so in the mean field limit the fluctuations do not disappear.  The brilliant stroke of insight by Carl and Haim was that a self-consistent solution to such a situation is one where the excitatory and inhibitory neurons balance exactly, so the net mean activity in the network is zero but the fluctuations are not.  In some sense, this is the inverse of the classical notion.  Maybe it should have been called “variance field theory”.  The nice thing about the balanced state is that it is a stable fixed point and no further tuning of parameters is required.  Of course the scaling choice is still a form of tuning, but it is not detailed tuning.
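
A toy numerical check of the scaling argument (my own illustration, not the van Vreeswijk-Sompolinsky model): sum N excitatory and N inhibitory Poisson spike counts with equal rates and scale the net input to a single cell either by 1/N or by 1/sqrt(N).

```python
import numpy as np

# Toy check of the two scalings discussed above (illustrative only, not the
# van Vreeswijk-Sompolinsky model).  N excitatory and N inhibitory Poisson spike
# counts drive one cell, with the excitatory and inhibitory rates exactly balanced.
rng = np.random.default_rng(1)
rate, trials = 5.0, 100000

for N in (100, 1000, 10000):
    exc = rng.poisson(rate * N, size=trials)   # total excitatory count (sum of N Poisson inputs)
    inh = rng.poisson(rate * N, size=trials)   # total inhibitory count, same mean
    classical = (exc - inh) / N                # 1/N scaling: fluctuations vanish as N grows
    balanced = (exc - inh) / np.sqrt(N)        # 1/sqrt(N) scaling: fluctuations stay O(1)
    print(f"N={N:6d}   1/N scaling: std={classical.std():.3f}   "
          f"1/sqrt(N) scaling: std={balanced.std():.3f}")
```

With 1/N scaling both the mean and the fluctuations of the net input vanish as N grows; with 1/sqrt(N) scaling the mean still cancels but the standard deviation stays order one, which is the essence of the balanced state.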

Hence, to the younger generation of theorists in the audience, mean field theory already has fluctuations.  Finite size corrections don’t seem that important.  This may actually indicate the success of the field: in the past, most computational neuroscientists were trained in either physics or mathematics, where mean field theory has the meaning it has in statistical mechanics.  The current generation has been trained entirely in computational neuroscience, with its own canon of common knowledge.  I should say that my talk wasn’t a complete failure.  It did seem to stir up interest in learning the field theory methods we have developed, as people did recognize that they provide a very useful tool for solving the problems they are interested in.

Addendum 2011-11-11

Here are some links to previous posts that pertain to the comments above.

http://sciencehouse.wordpress.com/2009/06/03/talk-at-njit/

http://sciencehouse.wordpress.com/2009/03/22/path-integral-methods-for-stochastic-equations/

http://sciencehouse.wordpress.com/2009/01/17/kinetic-theory-of-coupled-oscillators/

http://sciencehouse.wordpress.com/2010/09/30/path-integral-methods-for-sdes/

http://sciencehouse.wordpress.com/2010/02/03/paper-now-in-print/

http://sciencehouse.wordpress.com/2009/02/27/systematic-fluctuation-expansion-for-neural-networks/

Marseille talk

October 4, 2011

I am currently at CIRM in Marseille, France for a workshop on mean field methods in neuroscience.  My slides are here.

Stochastic differential equations

June 5, 2011

One of the things I noticed at the recent Snowbird meeting was an increase in interest in stochastic differential equations (SDEs) or Langevin equations.  They arise wherever noise is involved in a dynamical process.  In some instances, an SDE comes about as the continuum approximation of a discrete stochastic process, like the price of a stock.  In other cases, they arise as a way to reintroduce stochastic effects to mean field differential equations originally obtained by averaging over a large number of stochastic molecules or neurons.  For example, the Hodgkin-Huxley equation describing action potential generation in neurons is a mean field approximation of the stochastic transport of ions (through ion channels) across the cell membrane, which can be modeled as a multi-state Markov process usually simulated with the Gillespie algorithm (for example see here).  This is computationally expensive so adding a noise term to the mean field equations is a more efficient way to account for stochasticity.
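
As a concrete example of the numerical side, here is a minimal Euler-Maruyama integration of a generic Langevin equation dx = f(x) dt + sigma dW.  The linear drift f(x) = -x (an Ornstein-Uhlenbeck process) and all parameter values are placeholders of my own choosing, not a model from this post.

```python
import numpy as np

# Euler-Maruyama integration of a generic Langevin equation dx = f(x) dt + sigma dW.
# The drift f(x) = -x (an Ornstein-Uhlenbeck process) and all parameters are
# illustrative placeholders.
rng = np.random.default_rng(2)
dt, steps, sigma, ntraj = 0.01, 2000, 0.5, 500
x = np.zeros(ntraj)                       # an ensemble of independent trajectories
for _ in range(steps):
    drift = -x                            # deterministic (mean field) part
    noise = sigma * np.sqrt(dt) * rng.standard_normal(ntraj)   # white-noise increments
    x = x + drift * dt + noise

# the stationary variance of this OU process is sigma**2 / 2 = 0.125
print("ensemble variance:", x.var())
```

The ensemble variance approaching sigma^2/2 is a quick sanity check that the integration scheme and noise scaling are right.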

(more…)

Snowbird meeting

May 24, 2011

I’m currently at the biennial SIAM Dynamical Systems Meeting in Snowbird, Utah.  If a massive avalanche were to roll down the mountain and bury the hotel at the bottom, much of applied dynamical systems research in the world would cease to exist.  The meeting has been growing steadily for the past thirty years and has now maxed out the capacity of Snowbird.  The meeting will either eventually have to move to a new venue or restrict the number of speakers.  My inclination is to move, but I don’t think that is the most popular sentiment.  Thus far, I have found the invited talks to be very interesting.  Climate change seems to be the big theme this year.  Chris Jones and Raymond Pierrehumbert both gave talks on that topic.  I chaired the session by noted endocrinologist and neuroscientist Stafford Lightman, who gave a very well received talk on the dynamics of hormone secretion.  Chiara Daraio gave a very impressive talk on manipulating sound propagation with chains of ball bearings.  She’s basically creating the equivalent of nonlinear optics and electronics in acoustics.  My talk this afternoon is on finite size effects in spiking neural networks.  It is similar but not identical to the one I gave in New Orleans in January (see here).  The slides are here.

2011 JMM talk

January 10, 2011

I’m on my way back from the 2011 Joint Mathematics Meeting.  Yesterday I gave a pedagogical talk on finite size effects in neural networks, describing the strategy that Michael Buice and I have employed to analyze finite size effects in networks of coupled spiking neurons.  My slides are here.  We’ve adapted the formalism we used to analyze the finite size effects of the Kuramoto system (see here for a summary) to a system of synaptically coupled phase oscillators.

High energy physics

November 6, 2010

I was asked a while ago what I thought of the Large Hadron Collider at CERN.  Although I’ve been critical of high energy physics in the past (see for example here), I strongly support the LHC and think it is a worthwhile endeavor.  My reason is that I think it will be important for future technology.  By this I don’t just mean spin-offs, like the World Wide Web, which was invented at CERN by Tim Berners-Lee.  What I mean is that knowledge gained at the high energy scale could be useful for saving the human race one day.

Let me elaborate.  My criticism of high energy or particle physics in the past was mostly because of the claim that it is more “fundamental” than other areas of science like condensed matter physics or psychology.  Following Nobel laureate Philip Anderson’s famous article “More is Different” (Science 177:393-396, 1972), what is fundamental to me is a matter of perspective.  For example, the fact that I can’t find a parking spot at the mall a week before Christmas is not because of particle physics but because of the pigeonhole principle (i.e. if you have more things than boxes, then when you try to put the things into the boxes at least one box must contain more than one thing).  This is as fundamental to me as any high energy theory.  The fact that you can predict an election using polling data from a small sample of the electorate is because of the central limit theorem (i.e. the sum of a bunch of random events tends to obey a normal distribution), and is also independent of the particles that comprise the electorate.  Ironically, the main raison d’etre of the LHC is to look for the Higgs boson, which is thought to give masses to some subatomic particles.  The Higgs mechanism is based on the idea of spontaneous symmetry breaking, which came from none other than Phil Anderson, who was studying properties of magnets.

So how could high energy physics be pertinent to our existence some day?  Well, some day in the very distant future the sun will expand into a red giant and swallow the earth.  If humans, or whatever our descendants will be called, are to survive, they are going to need to move.  This will take space-faring technology that could rely on some as yet unknown principle of high energy physics that could be discovered at the LHC.  And in the very, very distant future the universe will end either in a big crunch or by expanding so much that matter won’t be able to persist.  If and when that time comes and life forms still exist, then to survive they’ll have to figure out how to “tunnel” into a new universe or new existence.  That will take real science-fiction-like stuff that will likely depend on knowledge of high energy physics.  So although high energy physics does not hold a monopoly on fundamental concepts, it may still be absolutely necessary for life-saving future technology.

Scotch tape and flying frogs

October 6, 2010

This year’s Nobel Prize in physics went to Andre Geim and Konstantin Novoselov for making single layer graphite, or graphene, using Scotch tape.  However, Geim is also famous for having won an Ig Nobel Prize for demonstrating diamagnetic levitation using a frog.  You can see a video of a flying frog and tomato here.  Sir Michael Berry, of Berry’s phase fame, and Geim wrote a paper demonstrating that diamagnetic but not paramagnetic objects can be levitated stably in a solenoidal magnetic field.  This was somewhat surprising because there is a theorem (Earnshaw’s theorem) that says you cannot stably suspend an object made of fixed charges and magnets in any combination of static electric, magnetic, and gravitational fields.  The reason diamagnetic levitation works is that Earnshaw’s theorem does not apply to induced magnetism.
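
As a rough back-of-envelope check of why a strong solenoid can levitate a mostly-water object like a frog (my own estimate using textbook values, not numbers from the Berry-Geim paper): balancing the diamagnetic force density against gravity requires a field-gradient product B dB/dz = mu_0 * rho * g / |chi|.

```python
# Rough estimate of the field-gradient product needed to levitate a mostly-water
# object (a frog) diamagnetically.  Balance |chi|/mu_0 * B * dB/dz = rho * g.
# Textbook values; my own back-of-envelope check, not from the Berry-Geim paper.
mu_0 = 4e-7 * 3.14159265        # T m / A, vacuum permeability
chi_water = 9.0e-6              # magnitude of the (negative) volume susceptibility of water
rho, g = 1000.0, 9.8            # kg/m^3, m/s^2

B_dBdz = mu_0 * rho * g / chi_water
print(f"required B * dB/dz ~ {B_dBdz:.0f} T^2/m")
```

A field of order 10 T varying over roughly ten centimetres gives a product of this size, which is why such strong magnets are needed for the demonstration.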

Talk at Pitt

September 10, 2010

I visited the University of Pittsburgh today to give a colloquium.  I was supposed to have come in February but my flight was cancelled because of a snow storm.  This was not the really big snow storm that closed Washington, DC and Baltimore for a week but a smaller one that hit New England and not the DC area.  My flight was on Southwest, and I presume that they run such a tightly coupled system, where planes circulate around the country in a “just in time” fashion, that a disturbance in one region affects the rest of the country.  So while other airlines just had cancellations in New England, Southwest flights were cancelled for the day all across the US.  It seems that there is a trade-off between business efficiency and robustness.  I drove this time.  My talk was on finite size effects in the Kuramoto model, which I’ve given several times already.  However, I have revised the slides on pedagogical grounds and they can be found here.

Slides for second SIAM talk

July 21, 2010

Here are the slides for my SIAM talk on generalizing the Wilson-Cowan equations to include correlations.  This talk was mostly on the paper with Michael Buice and Jack Cowan that I summarized here.  However, I also contrasted our work with the recent work of Paul Bressloff, who uses a system size expansion of the Markov process that Michael and Jack proposed as a microscopic model for Wilson-Cowan in their 2007 paper.  The difference between the two approaches stems from the interpretation of what the Wilson-Cowan equation describes.  In our interpretation, the Wilson-Cowan equation describes the firing rate or stochastic intensity of a Poisson process.  A Poisson distribution is notable because all of its cumulants are equal to the mean.  Our expansion is in terms of factorial cumulants (we called them normal ordered cumulants in the paper because we didn’t know there was a name for them), which measure deviations from Poisson statistics.  Bressloff, on the other hand, considers the Wilson-Cowan equation to be the average population firing rate of a large population of neurons.  In the infinite size limit, there are no fluctuations.  His expansion is in terms of regular cumulants and the inverse system size is the small parameter.  In our formulation, the expansion parameter is related to the distance to a critical point, where the expansion would break down.  In essence, we use a Bogoliubov hierarchy-of-time-scales expansion in which the higher order factorial cumulants decay to steady state much faster than the lower order ones.
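
As a small numerical illustration of the statement that factorial cumulants measure deviations from Poisson statistics (my own check, not code from the paper or the talk): for Poisson counts every factorial cumulant beyond the first is zero, while for an over-dispersed distribution like the negative binomial they are not.

```python
import numpy as np

# Numerical check that factorial cumulants beyond the first vanish for Poisson counts
# but not for an over-dispersed distribution.  My own illustration, not code from the paper.
rng = np.random.default_rng(3)

def factorial_cumulants(x):
    # factorial moments m_k = E[x (x-1) ... (x-k+1)] for k = 1, 2, 3
    m1 = np.mean(x)
    m2 = np.mean(x * (x - 1))
    m3 = np.mean(x * (x - 1) * (x - 2))
    # the usual moment-to-cumulant relations, applied to factorial moments
    c1 = m1
    c2 = m2 - m1**2
    c3 = m3 - 3 * m2 * m1 + 2 * m1**3
    return c1, c2, c3

n = 10**6
poisson = rng.poisson(5.0, n)                  # Poisson: c2 and c3 should be ~0
negbin = rng.negative_binomial(5, 0.5, n)      # over-dispersed: c2 and c3 nonzero
print("Poisson          :", factorial_cumulants(poisson))
print("negative binomial:", factorial_cumulants(negbin))
```

The second factorial cumulant is just the variance minus the mean, so it vanishes exactly when the counts are Poisson.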

Some numbers for the BP leak

June 3, 2010

The Deepwater Horizon well is situated 1500 m below the surface of the Gulf of Mexico.  The hydrostatic pressure is approximately given by  the simple formula of P_a+ g\rho h where P_a = 100 \ kPa is the pressure of the atmosphere, \rho = 1 \ g/ml = 1000 \ kg/m^3   is the density of water, and g = 10 \ m/s^2 is the gravitational acceleration.  Putting the numbers together gives 1.5\times 10^7 \ kg/m s^2, which is 15000 \ kPa or about 150 times atmospheric pressure.  Hence, the oil and natural gas must be under tremendous pressure to be able to leak out of the well at all.  It’s no wonder the Top Kill operation, where mud was pumped in at high pressure, did not work.

Currently, it is estimated that the leak rate is somewhere between 10,000 and 100,000 barrels of oil per day.  A barrel of oil is 159 litres or 0.159 cubic metres.  So basically 1600 to 16000 cubic metres of oil is leaking each day.  This amounts to a cube with sides of about 11 metres for the lower value and 25 metres for the upper one, which is about the length of a basketball court.  However, assuming that the oil forms a layer on the surface of the ocean that is 0.001 mm thick, this corresponds to a slick with an area between 1,600 and 16,000 square kilometres per day.  Given that the leak has been going on for almost two months and the Gulf of Mexico covers about 1.6 million square kilometres, this implies that the slick is either much thicker than that, oil has started to wash up on shore, or a lot of the oil is still under the surface.
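
The arithmetic above, collected in a few lines (same inputs as in the post; everything is order-of-magnitude):

```python
# Back-of-envelope numbers from the post, reproduced in a few lines.
g, rho, h, P_atm = 10.0, 1000.0, 1500.0, 1.0e5     # m/s^2, kg/m^3, m, Pa
P = P_atm + rho * g * h                            # hydrostatic pressure at the wellhead
print(f"pressure at the wellhead: {P:.2e} Pa = {P / P_atm:.0f} atmospheres")

barrel = 0.159                                      # m^3 per barrel
for barrels_per_day in (1e4, 1e5):
    volume = barrels_per_day * barrel               # m^3 of oil per day
    side = volume ** (1.0 / 3.0)                    # side of a cube with the same volume
    area = volume / 1e-6 / 1e6                      # km^2 of slick at 0.001 mm thickness
    print(f"{barrels_per_day:.0e} bbl/day -> {volume:.0f} m^3/day, "
          f"cube side {side:.0f} m, slick area {area:.0f} km^2/day")
```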

The scale invariant life

May 4, 2010

The most recent episode of WNYC’s Radiolab was about human limits.  The first two stories were on the limits of the human body and mind and had people telling stories of surviving extreme endurance events and participating in memory competitions.  The last story was about the limits of science.  I was expecting the usual take on Gödel’s incompleteness theorems but they touched on something different.  Instead they talked about how an algorithm called Eureqa, written by two Cornell computer scientists, could deduce the dynamics of unknown systems.  They used it to deduce the equations of a double pendulum based on the time series of the angles and angular velocities.  They then applied it to a biological system and produced some dynamical equations.  However, they then claimed that they had trouble publishing the results because they couldn’t explain what those equations described or meant.  Steve Strogatz then came on and lamented the fact that as we begin to explore more and more complex systems, our brains may not be able to ever understand them.  He basically said that once we reach that limit we may need to hand over science to computers.

I think that Steve is confounding the limits of a single human being with the limits of humans in general.  To me, understanding is all about data compression.  One says they understand something when they can give a simpler description of it or relate it to something they already know.  Understanding is not a binary process.  I understand some things better than other things, and I attain a greater and sometimes lesser understanding of things the more I think about them.  However, I do agree that there may be limits to the number of different things I personally can understand.  This applies to things that other humans already understand, like, for example, Turkish.  Now, perhaps if I studied hard enough I could learn to speak Turkish, but the time it would take me to do that would preclude me from learning something else, like, say, category theory.

What Steve was specifically referring to, I believe, is that it may be difficult or impossible to understand certain complex systems by trying to relate them to what we know now.  I think this is probably true, but that doesn’t mean we won’t have an intuitive understanding of such systems in the future.  For example, it would be very difficult for an adult 500 years ago, who should be genetically indistinguishable from a person alive today, to understand Andrew Wiles’s proof of Fermat’s Last Theorem.  Most of them would first have to learn how to read and then learn 500 years of mathematics.  Wiles basically used everything humans know about math up to this point to prove the theorem.  The average mathematician alive today who doesn’t specialize in arithmetic algebraic geometry has trouble understanding the proof.  They simply don’t have the background to follow all the arguments.

Now, I do believe that there are things we can never understand because we are bound by the rules of computation.  Turing showed us that there are undecidable problems that cannot be solved in general, such as whether a given computation will halt.  However, I do think that humans have the capability to understand individual things that arise out of computations, and that includes physical objects like biological systems.  We may not know whether a given computation will halt, but we could understand what has already been computed.  For the complex systems that Steve was alluding to, we just don’t yet know what the form of that understanding will be.  Consider Brownian motion, which is the modern paradigm of an unpredictable process.  Until Einstein pointed out that the process should be understood probabilistically and calculated the time dependence of the mean square deviation of a Brownian particle, people didn’t even know how to think about the phenomenon.  I think most physicists would claim that they have a good understanding of Brownian motion even though they have no idea what a single trajectory of a Brownian particle will do.  From a neuroscience perspective, Brownian motion has become a primitive concept, and we can understand more complex things in terms of it.  I think this will hold true for even more complex phenomena.  We can always reform what we consider to be intuitive and build from that.

Addendum:  I forgot to relate everything back to the title of the post.  I called this the scale invariant life because I think everyone will go through similar stages where they learn what is known, make some new discoveries and then reach a crisis where they can’t understand something new in terms of what they already  know.  Thus there are no absolute thresholds of discovery or knowledge.  We just make excursions from where we start and then the next generation takes over.

Boltzmann’s Brain and Universe

January 29, 2010

One of the recent results of string theory is the revitalization of an old idea for the origin of the universe first proposed by Boltzmann.  This was nicely summarized in an article by Dennis Overbye in the New York Times.  Cosmologist Sean Carroll has also blogged about it multiple times (e.g. see here and here).  Boltzmann suggested that the universe, which is not in thermal equilibrium, could have arisen as a fluctuation from a bigger universe in a state of thermal equilibrium.  (This involves issues of the second law of thermodynamics and the arrow of time, which I’ll post on at some later point.)  A paper by Dyson, Kleban and Susskind in 2002 set off a round of debates in the cosmology community because this idea leads to what is now called the Boltzmann brain paradox.  The details are nicely summarized in Carroll’s posts.  Basically, the idea is that if a universe could arise out of a quantum fluctuation, then a disembodied brain should also be able to pop into existence, and since a brain is much smaller than the entire universe, it should be far more probable.  So why is it that we are not disembodied brains?

I had two thoughts when I first heard about this paradox.  The first was: how do you know you’re not a disembodied brain?  The second was: it is not necessarily true that the brain is simpler than the whole universe.  What the cosmologists seem to be ignoring or discounting is nonlinear dynamics and computation.  The fact that the brain is contained in the universe doesn’t mean it must be simpler.  They don’t take into account the possibility that the Kolmogorov complexity of the universe (i.e. the length of its smallest description) is smaller than that of the brain.  So although the universe is much bigger than the brain and contains many brains among other things, it may in fact be less complex.  Personally, I happen to like the spontaneous fluctuation idea for the origin of our universe.

(more…)

Two talks at University of Toronto

December 2, 2009

I’m currently at the University of Toronto to give two talks in a series that is jointly hosted by the Physics department and the Fields Institute.  The Fields Institute is like the Canadian version of the Mathematical Sciences Research Institute in the US and is named in honour of  Canadian mathematician J.C. Fields, who started the Fields Medal (considered to be the most prestigious prize for mathematics).  The abstracts for my talks are here.

The talk today was a variation on my kinetic theory of coupled oscillators talk.  The slides are here.  I tried to be more pedagogical in this version and because it was to be only 45 minutes long, I also shortened it quite a bit.   However, in many ways I felt that this talk was much less successful than the previous versions.  In simplifying the story, I left out much of the history behind the topic and thus the results probably seemed somewhat disembodied.  I didn’t really get across why a kinetic theory of coupled oscillators is interesting and useful.   Here is the post giving more of the backstory on the topic, which has a link to an older version of the talk as well.  Tomorrow, I’ll talk about my obesity work.

Hurdles for mathematical thinking

October 30, 2009

From my years as both a math professor and observer of people, I’ve come up with a list of  hurdles for mathematical thinking.  These are what I believe to be the essential set of skills  a  person must have if they want to understand and do mathematics.  They don’t need to have all these skills to use mathematics but would need most of them if they want to progress far in mathematics.  Identifying what sorts of conceptual barriers people may have could help in improving mathematics education.

I’ll first give the list and then explain what I mean by them.

1. Context dependent rules

2. Equivalence classes

3. Limits and infinitesimals

4. Formal  logic

5. Abstraction

(more…)

Human scale

October 2, 2009

I’ve always been intrigued by how long we live compared to the age of the universe.  At 14 billion years, the universe is only a factor of 10^8 older than a long-lived human.  In contrast, it is immensely bigger than us.  The nearest star is 4 light years away, which is a factor of 10^{16} larger than a human, and the observable universe is about 25 billion times bigger than that.  The size scale of the universe is partly dictated by the speed of light, which at 3 \times 10^8 m/s is, coincidentally (or not), faster than we can move by about the same factor of 10^8 as the universe is older than we live.

Although we are small compared to the universe, we are also exceedingly big compared to our constituents.  We are comprised of about 10^{13} cells, each of which is about 10^{-5} m in diameter.  If we assume that the density of a cell is about that of water (1 {\rm g/ cm}^3), then that roughly amounts to 10^{14} molecules per cell.  So a human is comprised of something like 10^{27} molecules, most of them water, which has a molecular weight of 18.  Given that proteins and other organic molecules are much larger than a water molecule, a lower bound on the number of atoms in the body is about 10^{28}.

The speed at which we can move is governed by the reaction rates of metabolism.  Neurons fire at an average rate of approximately 10 Hz, which is why awareness operates on a time scale of a few hundred milliseconds.  You could think of a human moment as being one tenth of a second.  There are 86,400 seconds in a day, so we have close to a million moments in a day, although we are asleep for about a third of them.  That leads to about 20 billion moments in a lifetime.  Neural activity also sets the scale for how fast we can move our muscles, which is a few metres per second.  If we consider a movement every second, then that implies a few billion twitches per lifetime.  Our hearts beat about once a second, so that is also roughly the number of heartbeats in a lifetime.

The average thermal energy at body temperature is about 4 \times 10^{-21} Joules (i.e. k_B T at 310 K), which is only an order of magnitude or two below the binding energies of the protein-DNA and protein-protein interactions required for life.  Each ribosome in our cells can translate about 5 amino acids per second, which adds up to a lot of protein in a lifetime.  I find it completely amazing that a bag of 10^{28} or more things, incessantly buffeted by noise, can stay coherent for a hundred years.  There is no question that evolution is the world’s greatest engineer.  However, for those who are interested in artificial life, this huge expanse of scale does pose a question: What is the minimal computational requirement to simulate life, and in particular something as complex as a mammal?  Even if you could do a simulation with say 10^{32} or more objects, how would you even know that there was something living in it?
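
Here are the back-of-envelope numbers above, collected in one short script (all inputs are the rough values quoted in the post, so the outputs are order-of-magnitude estimates only):

```python
# Order-of-magnitude estimates collected from the post; nothing here is precise.
print(f"age of universe / long human life ~ {14e9 / 1e2:.0e}")
print(f"speed of light / human running speed ~ {3e8 / 3:.0e}")

cell_volume = (1e-5) ** 3                       # m^3, treating a cell as a ~10 micron cube
cell_mass_g = 1000.0 * cell_volume * 1e3        # grams, at the density of water
molecules_per_cell = cell_mass_g / 18.0 * 6.0e23   # mostly water, molar mass 18 g/mol
# these land within a factor of a few of the 10^{14} and 10^{27} quoted above
print(f"molecules per cell ~ {molecules_per_cell:.0e}")
print(f"molecules per body ~ {1e13 * molecules_per_cell:.0e}")

moments_per_day = 86400 * 10                    # 0.1 s 'moments' in a day
lifetime_days = 100 * 365
print(f"waking moments per lifetime ~ {moments_per_day * lifetime_days * 2 / 3:.0e}")
print(f"heartbeats per lifetime ~ {86400 * lifetime_days:.0e}")
```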

The numbers came from Wolfram Alpha and Bionumbers.

Energy efficiency and boiling water

September 11, 2009

I’ve noticed that my last few posts have been veering towards the metaphysical so I thought today I would talk about some kitchen science, literally. The question is what is the most efficient way to boil water.  Should one turn the heat on the stove to the maximum or is there some mid-level that should be used?  I didn’t know what the answer was so I tried to calculate it.  The answer turned out to be more subtle than I anticipated.

(more…)

Feynman’s “The Character of Physical Law”

July 18, 2009

Richard Feynman’s famous 1964 lecture series “The Character of Physical Law” is now available on the web courtesy of Bill Gates, who bought the rights.  For any of you who are unfamiliar with Feynman, I would recommend watching the brilliant physicist and expositor at work.  There are seven lectures in total.  I watched the first, on the law of gravitation, and the fifth, on the distinction between past and future, where he gives a beautiful and clear explanation of entropy and the arrow of time.

Feynman also anticipates complexity theory in the fifth lecture.  He says that knowing the fundamental laws doesn’t help much in understanding complex phenomena like entropy.  There is a lot of analysis that must be done to get there.  He also talks about the hierarchy of descriptions, from the laws of elementary forces and particles all the way up to human concepts like beauty and hope.  He then says that he doesn’t believe either end of this spectrum, or any of the steps in between, is any “closer to God”, i.e. more fundamental, than any other.  He jokes that the people working on these very different fields should not have any animosity towards each other and that they’re all doing essentially the same thing, which is to try to relate the various levels of the hierarchy to each other.

Talk at NJIT

June 3, 2009

I was at the FACM ’09 conference held at the New Jersey Institute of Technology the past two days.  I gave a talk on “Effective theories for neural networks”.  The slides are here.  This was an unsatisfying talk on two accounts.  The first was that I didn’t internalize how soon this talk came after the Snowbird conference, so I didn’t have enough time to prepare properly.  I thus ended up giving a talk that provided enough information to be confusing and hopefully thought-provoking, but not enough to be understood.  The second problem was that there is a flaw in what I presented.

I’ll give a brief backdrop to the talk for those unfamiliar with neuroscience.  The brain is composed of interconnected neurons and as a proxy for understanding the brain, computational neuroscientists try to understand what a collection of coupled neurons will do.   The state of a neuron is characterized by the voltage across its membrane and the state of its membrane ion channels.  When a neuron is given enough input,  there can be a  massive change of voltage and flow of ions called an action potential.  One of the ions that flows into the cell is calcium, which can trigger the release of neurotransmitter to influence other neurons.  Thus, neuroscientists are highly focused on how and when action potentials or spikes occur.

We can thus model a neural network at many levels.  At the bottom level, there is what I will call a microscopic description, where we write down equations for the dynamics of the voltage and ion channels of each neuron.  These neuron models are sometimes called conductance-based neurons, and the Hodgkin-Huxley neuron is the first and most famous of them.  They usually consist of two to four differential equations but can easily involve many more.  On the other hand, if one is more interested in just the spiking rate, then there is a reduced description for that.  In fact, much of the early progress in mathematically understanding neural networks used rate equations, examples being Wilson and Cowan, Grossberg, Hopfield and Amari.  The question that I have always had is: what is the precise connection between a microscopic description and a spike rate or activity description?  If I start with a network of conductance-based neurons, can I derive the appropriate activity-based description?
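
To make the “activity-based description” concrete, here is a minimal sketch of a two-population Wilson-Cowan-type rate model.  The sigmoidal gain, coupling strengths, inputs, and time constant are illustrative placeholders of my own, not the equations from the talk.

```python
import numpy as np

# A minimal activity-based (rate) description: a two-population Wilson-Cowan-type
# model with illustrative placeholder parameters, not the model from the talk.
def f(x):
    return 1.0 / (1.0 + np.exp(-x))           # sigmoidal firing-rate (gain) function

wEE, wEI, wIE, wII = 1.5, 1.0, 1.0, 0.5       # coupling strengths between populations
hE, hI = 0.5, 0.0                             # external inputs
tau, dt, steps = 10.0, 0.1, 5000              # time constant (ms), step (ms), number of steps

E, I = 0.1, 0.1                               # initial excitatory and inhibitory activities
for _ in range(steps):
    E += dt * (-E + f(wEE * E - wEI * I + hE)) / tau
    I += dt * (-I + f(wIE * E - wII * I + hI)) / tau

print(f"activities after {steps * dt:.0f} ms: E = {E:.3f}, I = {I:.3f}")
```

The question posed above is how, and under what conditions, equations like these emerge from a microscopic network of conductance-based neurons.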

(more…)

Snowbird conference

May 20, 2009

I’m currently at the SIAM Dynamical Systems meeting in Snowbird, Utah.  I gave a short version of my talk on calculating finite size effects of the Kuramoto coupled oscillator model using kinetic theory and path integral approaches.  Here is a longer and more informative version of the talk.  I summarized the papers behind this talk here.

