Guest editorial

I have a guest editorial in the SIAM Dynamical Systems online magazine DSWeb this month. The full text of the editorial is below and the link is here. I actually had several ideas circulating in my head and didn't really know what would come out until I started to write. This is how my weekly blog posts often go. The process of writing itself helps to solidify inchoate ideas. I think too often young people want to wait until everything is under control before they write. I tell them never to just sit there and stare at a screen. Just start writing and something will come.

Math in the Twenty-First Century

I was a graduate student at MIT in the late eighties. When I started, I wrote Fortran code in the Emacs editor at a VT100 terminal to run simulations on a Digital Equipment Corporation VAX computer. When I didn't understand something, I would try to find a book or paper on the topic. I spent hours in the library reading and photocopying. Somehow, I managed to find and read everything that was related to my thesis topic. Email had just become widespread at that time. I recall taking to it immediately and finding it to be an indispensable tool for keeping in touch with my friends at other universities and even for making dinner plans. Then I got a desktop workstation running X Windows. I loved it. I could read email and run my code simultaneously. I could also log onto the mainframe computer if necessary. As I was finishing up, my advisor got a fax machine for the office (hard to believe that they are that recent and now obsolete) and used it almost every day.

I think that immediate integration of technology has been the theme of the past twenty-five years. Each new innovation – email, the desktop computer, the fax machine, the laser printer, the World Wide Web, mobile phones, digital cameras, PowerPoint slides, iPods, and so forth – becomes so quickly enmeshed in our lives that we can't imagine what life was like without it. Today, if I want to know what some mathematical term means, I can just type it into Google and there will usually be a Wikipedia or Scholarpedia page on it. If I need a reference, I can download it immediately. If I have a question, I can post it on MathOverflow and someone, possibly a Fields medalist, will answer it quickly (I haven't actually done this myself but I have watched it in action). Instead of walking over to the auditorium, I can now sit in my office and watch lectures online. My life was not like this fifteen years ago.

Yet despite this rapid technological change, we still publish in the same old way. Sure, we can now submit our papers on the web and there are online journals, but there has been surprisingly little innovation otherwise. In particular, many of the journals we publish in are not freely available to the public. The journal publishing industry is a monopoly with surprising staying power. If you are not behind the cozy firewall of an academic institution, much of the knowledge we produce is inaccessible to you. Math is much better off than most other sciences since people post their papers to the arXiv. This is a great thing, but it is not the same as a refereed journal. Perhaps now is the time for us to come up with a new model to present our work – something that is refereed and public. Something new. I don't know what it should look like, but I do know that when it comes around I'll wonder how I ever got along without it.

Philosophical confusion about free will

The New York Times has a philosophy blog series called The Stone. Unlike some of my colleagues, I actually like philosophy and spend time studying and thinking about it. However, in some instances I think that philosophers just like to create problems that don't exist. This week's Stone post is by Notre Dame philosopher Gary Gutting on free will. It's in reference to a recent Nature news article on neuroscience versus philosophy on the same topic. The Nature article tells of some recent research showing that fMRI scans can predict what people will do before they "know" it themselves. This is not new knowledge. We've known for decades that neurons in the brain "know" what an animal will do before the animal does. Gutting's post is about how these "new" results don't settle the question of free will and how a joint effort between philosophers and neuroscientists could get us closer to the truth.

I think this kind of thinking is completely misguided. When it comes to free will, there are only two questions one needs to answer. The first is whether you think mind comes from brain, and the second is whether you think brain comes from physics. (There is a third implicit question, which is whether you believe that physics is complete, meaning that it is entirely defined by a given set of physical laws.) If you answer yes to these questions, then there cannot be free will: all of our actions are determined by processes in our brains, which are governed by a set of prescribed physical laws. There is no need for the further philosophical or biological inquiry that Gutting suggests. I would venture that almost all neuroscientists think that brain is responsible for mind. You could argue that physics is not completely understood and that there is some mysterious connection between quantum mechanics and consciousness, but this doesn't involve neuroscience and probably not philosophy either. It is a question of physics.

There are many unsolved problems in biology and neuroscience. We have little idea of how the brain really works and particularly of how genes affect behavior and cognition. However, whether or not we have free will is no longer a question of neuroscience or biology. That is not to say that philosophers and neuroscientists should stop thinking about free will. They should simply stop worrying about whether it exists and start thinking about what we should do in a post-free-will world (see previous blog post). How does this impact how we think about ethics and crime? What sort of society is fair given that people have no free will? Although we may not have free will, we certainly have the perception of free will, and we also experience real joy and pain. There are consequences to our actions, and there could be ways to modify them so that they cause less suffering in the world.

Approaches to theoretical biology

I have recently thought about how to classify what theorists actually do, and I came up with three broad approaches: 1) Model analysis, 2) Constraint-driven modeling, and 3) Data-driven modeling. By model, I mean a set of equations (and inequalities) that are proposed to govern or mimic the behavior of some biological system. Often, a given research project will involve more than one category. Model analysis is trying to understand what the equations do. For example, there could exist some set of differential equations, and the goal is to figure out what the solutions of these equations are or look like. Constraint-driven modeling is trying to explain a phenomenon starting from another set of phenomena. For example, trying to explain the rhythms in EEG measurements by exploring networks of coupled spiking neurons. Finally, data-driven modeling is looking directly for patterns in the data itself without worrying about where the data may have come from. An example would be trying to find systematic differences in the DNA sequence between people with and without a certain disease.
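As a toy illustration of Approach 1), here is a minimal sketch in Python of analyzing a model by simulation: integrate a set of differential equations and ask what the solutions look like. The FitzHugh-Nagumo equations are my choice of example here, not a system from any of the work discussed in this post.

```python
# A toy example of Approach 1), model analysis: given a set of
# differential equations, figure out what the solutions look like.
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, I=0.5, a=0.7, b=0.8, tau=12.5):
    """FitzHugh-Nagumo model: a fast voltage-like variable v
    coupled to a slow recovery variable w."""
    v, w = y
    dv = v - v**3 / 3 - w + I
    dw = (v + a - b * w) / tau
    return [dv, dw]

# Integrate from an arbitrary initial condition.
sol = solve_ivp(fitzhugh_nagumo, (0, 200), [0.0, 0.0], max_step=0.1)

# The "analysis" here is simply asking whether the model settles to a
# fixed point or oscillates on a limit cycle at late times.
v_late = sol.y[0][sol.t > 100]
print("late-time range of v: [%.2f, %.2f]" % (v_late.min(), v_late.max()))
# A wide range indicates sustained oscillations; a narrow one, a fixed point.
```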

I have spent most of my scientific career in Approach 1). What I have done a lot in the past is construct approximate solutions to dynamical systems and then compare them to numerical simulations. Thus, I never had to worry too much about data and statistics or even the real phenomena themselves. In fact, even when I first moved into biology in the early nineties, I still did mostly the same thing. (The lone exception was my work on posture control, which did involve paying attention to data.) Computational neuroscience is a mature enough field that one can focus exclusively on analyzing existing models. I started moving more towards Approach 2) when I began studying localized persistent neural activity, or bumps. My first few papers on the subject were mostly analyzing models, but there was a more exploratory nature to them than in my previous work. Instead of trying to explicitly compute a quantity, it was more about exploring what networks of neurons can do. The work on binocular rivalry and visual competition was an attempt to explain a cognitive phenomenon using the constraints imposed by the properties of neurons and synapses. However, I was still only trying to explain the data qualitatively.

That changed when I started my work on modeling the acute inflammatory response. Now I was simply given data with very few biological constraints. I basically took what the immunologists told me and constructed the simplest model possible that could account for the data. Given that my knowledge of statistics was minimal, I used the "eye test" as the basis of whether or not the model worked. The model somehow fit the data and did bring insight into the phenomenon, but it was not done in a systematic way. When I arrived at NIH, I was introduced to Bayesian inference and this really opened my eyes. I realized that when one doesn't have strong biological or physical constraints, Approach 2) is not that useful. It is easy to cobble together a system of differential equations to explain any data. This is how I ended up moving more towards Approach 3). Instead of just coming up with some set of ODEs that could explain the data, what we did was explore classes of models that could explain a given set of data and use Bayesian model comparison to decide which was better. This approach was used in the work on quantifying insulin's effect on free fatty acid dynamics. While that work involved some elements of Approach 2) in that we utilized some constraints, my work on protein sequences is almost entirely within Approach 3). The work on obesity and body weight change involves all three approaches. The conservation of energy and the vast separation of time scales put strong constraints on the dynamics, so one can get surprisingly far using Approaches 1) and 2).
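To give a flavor of what that kind of model comparison looks like, here is a minimal sketch that uses the BIC as a crude approximation to the log model evidence for two candidate models. The data and the candidate models are invented for illustration; this is not the actual analysis from the free fatty acid work.

```python
# A minimal sketch of Bayesian model comparison via the BIC, which
# approximates (minus twice) the log model evidence. Synthetic data;
# the two candidate models are a single and a double exponential decay.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 2.0 * np.exp(-0.5 * t) + rng.normal(0, 0.1, t.size)  # fake data

def single_exp(t, a, k):
    return a * np.exp(-k * t)

def double_exp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

def bic(model, n_params, p0):
    p, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    resid = y - model(t, *p)
    sigma2 = np.mean(resid**2)
    # Gaussian log-likelihood at the fitted parameters
    loglik = -0.5 * t.size * (np.log(2 * np.pi * sigma2) + 1)
    return n_params * np.log(t.size) - 2 * loglik

print("BIC single exponential:", bic(single_exp, 2, [1, 1]))
print("BIC double exponential:", bic(double_exp, 4, [1, 1, 0.5, 0.1]))
# The lower BIC wins: the double exponential fits no better but pays
# a complexity penalty for its two extra parameters.
```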

When I was younger, some of my fellow graduate students would lament that they had missed out on the glory days of the 1920s and 1930s, when quantum mechanics was discovered. It is true that when a field matures, it starts to move from Approach 3) to 2) and 1). Theoretical physics is now almost exclusively in 1) and 2). Even string theory is basically all in Approaches 1) and 2): it is trying to explain all the known forces using the constraints of quantum mechanics and general relativity. The romantic periods of physics involved Approach 3). There were Galileo, Kepler and Newton inventing classical mechanics; Lavoisier, Carnot, Thompson and so forth coming up with conservation laws and thermodynamics; Faraday and Maxwell defining electrodynamics. Einstein invented the "thought experiment" version of Approach 3) to dream up special and general relativity. The last true romantic period in physics was the invention of quantum mechanics. Progress since then has basically been in Approaches 1) and 2). However, Approach 3) is alive and well in biology and data mining. The theoretical glory days of these fields might be now.

Talk in Marseille

I just returned from an excellent meeting in Marseille. I was quite impressed by the quality of the talks, both in content and exposition. My talk may have been the least effective in that it provoked no questions. Although I don't think it was a bad talk per se, I did fail to connect with the audience. I kind of made the classic mistake of not knowing my audience. My talk was about how to extend a previous formalism that much of the audience was unfamiliar with, so they had no idea why it was interesting or useful. The workshop was on mean field methods in neuroscience, and my talk was on how to make finite size corrections to classical mean field results. The problem is that many of the participants of the workshop don't use or even know these methods. The field has basically moved on.

In the classical view, the mean field limit is one where the discreteness of the system has been averaged away and thus there are no fluctuations or correlations. I have been struggling over the past decade trying to figure out how to estimate finite system size corrections to mean field. This led to my work on the Kuramoto model with Eric Hildebrand and particularly Michael Buice. Michael and I have now extended the method to synaptically coupled neuron models.

However, to this audience, mean field pertains more to what is known as the "balanced state". This is the idea put forth by Carl van Vreeswijk and Haim Sompolinsky to explain why the brain seems so noisy. In classical mean field theory, the interactions are scaled inversely by the number of neurons N, so in the limit of N going to infinity the effect of any single neuron on the population is zero. Thus, there are no fluctuations or correlations. However, in the balanced state the interactions are scaled inversely by the square root of the number of neurons, so in the mean field limit the fluctuations do not disappear. The brilliant stroke of insight by Carl and Haim was that a self-consistent solution to such a situation is one where the excitatory and inhibitory inputs balance exactly, so the net mean activity in the network is zero but the fluctuations are not. In some sense, this is the inverse of the classical notion. Maybe it should have been called "variance field theory". The nice thing about the balanced state is that it is a stable fixed point and no further tuning of parameters is required. Of course, the scaling choice is still a form of tuning, but it is not detailed tuning.
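To spell out the difference between the two scalings, here is a short sketch in my own notation; the symbols are mine and are not taken from the talk or the original papers.

```latex
% Classical mean field scaling: couplings of order 1/N.
% The input to neuron i from N presynaptic neurons with O(1) activities s_j:
\[
u_i = \frac{J}{N}\sum_{j=1}^{N} s_j,
\qquad
\langle u_i \rangle = \mathcal{O}(1),
\quad
\operatorname{Var}(u_i) = \mathcal{O}(1/N) \to 0 .
\]
% Balanced state scaling: couplings of order 1/sqrt(N), with separate
% excitatory (E) and inhibitory (I) populations:
\[
u_i = \frac{1}{\sqrt{N}} \Bigl( J_E \sum_{j \in E} s_j - J_I \sum_{j \in I} s_j \Bigr).
\]
% Each sum alone would give a mean input of order sqrt(N), but in the
% self-consistent balanced state the excitatory and inhibitory means
% cancel, leaving an O(1) mean input while the fluctuations remain O(1)
% instead of vanishing as N goes to infinity.
```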

Hence, to the younger generation of theorists in the audience, mean field theory already has fluctuations, and finite size corrections don't seem that important. This may actually indicate the success of the field: in the past, most computational neuroscientists were trained in either physics or mathematics, and mean field theory would have the meaning it has in statistical mechanics. The current generation has been trained entirely in computational neuroscience, with its own canon of common knowledge. I should say that my talk wasn't a complete failure. It did seem to stir up interest in learning the field theory methods we have developed, as people recognized that they provide a very useful tool for the problems they are interested in.

Addendum 2011-11-11

Here are some links to previous posts that pertain to the comments above.

https://sciencehouse.wordpress.com/2009/06/03/talk-at-njit/

https://sciencehouse.wordpress.com/2009/03/22/path-integral-methods-for-stochastic-equations/

https://sciencehouse.wordpress.com/2009/01/17/kinetic-theory-of-coupled-oscillators/

https://sciencehouse.wordpress.com/2010/09/30/path-integral-methods-for-sdes/

https://sciencehouse.wordpress.com/2010/02/03/paper-now-in-print/

https://sciencehouse.wordpress.com/2009/02/27/systematic-fluctuation-expansion-for-neural-networks/