Archive for the ‘Talks’ Category

I gave a talk today at St. Mary’s College of Maryland. The talk was on the dynamics of visual competition. It differs slightly from previous versions. The slides are here. A summary of the talk can be found here. The talk also covered my recent work on autism that is summarized here.
I’m on my way back from the 2011 Joint Mathematics Meeting, where I gave a pedagogical talk yesterday on finite size effects in neural networks, covering the strategy that Michael Buice and I have employed to analyze finite size effects in networks of coupled spiking neurons. My slides are here. We’ve adapted the formalism we used to analyze the finite size effects of the Kuramoto system (see here for a summary) to a system of synaptically coupled phase oscillators.
I was in New York yesterday and gave a talk at NYU in a joint Center for Neural Science and Courant Institute seminar. My slides are here. The talk is an updated version of the talk I gave before and summarized here. The new parts include recent work on applying the model to Autism (see here) and some new work on resolving why mutual inhibition models of binocular rivalry do not reproduce Levelt’s fourth proposition, which states that as the contrast is decreased to both eyes, the dominance time of the percepts increases. I will summarize the results of that work in detail when we finish the paper.
My blog post summarizing my SIAM talk on obesity was picked up by Reddit.com. There is also a story by mathematics writer Barry Cipra in SIAM News (not yet available online). I thought I would explicitly clarify the “push” hypothesis here and reiterate that this is my opinion and not NIH policy. What we had done previously was to derive a model of human metabolism that predicts how much you will weigh given how much you eat. The model is fully dynamic and can capture how much weight you gain or lose depending on changes in diet or physical activity. The parameters in the model have been calibrated with physiological measurements and validated in several independent studies of people undergoing weight change due to changes in diet.
We then applied this model to the US population. We used body weight data from the National Health and Nutrition Examination Survey, which has tracked a representative sample of the US population for the past several decades, together with food availability data from the USDA. Since the 1970s, the average US body weight has increased linearly. The US food availability per person has also increased linearly. However, when we used the food availability data in the model, it predicted that average body weight would increase linearly at a faster rate than observed. The USDA has used surveys and other investigative techniques to try to account for how much food is wasted. If we calibrate the wastage to 1970, then we predict that the difference between the amount consumed and the amount available progressively increased from 1970 to 2005. We interpreted this gap as a progressive increase in food waste. An alternative hypothesis would be that everyone burned more energy than the model predicted.
This also makes a prediction for the cause of the obesity epidemic, although we didn’t make this the main point of the paper. In order to gain weight, you have to eat more calories than you burn. There are three possibilities for how this could happen: 1) We could decrease energy expenditure by reducing physical activity and thus gain weight even if we ate the same amount of food as before, 2) There could be a pull effect where we become hungrier and start to eat more food, and 3) There could be a push effect where we eat more food than we would have previously because of increased availability. Now the data rule out hypothesis 1), since we assumed that physical activity stayed constant and still found an increasing gap between energy intake and energy expenditure. If anything, we may be exercising more than expected. Hypothesis 2) would predict that the gap between intake and expenditure should fall and waste should decrease as we utilize more of the available food. This leaves us with hypothesis 3): we are being supplied more food than we need to maintain our body weight, and while we are eating some of this excess food, we are wasting more and more of it as well.
The final question, which is outside my realm of expertise, is why food supply increased. The simple answer is that food policy changed dramatically in the 1970s. Earl Butz was appointed to be the US Secretary of Agriculture in 1971. At that time food prices were quite high so he decided to change farm policy and vastly increase the production of corn and soybeans. As a result, the supply of food increased dramatically and the price of food began to drop. The story of Butz and the consequences of his policy shift is documented in the film King Corn.
I visited the University of Pittsburgh today to give a colloquium. I was supposed to have come in February but my plane was cancelled because of a snow storm. This was not the really big snow storm that closed Washington, DC and Baltimore for a week but a smaller one that hit New England while sparing the DC area. My flight was on Southwest and I presume that they have such a tightly coupled flight system, where planes circulate around the country in a “just in time” fashion, that a disturbance in one part of the country affects the rest of the country. So while other airlines just had cancellations in New England, Southwest flights were cancelled for the day all across the US. It seems that there is a trade-off between business efficiency and robustness. I drove this time. My talk was on the finite size effects in the Kuramoto model, which I’ve given several times already. However, I have revised the slides on pedagogical grounds and they can be found here.
Last Monday I gave a plenary talk at the joint SIAM Life Sciences and Annual meeting. My slides can be downloaded from a previous post. The talk summarized the work I’ve been doing on obesity and human body weight change for the past six years. The main idea is that at the most basic level, the body can be modeled as a fuel tank. You put food into the tank by eating and you use up energy to maintain bodily functions and do physical work. The difference between the food intake rate and the energy expenditure rate is the rate of change of your body’s energy store, which determines your body weight. In calculating body weight you need to convert energy (e.g. Calories consumed) into mass (e.g. kilograms). The difficulty in doing this is that your body is not homogeneous. You are composed of water, bones, minerals, fat, protein and carbohydrates in the form of glycogen. Each of these components has its own energy density (e.g. Calories/kg). So in order to figure out how much you’ll weigh, you need to figure out how the body partitions energy into these different components.
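The fuel-tank picture can be sketched in a few lines of code. To be clear, this is a toy illustration of the energy-balance idea only, not the validated model from the talk: it lumps the body into a single compartment, and the expenditure slope `eps` and tissue energy density `rho` are round illustrative numbers rather than calibrated parameters.

```python
# Toy one-compartment "fuel tank" model (illustration only; the validated
# model partitions fat, lean mass, glycogen and water separately).
# dW/dt = (intake - expenditure) / rho, with expenditure ~ eps * W.

def simulate_weight(intake, w0, days, eps=22.0, rho=7700.0):
    """intake in kcal/day, w0 in kg; eps (kcal/kg/day) and rho (kcal/kg)
    are illustrative round numbers, not calibrated parameters."""
    w = w0
    for _ in range(days):
        w += (intake - eps * w) / rho   # daily Euler step
    return w

# At steady state intake = eps * W, so W* = intake / eps.
# With these toy numbers, 2200 kcal/day settles toward 100 kg.
final = simulate_weight(2200.0, 70.0, days=5000)
print(round(final, 1))
```

The point of even this cartoon is that body weight relaxes slowly (with these numbers the time constant rho/eps is about a year), so a fixed change in intake plays out over years, not weeks.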
Here are the slides for my SIAM talk on generalizing the Wilson-Cowan equations to include correlations. This talk was mostly on the paper with Michael Buice and Jack Cowan that I summarized here. However, I also contrasted our work with the recent work of Paul Bressloff, who uses a system size expansion of the Markov process that Michael and Jack proposed as a microscopic model for Wilson-Cowan in their 2007 paper. The difference between the two approaches stems from the interpretation of what the Wilson-Cowan equation describes. In our interpretation, the Wilson-Cowan equation describes the firing rate or stochastic intensity of a Poisson process. A Poisson distribution is notable because all cumulants are equal to the mean. Our expansion is in terms of factorial cumulants (we called them normal ordered cumulants in the paper because we didn’t know there was a name for them), which are deviations from Poisson statistics. Bressloff, on the other hand, considers the Wilson-Cowan equation to be the average population firing rate of a large population of neurons. In the infinite size limit, there are no fluctuations. His expansion is in terms of regular cumulants and the inverse system size is the small parameter. In our formulation, the expansion parameter is related to the distance to a critical point, where the expansion would break down. In essence, we use a Bogoliubov hierarchy of time scales expansion where the higher order factorial cumulants decay to steady state much faster than the lower order ones.
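A quick numerical illustration of the distinction (my own toy check, not anything from the paper): for a Poisson distribution the variance, an ordinary cumulant, equals the mean, while the second factorial cumulant vanishes, so nonzero factorial cumulants directly measure departures from Poisson statistics.

```python
import numpy as np

# Toy check: for a Poisson distribution all ordinary cumulants equal the
# mean, while factorial cumulants vanish beyond first order, so they are
# a natural measure of deviations from Poisson statistics.
# (Illustration only; lam and the sample size are arbitrary.)
rng = np.random.default_rng(0)
lam = 4.0
n = rng.poisson(lam, size=200_000).astype(float)

mean = n.mean()
variance = n.var()              # 2nd ordinary cumulant: ~ lam for Poisson
fact_mom2 = (n * (n - 1)).mean()        # 2nd factorial moment: ~ lam**2
fact_cum2 = fact_mom2 - mean**2         # 2nd factorial cumulant: ~ 0

print(round(mean, 2), round(variance, 2), round(fact_cum2, 2))
```

Repeating this with any non-Poisson count distribution (e.g. negative binomial) gives a nonzero `fact_cum2`, which is exactly the kind of quantity the expansion tracks.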
I am currently in Pittsburgh for the SIAM joint Life Sciences and Annual meetings. SIAM is the Society for Industrial and Applied Mathematics and has nothing to do with the country currently named Thailand. I just gave my invited joint plenary talk. My slides are here. The talk was on my recent work on human body weight change and obesity. I have posted on this topic recently here and here. I would write a summary of the talk but I’m feeling a bit under the weather right now and will leave it for another time.
Last Thursday I had to drive from Baltimore to State College, PA for the 16th US National Congress on Theoretical and Applied Mechanics to give a talk in one of the sessions. I gave a condensed version of the kinetic theory of coupled oscillators talk I gave in Warwick last month. The theme of the session was recent advances in nonlinear dynamics, so the topics were quite diverse. I’m not sure my talk resonated with the audience. The only question I received was how this was related to the NIH!
During the six hours of driving I did going back and forth, I listened to podcasts of the Australian radio show The Philosopher’s Zone. This is a wonderful program hosted by Alan Saunders, who has a PhD in philosophy and is also a food expert. Every show consists of Saunders talking to a guest, who is usually a philosopher but not always, about either a book she has recently written or some other philosophical topic. The topics can range from the philosophy of Buffy the Vampire Slayer to Stoicism and everything in between. Saunders has a knack for making complex philosophical ideas accessible and interesting. In addition to The Philosopher’s Zone, I still regularly listen to Quirks and Quarks, The Science Show, Radio Lab, and The Naked Scientists. I’ll also sneak in All in the Mind from time to time.
I visited the Gatsby Computational Neuroscience Unit in London on Friday. I talked about how the dynamics of many observed neural responses to visual stimuli can be explained by varying just two parameters in a “micro-cortical circuit” at the sub-millimetre level. The circuit consists of recurrent excitation, lateral inhibition and fatigue mechanisms like synaptic depression. Recurrently connected pools of neurons inhibit other pools and the competition between pools with fatigue leads to all the varied observed responses we see. I also covered my recent paper on autism where we describe how perturbing the synaptic balance in the micro-cortical circuit can then lead to alterations in performance of simple saccade tasks that seem to match clinical observations. My slides for the talk are here.
I’m currently in England at the Dendrites, Neurones, and Networks workshop. The talks have really impressed me. The field of computational neuroscience has really reached a critical mass where truly excellent work is being done in multiple directions. I gave a talk on finite system size effects in neural networks. I mostly covered the work on the Kuramoto model with a little bit on synaptically coupled phase neurons at the end. My slides are here.
I just gave a seminar in the math department at the University of Iowa today. The talk was similar to the one I gave at the Mathematical Biosciences Institute on Bayesian inference for dynamical systems. My slides are here. I was lucky I made it to give this talk. My flight from Chicago O’Hare to Cedar Rapids, Iowa was cancelled yesterday and I was rebooked on another flight tonight, which wouldn’t have been of much use for my talk this afternoon. There was one flight left last evening to Cedar Rapids, but bad weather had cancelled several flights yesterday so there were many stranded travelers. I was number 38 on the standby list and thought I had no chance to make it out that night. However, on a whim I decided to wait it out and I started to move up the list because other people had evidently given up as well. To my great surprise and relief, I was the last person to get on the plane. There was a brief scare when they asked me to get off because we exceeded the weight limitation, but then they changed their mind and let us all fly. (Someone else had kindly volunteered to take my place.) I learned two lessons. One is to keep close watch on your flight at all times so you can get on the standby list as soon as possible, and two is that even number 38 on a plane that only seats 50 can still make it.
I’m currently at the University of Toronto to give two talks in a series that is jointly hosted by the Physics department and the Fields Institute. The Fields Institute is like the Canadian version of the Mathematical Sciences Research Institute in the US and is named in honour of Canadian mathematician J.C. Fields, who started the Fields Medal (considered to be the most prestigious prize for mathematics). The abstracts for my talks are here.
The talk today was a variation on my kinetic theory of coupled oscillators talk. The slides are here. I tried to be more pedagogical in this version and because it was to be only 45 minutes long, I also shortened it quite a bit. However, in many ways I felt that this talk was much less successful than the previous versions. In simplifying the story, I left out much of the history behind the topic and thus the results probably seemed somewhat disembodied. I didn’t really get across why a kinetic theory of coupled oscillators is interesting and useful. Here is the post giving more of the backstory on the topic, which has a link to an older version of the talk as well. Tomorrow, I’ll talk about my obesity work.
I’m currently at the Mathematical Biosciences Institute for a workshop on Computational challenges in integrative biological modeling. The slides for my talk on using Bayesian methods for parameter estimation and model comparison are here.
Abstract: Differential equation models are often used to model biological systems. An important and difficult problem is how to estimate parameters and decide which model among possible models is the best. I will argue that Bayesian inference provides a self-consistent framework to do both tasks. In particular, Bayesian parameter estimation provides a natural measure of parameter sensitivity and Bayesian model comparison automatically evaluates models by rewarding fit to the data while penalizing the number of parameters. I will give examples of employing these approaches on ODE and PDE models.
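As a concrete toy example of the model comparison task (my own illustration, not one of the examples from the talk): fit exponential-decay data with a one-parameter model and a nested two-parameter model, and compute each model's evidence by brute-force integration of the likelihood over a uniform prior. Because the evidence integrates rather than maximizes the likelihood, the wasted prior volume of the unnecessary extra parameter is penalized automatically.

```python
import numpy as np

# Toy Bayesian model comparison (illustrative example, not from the talk).
# Data from dy/dt = -k*y with k = 1: y(t) = exp(-t) plus Gaussian noise.
# Model 1: y = exp(-k*t).  Model 2 (nested): y = b + (1 - b)*exp(-k*t).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 20)
sigma = 0.05
y_obs = np.exp(-t) + sigma * rng.normal(size=t.size)

def loglike(y_model):
    return -0.5 * np.sum((y_obs - y_model) ** 2, axis=-1) / sigma**2

# Model 1: uniform prior k ~ U(0, 5); evidence by brute-force quadrature
k = np.linspace(0.005, 5.0, 1000)
dk = k[1] - k[0]
ll1 = loglike(np.exp(-k[:, None] * t))
m1 = ll1.max()
logZ1 = m1 + np.log(np.exp(ll1 - m1).sum() * dk / 5.0)

# Model 2: priors k ~ U(0, 5), b ~ U(-2, 2)
b = np.linspace(-2.0, 2.0, 1601)
db = b[1] - b[0]
ll2 = np.empty((b.size, k.size))
for i, bi in enumerate(b):
    ll2[i] = loglike(bi + (1.0 - bi) * np.exp(-k[:, None] * t))
m2 = ll2.max()
logZ2 = m2 + np.log(np.exp(ll2 - m2).sum() * dk * db / (5.0 * 4.0))

# The simpler model wins even though Model 2 contains the truth (b = 0):
# the wasted prior volume in b is an automatic Occam penalty.
print(round(logZ1, 1), round(logZ2, 1), logZ1 > logZ2)
```

Grid integration only works for one or two parameters, of course; for realistic ODE and PDE models one needs Monte Carlo methods, but the logic of the evidence comparison is the same.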
I was at the FACM ’09 conference held at the New Jersey Institute of Technology for the past two days. I gave a talk on “Effective theories for neural networks”. The slides are here. This was an unsatisfying talk on two counts. The first was that I didn’t internalize how soon this talk came after the Snowbird conference, and so I didn’t have enough time to prepare properly. I thus ended up giving a talk that provided enough information to be confusing and hopefully thought provoking, but not enough to be understood. The second problem was that there is a flaw in what I presented.
I’ll give a brief backdrop to the talk for those unfamiliar with neuroscience. The brain is composed of interconnected neurons and as a proxy for understanding the brain, computational neuroscientists try to understand what a collection of coupled neurons will do. The state of a neuron is characterized by the voltage across its membrane and the state of its membrane ion channels. When a neuron is given enough input, there can be a massive change of voltage and flow of ions called an action potential. One of the ions that flows into the cell is calcium, which can trigger the release of neurotransmitter to influence other neurons. Thus, neuroscientists are highly focused on how and when action potentials or spikes occur.
We can thus model a neural network at many levels. At the bottom level, there is what I will call a microscopic description where we write down equations for the dynamics of the voltage and ion channels for each neuron. These neuron models are sometimes called conductance-based neurons and the Hodgkin-Huxley neuron is the first and most famous of them. They usually consist of two to four differential equations but can easily involve many more. On the other hand, if one is more interested in just the spiking rate, then there is a reduced description for that. In fact, much of the early progress in mathematically understanding neural networks used rate equations, examples being Wilson and Cowan, Grossberg, Hopfield and Amari. The question that I have always had is: what is the precise connection between a microscopic description and a spike rate or activity description? If I start with a network of conductance-based neurons, can I derive the appropriate activity-based description?
I’m currently at the SIAM Dynamical Systems meeting in Snowbird, Utah. I gave a short version of my talk on calculating finite size effects of the Kuramoto coupled oscillator model using kinetic theory and path integral approaches. Here is the longer and more informative version of the talk. I summarized the papers behind this talk here.
I’m currently in Edinburgh for a Mathematical Neuroscience workshop. I gave a tutorial today on using field theoretic methods to solve stochastic differential equations (SDEs). The slides are here. The methods I presented have been around for decades but as far as I know they haven’t been collated together into a pedagogical review for nonexperts. Also, there is an entire community of theorists and mathematicians that is unaware of path integral methods. In particular, I apply the response function formalism stemming from the work of Martin, Siggia, and Rose. Field theory and diagrammatic methods are a nice way to organize perturbation expansions for nonlinear SDEs. I plan to write a review paper on this topic in the next few months and will post it here.
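To give a flavor of what the perturbative machinery buys you, here is a numerical check of a lowest-order result (a toy example of my own, not one from the tutorial): for the nonlinear SDE dx = (−x + εx²)dt + √(2D) dW, the steady state satisfies ⟨x⟩ = ε⟨x²⟩, and since the unperturbed Ornstein-Uhlenbeck variance is D, the first-order prediction is ⟨x⟩ ≈ εD.

```python
import numpy as np

# Toy check of a first-order perturbative result for a nonlinear SDE
# (my own example): dx = (-x + eps*x**2) dt + sqrt(2*D) dW.
# Steady state gives <x> = eps*<x**2>; with the unperturbed
# Ornstein-Uhlenbeck variance equal to D, <x> ~ eps*D at lowest order.
rng = np.random.default_rng(2)
eps, D, dt = 0.1, 0.5, 0.01
n_traj, n_steps, burn = 200, 20_000, 2_000  # ensemble, steps, equilibration

x = np.zeros(n_traj)
total = 0.0
count = 0
for step in range(n_steps):
    x += dt * (-x + eps * x * x) + np.sqrt(2.0 * D * dt) * rng.normal(size=n_traj)
    if step >= burn:
        total += x.sum()
        count += n_traj

mean_x = total / count
print(round(mean_x, 3), "vs eps*D =", eps * D)
```

The diagrammatic expansion organizes exactly these kinds of corrections systematically, order by order in the nonlinearity and the noise strength.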
Addendum: Jan 20, 2011. The review paper can be found here.
Last week, I gave a physics colloquium at the Catholic University of America about recent work on using kinetic theory and field theory approaches to analyze finite-size corrections to networks of coupled oscillators. My slides are here although they are converted from Keynote so the movies don’t work. Coupled oscillators arise in contexts as diverse as the brain, synchronized flashing of fireflies, coupled Josephson junctions, or unstable modes of the Millennium bridge in London. Steve Strogatz’s book Sync gives a popular account of the field. My talk considers the Kuramoto model
$$\dot{\theta}_i = \omega_i + \frac{K}{N}\sum_{j=1}^{N} \sin(\theta_j - \theta_i), \qquad i = 1, \ldots, N,$$

where the frequencies $\omega_i$ are drawn from a fixed distribution $g(\omega)$. The model describes the dynamics of the phases $\theta_i$ of an all-to-all connected network of oscillators. It can be considered to be the weak coupling limit of a set of nonlinear oscillators with different natural frequencies and a synchronizing phase response curve.
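The model is also easy to simulate directly. Here is a minimal sketch (parameters and system size are illustrative): for standard normal frequencies the critical coupling is $K_c = 2/(\pi g(0)) \approx 1.6$, and the order parameter $r = |\langle e^{i\theta}\rangle|$ is large above it and only $O(1/\sqrt{N})$ below it.

```python
import numpy as np

# Direct simulation of the Kuramoto model (illustrative parameters).
# Uses the exact mean-field identity
#   (K/N) * sum_j sin(theta_j - theta_i) = K * r * sin(psi - theta_i),
# where z = r*exp(i*psi) is the complex order parameter.
def order_param(K, N=500, dt=0.05, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=N)                # frequencies from g = N(0, 1)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases
    r_tail = []
    for step in range(steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
        if step >= steps // 2:                # average r over the second half
            r_tail.append(r)
    return float(np.mean(r_tail))

# Above K_c = 2/(pi*g(0)) ~ 1.6 the network partially synchronizes.
print(round(order_param(4.0), 2), round(order_param(0.5), 2))
```

The residual nonzero $r$ below threshold in a run like this is precisely the finite size effect that the kinetic theory is designed to compute.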
In December I gave a talk at the new and beautiful Howard Hughes Janelia Farm Research Campus in Virginia. I talked about my work on how we resolve ambiguous or multiple stimuli. My slides are here although the movies don’t work. The talk is based mostly on work in two papers (C.R. Laing and C.C. Chow, ‘A spiking neuron model for binocular rivalry’, J. Comp. Neurosci. 12, 39-53 (2002), and S. Moldakarimov, J.E. Rollenhagen, C.R. Olson, and C.C. Chow, ‘Competitive dynamics in cortical responses to visual stimuli’, Journal of Neurophysiology 94, 3388-3396 (2005), both of which can be downloaded from here) with some new stuff that Hedi Soula and I have been working on (and mostly off) for the past four years. I’m hoping that we’ll finally finish the paper this year.
When the eyes are presented with multiple or ambiguous stimuli, several things can happen. You can perceive multiple images, which is what you usually do when you look out into the natural world. You can resolve an ambiguity. For example, you could be presented with a dark and shadowy image and your brain decides on what is figure and what is ground. However, if you are presented with something truly ambiguous like the Necker cube, then your perception will be multistable. You’ll see one thing and then another and back again. The most striking form of multistable perception is binocular rivalry, which occurs when each eye is presented with a completely unrelated image, like horizontal stripes to the left eye and vertical stripes to the right eye. For a range of contrasts, your perception will alternate between vertical and horizontal stripes. The alternations are stochastic with a gamma-like distribution for the dominance times, with a mean of a second or so that varies from person to person.
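A cartoon of the underlying mechanism (my own minimal sketch with illustrative parameters, not the spiking model from the papers): two recurrent pools that inhibit each other, each with a slow adaptation variable. The suppressed pool recovers from its adaptation until it escapes the inhibition from the dominant pool and takes over, producing alternations.

```python
import numpy as np

# Cartoon of rivalry alternations (illustrative parameters, not the
# spiking model from the papers): two pools with mutual inhibition and
# slow adaptation. The suppressed pool recovers until its drive escapes
# the inhibition from the dominant pool, and dominance switches.
I, beta, g = 1.0, 0.8, 0.8          # drive, inhibition, adaptation gain
tau_u, tau_a = 0.05, 1.0            # fast activity, slow adaptation
dt, steps = 0.001, 15_000

def S(x):                           # steep sigmoid firing-rate function
    return 1.0 / (1.0 + np.exp(-x / 0.01))

u = np.array([0.9, 0.05])           # pool 1 starts dominant
a = np.array([0.5, 0.4])            # adaptation variables
dom = np.empty(steps, dtype=bool)
for i in range(steps):
    inp = I - beta * u[::-1] - g * a    # each pool inhibited by the other
    u += dt * (-u + S(inp)) / tau_u
    a += dt * (-a + u) / tau_a
    dom[i] = u[0] > u[1]

switches = int(np.sum(dom[1:] != dom[:-1]))
print("alternations:", switches)
```

This deterministic cartoon alternates regularly; adding noise to the inputs is what makes the dominance times stochastic with a gamma-like distribution, as observed.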