Path Integral Methods for SDEs

I’ve just uploaded a review paper to arXiv on the use of path integral and field theory methods for solving stochastic differential equations. The paper can be obtained here. Most books on field theory and path integrals are geared towards applications in particle physics or statistical mechanics. This paper shows how you can adapt these methods to solve everyday problems in applied mathematics and theoretical biology. The nice thing about these methods is that they provide an organized way to do perturbative expansions and to explicitly compute quantities like moments. The paper was originally written for a special issue of the journal Methods that fell through. Our goal is to collate the papers intended for that issue into a book, which will include an expanded version of this paper.
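To give a flavor of the formalism, here is the simplest example I know, written in one common sign convention (this is my own illustration; conventions differ between references). For the Ornstein-Uhlenbeck process

dx = -a x\,dt + \sqrt{2D}\,dW,

the Martin-Siggia-Rose construction packages all the moments into a generating functional over paths,

Z[J] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\; \exp\!\left(-\int dt\,\big[\tilde{x}(\dot{x}+a x) - D\tilde{x}^{2} - J x\big]\right),

and moments are obtained by functional differentiation with respect to the source J. For this linear example the Gaussian integral can be done exactly, giving the stationary two-point function

\langle x(t)\,x(t')\rangle = \frac{D}{a}\,e^{-a|t-t'|},

and for nonlinear SDEs the same machinery organizes a systematic perturbative (diagrammatic) expansion around a solvable Gaussian theory.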


Tononi in the Times

The New York Times had a fun article on neuroscientist  Giulio Tononi last week.  Tononi is one of the most creative researchers in cognitive science right now.   Many of my views on consciousness, which I partly summarized here,  have been strongly influenced by his ideas.

Here is an excerpt from the article:

New York Times: Consciousness, Dr. Tononi says, is nothing more than integrated information. Information theorists measure the amount of information in a computer file or a cellphone call in bits, and Dr. Tononi argues that we could, in theory, measure consciousness in bits as well. When we are wide awake, our consciousness contains more bits than when we are asleep.

For the past decade, Dr. Tononi and his colleagues have been expanding traditional information theory in order to analyze integrated information. It is possible, they have shown, to calculate how much integrated information there is in a network. Dr. Tononi has dubbed this quantity phi, and he has studied it in simple networks made up of just a few interconnected parts. How the parts of a network are wired together has a big effect on phi. If a network is made up of isolated parts, phi is low, because the parts cannot share information.

But simply linking all the parts in every possible way does not raise phi much. “It’s either all on, or all off,” Dr. Tononi said. In effect, the network becomes one giant photodiode.

Networks gain the highest phi possible if their parts are organized into separate clusters, which are then joined. “What you need are specialists who talk to each other, so they can behave as a whole,” Dr. Tononi said. He does not think it is a coincidence that the brain’s organization obeys this phi-raising principle.

Dr. Tononi argues that his Integrated Information Theory sidesteps a lot of the problems that previous models of consciousness have faced. It neatly explains, for example, why epileptic seizures cause unconsciousness. A seizure forces many neurons to turn on and off together. Their synchrony reduces the number of possible states the brain can be in, lowering its phi.

Tononi is an NIH Pioneer Award winner this year, and his talk this coming Thursday at 9:00 EDT will be webcast. The whole slate of Pioneer Award winners is quite impressive.

Rethinking clinical trials

Today’s New York Times has a poignant article about the cold side of randomized clinical trials. It describes the case of two cousins with melanoma and a promising new drug to treat it. One cousin was given the drug and is still living, while the other was assigned to the control arm of the trial and is now dead. The new treatment seems to work better, but the drug company and trial investigators want to complete the trial to prove that it actually extends life, which effectively means that the control-arm patients have to die before the treatment-arm patients.

Ever since my work on modeling sepsis a decade ago, I have felt that we need to come up with a new paradigm for testing the efficacy of treatments. Aside from the ethical concern of depriving a patient of a treatment just to get better statistics, I felt that we would hit a combinatorial limit where it would simply be physically impossible to test a new generation of treatments. Currently, a drug is tested in three phases before it is approved for use. Phase I is a small trial that tests the safety of the drug in humans. Phase II then tests the efficacy of the drug in a larger group. If the drug passes these two phases, it goes to Phase III, a large randomized clinical trial run on many patients at multiple centers. It takes a long time and a lot of money to make it through all of these stages.
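To put a number on the combinatorial worry, here is a back-of-the-envelope calculation in Python (the library size and trial size are hypothetical round numbers of my own choosing, not taken from any actual program):

from math import comb

n_drugs = 50                # hypothetical library of candidate drugs
patients_per_trial = 1000   # rough head count for one Phase III trial

for k in (1, 2, 3):         # single drugs, pairs, and triples
    n_combos = comb(n_drugs, k)
    print(f"{k}-drug combinations: {n_combos:,} "
          f"-> {n_combos * patients_per_trial:,} patients to test them all")

Even this modest library gives 1,225 pairs and 19,600 triples, so testing every combination therapy in its own trial would require on the order of twenty million patients, which is the physical impossibility I had in mind.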


The push hypothesis for obesity

My blog post summarizing my SIAM talk on obesity was picked up by Reddit.com. There is also a story by mathematics writer Barry Cipra in SIAM News (not yet available online). I thought I would explicitly clarify the “push” hypothesis here and reiterate that this is my opinion and not NIH policy. What we had done previously was to derive a model of human metabolism that predicts how much you would weigh given how much you eat. The model is fully dynamic and can capture how much weight you gain or lose in response to changes in diet or physical activity. The parameters in the model have been calibrated with physiological measurements and validated in several independent studies of people undergoing weight change due to changes in diet.
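For readers who just want the gist, here is a deliberately stripped-down caricature of an energy balance model (my own sketch with rough, illustrative parameter values; the published model is considerably more detailed): weight changes in proportion to the difference between intake and expenditure, and expenditure grows with body weight, so a sustained change in intake moves you to a new steady-state weight rather than producing unbounded gain.

# Minimal energy-balance caricature (illustrative only, not the published model).
# dW/dt = (intake - expenditure) / rho, with expenditure roughly linear in weight.

rho = 7700.0        # kcal stored per kg of body tissue (rough rule-of-thumb value)
a, b = 500.0, 25.0  # expenditure ~ a + b*W kcal/day (made-up illustrative numbers)

def weight_after(intake_kcal_per_day, w0_kg, days, dt=1.0):
    """Forward-Euler integration of dW/dt = (I - (a + b*W)) / rho."""
    w = w0_kg
    for _ in range(int(days / dt)):
        w += dt * (intake_kcal_per_day - (a + b * w)) / rho
    return w

# With these numbers a 70 kg person eating 2250 kcal/day is at steady state.
# Add 250 kcal/day and the weight drifts toward a new steady state near 80 kg,
# taking a few years to get most of the way there.
print(weight_after(2500.0, 70.0, days=5 * 365))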

We then applied this model to the US population. We used data from the National Health and Nutrition Examination Survey, which has tracked the body weights of a representative sample of the US population for the past several decades, together with food availability data from the USDA. Since the 1970s, the average US body weight has increased linearly. The US food availability per person has also increased linearly. However, when we used the food availability data as the intake in the model, it predicted that body weight would still grow linearly but at a faster rate than observed. The USDA has used surveys and other investigative techniques to try to account for how much food is wasted. If we calibrate the wastage to 1970, then we predict that the difference between the amount consumed and the amount available progressively increased from 1970 to 2005. We interpreted this gap as a progressive increase in food waste. An alternative hypothesis would be that everyone burned more energy than the model predicted.
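The accounting behind that gap is easy to sketch: run the same kind of energy balance backwards to infer the intake needed to produce the observed weight trend, then subtract it from availability. Here is the idea in code, reusing the caricature parameters above and with completely made-up linear trends standing in for the NHANES and USDA data:

# Inferred intake from an observed weight trend vs. food availability.
rho, a, b = 7700.0, 500.0, 25.0     # same illustrative parameters as above

def inferred_intake(w_kg, dw_dt_kg_per_day):
    """Intake implied by energy balance: I = rho*dW/dt + (a + b*W)."""
    return rho * dw_dt_kg_per_day + (a + b * w_kg)

# Made-up trends: average weight 70 -> 80 kg and availability 3000 -> 3800
# kcal/person/day between 1970 and 2005 (for illustration only).
for year in range(1970, 2006, 5):
    frac = (year - 1970) / 35.0
    w, avail = 70.0 + 10.0 * frac, 3000.0 + 800.0 * frac
    intake = inferred_intake(w, dw_dt_kg_per_day=10.0 / (35.0 * 365.0))
    print(year, round(intake), round(avail), "gap:", round(avail - intake))

With numbers like these the inferred intake rises much more slowly than availability, so the gap, which we read as waste, widens steadily; the real calculation uses the actual survey and availability data rather than straight lines.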

This also makes a prediction for the cause of the obesity epidemic, although we didn’t make this the main point of the paper. In order to gain weight, you have to eat more calories than you burn. There are three possibilities for how this could happen: 1) we could decrease energy expenditure by reducing physical activity and thus gain weight even if we ate the same amount of food as before, 2) there could be a pull effect where we become hungrier and start to eat more food, and 3) there could be a push effect where we eat more food than we would have previously because of increased availability. The data rule out hypothesis 1), since we assumed that physical activity stayed constant and still found an increasing gap between energy intake and energy expenditure. If anything, we may be exercising more than expected. Hypothesis 2) would predict that the gap between intake and expenditure should fall and waste should decrease as we utilize more of the available food. That leaves hypothesis 3): we are being supplied with more food than we need to maintain our body weight, and while we are eating some of this excess food, we are wasting more and more of it as well.

The final question, which is outside my realm of expertise, is why the food supply increased. The simple answer is that food policy changed dramatically in the 1970s. Earl Butz was appointed US Secretary of Agriculture in 1971. At that time food prices were quite high, so he changed farm policy to vastly increase the production of corn and soybeans. As a result, the supply of food increased dramatically and the price of food began to drop. The story of Butz and the consequences of his policy shift are documented in the film King Corn.

Talk at Pitt

I visited the University of Pittsburgh today to give a colloquium. I was supposed to have come in February, but my flight was cancelled because of a snow storm. This was not the really big snow storm that shut down Washington, DC and Baltimore for a week but a smaller one that hit New England and spared the DC area. My flight was on Southwest, and I presume that their flight system is so tightly coupled, with planes circulating around the country in a “just in time” fashion, that a disturbance in one region propagates to the rest of the network. So while other airlines only had cancellations in New England, Southwest flights were cancelled for the day all across the US. It seems that there is a trade-off between business efficiency and robustness. I drove this time. My talk, which I have given several times already, was on finite size effects in the Kuramoto model. I have revised the slides on pedagogical grounds, and they can be found here.
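For anyone who has not seen it, the Kuramoto model couples phase oscillators through the sine of their phase differences, and the finite size effects in the talk concern fluctuations of its order parameter. Here is a toy simulation (my own sketch, not the analysis in the slides):

import numpy as np

def mean_order_parameter(N=200, K=1.0, T=100.0, dt=0.01, seed=0):
    """Simulate dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
    and return the time average of r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(N)            # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)  # initial phases
    rs = []
    for _ in range(int(T / dt)):
        z = np.exp(1j * theta).mean()         # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))  # mean-field form of the coupling
        rs.append(r)
    return float(np.mean(rs[len(rs) // 2:]))  # discard the transient

# Below the synchronization transition the infinite-N order parameter is zero,
# but a finite network shows residual fluctuations that shrink roughly like 1/sqrt(N).
for N in (50, 200, 800):
    print(N, mean_order_parameter(N=N))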

Passwords

I have always wondered whether there is any evidence that strong passwords (i.e. no dictionary words, a mix of numbers, cases, and symbols, changed frequently with no repeats) are really necessary for computer systems. The theory is that you need a password that cannot be broken by repeated guessing. However, I’d like to know how many systems are actually broken into using a brute force attack. My guess is that break-ins occur by other means, like stealing passwords with spyware. This article in the New York Times confirms my suspicion that we could make password selection much less onerous for the user without sacrificing security.
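For what it’s worth, the arithmetic behind the strong-password rules is easy to do yourself. The sketch below uses a round number of my own choosing for the guess rate (it is not a measurement of any real system); it shows that length does most of the work, and in any case the point of the article is that attackers rarely bother with exhaustive guessing at all.

# Back-of-the-envelope brute-force search spaces (illustrative guess rate only).
policies = {
    "8 lowercase letters": 26 ** 8,
    "8 chars, mixed case + digits + symbols": 94 ** 8,
    "12 lowercase letters": 26 ** 12,
}
guesses_per_second = 1e9   # made-up rate for a well-resourced offline attacker

for name, space in policies.items():
    days = space / guesses_per_second / 86400
    print(f"{name}: {space:.2e} passwords, ~{days:,.1f} days to exhaust")

With these made-up numbers, 12 plain lowercase letters hold out longer than 8 characters of full symbol soup, which is consistent with the suggestion that we could relax the more onerous rules without giving much away.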