Archive for the ‘Papers’ Category

New paper on GPCRs

November 28, 2011

New paper in PLoS ONE:

Fatakia SN, Costanzi S, Chow CC (2011) Molecular Evolution of the Transmembrane Domains of G Protein-Coupled Receptors. PLoS ONE 6(11): e27813. doi:10.1371/journal.pone.0027813

Abstract

G protein-coupled receptors (GPCRs) are a superfamily of integral membrane proteins vital for signaling and are important targets for pharmaceutical intervention in humans. Previously, we identified a group of ten amino acid positions (called key positions), within the seven transmembrane domain (7TM) interhelical region, which had high mutual information with each other and many other positions in the 7TM. Here, we estimated the evolutionary selection pressure at those key positions. We found that the key positions of receptors for small molecule natural ligands were under strong negative selection. Receptors naturally activated by lipids had weaker negative selection in general when compared to small molecule-activated receptors. Selection pressure varied widely in peptide-activated receptors. We used this observation to predict that a subgroup of orphan GPCRs not under strong selection may not possess a natural small-molecule ligand. In the subgroup of MRGX1-type GPCRs, we identified a key position, along with two non-key positions, under statistically significant positive selection.

New paper

November 17, 2011

A new paper in Physical Review E is now available online here.  In this paper, Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements. This implies that, given measurements from a single neuron, one might be able to infer properties of the network in which it lives.  We hope to show this in the future. In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here), but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing or integrating out unwanted degrees of freedom.  The path integral formalism gives a nice platform on which to perform this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian and multiplicative.  It is probably not something you would have guessed a priori.
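For readers unfamiliar with it, the bare Kuramoto model is easy to simulate. Below is a minimal sketch (all parameter values illustrative); it shows only the deterministic model, not the effective Langevin reduction, which requires the path integral machinery of the paper:

import numpy as np

def simulate_kuramoto(N=100, K=1.0, T=50.0, dt=0.01, seed=0):
    # Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(N)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)  # initial phases
    single = np.empty(int(T / dt))          # track one oscillator in the network
    for step in range(single.size):
        # mean-field form: (K/N) sum_j sin(theta_j - theta_i) = K r sin(psi - theta_i)
        z = np.mean(np.exp(1j * theta))
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        single[step] = theta[0]
    return single

trajectory = simulate_kuramoto()

The effective equation in the paper describes the statistics of a single phase, like the tracked oscillator above, once the unknown frequencies and phases of all the other oscillators are averaged over.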

Michael A. Buice (1,2) and Carson C. Chow (1)
(1) Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
(2) Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln

New paper in The Lancet

August 26, 2011

The Lancet has just published a series of articles on obesity.  They can be found here.  I am an author on the third paper, which covers the work that Kevin Hall and I have done over the past seven years.  There was a press conference in London yesterday that Kevin attended, and there is a symposium today.  The announcement has since been picked up in the popular press.  Here are some samples:  Science Daily, Mirror, The Australian, and The Chart at CNN.


New paper on binocular rivalry

July 22, 2011
J Neurophysiol. 2011 Jul 20. [Epub ahead of print]

The role of mutual inhibition in binocular rivalry.

Seely J, Chow CC.

Binocular rivalry is a phenomenon that occurs when ambiguous images are presented to each of the eyes. The observer generally perceives just one image at a time, with perceptual switches occurring every few seconds. A natural assumption is that this perceptual mutual exclusivity is achieved via mutual inhibition between populations of neurons that encode for either percept. Theoretical models that incorporate mutual inhibition have been largely successful at capturing experimental features of rivalry, including Levelt's propositions, which characterize perceptual dominance durations as a function of image contrasts. However, basic mutual inhibition models do not fully comply with Levelt's fourth proposition, which states that percepts alternate faster as the stimulus contrasts to both eyes are increased simultaneously. This theory-experiment discrepancy has been taken as evidence against the role of mutual inhibition for binocular rivalry. Here, we show how various biophysically plausible modifications to mutual inhibition models can resolve this problem.

PMID: 21775721  [PubMed - as supplied by publisher]

Paper can be downloaded here.
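For the curious, here is a minimal sketch of the basic mutual inhibition model with slow adaptation that the abstract refers to (Heaviside firing rate and illustrative parameters; not the specific modifications studied in the paper). It displays exactly the discrepancy at issue: raising the contrast c to both populations lengthens the dominance durations, contrary to Levelt’s fourth proposition.

import numpy as np

def mean_dominance(c=0.5, beta=1.0, g=0.75, tau_a=20.0, T=2000.0, dt=0.01):
    # Two populations with mutual inhibition (strength beta) and slow
    # adaptation (strength g, time constant tau_a); rates slaved to a step.
    u1, u2, a1, a2 = 1.0, 0.0, 0.0, 0.0
    switch_times, t = [], 0.0
    for _ in range(int(T / dt)):
        new_u1 = 1.0 if c - beta * u2 - g * a1 > 0 else 0.0
        new_u2 = 1.0 if c - beta * u1 - g * a2 > 0 else 0.0
        a1 += dt / tau_a * (u1 - a1)   # adaptation builds during dominance
        a2 += dt / tau_a * (u2 - a2)
        if new_u1 != u1:
            switch_times.append(t)     # record each perceptual switch
        u1, u2 = new_u1, new_u2
        t += dt
    return np.mean(np.diff(switch_times))

# higher contrast -> longer dominance durations (slower alternations)
print(mean_dominance(c=0.4), mean_dominance(c=0.6))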

Review paper on steroid-mediated gene expression

July 22, 2011
Mol Cell Endocrinol. 2011 Jun 1. [Epub ahead of print]

The road less traveled: New views of steroid receptor action from the path of dose-response curves.

Simons SS Jr, Chow CC.

Steroid Hormones Section, NIDDK/CEB, NIDDK, National Institutes of Health, Bethesda, MD, United States.

Conventional studies of steroid hormone action proceed via quantitation of the maximal activity for gene induction at saturating concentrations of agonist steroid (i.e., Amax). Less frequently analyzed parameters of receptor-mediated gene expression are the EC50 and PAA. The EC50 is the concentration of steroid required for half-maximal agonist activity and is readily determined from the dose-response curve. The PAA is the partial agonist activity of an antagonist steroid, expressed as percent of Amax under the same conditions. Recent results demonstrate that new and otherwise inaccessible mechanistic information is obtained when the EC50 and/or PAA are examined in addition to the Amax. Specifically, Amax, EC50, and PAA can be independently regulated, which suggests that novel pathways and factors may preferentially modify the EC50 and/or PAA with little effect on Amax. Other approaches indicate that the activity of receptor-bound factors can be altered without changing the binding of factors to receptor. Finally, a new theoretical model of steroid hormone action not only permits a mechanistically based definition of factor activity but also allows the positioning of when a factor acts, as opposed to binds, relative to a kinetically defined step. These advances illustrate some of the benefits of expanding the mechanistic studies of steroid hormone action to routinely include EC50 and PAA.

PMID: 21664235  [PubMed - as supplied by publisher]

New paper on insulin’s effect on lipolysis

July 22, 2011
J Clin Endocrinol Metab. 2011 May 18. [Epub ahead of print]

Higher Acute Insulin Response to Glucose May Determine Greater Free Fatty Acid Clearance in African-American Women.

Chow CC, Periwal V, Csako G, Ricks M, Courville AB, Miller BV 3rd, Vega GL, Sumner AE.

Laboratory of Biological Modeling (C.C.C., V.P.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; Departments of Laboratory Medicine (G.C.) and Nutrition (A.B.C.), Clinical Center, National Institutes of Health, and Clinical Endocrinology Branch (M.R., B.V.M., A.E.S.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; and Center for Human Nutrition (G.L.V.), University of Texas Southwestern Medical Center at Dallas, Dallas, Texas 75235.

Context: Obesity and diabetes are more common in African-Americans than whites. Because free fatty acids (FFA) participate in the development of these conditions, studying race differences in the regulation of FFA and glucose by insulin is essential. Objective: The objective of the study was to determine whether race differences exist in glucose and FFA response to insulin. Design: This was a cross-sectional study. Setting: The study was conducted at a clinical research center. Participants: Thirty-four premenopausal women (17 African-Americans, 17 whites) matched for age [36 ± 10 yr (mean ± sd)] and body mass index (30.0 ± 6.7 kg/m²). Interventions: Insulin-modified frequently sampled iv glucose tolerance tests were performed with data analyzed by separate minimal models for glucose and FFA. Main Outcome Measures: Glucose measures were the insulin sensitivity index (SI) and the acute insulin response to glucose (AIRg). The FFA measure was the FFA clearance rate (cf). Results: Body mass index was similar but fat mass was higher in African-Americans than whites (P < 0.01). Compared with whites, African-Americans had lower SI (3.71 ± 1.55 vs. 5.23 ± 2.74 ×10⁻⁴ min⁻¹ per μU/ml; P = 0.05) and higher AIRg (642 ± 379 vs. 263 ± 206 mU · liter⁻¹ · min; P < 0.01). Adjusting for fat mass, African-Americans had higher FFA clearance, cf (0.13 ± 0.06 vs. 0.08 ± 0.05 min⁻¹; P < 0.01). After adjusting for AIRg, the race difference in cf was no longer present (P = 0.51). For all women, the relationship between cf and AIRg was significant (r = 0.64, P < 0.01), but the relationship between cf and SI was not (r = -0.07, P = 0.71). The same pattern persisted when the two groups were studied separately. Conclusion: African-American women were more insulin resistant than white women, yet they had greater FFA clearance. Acutely higher insulin concentrations in African-American women accounted for their higher FFA clearance.

PMID: 21593106  [PubMed - as supplied by publisher]

New paper on estimating food intake from body weight

July 22, 2011
 Am J Clin Nutr. 2011 Jul;94(1):66-74. Epub 2011 May 11.

Estimating changes in free-living energy intake and its confidence interval.

Hall KD, Chow CC.

Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD.

Background: Free-living energy intake in humans is notoriously difficult to measure but is required to properly assess outpatient weight-control interventions. Objective: Our objective was to develop a simple methodology that uses longitudinal body weight measurements to estimate changes in energy intake and its 95% CI in individual subjects. Design: We showed how an energy balance equation with 2 parameters can be derived from any mathematical model of human metabolism. We solved the energy balance equation for changes in free-living energy intake as a function of body weight and its rate of change. We tested the predicted changes in energy intake by using weight-loss data from controlled inpatient feeding studies as well as simulated free-living data from a group of "virtual study subjects" that included realistic fluctuations in body water and day-to-day variations in energy intake. Results: Our method accurately predicted individual energy intake changes with the use of weight-loss data from controlled inpatient feeding experiments. By applying the method to our simulated free-living virtual study subjects, we showed that daily weight measurements over periods >28 d were required to obtain accurate estimates of energy intake change with a 95% CI of <300 kcal/d. These estimates were relatively insensitive to initial body composition or physical activity level. Conclusions: Frequent measurements of body weight over extended time periods are required to precisely estimate changes in energy intake in free-living individuals. Such measurements are feasible, relatively inexpensive, and can be used to estimate diet adherence during clinical weight-management programs.

PMCID: PMC3127505 [Available on 2012/7/1]
PMID: 21562087  [PubMed - in process]
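To sketch the basic idea (with made-up parameter values and a hypothetical weight series, not those derived in the paper), the two-parameter linear energy balance model can be inverted to give the intake change implied by a weight trajectory:

import numpy as np

RHO = 7700.0     # kcal per kg of body weight change (assumed, illustrative)
EPSILON = 22.0   # kcal per kg per day expenditure slope (assumed, illustrative)

def energy_intake_change(days, weight_kg):
    # Two-parameter linear energy balance model:
    #   RHO * d(dW)/dt = dEI - EPSILON * dW,  dW = weight change from baseline,
    # solved for dEI, the change in daily energy intake.
    dW = weight_kg - weight_kg[0]
    dWdt = np.gradient(dW, days)
    return RHO * dWdt + EPSILON * dW   # kcal/day relative to baseline

days = np.arange(0.0, 57.0, 7.0)       # weekly weigh-ins over 8 weeks
weight = 100.0 - 0.05 * days           # hypothetical slow weight loss
print(energy_intake_change(days, weight))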

Review paper on analyzing dose-response curves

July 22, 2011

I’ve been negligent about announcing new papers so I’ll do it all at once and then perhaps provide details later:

 Methods Enzymol. 2011;487:465-83.

Inferring mechanisms from dose-response curves.

Chow CC, Ong KM, Dougherty EJ, Simons SS Jr.

Laboratory of Biological Modeling, NIDDK/CEB, National Institutes of Health, Bethesda, Maryland, USA.

The steady state dose-response curve of ligand-mediated gene induction usually appears to precisely follow a first-order Hill equation (Hill coefficient equal to 1). Additionally, various cofactors/reagents can affect both the potency and the maximum activity of gene induction in a gene-specific manner. Recently, we have developed a general theory for which an unspecified sequence of steps or reactions yields a first-order Hill dose-response curve (FHDC) for plots of the final product versus initial agonist concentration. The theory requires only that individual reactions "dissociate" from the downstream reactions leading to the final product, which implies that intermediate complexes are weakly bound or exist only transiently. We show how the theory can be utilized to make predictions of previously unidentified mechanisms and the site of action of cofactors/reagents. The theory is general and can be applied to any biochemical reaction that has a FHDC.

PMID: 21187235  [PubMed - indexed for MEDLINE]

New paper on TREK channels

January 27, 2011

This paper came out of a collaboration instigated by my former fellow Sarosh Fatakia.  We applied our method using mutual information (see here and here) to a family of potassium channels known as TREK channels.
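For background, the mutual information between two positions in a multiple sequence alignment can be estimated directly from the empirical amino acid frequencies in the two columns. A minimal sketch on a toy alignment (illustrative only, not our actual pipeline):

import numpy as np
from collections import Counter

def column_mutual_information(col_a, col_b):
    # Mutual information (bits) between two alignment columns, estimated
    # from empirical single-column and joint amino acid frequencies.
    n = len(col_a)
    pa = Counter(col_a)
    pb = Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), count in pab.items():
        p_joint = count / n
        mi += p_joint * np.log2(count * n / (pa[a] * pb[b]))
    return mi

# hypothetical toy columns: perfectly covarying, so MI equals the column entropy
col1 = list("AALLVVAA")
col2 = list("TTSSMMTT")
print(column_mutual_information(col1, col2))   # 1.5 bits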

Structural models of TREK channels and their gating mechanism

A.L. Milac, A. Anishkin, S.N. Fatakia, C.C. Chow, S. Sukharev, and H. R. Guy

Mechanosensitive TREK channels belong to the K2P family of widely distributed, well-modulated potassium channels, which uniquely have two similar or identical subunits, each with two TM1-p-TM2 motifs. Our goal is to build viable structural models of TREK channels, as representatives of the K2P channel family. The structures available to be used as templates belong to the 2TM channel superfamily. These have low sequence similarity and different structural features: four symmetrically arranged subunits, each having one TM1-p-TM2 motif. Our model-building strategy used two subunits of the template (KcsA) to build one subunit of the target (TREK-1).  Our models of the closed channel were adjusted to differ substantially from those of the template, e.g., TM2 of the second repeat is near the axis of the pore whereas TM2 of the first repeat is far from the axis.  Segments linking the two repeats and immediately following the last TM segment were modeled ab initio as α-helices based on helical periodicities of hydrophobic and hydrophilic residues, of highly conserved and poorly conserved residues, and of statistically related positions from multiple sequence alignments. The models were further refined by 2-fold symmetry-constrained MD simulations using a protocol we developed previously. We also built models of the open state and suggest a possible tension-activated gating mechanism characterized by helical motion with 2-fold symmetry. Our models are consistent with the deletion/truncation mutagenesis and thermodynamic analysis of gating described in the accompanying paper.


addendum: link fixed

Path Integral Methods for SDEs

September 30, 2010

I’ve just uploaded a review paper to the arXiv on the use of path integral and field theory methods for solving stochastic differential equations.   The paper can be obtained here.  Most books on field theory and path integrals are geared towards applications in particle physics or statistical mechanics.  This paper shows how you can adapt these methods to solve everyday problems in applied mathematics and theoretical biology.  The nice thing about these methods is that they provide an organized way to do perturbative expansions and to explicitly compute quantities like moments.  The paper was originally written for a special issue of the journal Methods that fell through.  Our goal is to collate the papers intended for that issue into a book, which will include an expanded version of this paper.

The push hypothesis for obesity

September 14, 2010

My blog post summarizing my SIAM talk on obesity was picked up by Reddit.com.  There is also a story by mathematics writer Barry Cipra in SIAM News (not yet available online).  I thought I would explicitly clarify the “push” hypothesis here and reiterate that this is my opinion and not NIH policy.  What we had done previously was to derive a model of human metabolism that predicts how much you will weigh given how much you eat.  The model is fully dynamic and can capture how much weight you gain or lose depending on changes in diet or physical activity.  The parameters in the model have been calibrated with physiological measurements and validated in several independent studies of people undergoing weight change due to diet changes.

We then applied this model to the US population.  We used data from the National Health and Nutrition Examination Survey, which has kept track of the body weights of a representative sample of the US population for the past several decades, and food availability data from the USDA.  Since the 1970s, the average US body weight has increased linearly.  The US food availability per person has also increased linearly.  However, when we used the food availability data in the model, it predicted that body weight would grow linearly at a faster rate than observed.  The USDA has used surveys and other investigative techniques to try to account for how much food is wasted.  If we calibrate the wastage to 1970, then we predict that the difference between the amount consumed and the amount available progressively increased from 1970 to 2005.  We interpreted this gap as a progressive increase of food waste.  An alternative hypothesis would be that everyone burned more energy than the model predicted.
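To make the bookkeeping concrete, here is a toy version of the calculation with entirely made-up numbers (the published model is far more detailed and is fit to the actual NHANES and USDA data):

import numpy as np

RHO = 7700.0       # kcal per kg of weight change (assumed, illustrative)
EPSILON = 25.0     # kcal/kg/day expenditure slope (assumed, illustrative)

years = np.arange(1970, 2006)
available = 2950.0 + 12.0 * (years - 1970)   # hypothetical kcal/person/day available
weight = 72.0 + 0.11 * (years - 1970)        # hypothetical mean body weight (kg)

# Intake implied by the observed weight trend, calibrated so that intake
# equals availability in 1970 (i.e., wastage fixed at its 1970 level):
dWdt = np.gradient(weight, years) / 365.0    # kg per day
implied_intake = available[0] + RHO * dWdt + EPSILON * (weight - weight[0])

gap = available - implied_intake             # grows over time -> more waste
print(gap[::10])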

This also makes a prediction for the cause of the obesity epidemic, although we didn’t make this the main point of the paper.  In order to gain weight, you have to eat more calories than you burn.  There are three possibilities for how this could happen: 1) we could decrease energy expenditure by reducing physical activity and thus gain weight even if we ate the same amount of food as before; 2) there could be a pull effect, where we become hungrier and start to eat more food; and 3) there could be a push effect, where we eat more food than we would have previously because of increased availability.  Now, the data rule out hypothesis 1), since we assumed that physical activity stayed constant and still found an increasing gap between energy intake and energy expenditure.  If anything, we may be exercising more than expected.  Hypothesis 2) would predict that the gap between intake and expenditure should fall and waste should decrease as we utilize more of the available food.  This leaves us with hypothesis 3): we are being supplied more food than we need to maintain our body weight, and while we are eating some of this excess food, we are wasting more and more of it as well.

The final question, which is outside my realm of expertise, is why the food supply increased. The simple answer is that food policy changed dramatically in the 1970s. Earl Butz was appointed US Secretary of Agriculture in 1971.  At that time food prices were quite high, so he changed farm policy to vastly increase the production of corn and soybeans.  As a result, the supply of food increased dramatically and the price of food began to drop.   The story of Butz and the consequences of his policy shift is documented in the film King Corn.

Summary of SIAM talk

July 23, 2010

Last Monday I gave a plenary talk at the joint SIAM Life Sciences and Annual meeting.  My slides can be downloaded from a previous post. The talk summarized the work I’ve been doing on obesity and human body weight change for the past six years.  The main idea is that, at the most basic level, the body can be modeled as a fuel tank.  You put food into the tank by eating, and you use up energy to maintain bodily functions and do physical work.  The difference between the food intake rate and the energy expenditure rate is the rate of change of your body weight.  In calculating body weight you need to convert energy (e.g. Calories consumed) into mass (e.g. kilograms).  However, the difficulty in doing this is that your body is not homogeneous.  You are composed of water, bones, minerals, fat, protein and carbohydrates in the form of glycogen.  Each of these quantities has its own energy density (e.g. Calories/kg).  So in order to figure out how much you’ll weigh, you need to figure out how the body partitions energy into these different components.
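In symbols, a minimal two-compartment version of this bookkeeping (a sketch only; the published model specifies the partitioning in much more detail) is

\rho_F \frac{dF}{dt} + \rho_L \frac{dL}{dt} = I - E

where F and L are fat and lean masses, \rho_F and \rho_L are their energy densities (roughly 9400 and 1800 Calories/kg, respectively), I is the food intake rate and E is the energy expenditure rate.  Specifying how the imbalance I - E is partitioned between dF/dt and dL/dt is exactly the problem described above.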

(more…)

New paper on Autism

April 9, 2010

S. Vattikuti and C.C. Chow, ‘A computational model for cerebral cortical dysfunction in Autism Spectrum Disorders’, Biol Psychiatry 67:672-679 (2010).  PMID: 19880095

PDF available here.

Shashaank Vattikuti was a medical student who wanted to do a rotation in my lab.  He had done some pediatric rotations and was frustrated by the lack of treatments for autistic children.  He thought that a better biophysical understanding of the neural activity underlying autism was necessary to make progress.  The conventional wisdom is that autism is due to some problem in global connectivity in the brain.  This makes sense because neuroimaging data show that different regions of the brain seem to be less functionally connected in autistics.  However, Shashaank thought that the deficit was probably a local microscopic one and that the global perturbations were due to the brain’s attempt to compensate for these deficits.

He found papers showing that cortical structures called minicolumns (on the hundred-micron scale) are denser (closer together) in autistics.  This immediately set off a flag in my head because I had previously shown for spiking neurons (e.g. Chow and Coombes) that localized persistent activity (often called a bump) is more stable when the neuron density is increased.  Bumps in networks of spiking neurons tend to wander around when the number of neurons is small but stabilize as the density increases.  Shashaank also found genetic and physiological evidence that the synaptic balance is tilted towards an excess of excitation in autistics.

The real breakthrough in making this more than an academic exercise was that Shashaank also found a simple behavioral task to model, in which subjects visually fixate on a point for a certain amount of time and then shift their gaze to a target when instructed.  Most people undershoot the target, which is called hypometria, and also make variable errors, called dysmetria.  The data were mixed, but it seemed that autistics had more hypometria and dysmetria.  The way we implemented this visually guided task in the model was to consider a one-dimensional network of excitatory and inhibitory neurons.  The parameters were tuned so that when a stimulus was applied at a given position, a bump of firing neurons would form.  The fixation point was represented by a stimulus applied to a location in the network.  We then stimulated another location to indicate the saccade target and tracked how the bump moved.  We found that when there was an excess of excitation, hypometria and dysmetria both increased.  However, when the minicolumn structure was perturbed, only hypometria increased.
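Here is a minimal rate-model caricature of such a bump network (the actual model in the paper used spiking neurons and a minicolumnar architecture; all parameters below are illustrative):

import numpy as np

N = 200
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

def kernel(d, a_e=1.2, s_e=0.5, a_i=0.8, s_i=1.5):
    # local excitation minus broader inhibition ("Mexican hat" connectivity)
    return a_e * np.exp(-d**2 / (2 * s_e**2)) - a_i * np.exp(-d**2 / (2 * s_i**2))

W = kernel(x[:, None] - x[None, :]) * (2 * np.pi / N)

def run(stim_center, u=None, steps=4000, dt=0.05, amp=2.0):
    # relax the network under a localized stimulus, starting from state u
    u = np.zeros(N) if u is None else u.copy()
    stim = amp * np.exp(-(x - stim_center)**2 / 0.1)
    for _ in range(steps):
        rate = np.tanh(np.maximum(u, 0.0))   # firing-rate nonlinearity
        u += dt * (-u + W @ rate + stim)
    return u

u_fix = run(stim_center=-1.0)            # bump forms at the fixation point
u_sac = run(stim_center=1.0, u=u_fix)    # target stimulus tries to move it
print("bump peak after saccade cue: x =", x[np.argmax(u_sac)])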

Before Shashaank ran the simulations, I really wasn’t sure what would happen.  Given more excitation, it is plausible that the bump would move more quickly and hence that hypometria would decrease with increased excitation.  Instead, the excessive excitation makes bumps more persistent and stable, so it is harder to move them once they are established.   Hence, our result hinges on there being prior neural activity that must be moved in a saccade task.    This is consistent with autistics having more difficulty switching mental tasks.  Our hypothesis is that autistic symptoms arise from excessive local persistent activity.  This excessive persistence is also why the effective connectivity between brain regions is reduced: each region is less responsive to external inputs.    It also suggests that restoring synaptic balance with medication may alleviate some of these symptoms.  We’re currently trying to devise ways to validate our hypothesis.

References:

Casanova MF, van Kooten IA, Switala AE, van Engeland H, Heinsen H, Steinbusch HW, et al. (2006): Minicolumnar abnormalities in autism. Acta Neuropathol 112:287–303.

Chow CC, Coombes S (2006): Existence and wandering of bumps in a spiking neural network model. SIAM Journal on Applied Dynamical Systems 5:552–574. [PDF]

New paper on gene induction

March 30, 2010

Karen M. Ong, John A. Blackford, Jr., Benjamin L. Kagan, S. Stoney Simons, Jr., and Carson C. Chow. A theoretical framework for gene induction and experimental comparisons. PNAS 200911095; published ahead of print March 29, 2010, doi:10.1073/pnas.0911095107

This is an open access article so it can be downloaded directly from the PNAS website here.

This is a paper where group theory appears unexpectedly.  The project grew out of a chance conversation between Stoney Simons and myself in 2004.  I had arrived recently at the NIH and was invited to give a presentation at the NIDDK retreat.  I spoke about how mathematics could be applied to obesity research; I’ll talk about the culmination of that effort in my invited plenary talk at the joint SIAM Life Sciences and Annual meeting this summer.  Stoney gave a summary of his research on steroid-mediated gene induction.  He showed some amazing data on the dose-response curve of the amount of gene product induced for a given amount of steroid.  In these experiments, different concentrations of steroid are added to cells in a dish and, after waiting a while, the total amount of gene product in the form of luciferase is measured.  Between steroid and gene product is a lot of messy and complicated biology, starting with steroid binding to a steroid receptor, which translocates to the nucleus while interacting with many molecules on the way. The steroid-receptor complex then binds to DNA, which triggers a transcription cascade involving many other factors binding to DNA, which gives rise to mRNA, which is then translated into luciferase and measured as photons.  Amazingly, the dose-response curve after all of this was fit almost perfectly (R^2 > 95%) by a Michaelis-Menten or first-order Hill function

P = \frac{A_{max}[S]}{EC_{50} + [S]}

where P is the amount of gene product, [S] is the concentration of steroid, Amax is the maximum possible amount of product, and EC50 is the concentration of steroid giving half-maximal product. Stoney also showed that Amax and EC50 could be moved in various directions by the addition of various cofactors.  I remember thinking to myself during his talk that there must be a nice mathematical explanation for this.  After my talk, Stoney came up to me and asked if I thought I could model his data.  We’ve been in sync like this ever since.
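Fitting this function to data is straightforward. A sketch with hypothetical dose-response numbers:

import numpy as np
from scipy.optimize import curve_fit

def hill1(S, Amax, EC50):
    # first-order Hill (Michaelis-Menten) dose-response curve
    return Amax * S / (EC50 + S)

# hypothetical data: steroid concentration vs. measured gene product
S = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
P = np.array([0.9, 2.4, 6.1, 11.8, 17.0, 19.5, 20.3])

(Amax, EC50), _ = curve_fit(hill1, S, P, p0=[P.max(), np.median(S)])
residuals = P - hill1(S, Amax, EC50)
r2 = 1 - np.sum(residuals**2) / np.sum((P - P.mean())**2)
print(f"Amax={Amax:.2f}, EC50={EC50:.2f}, R^2={r2:.3f}")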

(more…)

Paper now in print

February 3, 2010

M.A. Buice, J.D. Cowan, and C.C. Chow, Systematic Fluctuation Expansion for Neural Network Activity Equations, Neural Comp. 22:377-426 (2010) is now in print.  The summary of the paper is here and a PDF can be obtained here.

New paper on food waste

November 25, 2009

Hall KD, Guo J, Dore M, Chow CC (2009) The Progressive Increase of Food Waste in America and Its Environmental Impact. PLoS ONE 4(11): e7940. doi:10.1371/journal.pone.0007940

This paper started out as a way to understand the obesity epidemic.  Kevin Hall and I developed a reduced model of how food intake is translated into body weight change [1].  We then decided to apply the model to the entire US population.  For the past thirty years, an ongoing study (NHANES) has been taking a representative sample of the US population and making anthropometric measurements such as body weight and height. The UN Food and Agriculture Organization and the USDA have also kept track of how much food is available to the population. We thought it would be interesting to see if the food available accounted for the increase in body weight over the past thirty years.

What we found was that the available food more than accounted for all of the weight gain.  In fact, our calculations showed that the gap between predicted food intake and actual intake has diverged linearly over time.  This “energy gap” could be due to two things: 1) people were actually burning more energy than our model indicated because they were more physically active than expected (we assumed that physical activity stayed constant for the last thirty years), or 2) there has been a progressive increase of food waste.  Given that most people have argued that physical activity has gone down recently, which would make the energy gap even greater, we opted for conclusion 2).   Our estimate is also on the conservative side because we didn’t account for the fact that children eat less than adults on average.

I didn’t want to believe the result at first, but the numbers were the numbers.  We have gone from wasting about 900 kcal per person per day in 1974 to 1400 kcal in 2003.  It takes about 3 kcal to make 1 kcal of food, so the energy in the wasted food amounts to about 4% of total US oil consumption.  The wasted food also accounts for about 25% of all fresh water use.  Ten percent of it could feed Canada.  The press has taken some interest in our result.  Our paper was covered by CBC news, Kevin and I were interviewed by Science and Kevin was interviewed on Voice of America.

[1] Chow CC, Hall KD (2008) The Dynamics of Human Body Weight Change. PLoS Comput Biol 4(3): e1000045. doi:10.1371/journal.pcbi.1000045

New paper on transients

November 20, 2009

A new paper, Competition Between Transients in the Rate of Approach to a Fixed Point, SIAM J. Appl. Dyn. Syst. 8, 1523 (2009), by Judy Day, Jonathan Rubin and myself appears today. The official journal link to the paper is here and the PDF can be obtained here.  This paper came about because of a biological phenomenon known as tolerance.  When the body is exposed to a pathogen or toxin, there is an inflammatory response.  This makes you feel ill and prompts the immune system to mount a defense.  In some cases, if you are hit with a second dose of the toxin, you’ll get a heightened response.  However, there are situations where you can have a decreased response to a second dose, and that is called tolerance.

Judy Day was my last graduate student at Pitt.  When I left for the NIH, Jon Rubin stepped in to advise her.  Her first project on tolerance was to simulate a reduced four-dimensional model of the immune system and see if tolerance could be observed in the model [1]. She found that it did occur in certain parameter regimes. What she showed was that if you watch a particular inflammatory marker, its response can be damped if a preconditioning dose is first administered.

The next step was to understand mathematically how and why it occurred. The result, after several starts and stops, was this paper. We realized that tolerance boiled down to a question about the behavior of transients, i.e. how fast an orbit gets to a stable fixed point starting from different initial conditions. For example, consider two orbits with initial conditions (x1,y1) and (x2,y2) with x1 > x2, where y represents all the other coordinates. Tolerance occurs if the x coordinate of orbit 1 ever becomes smaller than the x coordinate of orbit 2, independent of what the other coordinates do. From continuity arguments, you can show that if tolerance occurs at a single point in space or time then it must occur in a neighbourhood around that point. In our paper, we showed that tolerance can be understood geometrically and that for linear and nonlinear systems with certain general properties tolerance is always possible, although the theorems don’t say which orbits in particular will exhibit it.   However, regions of tolerance can be calculated explicitly in two dimensional linear systems and estimated for nonlinear planar systems.
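As a sketch of the phenomenon in its simplest setting, consider a planar stable linear system (the matrix below is purely illustrative): the orbit that starts with the larger x coordinate eventually ends up with the smaller one.

import numpy as np
from scipy.integrate import solve_ivp

# Two orbits of a stable linear system z' = A z.  "Tolerance" means the orbit
# that starts with the larger x coordinate eventually has the smaller one.
A = np.array([[-1.0, 2.0],
              [0.0, -0.2]])   # illustrative: one fast and one slow decay rate

t_eval = np.linspace(0.0, 20.0, 400)
orbit1 = solve_ivp(lambda t, z: A @ z, (0.0, 20.0), [2.0, -1.0], t_eval=t_eval).y
orbit2 = solve_ivp(lambda t, z: A @ z, (0.0, 20.0), [1.0, 0.5], t_eval=t_eval).y

print("tolerance observed:", np.any(orbit1[0] < orbit2[0]))   # True here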

[1] Day J, Rubin J, Vodovotz Y, Chow CC, Reynolds A, Clermont G, A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration, Journal of theoretical biology, 242(1):237-56 2006.

New paper on liver regeneration

June 19, 2009

I have a new paper that has just appeared in Biophysical Journal entitled “A model of liver regeneration” by Furchgott, Chow and Periwal.  The liver has the remarkable property that if a portion of it is removed, up to a critical fraction, it will grow back to within approximately 10% of its original size.  The restoration does not recreate the original morphology of the liver, but it does restore function.  Although this has been known since ancient times, it is still a puzzle exactly how the liver does this, especially how it knows to stop growing when it is back to its original size and why it does not oscillate.  Our paper proposes a simple model to explain it.

(more…)

Talk at NJIT

June 3, 2009

I was at the FACM ’09 conference held at the New Jersey Institute of Technology for the past two days.  I gave a talk on “Effective theories for neural networks”.  The slides are here.  This was an unsatisfying talk on two counts.  The first was that I didn’t internalize how soon this talk came after the Snowbird conference, so I didn’t have enough time to prepare properly.   I thus ended up giving a talk that provided enough information to be confusing and hopefully thought provoking, but not enough to be understood.   The second problem was that there is a flaw in what I presented.

I’ll give a brief backdrop to the talk for those unfamiliar with neuroscience.  The brain is composed of interconnected neurons and as a proxy for understanding the brain, computational neuroscientists try to understand what a collection of coupled neurons will do.   The state of a neuron is characterized by the voltage across its membrane and the state of its membrane ion channels.  When a neuron is given enough input,  there can be a  massive change of voltage and flow of ions called an action potential.  One of the ions that flows into the cell is calcium, which can trigger the release of neurotransmitter to influence other neurons.  Thus, neuroscientists are highly focused on how and when action potentials or spikes occur.

We can thus model a neural network at many levels.  At the bottom level, there is what I will call a microscopic description, where we write down equations for the dynamics of the voltage and ion channels of each neuron.  These neuron models are sometimes called conductance-based neurons, and the Hodgkin-Huxley neuron is the first and most famous of them.  They usually consist of two to four differential equations but can easily involve many more.  On the other hand, if one is more interested in just the spiking rate, then there is a reduced description for that.  In fact, much of the early progress in mathematically understanding neural networks used rate equations; examples include Wilson and Cowan, Grossberg, Hopfield and Amari.  The question I have always had is: what is the precise connection between a microscopic description and a spike rate or activity description?  If I start with a network of conductance-based neurons, can I derive the appropriate activity-based description?
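For concreteness, here is a minimal sketch of a rate-level description of the Wilson-Cowan type (all parameters illustrative), the kind of activity equation whose precise connection to conductance-based networks is at issue:

import numpy as np

def wilson_cowan(T=100.0, dt=0.1, wee=12.0, wei=10.0, wie=10.0, wii=2.0,
                 Ie=2.0, Ii=0.0):
    # Coupled excitatory (E) and inhibitory (I) population rates with a
    # sigmoidal population gain function.
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    E, I = 0.1, 0.1
    trace = []
    for _ in range(int(T / dt)):
        E += dt * (-E + f(wee * E - wei * I + Ie))
        I += dt * (-I + f(wie * E - wii * I + Ii))
        trace.append(E)
    return np.array(trace)

activity = wilson_cowan()   # excitatory population rate over time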

(more…)

Revised version of paper

May 15, 2009

We’ve just uploaded to the arXiv a revised version of our paper, Systematic fluctuation expansion for neural network activity equations, by Buice, Cowan and Chow. Hopefully, this is more readable (especially the path integral section) than the previous version.

