Archive for the ‘Papers’ Category

New paper on repetition priming and suppression

July 27, 2012

A new paper by Steve Gotts, me, and Alex Martin has officially been published in the journal Cognitive Neuroscience:

Stephen J. Gotts, Carson C. Chow & Alex Martin (2012): Repetition priming and repetition suppression: Multiple mechanisms in need of testing, Cognitive Neuroscience, 3:3-4, 250-259 [PDF]

This paper is a review of the topic but is partially based on the PhD thesis work of Steve Gotts when we were both in Pittsburgh over a decade ago. Steve was a CNBC graduate student at Carnegie Mellon University and came to visit me one day to tell me about his research project to reconcile the psychological phenomenon of repetition priming with a neurophysiological phenomenon called repetition suppression. It is well known that performance improves when you repeat a task. For example, you will respond faster to words on a random list if you have seen the words before. This is called repetition priming. The priming effect can occur over time scales ranging from a few seconds to a lifetime. Steve was focused on the short time scale. A naive explanation for priming is that the pool of neurons coding for the word remains slightly more active, so that when the word reappears those neurons fire more readily. This hypothesis could only be tested when electrophysiological recordings of cells in awake behaving monkeys and functional magnetic resonance imaging data in humans finally became available in the mid-nineties. As is often the case in science, the opposite was observed. Neural responses actually decreased, and this was called repetition suppression. So an interesting question arose: how do you get priming with suppression? Steve had a hypothesis, and it involved work I had done, so he came to see if I wanted to collaborate.

I joined the math department at Pitt in the fall of 1998 (the webpage has a nice picture of Bard Ermentrout, Rodica Curtu and Pranay Goel standing at a white board). I had just come from a postdoc with Nancy Kopell at BU. At that time, the computational neuroscience community was interested in how a population of spiking neurons becomes synchronous. The history of synchrony and coupled oscillators is long with many threads, but I got into the game because of the weekly meetings Nancy organized at BU, which we dubbed “N-group”. People from all over the Boston area would participate. It was quite exciting at that time. One day Xiao-Jing Wang, who was at Brandeis at the time, came to give a seminar on his joint work with Gyorgy Buzsaki on gamma oscillations in the hippocampus, which resulted in this highly cited paper. What the paper was really about was how inhibition could induce synchrony in a network with heterogeneous connections. It had already been shown by a number of people that inhibitory synapses could synchronize a network of spiking neurons. This was somewhat counterintuitive because the conventional wisdom was that inhibition would lead to anti-synchrony. The key ingredient was that the inhibition had to be slow. Xiao-Jing argued from his simulations that the hippocampus had a sweet spot for synchronization in the gamma band (i.e. frequencies around 40 Hz). I was highly intrigued by his result and spent the next two years trying to understand the simulations mathematically. This resulted in four papers:

C.C. Chow, J.A. White, J. Ritt, and N. Kopell, `Frequency control in synchronized networks of inhibitory neurons’, J. Comp. Neurosci. 5, 407-420 (1998). [PDF]

J.A. White, C.C. Chow, J. Ritt, C. Soto-Trevino, and N. Kopell, `Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons’, J. Comp. Neurosci. 5, 5-16 (1998). [PDF]

C.C. Chow, `Phase-locking in weakly heterogeneous neuronal networks’, Physica D 118, 343-370 (1998). [PDF]

C.C. Chow and N. Kopell, `Dynamics of spiking neurons with electrical coupling’, Neural Comp. 12, 1643-1678 (2000). [PDF]

In a nutshell, these papers showed that in a heterogeneous network, neurons will tend to synchronize at a frequency set by the time scale of the synaptic inhibition, which for the GABA_A receptor is around 25 ms, corresponding to 40 Hz. When the firing frequency is too high, the neurons tend to fire asynchronously, and when the frequency is too low, neurons tend to stop firing altogether.
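As a quick back-of-the-envelope check (a sketch of the arithmetic, not anything from the papers themselves), the preferred synchronization frequency is just the reciprocal of the synaptic decay time; the 25 ms value for GABA_A is the one quoted above, the others are illustrative:

```python
# Rough mapping from inhibitory synaptic decay time to the network
# frequency at which synchrony is favored (frequency ~ 1 / decay time).
# The 25 ms GABA_A value comes from the text; the others are illustrative.
decay_times_ms = [10.0, 25.0, 50.0]
freq_hz = {tau: 1000.0 / tau for tau in decay_times_ms}

for tau, f in freq_hz.items():
    print(f"decay {tau:>4.0f} ms -> ~{f:.0f} Hz")
# 25 ms gives the 40 Hz gamma-band value quoted above.
```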

Steve read my papers (and practically everything else) and thought that this might be the resolution of his question. Now, it had also been known for a while that when neurons fire they tend to slow down. This is due to both spike-frequency adaptation and synaptic depression, so repetition suppression is not entirely surprising, since stimulated neurons will tend to fire more slowly. What is surprising is that slowing down makes you respond faster. Steve thought that maybe suppression synchronized neurons and made them more effective at getting downstream neurons to fire. In essence, what he needed was a mechanism that increases the gain of a neuron for a decrease in input, and synchrony was a solution. I helped him work out some technical details and he wrote a very nice thesis showing how this could work and match the data. He then went on to work with Bob Desimone and Alex Martin at NIH. However, we never wrote the theoretical paper from his thesis because of a critique that we never got around to answering. The issue was that if a lowering of network frequency can elicit priming, then why does a reduction in contrast of the primed stimulus, which also reduces network frequency, not do the same? This came up after Steve had left and I turned my attention to other things. The answer is probably that not all frequency reductions are equal. A reduction in contrast lowers the total input to the early part of the visual system, while synaptic depression will have the largest effect on the most active neurons. The ensuing dynamics will likely be different, but we never had the time to fully flesh this out. Although I always wanted to get back to this, the project sat idle for me for about eight years until Steve sent me an email one day saying that he was writing a review with Alex on the topic and wanted to know if I wanted to be included. I was delighted.
The paper covers all the current theories for priming and suppression and is accompanied by commentaries from many of the key players in the field. I’ve just covered a small part of the many interesting issues brought up in the review.

Obesity references

May 23, 2012

I’ve been asked about references to papers on which my New York Times interview is based so I’ve listed them below.  You can find summaries for some of them as well as the slides for my talks and posts related to obesity here.

K.D. Hall, G. Sacks, D. Chandramohan, C.C. Chow, C. Wang, S. Gortmaker, and B. Swinburn, `Quantifying the effect of energy imbalance on body weight change', The Lancet 378:826-37 (2011).

K.D. Hall and C.C. Chow, `Estimating changes of free-living energy intake and its confidence interval', Am J Clin Nutr 94:66-74 (2011).

K.D. Hall, M. Dore, J. Guo, and C.C. Chow, `The progressive increase of food waste in America', PLoS ONE 4(11): e7940 (2009).

C.C. Chow and K.D. Hall, `The dynamics of human body weight change', PLoS Computational Biology, e1000045 (2008).

K.D. Hall, H.L. Bain, and C.C. Chow, `How adaptations of substrate utilization regulate body composition', International Journal of Obesity 31, 1378-83 (2007). [PDF]

V. Periwal and C.C. Chow, `Patterns in food intake correlate with body mass index', American Journal of Physiology: Endocrinology and Metabolism 291, 929-936 (2006). [PDF]

Errata recap

May 9, 2012

I want to stress that there is nothing wrong with the results in the paper. The mistakes are typographical in the sense that the formulas in the Methods were transcribed incorrectly from our code. It was just pointed out to me that the errata could be misinterpreted. What happened was that MS Word kept turning our equations into pictures so that we couldn't edit them, and we had to retype them over and over again. Transcription errors then started to creep in, and we were so adapted to the equations that we no longer noticed them. Not a good excuse, but unfortunately that is what happened.

Errata for PLoS Genetics paper

May 4, 2012

I just discovered some glaring typographical errors in the Methods section of our recent paper: Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits.  The corrected Methods can be obtained here.  I will see if I can include an erratum on the PLoS Genetics website as well. The results in the paper are fine.

edited on May 9 to clear up possible misconception.

Heritability and GWAS

April 9, 2012

Here is the backstory for the paper in the previous post. Immediately after the human genome project was completed a decade ago, people set out to discover the genes responsible for diseases. Traditionally, genes had been discovered by tracing the incidence in families that exhibit the disease. This was a painstaking process – a classic example being the discovery of the gene for Huntington’s disease. The completion of the human genome project provided a simpler approach. The first step was to create what is known as a haplotype map, or HapMap. This is a catalog of all the common genome differences between people. The genomes of different humans differ by only about 0.1%. These differences include Single Nucleotide Polymorphisms (SNPs), where a given base (A, C, G, T) is changed, and Copy Number Variations (CNVs), where there are differences in the number of copies of segments of DNA. There are about 10 million common SNPs.

A genome-wide association study (GWAS) usually looks for differences in SNPs between people with and without a disease. The working hypothesis at that time was that common diseases like Alzheimer’s disease or Type II diabetes should be due to differences in common SNPs (i.e. common disease, common variant). People thought that the genes for many of these diseases would be found within a decade. Towards this end, companies like Affymetrix and Illumina began making microarray chips, which consist of tiny wells with snippets of complementary DNA (cDNA) of a small segment of the sequence around each SNP. SNP variants are found by seeing what types of DNA fragments in genetic samples (e.g. saliva) get bound to the complementary strands in the array. A classic GWAS then considers the differences in SNP variants observed in disease and control groups. For any finite sample, there will always be fluctuations, so a statistical criterion must be established to evaluate whether a variant is significant. This is done by computing the probability, or p-value, of the occurrence of a variant due to random chance. However, in a set of a million SNPs, which is standard for a chip, the probability of some SNP being randomly associated with a disease is up to a million-fold higher if the SNPs are all independent (i.e. it is approximately the sum of the probabilities of each event; see the Bonferroni correction). Since SNPs are not always independent, this is a very conservative criterion.
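The effect of the Bonferroni correction is easy to see in a quick simulation (a sketch: the million-SNP chip size and 0.05 significance level are the conventional values mentioned above; everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

n_snps = 1_000_000   # typical chip size, as in the text
alpha = 0.05         # desired family-wise error rate

# Null p-values: no SNP is truly associated with the disease.
p_values = rng.uniform(size=n_snps)

# Naive thresholding at alpha produces a flood of false positives...
naive_hits = int(np.sum(p_values < alpha))

# ...while the Bonferroni-corrected threshold alpha / n_snps
# (the familiar 5e-8 genome-wide significance level) yields
# essentially none.
bonferroni_threshold = alpha / n_snps
corrected_hits = int(np.sum(p_values < bonferroni_threshold))

print(f"naive hits:     {naive_hits}")     # on the order of 50,000
print(f"corrected hits: {corrected_hits}")  # almost surely 0
```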

The first set of results from GWAS started to be published shortly after I arrived at the NIH in 2004 and they weren’t very promising. A small number of SNPs were found for some common diseases, but they conferred only a very small increase in risk even when the diseases were thought to be highly heritable. Heritability is a measure of the proportion of phenotypic variation between people explained by genetic variation. (I’ll give a primer on heritability in a following post.) The one notable exception was age-related macular degeneration, for which five SNPs were found to be associated with a 2 to 3 fold increase in risk. The difference between the heritability of a disease as measured by classical genetic methods and what was found to be explained by SNPs came to be known as the “missing heritability” problem. I thought this was an interesting puzzle but didn’t think much about it until I saw the results from this paper (summarized on Steve Hsu’s blog), which showed that different European subpopulations could be separated by just projecting onto two principal components of 300,000 SNPs from 6000 people. (Principal components are the eigenvectors of the 6000 by 6000 correlation matrix of the 300,000 dimensional SNP vectors of each person.) This was a revelation to me because it implied that although single genes did not carry much weight, the collection of all the genes certainly did. I decided right then that I would show that the missing heritability for obesity was contained in the “pattern” of SNPs rather than in single SNPs. I did this without really knowing anything at all about genetics or heritability.
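The subpopulation-separation result can be reproduced in miniature on synthetic data (a toy sketch, not the cited paper's analysis: two made-up groups whose allele frequencies differ by an exaggerated amount so the effect is obvious at small scale; real subpopulation differences are far subtler and need many more SNPs and people):

```python
import numpy as np

rng = np.random.default_rng(1)

n_per_group, n_snps = 100, 5000

# Two synthetic subpopulations with slightly different allele frequencies.
# The 0.15 frequency-shift scale is invented for illustration.
p_a = rng.uniform(0.1, 0.9, size=n_snps)
p_b = np.clip(p_a + rng.normal(0.0, 0.15, size=n_snps), 0.05, 0.95)

# Genotypes are 0/1/2 minor-allele counts per person per SNP.
geno_a = rng.binomial(2, p_a, size=(n_per_group, n_snps))
geno_b = rng.binomial(2, p_b, size=(n_per_group, n_snps))
genotypes = np.vstack([geno_a, geno_b]).astype(float)

# Standardize each SNP column, then get principal components via SVD
# of the people-by-SNP matrix.
genotypes -= genotypes.mean(axis=0)
std = genotypes.std(axis=0)
std[std == 0] = 1.0
genotypes /= std
u, s, vt = np.linalg.svd(genotypes, full_matrices=False)
pc1 = u[:, 0] * s[0]   # each person's projection onto PC1

# Even though no single SNP is diagnostic, PC1 alone splits the groups.
sep = abs(pc1[:n_per_group].mean() - pc1[n_per_group:].mean())
spread = pc1[:n_per_group].std() + pc1[n_per_group:].std()
print(f"separation/spread on PC1: {sep / spread:.1f}")
```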


New paper on heritability from GWAS

March 29, 2012

Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits

PLoS Genet 8(3): e1002637. doi:10.1371/journal.pgen.1002637

Shashaank Vattikuti, Juen Guo, and Carson C. Chow

Abstract: We used a bivariate (multivariate) linear mixed-effects model to estimate the narrow-sense heritability (h2) and heritability explained by the common SNPs (hg2) for several metabolic syndrome (MetS) traits and the genetic correlation between pairs of traits for the Atherosclerosis Risk in Communities (ARIC) genome-wide association study (GWAS) population. MetS traits included body-mass index (BMI), waist-to-hip ratio (WHR), systolic blood pressure (SBP), fasting glucose (GLU), fasting insulin (INS), fasting triglycerides (TG), and fasting high-density lipoprotein (HDL). We found the percentage of h2 accounted for by common SNPs to be 58% of h2 for height, 41% for BMI, 46% for WHR, 30% for GLU, 39% for INS, 34% for TG, 25% for HDL, and 80% for SBP. We confirmed prior reports for height and BMI using the ARIC population and independently in the Framingham Heart Study (FHS) population. We demonstrated that the multivariate model supported large genetic correlations between BMI and WHR and between TG and HDL. We also showed that the genetic correlations between the MetS traits are directly proportional to the phenotypic correlations.

Author Summary: The narrow-sense heritability of a trait such as body-mass index is a measure of the variability of the trait between people that is accounted for by their additive genetic differences. Knowledge of these genetic differences provides insight into biological mechanisms and hence treatments for diseases. Genome-wide association studies (GWAS) survey a large set of genetic markers common to the population. They have identified several single markers that are associated with traits and diseases. However, these markers do not seem to account for all of the known narrow-sense heritability. Here we used a recently developed model to quantify the genetic information contained in GWAS for single traits and shared between traits. We specifically investigated metabolic syndrome traits that are associated with type 2 diabetes and heart disease, and we found that for the majority of these traits much of the previously unaccounted for heritability is contained within common markers surveyed in GWAS. We also computed the genetic correlation between traits, which is a measure of the genetic components shared by traits. We found that the genetic correlation between these traits could be predicted from their phenotypic correlation.

I am very happy that this paper is finally out.  It has been a three year long ordeal.  I’ll write about the story and background for this paper later.
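The paper uses a bivariate linear mixed-effects model; as a much cruder illustration of the underlying idea, that common markers collectively explain heritability, here is a toy Haseman-Elston-style regression on simulated data (all numbers invented; this is not the paper's method). Since E[y_i·y_j] ≈ h² · GRM_ij for standardized phenotypes, the regression slope of phenotype products against genetic relatedness recovers the simulated h²:

```python
import numpy as np

rng = np.random.default_rng(2)

n_people, n_snps, h2_true = 1000, 500, 0.5

# Simulate standardized genotypes and an additive polygenic trait.
p = rng.uniform(0.1, 0.9, size=n_snps)
geno = rng.binomial(2, p, size=(n_people, n_snps)).astype(float)
z = (geno - geno.mean(axis=0)) / geno.std(axis=0)

beta = rng.normal(0.0, np.sqrt(h2_true / n_snps), size=n_snps)
y = z @ beta + rng.normal(0.0, np.sqrt(1.0 - h2_true), size=n_people)
y = (y - y.mean()) / y.std()

# Genetic relatedness matrix (GRM) from the same SNPs.
grm = z @ z.T / n_snps

# Haseman-Elston-style regression over pairs i < j:
# E[y_i * y_j] ~ h2 * GRM_ij, so the slope estimates h2.
iu = np.triu_indices(n_people, k=1)
a = grm[iu]
yy = y[iu[0]] * y[iu[1]]
slope = np.cov(a, yy)[0, 1] / np.var(a)
print(f"true h2 = {h2_true}, estimated h2 = {slope:.2f}")
```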

New paper in Biophysical Journal

February 14, 2012

Bayesian Functional Integral Method for Inferring Continuous Data from Discrete Measurements

Biophysical Journal, Volume 102, Issue 3, 399-406, 8 February 2012


William J. Heuett, Bernard V. Miller, Susan B. Racette, John O. Holloszy, Carson C. Chow, and Vipul Periwal

Abstract: Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a “model”. An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.

New paper on steroid-mediated gene induction

January 23, 2012

A follow-up to our PNAS paper on a new theory of steroid-mediated gene induction is now available on PLoS One here.  The title and abstract are below.  In the first paper, we proposed a general mathematical framework to compute how much protein will be produced from a steroid-mediated gene.  It had been noted in the past that the dose response curve of product versus steroid amount follows a Michaelis-Menten curve or first order Hill function (e.g. Product = Amax [S]/(EC50 + [S]), where [S] is the added steroid concentration).  In our previous work, we exploited this fact and showed that a complete closed form expression for the dose response curve could be written down for an arbitrary number of linked reactions.  The formula also indicates how added cofactors could increase or decrease the Amax or EC50.  What we do in this paper is show how this expression can be used to predict the mechanism and the position in the reaction sequence at which a given cofactor acts, by analyzing how two cofactors affect the Amax and EC50.
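The first-order Hill form quoted above is easy to work with numerically. This sketch (with made-up Amax and EC50 values) just checks its defining property, that the product reaches half of Amax when [S] = EC50:

```python
def dose_response(s, amax, ec50):
    """First-order Hill (Michaelis-Menten) dose-response:
    Product = Amax * [S] / (EC50 + [S])."""
    return amax * s / (ec50 + s)

# Illustrative parameter values (not from the paper).
amax, ec50 = 100.0, 5.0

half_max = dose_response(ec50, amax, ec50)
print(half_max)  # exactly Amax / 2 = 50.0

# A cofactor acting at different steps shifts Amax and/or EC50; the
# paper's approach infers the step from how the pair (Amax, EC50) moves.
```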

Deducing the Temporal Order of Cofactor Function in Ligand-Regulated Gene Transcription: Theory and Experimental Verification

Edward J. Dougherty, Chunhua Guo, S. Stoney Simons Jr, Carson C. Chow

Abstract: Cofactors are intimately involved in steroid-regulated gene expression. Two critical questions are (1) the steps at which cofactors exert their biological activities and (2) the nature of that activity. Here we show that a new mathematical theory of steroid hormone action can be used to deduce the kinetic properties and reaction sequence position for the functioning of any two cofactors relative to a concentration limiting step (CLS) and to each other. The predictions of the theory, which can be applied using graphical methods similar to those of enzyme kinetics, are validated by obtaining internally consistent data for pair-wise analyses of three cofactors (TIF2, sSMRT, and NCoR) in U2OS cells. The analysis of TIF2 and sSMRT actions on GR-induction of an endogenous gene gave results identical to those with an exogenous reporter. Thus new tools to determine previously unobtainable information about the nature and position of cofactor action in any process displaying first-order Hill plot kinetics are now available.

New paper on GPCRs

November 28, 2011

New paper in PloS One:

Fatakia SN, Costanzi S, Chow CC (2011) Molecular Evolution of the Transmembrane Domains of G Protein-Coupled Receptors. PLoS ONE 6(11): e27813. doi:10.1371/journal.pone.0027813


G protein-coupled receptors (GPCRs) are a superfamily of integral membrane proteins vital for signaling and are important targets for pharmaceutical intervention in humans. Previously, we identified a group of ten amino acid positions (called key positions), within the seven transmembrane domain (7TM) interhelical region, which had high mutual information with each other and many other positions in the 7TM. Here, we estimated the evolutionary selection pressure at those key positions. We found that the key positions of receptors for small molecule natural ligands were under strong negative selection. Receptors naturally activated by lipids had weaker negative selection in general when compared to small molecule-activated receptors. Selection pressure varied widely in peptide-activated receptors. We used this observation to predict that a subgroup of orphan GPCRs not under strong selection may not possess a natural small-molecule ligand. In the subgroup of MRGX1-type GPCRs, we identified a key position, along with two non-key positions, under statistically significant positive selection.

New paper

November 17, 2011

A new paper in Physical Review E is now available on line here.  In this paper Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. neuron) embedded in a network by averaging over the unknown dynamics of the other elements. This then implies that given measurements from a single neuron, one might be able to infer properties of the network that it lives in.  We hope to show this in the future. In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here) but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. action) are obtained by marginalizing or integrating out unwanted degrees of freedom.  The path integral formalism gives a nice platform to perform this averaging.  The resulting Langevin equation has a noise term that is nonwhite, non-Gaussian and multiplicative.  It is probably not something you would have guessed a priori.
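The Kuramoto model itself is simple to simulate; this sketch is the standard textbook version (not the paper's effective-equation calculation), using the mean-field form of the coupling, and shows a bath of oscillators synchronizing when the coupling is well above critical. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Kuramoto model: theta_i' = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
n = 200
omega = rng.normal(0.0, 0.1, size=n)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, size=n)     # random initial phases
coupling, dt, steps = 1.0, 0.05, 4000

for _ in range(steps):
    # Mean-field form: the sum reduces to r * sin(psi - theta_i),
    # where r * exp(i*psi) is the complex order parameter.
    z = np.exp(1j * theta).mean()
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + coupling * r * np.sin(psi - theta))

order_parameter = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {order_parameter:.2f}")  # near 1: synchronized
```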

Michael A. Buice (1,2) and Carson C. Chow (1)
(1) Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
(2) Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln

New paper in The Lancet

August 26, 2011

The Lancet has just published a series of articles on obesity.  They can be found here.  I am an author on the third paper, which covers the work Kevin Hall and I have done over the past seven years.  There was a press conference in London yesterday that Kevin attended and there is a symposium today.  The announcement has since been picked up in the popular press.  Here are some samples:  Science Daily, Mirror, The Australian, and The Chart at CNN.



New paper on binocular rivalry

July 22, 2011
J Neurophysiol. 2011 Jul 20. [Epub ahead of print]

The role of mutual inhibition in binocular rivalry.

Seely J, Chow CC.

Binocular rivalry is a phenomenon that occurs when ambiguous images are presented to each of the eyes. The observer generally perceives just one image at a time, with perceptual switches occurring every few seconds. A natural assumption is that this perceptual mutual exclusivity is achieved via mutual inhibition between populations of neurons that encode for either percept. Theoretical models that incorporate mutual inhibition have been largely successful at capturing experimental features of rivalry, including Levelt's propositions, which characterize perceptual dominance durations as a function of image contrasts. However, basic mutual inhibition models do not fully comply with Levelt's fourth proposition, which states that percepts alternate faster as the stimulus contrasts to both eyes are increased simultaneously. This theory-experiment discrepancy has been taken as evidence against the role of mutual inhibition for binocular rivalry. Here, we show how various biophysically plausible modifications to mutual inhibition models can resolve this problem.

PMID: 21775721  [PubMed - as supplied by publisher]

Paper can be downloaded here.

Review paper on steroid-mediated gene expression

July 22, 2011
Mol Cell Endocrinol. 2011 Jun 1. [Epub ahead of print]

The road less traveled: New views of steroid receptor action from the path of dose-response curves.

Simons SS Jr, Chow CC.

Steroid Hormones Section, NIDDK/CEB, NIDDK, National Institutes of Health, Bethesda, MD, United States.

Conventional studies of steroid hormone action proceed via quantitation of the maximal activity for gene induction at saturating concentrations of agonist steroid (i.e., A(max)). Less frequently analyzed parameters of receptor-mediated gene expression are EC(50) and PAA. The EC(50) is the concentration of steroid required for half-maximal agonist activity and is readily determined from the dose-response curve. The PAA is the partial agonist activity of an antagonist steroid, expressed as percent of A(max) under the same conditions. Recent results demonstrate that new and otherwise inaccessible mechanistic information is obtained when the EC(50) and/or PAA are examined in addition to the A(max). Specifically, A(max), EC(50), and PAA can be independently regulated, which suggests that novel pathways and factors may preferentially modify the EC(50) and/or PAA with little effect on A(max). Other approaches indicate that the activity of receptor-bound factors can be altered without changing the binding of factors to receptor. Finally, a new theoretical model of steroid hormone action not only permits a mechanistically based definition of factor activity but also allows the positioning of when a factor acts, as opposed to binds, relative to a kinetically defined step. These advances illustrate some of the benefits of expanding the mechanistic studies of steroid hormone action to routinely include EC(50) and PAA.

PMID: 21664235  [PubMed - as supplied by publisher]

New paper on insulin’s effect on lipolysis

July 22, 2011
J Clin Endocrinol Metab. 2011 May 18. [Epub ahead of print]

Higher Acute Insulin Response to Glucose May Determine Greater Free Fatty Acid Clearance in African-American Women.

Chow CC, Periwal V, Csako G, Ricks M, Courville AB, Miller BV 3rd, Vega GL, Sumner AE.

Laboratory of Biological Modeling (C.C.C., V.P.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; Departments of Laboratory Medicine (G.C.) and Nutrition (A.B.C.), Clinical Center, National Institutes of Health, and Clinical Endocrinology Branch (M.R., B.V.M., A.E.S.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; and Center for Human Nutrition (G.L.V.), University of Texas Southwestern Medical Center at Dallas, Dallas, Texas 75235.

Context: Obesity and diabetes are more common in African-Americans than whites. Because free fatty acids (FFA) participate in the development of these conditions, studying race differences in the regulation of FFA and glucose by insulin is essential. Objective: The objective of the study was to determine whether race differences exist in glucose and FFA response to insulin. Design: This was a cross-sectional study. Setting: The study was conducted at a clinical research center. Participants: Thirty-four premenopausal women (17 African-Americans, 17 whites) matched for age [36 ± 10 yr (mean ± sd)] and body mass index (30.0 ± 6.7 kg/m(2)). Interventions: Insulin-modified frequently sampled iv glucose tolerance tests were performed with data analyzed by separate minimal models for glucose and FFA. Main Outcome Measures: Glucose measures were insulin sensitivity index (S(I)) and acute insulin response to glucose (AIRg). FFA measures were FFA clearance rate (c(f)). Results: Body mass index was similar but fat mass was higher in African-Americans than whites (P < 0.01). Compared with whites, African-Americans had lower S(I) (3.71 ± 1.55 vs. 5.23 ± 2.74 [×10(-4) min(-1)/(microunits per milliliter)] (P = 0.05) and higher AIRg (642 ± 379 vs. 263 ± 206 mU/liter(-1) · min, P < 0.01). Adjusting for fat mass, African-Americans had higher FFA clearance, c(f) (0.13 ± 0.06 vs. 0.08 ± 0.05 min(-1), P < 0.01). After adjusting for AIRg, the race difference in c(f) was no longer present (P = 0.51). For all women, the relationship between c(f) and AIRg was significant (r = 0.64, P < 0.01), but the relationship between c(f) and S(I) was not (r = -0.07, P = 0.71). The same pattern persisted when the two groups were studied separately. Conclusion: African-American women were more insulin resistant than white women, yet they had greater FFA clearance. Acutely higher insulin concentrations in African-American women accounted for higher FFA clearance.

PMID: 21593106  [PubMed - as supplied by publisher]

New paper on estimating food intake from body weight

July 22, 2011
 Am J Clin Nutr. 2011 Jul;94(1):66-74. Epub 2011 May 11.

Estimating changes in free-living energy intake and its confidence interval.

Hall KD, Chow CC.

Laboratory of Biological Modeling, National Institute of Diabetes and Digestive
and Kidney Diseases, Bethesda, MD.

Background: Free-living energy intake in humans is notoriously difficult to measure but is required to properly assess outpatient weight-control interventions. Objective: Our objective was to develop a simple methodology that uses longitudinal body weight measurements to estimate changes in energy intake and its 95% CI in individual subjects. Design: We showed how an energy balance equation with 2 parameters can be derived from any mathematical model of human metabolism. We solved the energy balance equation for changes in free-living energy intake as a function of body weight and its rate of change. We tested the predicted changes in energy intake by using weight-loss data from controlled inpatient feeding studies as well as simulated free-living data from a group of "virtual study subjects" that included realistic fluctuations in body water and day-to-day variations in energy intake. Results: Our method accurately predicted individual energy intake changes with the use of weight-loss data from controlled inpatient feeding experiments. By applying the method to our simulated free-living virtual study subjects, we showed that daily weight measurements over periods >28 d were required to obtain accurate estimates of energy intake change with a 95% CI of <300 kcal/d. These estimates were relatively insensitive to initial body composition or physical activity level. Conclusions: Frequent measurements of body weight over extended time periods are required to precisely estimate changes in energy intake in free-living individuals. Such measurements are feasible, relatively inexpensive, and can be used to estimate diet adherence during clinical weight-management programs.

PMCID: PMC3127505 [Available on 2012/7/1]
PMID: 21562087  [PubMed - in process]
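The inversion described in the abstract can be sketched with a linearized two-parameter energy balance model: the change in intake is estimated from the rate of weight change plus the expenditure change that accompanies cumulative weight change. The parameter values below (`rho`, `epsilon`) are illustrative placeholders, not the paper's calibrated parameters.

```python
rho = 9500.0    # energy density of weight change (kcal/kg), assumed value
epsilon = 22.0  # expenditure change per kg of weight change (kcal/kg/d), assumed value

def intake_change(weights, dt=1.0):
    """Estimate the change in daily energy intake (kcal/d), relative to
    baseline, from a list of daily body weights (kg)."""
    w0 = weights[0]
    estimates = []
    for i in range(1, len(weights)):
        dwdt = (weights[i] - weights[i - 1]) / dt  # rate of weight change
        dw = weights[i] - w0                       # cumulative weight change
        estimates.append(rho * dwdt + epsilon * dw)
    return estimates
```

A constant weight record yields zero estimated intake change, while steady weight loss maps to a proportional calorie deficit; the paper's contribution is to do this with proper confidence intervals, which this sketch omits.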

Review paper on analyzing dose-response curves

July 22, 2011

I’ve been negligent about announcing new papers so I’ll do it all at once and then perhaps provide details later:

 Methods Enzymol. 2011;487:465-83.

Inferring mechanisms from dose-response curves.

Chow CC, Ong KM, Dougherty EJ, Simons SS Jr.

Laboratory of Biological Modeling, NIDDK/CEB, National Institutes of Health,
Bethesda, Maryland, USA.

The steady state dose-response curve of ligand-mediated gene induction usually
appears to precisely follow a first-order Hill equation (Hill coefficient equal
to 1). Additionally, various cofactors/reagents can affect both the potency and
the maximum activity of gene induction in a gene-specific manner. Recently, we
have developed a general theory for which an unspecified sequence of steps or
reactions yields a first-order Hill dose-response curve (FHDC) for plots of the
final product versus initial agonist concentration. The theory requires only that
individual reactions "dissociate" from the downstream reactions leading to the
final product, which implies that intermediate complexes are weakly bound or
exist only transiently. We show how the theory can be utilized to make
predictions of previously unidentified mechanisms and the site of action of
cofactors/reagents. The theory is general and can be applied to any biochemical
reaction that has a FHDC.

PMID: 21187235  [PubMed - indexed for MEDLINE]
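The closure property behind the theory can be loosely illustrated in code: a first-order Hill dose-response curve (FHDC) fed into another FHDC is again first-order Hill, with composite parameters. This is a toy sketch of that algebraic fact, not the paper's derivation, and the parameter values are arbitrary.

```python
def fhdc(x, amax, ec50):
    """Steady-state response at agonist concentration x (Hill coefficient 1)."""
    return amax * x / (ec50 + x)

def compose(a1, e1, a2, e2):
    """Amax and EC50 of the FHDC obtained by feeding one FHDC into
    another, i.e. f2(f1(x)) is again first-order Hill."""
    denom = e2 + a1
    return a1 * a2 / denom, e1 * e2 / denom
```

Note that at `x = ec50` the response is exactly half of `amax`, which is why EC50 shifts are a clean readout of where a cofactor acts.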

New paper on TREK channels

January 27, 2011

This paper came out of a collaboration instigated by my former fellow Sarosh Fatakia.  We applied our method using mutual information (see here and here) to a family of potassium channels known as TREK channels.

Structural models of TREK channels and their gating mechanism

A.L. Milac, A. Anishkin, S.N. Fatakia, C.C. Chow, S. Sukharev, and H. R. Guy

Mechanosensitive TREK channels belong to the family of K2P channels, a family of widely distributed, well-modulated channels that uniquely have two similar or identical subunits, each with two TM1-p-TM2 motifs. Our goal is to build viable structural models of TREK channels, as representatives of the K2P channel family. The structures available to be used as templates belong to the 2TM channel superfamily. These have low sequence similarity and different structural features: four symmetrically arranged subunits, each having one TM1-p-TM2 motif. Our model-building strategy used two subunits of the template (KcsA) to build one subunit of the target (TREK-1). Our models of the closed channel were adjusted to differ substantially from those of the template, e.g., TM2 of the second repeat is near the axis of the pore whereas TM2 of the first repeat is far from the axis. Segments linking the two repeats and immediately following the last TM segment were modeled ab initio as α-helices based on helical periodicities of hydrophobic and hydrophilic residues, highly conserved and poorly conserved residues, and statistically related positions from multiple sequence alignments. The models were further refined by 2-fold symmetry-constrained MD simulations using a protocol we developed previously. We also built models of the open state and suggest a possible tension-activated gating mechanism characterized by helical motion with 2-fold symmetry. Our models are consistent with deletion/truncation mutagenesis and thermodynamic analysis of gating described in the accompanying paper.



addendum: link fixed
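The core quantity in the mutual-information method mentioned above can be sketched for a pair of multiple-sequence-alignment columns. This toy version uses empirical frequencies only; the actual method's scoring and corrections differ, and the example columns in the tests are made up for illustration.

```python
from collections import Counter
from math import log2

def mutual_information(col_i, col_j):
    """Mutual information (in bits) between two alignment columns,
    given as equal-length strings of residues."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)          # marginal counts
    pij = Counter(zip(col_i, col_j))                 # joint counts
    mi = 0.0
    for (a, b), c in pij.items():
        # (c/n) * log2( p(a,b) / (p(a) * p(b)) ), with counts rescaled
        mi += (c / n) * log2(c * n / (pi[a] * pj[b]))
    return mi
```

Perfectly covarying columns give positive MI, while columns that vary independently give zero, which is why column pairs with high MI are candidates for structural or functional coupling.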

Path Integral Methods for SDEs

September 30, 2010

I’ve just uploaded a review paper to arXiv on the use of path integral and field theory methods for solving stochastic differential equations.  The paper can be obtained here.  Most books on field theory and path integrals are geared towards applications in particle physics or statistical mechanics.  This paper shows how you can adapt these methods to solving everyday problems in applied mathematics and theoretical biology.  The nice thing about these methods is that they provide an organized way to do perturbative expansions and to explicitly compute quantities like moments.  The paper was originally written for a special issue of the journal Methods that fell through.  Our goal is to collate the papers intended for that issue into a book, which will include an expanded version of this paper.
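Schematically, for an SDE $\dot{x} = f(x) + g(x)\eta(t)$ with white noise $\eta$, the central object in such methods is a moment generating functional over paths (sign conventions vary; this is one common form, written here only to indicate the structure):

```latex
Z[J] = \int \mathcal{D}x \, \mathcal{D}\tilde{x} \;
  \exp\!\left( -\int dt \left[ \tilde{x}\left(\dot{x} - f(x)\right)
  - \tfrac{1}{2} \tilde{x}^{2} g(x)^{2} \right] + \int dt \, J x \right)
```

Moments of $x$ follow from functional derivatives of $Z$ with respect to $J$, and the perturbative expansion is organized around the linear part of $f$, which is what makes the bookkeeping systematic.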

The push hypothesis for obesity

September 14, 2010

My blog post summarizing my SIAM talk on obesity was picked up by the press.  There is also a story by mathematics writer Barry Cipra in SIAM News (not yet available online).  I thought I would explicitly clarify the “push” hypothesis here and reiterate that this is my opinion and not NIH policy.  What we had done previously was to derive a model of human metabolism that predicts how much you would weigh given how much you eat.  The model is fully dynamic and can capture how much weight you gain or lose depending on changes in diet or physical activity.  The parameters in the model have been calibrated with physiological measurements and validated in several independent studies of people undergoing weight change due to diet changes.

We then applied this model to the US population.  We used data from the National Health and Nutrition Examination Survey, which has tracked the body weights of a representative sample of the US population for the past several decades, and food availability data from the USDA.  Since the 1970s, average US body weight has increased linearly.  US food availability per person has also increased linearly.  However, when we used the food availability data in the model, it predicted that body weight should have increased linearly at a faster rate than was observed.  The USDA has used surveys and other investigative techniques to try to account for how much food is wasted.  If we calibrate the wastage to 1970, then we predict that the difference between the amount consumed and the amount available progressively increased from 1970 to 2005.  We interpreted this gap as a progressive increase in food waste.  An alternative hypothesis would be that everyone burned more energy than the model predicted.

This also makes a prediction for the cause of the obesity epidemic, although we didn’t make this the main point of the paper.  In order to gain weight, you have to eat more calories than you burn.  There are three possibilities for how this could happen: 1) we could decrease energy expenditure by reducing physical activity and thus gain weight even while eating the same amount of food as before; 2) there could be a pull effect, where we became hungrier and started to eat more food; and 3) there could be a push effect, where we eat more food than we would have previously because of increased availability.  The data rule out hypothesis 1), since we assumed that physical activity stayed constant and still found an increasing gap between energy intake and energy expenditure.  If anything, we may be exercising more than expected.  Hypothesis 2) would predict that the gap between availability and intake should fall and waste should decrease as we utilize more of the available food.  This leaves us with hypothesis 3): we are being supplied more food than we need to maintain our body weight, and while we are eating some of this excess food, we are wasting more and more of it as well.

The final question, which is outside my realm of expertise, is why food supply increased. The simple answer is that food policy changed dramatically in the 1970’s. Earl Butz was appointed to be the US Secretary of Agriculture in 1971.  At that time food prices were quite high so he decided to change farm policy and vastly increase the production of corn and soybeans.  As a result, the supply of food increased dramatically and the price of food began to drop.   The story of Butz and the consequences of his policy shift is documented in the film King Corn.

Summary of SIAM talk

July 23, 2010

Last Monday I gave a plenary talk at the joint SIAM Life Sciences and Annual meeting.  My slides can be downloaded from a previous post.  The talk summarized the work I’ve been doing on obesity and human body weight change for the past six years.  The main idea is that at the most basic level, the body can be modeled as a fuel tank.  You put food into the tank by eating, and you use up energy to maintain bodily functions and do physical work.  The difference between the food intake rate and the energy expenditure rate gives the rate of change of your body weight.  In calculating body weight, you need to convert energy (e.g. Calories consumed) into mass (e.g. kilograms).  The difficulty in doing this is that your body is not homogeneous.  You are composed of water, bones, minerals, fat, protein and carbohydrates in the form of glycogen.  Each of these components has its own energy density (e.g. Calories/kg).  So in order to figure out how much you’ll weigh, you need to figure out how the body partitions energy into these different components.
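The fuel-tank picture can be sketched as a one-compartment Euler integration of rho * dW/dt = intake - expenditure(W), with expenditure rising linearly with weight. This deliberately ignores the body-composition partitioning the talk emphasizes, and the parameter values are illustrative assumptions, not the calibrated model.

```python
RHO = 9500.0  # energy density of weight change (kcal/kg), assumed value
EPS = 22.0    # extra daily expenditure per kg gained (kcal/kg/d), assumed value

def simulate_weight(w0, e0, intake, days, dt=1.0):
    """Body weight (kg) after `days`, starting from weight w0 (kg) with
    baseline expenditure e0 (kcal/d) and constant intake (kcal/d)."""
    w = w0
    for _ in range(int(days / dt)):
        expenditure = e0 + EPS * (w - w0)  # expenditure tracks weight
        w += dt * (intake - expenditure) / RHO
    return w
```

Because expenditure rises with weight, a fixed increase in intake does not cause unbounded gain: weight relaxes to a new steady state where intake and expenditure balance, on a time scale of roughly RHO/EPS days.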


