Archive for the ‘Papers’ Category

Errata for PLoS Genetics paper

May 4, 2012

I just discovered some glaring typographical errors in the Methods section of our recent paper: Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits.  The corrected Methods can be obtained here.  I will see if I can post an erratum on the PLoS Genetics website as well. The results in the paper are fine.

Edited on May 9 to clear up a possible misconception.

Heritability and GWAS

April 9, 2012

Here is the backstory for the paper in the previous post.  Immediately after the human genome project was completed a decade ago, people set out to discover the genes responsible for diseases. Traditionally, genes had been discovered by tracing their inheritance through families that exhibit the disease. This was a painstaking process – a classic example being the discovery of the gene for Huntington’s disease.  The completion of the human genome project provided a simpler approach.  The first step was to create what is known as a haplotype map, or HapMap. This is a catalog of all the common genomic differences between people.  Human genomes differ from one another by only about 0.1%. These differences include Single Nucleotide Polymorphisms (SNPs), where a given base (A, C, G, T) is changed, and Copy Number Variations (CNVs), where the number of copies of a segment of DNA differs.  There are about 10 million common SNPs.

A genome-wide association study (GWAS) looks for differences in SNPs between people with and without a disease.  The working hypothesis at the time was that common diseases like Alzheimer’s disease or Type II diabetes should be due to differences in common SNPs (i.e. common disease, common variant). People thought that the genes for many of these diseases would be found within a decade. Towards this end, companies like Affymetrix and Illumina began making microarray chips, which consist of tiny wells with snippets of complementary DNA (cDNA) of a small segment of the sequence around each SNP.  SNP variants are found by seeing what types of DNA fragments in genetic samples (e.g. saliva) get bound to the complementary strands in the array. A classic GWAS then considers the differences in SNP variants observed in disease and control groups. For any finite sample there will always be fluctuations, so a statistical criterion must be established to evaluate whether a variant is significant. This is done by computing the probability, or p-value, of the occurrence of a variant due to random chance.  However, in a set of a million SNPs, which is standard for a chip, the probability that at least one SNP is randomly associated with a disease is up to a million-fold higher than for a single test if the SNPs are all independent (the probability of at least one false positive is bounded by the sum of the probabilities of each event; see the Bonferroni correction). Since SNPs are not always independent, this is a very conservative criterion.
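To make the multiple-comparisons point concrete, here is a toy calculation (my own illustrative numbers, not from any particular study) of the per-SNP significance threshold:

```python
# Hypothetical sketch of the multiple-testing correction described above.
n_snps = 1_000_000   # number of SNPs on a typical chip
alpha = 0.05         # desired family-wise false-positive rate

# Bonferroni bound: the chance that at least one of n independent tests is
# a false positive is at most the sum of the individual rates, so each
# single-SNP test must clear alpha / n_snps.
per_snp_threshold = alpha / n_snps
print(per_snp_threshold)   # 5e-08, the conventional genome-wide threshold
```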

The first set of results from GWAS started to be published shortly after I arrived at the NIH in 2004, and they weren’t very promising. A small number of SNPs were found for some common diseases, but they conferred only a very small increase in risk, even when the diseases were thought to be highly heritable. Heritability is a measure of the proportion of phenotypic variation between people explained by genetic variation.  (I’ll give a primer on heritability in a following post.) The one notable exception was age-related macular degeneration, for which five SNPs were found to be associated with a 2 to 3 fold increase in risk. The difference between the heritability of a disease as measured by classical genetic methods and what could be explained by SNPs came to be known as the “missing heritability” problem. I thought this was an interesting puzzle but didn’t think much about it until I saw the results from this paper (summarized on Steve Hsu’s blog), which showed that different European subpopulations could be separated by just projecting onto two principal components of 300,000 SNPs of 6000 people. (Principal components are the eigenvectors of the 6000 by 6000 correlation matrix of the 300,000 dimensional SNP vectors of each person.)  This was a revelation to me because it implied that although single genes did not carry much weight, the collection of all the genes certainly did.  I decided right then that I would show that the missing heritability for obesity was contained in the “pattern” of SNPs rather than in single SNPs.  I did this without really knowing anything at all about genetics or heritability.
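For the curious, the principal components computation amounts to a few lines of linear algebra. Here is a toy version, with a random genotype matrix standing in for real data and sizes shrunk from the 6000 by 300,000 of the actual study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 200, 5000
# Fake genotypes: minor-allele counts of 0, 1, or 2 per SNP per person.
genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)

# Standardize each SNP, then diagonalize the person-by-person correlation
# matrix (the dual formulation mentioned in the text).
G = (genotypes - genotypes.mean(axis=0)) / (genotypes.std(axis=0) + 1e-12)
K = G @ G.T / n_snps                    # n_people x n_people matrix
evals, evecs = np.linalg.eigh(K)        # eigenvalues in ascending order
coords = evecs[:, [-1, -2]]             # project onto the top two components
# With real data, plotting 'coords' separates the subpopulations.
```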

(more…)

New paper on heritability from GWAS

March 29, 2012

Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits

PLoS Genet 8(3): e1002637. doi:10.1371/journal.pgen.1002637

Shashaank Vattikuti, Juen Guo, and Carson C. Chow

Abstract: We used a bivariate (multivariate) linear mixed-effects model to estimate the narrow-sense heritability (h2) and heritability explained by the common SNPs (hg2) for several metabolic syndrome (MetS) traits and the genetic correlation between pairs of traits for the Atherosclerosis Risk in Communities (ARIC) genome-wide association study (GWAS) population. MetS traits included body-mass index (BMI), waist-to-hip ratio (WHR), systolic blood pressure (SBP), fasting glucose (GLU), fasting insulin (INS), fasting triglycerides (TG), and fasting high-density lipoprotein (HDL). We found the percentage of h2 accounted for by common SNPs to be 58% of h2 for height, 41% for BMI, 46% for WHR, 30% for GLU, 39% for INS, 34% for TG, 25% for HDL, and 80% for SBP. We confirmed prior reports for height and BMI using the ARIC population and independently in the Framingham Heart Study (FHS) population. We demonstrated that the multivariate model supported large genetic correlations between BMI and WHR and between TG and HDL. We also showed that the genetic correlations between the MetS traits are directly proportional to the phenotypic correlations.

Author Summary: The narrow-sense heritability of a trait such as body-mass index is a measure of the variability of the trait between people that is accounted for by their additive genetic differences. Knowledge of these genetic differences provides insight into biological mechanisms and hence treatments for diseases. Genome-wide association studies (GWAS) survey a large set of genetic markers common to the population. They have identified several single markers that are associated with traits and diseases. However, these markers do not seem to account for all of the known narrow-sense heritability. Here we used a recently developed model to quantify the genetic information contained in GWAS for single traits and shared between traits. We specifically investigated metabolic syndrome traits that are associated with type 2 diabetes and heart disease, and we found that for the majority of these traits much of the previously unaccounted for heritability is contained within common markers surveyed in GWAS. We also computed the genetic correlation between traits, which is a measure of the genetic components shared by traits. We found that the genetic correlation between these traits could be predicted from their phenotypic correlation.
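To give a flavor of what estimating the heritability explained by common SNPs involves computationally, here is a deliberately simplified sketch. It uses a Haseman-Elston-style regression on a genetic relationship matrix rather than the bivariate REML of the paper, and fake data, so it is a stand-in for the idea rather than the actual method:

```python
import numpy as np

def snp_heritability(genotypes, phenotype):
    """Toy hg^2 estimate: regress phenotype products on genetic relatedness."""
    n, m = genotypes.shape
    Z = (genotypes - genotypes.mean(0)) / (genotypes.std(0) + 1e-12)
    A = Z @ Z.T / m                          # genetic relationship matrix (GRM)
    y = (phenotype - phenotype.mean()) / phenotype.std()
    i, j = np.triu_indices(n, k=1)           # distinct pairs of people
    X = np.column_stack([np.ones(i.size), A[i, j]])
    beta, *_ = np.linalg.lstsq(X, y[i] * y[j], rcond=None)
    return beta[1]                           # slope estimates hg^2

rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(500, 2000)).astype(float)
pheno = rng.standard_normal(500)             # null trait: estimate should be ~0
print(snp_heritability(geno, pheno))
```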

I am very happy that this paper is finally out.  It has been a three-year-long ordeal.  I’ll write about the story and background for this paper later.

New paper in Biophysical Journal

February 14, 2012

Bayesian Functional Integral Method for Inferring Continuous Data from Discrete Measurements

Biophysical Journal, Volume 102, Issue 3, 399-406, 8 February 2012

doi:10.1016/j.bpj.2011.12.046

William J. Heuett, Bernard V. Miller, Susan B. Racette, John O. Holloszy, Carson C. Chow, and Vipul Periwal

Abstract: Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a “model”. An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.
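As a rough illustration of the kind of computation involved, here is a minimal sketch: random-walk Metropolis over a discretized positive curve, scored by a Gaussian likelihood on discrete measurements plus a smoothness prior. The forward operator, noise level, and prior weight are all invented; the paper's actual method (using the exact solution of an associated likelihood as the prior) is considerably more refined:

```python
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_obs = 50, 10
A = rng.random((n_obs, n_grid)) / n_grid          # stand-in forward operator
data = A @ np.ones(n_grid) + 0.01 * rng.standard_normal(n_obs)

def log_post(curve, lam=10.0, sigma=0.01):
    if np.any(curve < 0):                         # positivity constraint
        return -np.inf
    misfit = data - A @ curve
    rough = np.diff(curve)                        # penalize rough curves
    return -0.5 * misfit @ misfit / sigma**2 - 0.5 * lam * rough @ rough

curve, lp = np.ones(n_grid), log_post(np.ones(n_grid))
for _ in range(20000):                            # random-walk Metropolis
    prop = curve + 0.02 * rng.standard_normal(n_grid)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # accept/reject step
        curve, lp = prop, lp_prop
# 'curve' is now a posterior sample of the underlying continuous function.
```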

New paper on steroid-mediated gene induction

January 23, 2012

A follow-up to our PNAS paper on a new theory of steroid-mediated gene induction is now available on PLoS One here.  The title and abstract are below.  In the first paper, we proposed a general mathematical framework to compute how much protein will be produced from a steroid-mediated gene.  It had been noted in the past that the dose response curve of product versus steroid amount follows a Michaelis-Menten curve or first order Hill function (e.g. Product = Amax [S]/(EC50 + [S]), where [S] is the added steroid concentration).  In our previous work, we exploited this fact and showed that a complete closed form expression for the dose response curve could be written down for an arbitrary number of linked reactions.  The formula also indicates how added cofactors could increase or decrease the Amax or EC50.  What we do in this paper is show how this expression can be used to predict the mechanism and the position in the reaction sequence at which a given cofactor acts, by analyzing how two cofactors affect the Amax and EC50.

Deducing the Temporal Order of Cofactor Function in Ligand-Regulated Gene Transcription: Theory and Experimental Verification

Edward J. Dougherty, Chunhua Guo, S. Stoney Simons Jr, Carson C. Chow

Abstract: Cofactors are intimately involved in steroid-regulated gene expression. Two critical questions are (1) the steps at which cofactors exert their biological activities and (2) the nature of that activity. Here we show that a new mathematical theory of steroid hormone action can be used to deduce the kinetic properties and reaction sequence position for the functioning of any two cofactors relative to a concentration limiting step (CLS) and to each other. The predictions of the theory, which can be applied using graphical methods similar to those of enzyme kinetics, are validated by obtaining internally consistent data for pair-wise analyses of three cofactors (TIF2, sSMRT, and NCoR) in U2OS cells. The analysis of TIF2 and sSMRT actions on GR-induction of an endogenous gene gave results identical to those with an exogenous reporter. Thus new tools to determine previously unobtainable information about the nature and position of cofactor action in any process displaying first-order Hill plot kinetics are now available.

New paper on GPCRs

November 28, 2011

New paper in PLoS ONE:

Fatakia SN, Costanzi S, Chow CC (2011) Molecular Evolution of the Transmembrane Domains of G Protein-Coupled Receptors. PLoS ONE 6(11): e27813. doi:10.1371/journal.pone.0027813

Abstract

G protein-coupled receptors (GPCRs) are a superfamily of integral membrane proteins vital for signaling and are important targets for pharmaceutical intervention in humans. Previously, we identified a group of ten amino acid positions (called key positions), within the seven transmembrane domain (7TM) interhelical region, which had high mutual information with each other and many other positions in the 7TM. Here, we estimated the evolutionary selection pressure at those key positions. We found that the key positions of receptors for small molecule natural ligands were under strong negative selection. Receptors naturally activated by lipids had weaker negative selection in general when compared to small molecule-activated receptors. Selection pressure varied widely in peptide-activated receptors. We used this observation to predict that a subgroup of orphan GPCRs not under strong selection may not possess a natural small-molecule ligand. In the subgroup of MRGX1-type GPCRs, we identified a key position, along with two non-key positions, under statistically significant positive selection.
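The mutual information computation at the heart of the earlier work is simple to state. Here is a toy version on a fabricated four-sequence alignment (real alignments, and the choice of key positions, are obviously far larger and more careful):

```python
import numpy as np
from collections import Counter

def mutual_information(col_a, col_b):
    """MI in bits between two columns of a multiple sequence alignment."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * np.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

alignment = ["MKTLV", "MKSLV", "MRTLI", "MRSLI"]   # four toy sequences
cols = list(zip(*alignment))                        # columns of the alignment
print(mutual_information(cols[1], cols[4]))         # K/R tracks V/I: 1.0 bit
```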

New paper

November 17, 2011

A new paper in Physical Review E is now available online here.  In this paper Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements. This implies that, given measurements from a single neuron, one might be able to infer properties of the network it lives in.  We hope to show this in the future. In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here), but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing or integrating out unwanted degrees of freedom.  The path integral formalism gives a nice platform to perform this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian, and multiplicative.  It is probably not something you would have guessed a priori.
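For orientation, here is what the underlying Kuramoto simulation looks like in its mean-field form, tracking a single oscillator in a bath of others (parameter values are arbitrary). The time series collected this way is the kind of single-element data whose effective stochastic description the paper derives analytically:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, dt, steps = 1000, 1.0, 0.01, 5000
omega = rng.standard_normal(N)                # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)          # initial phases

trace = []
for _ in range(steps):
    z = np.exp(1j * theta).mean()             # mean field r * exp(i psi)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    trace.append(theta[0])                    # follow one tagged oscillator
```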

Michael A. Buice¹,² and Carson C. Chow¹
¹Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
²Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln

New paper in The Lancet

August 26, 2011

The Lancet has just published a series of articles on obesity.  They can be found here.  I am an author on the third paper, which covers the work Kevin Hall and I have been doing for the past seven years.  There was a press conference in London yesterday that Kevin attended, and there is a symposium today.  The announcement has since been picked up in the popular press.  Here are some samples:  Science Daily, Mirror, The Australian, and The Chart at CNN.


New paper on binocular rivalry

July 22, 2011
J Neurophysiol. 2011 Jul 20. [Epub ahead of print]

The role of mutual inhibition in binocular rivalry.

Seely J, Chow CC.

Binocular rivalry is a phenomenon that occurs when ambiguous images are presented to each of the eyes. The observer generally perceives just one image at a time, with perceptual switches occurring every few seconds. A natural assumption is that this perceptual mutual exclusivity is achieved via mutual inhibition between populations of neurons that encode for either percept. Theoretical models that incorporate mutual inhibition have been largely successful at capturing experimental features of rivalry, including Levelt's propositions, which characterize perceptual dominance durations as a function of image contrasts. However, basic mutual inhibition models do not fully comply with Levelt's fourth proposition, which states that percepts alternate faster as the stimulus contrasts to both eyes are increased simultaneously. This theory-experiment discrepancy has been taken as evidence against the role of mutual inhibition for binocular rivalry. Here, we show how various biophysically plausible modifications to mutual inhibition models can resolve this problem.

PMID: 21775721  [PubMed - as supplied by publisher]

Paper can be downloaded here.
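For readers who want to play with the basic setup, here is a minimal mutual-inhibition sketch of the general type analyzed in the paper: two percept populations that inhibit each other, with slow adaptation driving the alternations. The firing-rate function and all parameter values are illustrative choices, not the paper's:

```python
import numpy as np

def f(x, theta=0.2, k=0.05):
    """Steep sigmoid firing-rate function."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / k))

def rivalry(I=0.5, beta=0.6, g=0.5, tau_a=20.0, dt=0.05, T=400.0):
    u, a = np.array([1.0, 0.0]), np.zeros(2)   # activities and adaptation
    dominant = []
    for _ in range(int(T / dt)):
        inp = I - beta * u[::-1] - g * a       # cross-inhibition + adaptation
        u += dt * (-u + f(inp))
        a += dt * (u - a) / tau_a              # slow adaptation variables
        dominant.append(int(u[0] > u[1]))
    return np.array(dominant)

print(np.abs(np.diff(rivalry())).sum())        # number of perceptual switches
```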

Review paper on steroid-mediated gene expression

July 22, 2011
Mol Cell Endocrinol. 2011 Jun 1. [Epub ahead of print]

The road less traveled: New views of steroid receptor action from the path of dose-response curves.
Simons SS Jr, Chow CC.

Steroid Hormones Section, NIDDK/CEB, NIDDK, National Institutes of Health, Bethesda, MD, United States.

Conventional studies of steroid hormone action proceed via quantitation of the maximal activity for gene induction at saturating concentrations of agonist steroid (i.e., A(max)). Less frequently analyzed parameters of receptor-mediated gene expression are EC(50) and PAA. The EC(50) is the concentration of steroid required for half-maximal agonist activity and is readily determined from the dose-response curve. The PAA is the partial agonist activity of an antagonist steroid, expressed as percent of A(max) under the same conditions. Recent results demonstrate that new and otherwise inaccessible mechanistic information is obtained when the EC(50) and/or PAA are examined in addition to the A(max). Specifically, A(max), EC(50), and PAA can be independently regulated, which suggests that novel pathways and factors may preferentially modify the EC(50) and/or PAA with little effect on A(max). Other approaches indicate that the activity of receptor-bound factors can be altered without changing the binding of factors to receptor. Finally, a new theoretical model of steroid hormone action not only permits a mechanistically based definition of factor activity but also allows the positioning of when a factor acts, as opposed to binds, relative to a kinetically defined step. These advances illustrate some of the benefits of expanding the mechanistic studies of steroid hormone action to routinely include EC(50) and PAA.

PMID: 21664235  [PubMed - as supplied by publisher]

New paper on insulin’s effect on lipolysis

July 22, 2011
J Clin Endocrinol Metab. 2011 May 18. [Epub ahead of print]

Higher Acute Insulin Response to Glucose May Determine Greater Free Fatty Acid Clearance in African-American Women.

Chow CC, Periwal V, Csako G, Ricks M, Courville AB, Miller BV 3rd, Vega GL, Sumner AE.

Laboratory of Biological Modeling (C.C.C., V.P.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; Departments of Laboratory Medicine (G.C.) and Nutrition (A.B.C.), Clinical Center, National Institutes of Health, and Clinical Endocrinology Branch (M.R., B.V.M., A.E.S.), National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892; and Center for Human Nutrition (G.L.V.), University of Texas Southwestern Medical Center at Dallas, Dallas, Texas 75235.

Context: Obesity and diabetes are more common in African-Americans than whites. Because free fatty acids (FFA) participate in the development of these conditions, studying race differences in the regulation of FFA and glucose by insulin is essential. Objective: The objective of the study was to determine whether race differences exist in glucose and FFA response to insulin. Design: This was a cross-sectional study. Setting: The study was conducted at a clinical research center. Participants: Thirty-four premenopausal women (17 African-Americans, 17 whites) matched for age [36 ± 10 yr (mean ± sd)] and body mass index (30.0 ± 6.7 kg/m(2)). Interventions: Insulin-modified frequently sampled iv glucose tolerance tests were performed with data analyzed by separate minimal models for glucose and FFA. Main Outcome Measures: Glucose measures were insulin sensitivity index (S(I)) and acute insulin response to glucose (AIRg). FFA measures were FFA clearance rate (c(f)). Results: Body mass index was similar but fat mass was higher in African-Americans than whites (P < 0.01). Compared with whites, African-Americans had lower S(I) (3.71 ± 1.55 vs. 5.23 ± 2.74 [×10(-4) min(-1)/(microunits per milliliter)], P = 0.05) and higher AIRg (642 ± 379 vs. 263 ± 206 mU/liter(-1) · min, P < 0.01). Adjusting for fat mass, African-Americans had higher FFA clearance, c(f) (0.13 ± 0.06 vs. 0.08 ± 0.05 min(-1), P < 0.01). After adjusting for AIRg, the race difference in c(f) was no longer present (P = 0.51). For all women, the relationship between c(f) and AIRg was significant (r = 0.64, P < 0.01), but the relationship between c(f) and S(I) was not (r = -0.07, P = 0.71). The same pattern persisted when the two groups were studied separately. Conclusion: African-American women were more insulin resistant than white women, yet they had greater FFA clearance. Acutely higher insulin concentrations in African-American women accounted for higher FFA clearance.

PMID: 21593106  [PubMed - as supplied by publisher]

New paper on estimating food intake from body weight

July 22, 2011
Am J Clin Nutr. 2011 Jul;94(1):66-74. Epub 2011 May 11.

Estimating changes in free-living energy intake and its confidence interval.

Hall KD, Chow CC.

Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD.

Background: Free-living energy intake in humans is notoriously difficult to measure but is required to properly assess outpatient weight-control interventions. Objective: Our objective was to develop a simple methodology that uses longitudinal body weight measurements to estimate changes in energy intake and its 95% CI in individual subjects. Design: We showed how an energy balance equation with 2 parameters can be derived from any mathematical model of human metabolism. We solved the energy balance equation for changes in free-living energy intake as a function of body weight and its rate of change. We tested the predicted changes in energy intake by using weight-loss data from controlled inpatient feeding studies as well as simulated free-living data from a group of "virtual study subjects" that included realistic fluctuations in body water and day-to-day variations in energy intake. Results: Our method accurately predicted individual energy intake changes with the use of weight-loss data from controlled inpatient feeding experiments. By applying the method to our simulated free-living virtual study subjects, we showed that daily weight measurements over periods >28 d were required to obtain accurate estimates of energy intake change with a 95% CI of <300 kcal/d. These estimates were relatively insensitive to initial body composition or physical activity level. Conclusions: Frequent measurements of body weight over extended time periods are required to precisely estimate changes in energy intake in free-living individuals. Such measurements are feasible, relatively inexpensive, and can be used to estimate diet adherence during clinical weight-management programs.

PMCID: PMC3127505 [Available on 2012/7/1]
PMID: 21562087  [PubMed - in process]
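In rough outline (with illustrative parameter values of the right order of magnitude, not the calibrated ones from the paper), the method works like this: a two-parameter energy balance equation, rho * dW/dt = dI - eps * dW, is solved for the intake change dI given the weight trend:

```python
import numpy as np

RHO = 7700.0   # kcal per kg of body-weight change (assumed, illustrative)
EPS = 22.0     # kcal/kg/d change in expenditure per kg gained (assumed)

def intake_change(days, weights_kg):
    """Estimate the change in daily energy intake from a weight time series."""
    slope, _ = np.polyfit(days, weights_kg, 1)      # kg per day trend
    delta_w = weights_kg[-1] - weights_kg[0]        # net weight change, kg
    return RHO * slope + EPS * delta_w              # kcal per day

days = np.arange(28)
rng = np.random.default_rng(4)
weights = 80.0 - 0.05 * days + 0.3 * rng.standard_normal(28)  # fake daily data
print(intake_change(days, weights))   # negative: eating less than baseline
```

The paper's point about the 95% CI follows from the same picture: day-to-day fluctuations in body water put noise on the weight trend, so the slope, and hence the intake change, only becomes precise once the measurement window is long enough.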

Review paper on analyzing dose-response curves

July 22, 2011

I’ve been negligent about announcing new papers so I’ll do it all at once and then perhaps provide details later:

Methods Enzymol. 2011;487:465-83.

Inferring mechanisms from dose-response curves.

Chow CC, Ong KM, Dougherty EJ, Simons SS Jr.

Laboratory of Biological Modeling, NIDDK/CEB, National Institutes of Health, Bethesda, Maryland, USA.

The steady state dose-response curve of ligand-mediated gene induction usually appears to precisely follow a first-order Hill equation (Hill coefficient equal to 1). Additionally, various cofactors/reagents can affect both the potency and the maximum activity of gene induction in a gene-specific manner. Recently, we have developed a general theory for which an unspecified sequence of steps or reactions yields a first-order Hill dose-response curve (FHDC) for plots of the final product versus initial agonist concentration. The theory requires only that individual reactions "dissociate" from the downstream reactions leading to the final product, which implies that intermediate complexes are weakly bound or exist only transiently. We show how the theory can be utilized to make predictions of previously unidentified mechanisms and the site of action of cofactors/reagents. The theory is general and can be applied to any biochemical reaction that has a FHDC.

PMID: 21187235  [PubMed - indexed for MEDLINE]

New paper on TREK channels

January 27, 2011

This paper came out of a collaboration instigated by my former fellow Sarosh Fatakia.  We applied our method using mutual information (see here and here) to a family of potassium channels known as TREK channels.

Structural models of TREK channels and their gating mechanism

A.L. Milac, A. Anishkin, S.N. Fatakia, C.C. Chow, S. Sukharev, and H. R. Guy

Mechanosensitive TREK channels belong to the K2P family of widely distributed, well-modulated potassium channels that uniquely have two similar or identical subunits, each with two TM1-P-TM2 motifs. Our goal is to build viable structural models of TREK channels as representatives of the K2P channel family. The structures available to be used as templates belong to the 2TM channel superfamily. These have low sequence similarity and different structural features: four symmetrically arranged subunits, each having one TM1-P-TM2 motif. Our model building strategy used two subunits of the template (KcsA) to build one subunit of the target (TREK-1).  Our models of the closed channel were adjusted to differ substantially from those of the template, e.g., TM2 of the second repeat is near the axis of the pore whereas TM2 of the first repeat is far from the axis.  Segments linking the two repeats and immediately following the last TM segment were modeled ab initio as α-helices, based on helical periodicities of hydrophobic and hydrophilic residues, of highly conserved and poorly conserved residues, and of statistically related positions from multiple sequence alignments. The models were further refined by 2-fold symmetry-constrained MD simulations using a protocol we developed previously. We also built models of the open state and suggest a possible tension-activated gating mechanism characterized by helical motion with 2-fold symmetry. Our models are consistent with deletion/truncation mutagenesis and thermodynamic analysis of gating described in the accompanying paper.


addendum: link fixed

Path Integral Methods for SDEs

September 30, 2010

I’ve just uploaded a review paper to arXiv on the use of path integral and field theory methods for solving stochastic differential equations.   The paper can be obtained here.  Most books on field theory and path integrals are geared towards applications in particle physics or statistical mechanics.  This paper shows how you can adapt these methods to solving everyday problems in applied mathematics and theoretical biology.  The nice thing about it is that they form an organized way to do perturbative expansions and explicitly compute quantities like moments.  The paper was originally written for a special issue of the journal Methods that fell through.  Our goal is to collate the papers intended for that issue into a book, which will include an expanded version of this paper.
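To give a flavor of the formalism (in the standard Martin-Siggia-Rose form; conventions differ between references, so take this as a sketch), a stochastic differential equation dx/dt = f(x) + g(x)\eta(t) with Gaussian white noise \eta has its moments generated by a path integral over the variable x and a response field \tilde{x}:

Z[J] = \int \mathcal{D}x\, \mathcal{D}\tilde{x}\; e^{-S[x,\tilde{x}] + \int dt\, J x}, \qquad S[x,\tilde{x}] = \int dt \left[ \tilde{x}\left(\dot{x} - f(x)\right) - \frac{1}{2} \tilde{x}^2 g(x)^2 \right]

Expanding e^{-S} around a solvable (e.g. linear) action organizes the perturbation series, and moments follow from derivatives of Z[J] with respect to J.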

The push hypothesis for obesity

September 14, 2010

My blog post on the summary of my SIAM talk on obesity was picked up by Reddit.com.  There is also a story by mathematics writer Barry Cipra in SIAM news (not yet available online).  I thought I would explicitly clarify the “push” hypothesis here and reiterate that this is my opinion and not NIH policy.  What we had done previously was to derive a model of human metabolism that gives a prediction of how much you would weigh given how much you eat.  The model is fully dynamic and can capture how much you gain or lose weight depending on changes in diet or physical activity.  The parameters in the model have been calibrated with physiological measurements and validated in several independent studies of people undergoing weight change due to diet changes.

We then applied this model to the US population.  We used data from the National Health and Nutrition Examination Survey, which has kept track of the body weights of a representative sample of the US population for the past several decades and food availability data from the USDA.  Since the 1970’s, the average US body weight has increased linearly.  The US food availability per person has also increased linearly.  However, when we used the food availability data in the model, it predicted that the weight gain would grow linearly at a faster rate.  The USDA has used surveys and other investigative techniques to try to account for how much food is wasted.  If we calibrate the wastage to 1970 then we predict that the difference between the amount consumed and the amount available progressively increased from 1970 to 2005.  We interpreted this gap to be a progressive increase of food waste.  An alternative hypothesis would be that everyone burned more energy than the model predicted.

This also makes a prediction for the cause of the obesity epidemic although we didn’t make this the main point of the paper.  In order to gain weight, you have to eat more calories than you burn.  There are three possibilities for how this could happen: 1)  We could decrease energy expenditure by reducing physical activity and thus increase weight even if we ate the same amount of food as before,  2) There could be a pull effect where we became hungrier and start to eat more food, and 3)  There could be a push effect where we eat more food than we would have previously because of increased availability.  Now the data rules out hypothesis 1) since we assumed that physical activity stayed constant and still showed an increasing gap between energy intake and energy expenditure.  If anything, we may be exercising more than expected.  Hypothesis 2) would predict that the gap between intake and expenditure should fall and waste should decrease as we utilize more of the available food.  This then leaves us with hypothesis 3) where we are being supplied more food than we need to maintain our body weight and while we are eating some of this excess food, we are wasting more and more of it as well.

The final question, which is outside my realm of expertise, is why food supply increased. The simple answer is that food policy changed dramatically in the 1970’s. Earl Butz was appointed to be the US Secretary of Agriculture in 1971.  At that time food prices were quite high so he decided to change farm policy and vastly increase the production of corn and soybeans.  As a result, the supply of food increased dramatically and the price of food began to drop.   The story of Butz and the consequences of his policy shift is documented in the film King Corn.

Summary of SIAM talk

July 23, 2010

Last Monday I gave a plenary talk at the joint Life Sciences and Annual SIAM meeting.  My slides can be downloaded from a previous post. The talk summarized the work I’ve been doing on obesity and human body weight change for the past six years.  The main idea is that at the most basic level, the body can be modeled as a fuel tank.  You put food into the tank by eating and you use up energy to maintain bodily functions and do physical work.  The difference between food intake rate and energy expenditure rate is the rate of change of your body weight.  In calculating body weight you need to convert energy (e.g. Calories consumed) into mass (e.g. kilograms).  However, the difficulty in doing this is that your body is not homogeneous.  You are comprised of water, bones, minerals, fat, protein and carbohydrates in the form of glycogen.  Each of these quantities has its own energy density (e.g. Calories/kg).  So in order to figure out how much you’ll weigh you need to figure out how the body partitions energy into these different components.
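In schematic form (a sketch, not the full model from the talk): if I is the intake rate, E the expenditure rate, and F and L the fat and lean components with energy densities \rho_F and \rho_L, then energy balance reads

\rho_F \frac{dF}{dt} + \rho_L \frac{dL}{dt} = I - E

and the real modeling work goes into specifying how the imbalance I - E is partitioned between dF/dt and dL/dt.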

(more…)

New paper on Autism

April 9, 2010

S. Vattikuti and C.C. Chow, ‘A computational model for cerebral cortical dysfunction in Autism Spectrum Disorders’, Biol Psychiatry 67:672-678 (2010).  PMID: 19880095

PDF available here.

Shashaank Vattikuti was a medical student who wanted to do a rotation in my lab.  He had done some pediatric rotations and was frustrated by the lack of treatments for autistic children.  He thought that a better biophysical understanding of the neural activity underlying autism was necessary to make progress.  The conventional wisdom is that autism is due to some problem in global connectivity in the brain.  This makes sense because neuroimaging data seem to show that different regions of the brain are less functionally connected in autistics.  However, Shashaank thought that the deficit was probably a local, microscopic one and that the global perturbations were due to the brain’s attempt to compensate for these deficits.

He found papers that showed that cortical structures called minicolumns (on the hundred micron scale) were denser (closer together) in autistics.  This immediately set off a flag in my head because I had previously shown for spiking neurons (e.g. Chow and Coombes) that localized persistent activity (often called a bump) was more stable if the neuron density increased.  Bumps in networks of spiking neurons tended to wander around when the number of neurons was small but stabilized as the density increased.  Shashaank also found genetic and physiological evidence that the synaptic balance is tilted towards an excess of excitation in autistics.

The real breakthrough in making this more than an academic exercise was that Shashaank also found a simple behavioral task to model, in which subjects visually fixate on a point for a certain amount of time and then shift their gaze to a target when instructed.  Most people undershoot the target, which is called hypometria, and also make variable errors, called dysmetria.  The data were mixed, but it seemed that autistics had more hypometria and dysmetria.  The way we implemented this visually guided task in the model was to consider a one dimensional network of excitatory and inhibitory neurons.  The parameters were tuned so that when a stimulus was applied at a given position, a bump of firing neurons would form.  The fixation point was represented by a stimulus applied to a location in the network.  We then stimulated at another location to indicate the saccade target and tracked how the bump moved.  We found that when there was an excess of excitation, hypometria and dysmetria both increased.  However, when the minicolumn structure was perturbed, only hypometria increased.
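Here is a toy rate-model version of the bump mechanism (not our spiking-network code, and with illustrative parameters that may need tuning): a ring of neurons with local excitation and broad inhibition holds a hill of activity after a transient stimulus is removed, and this persistent activity is what a subsequent target stimulus must compete with:

```python
import numpy as np

n, dt = 100, 0.1
x = np.arange(n)
# Distances on a ring, and Mexican-hat-style coupling: local excitation
# minus broad (here uniform) inhibition.
d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
W = 0.2 * np.exp(-d**2 / 18.0) - 0.05

u = np.zeros(n)
for step in range(4000):
    # Transient stimulus at position 25, switched off after 1000 steps.
    stim = np.where(np.abs(x - 25) < 3, 1.0, 0.0) if step < 1000 else 0.0
    u += dt * (-u + np.clip(W @ u + stim, 0.0, 1.0))  # saturating rate dynamics

print(x[np.argmax(u)])   # the bump stays pinned near 25 after stimulus offset
```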

Before Shashaank ran the simulations, I really wasn’t sure what would happen.  Given more excitation, it was plausible that the bump would move more quickly and hence that hypometria would decrease.  Instead, excessive excitation makes bumps more persistent and stable, so it is harder to move them once they are established.  Hence, our result hinges on there being prior neural activity that must be moved in a saccade task.  This is consistent with autistics having more difficulty switching mental tasks.  Our hypothesis is that the underlying source of autistic symptoms is excessive local persistent activity.  This excessive persistence is also why the effective connectivity between brain regions is reduced: each region is less responsive to external inputs.  It also suggests that restoring synaptic balance with medication may alleviate some of these symptoms.  We’re currently trying to devise ways to validate our hypothesis.

References:

Casanova MF, van Kooten IA, Switala AE, van Engeland H, Heinsen H, Steinbusch HW, et al. (2006): Minicolumnar abnormalities in autism. Acta Neuropathol 112:287–303.

C.C. Chow and S. Coombes, ‘Existence and Wandering of bumps in a spiking neural network model’. SIAM Journal on Applied Dynamical Systems 5, 552-574 (2006) [PDF]

New paper on gene induction

March 30, 2010

Karen M. Ong, John A. Blackford, Jr., Benjamin L. Kagan, S. Stoney Simons, Jr., and Carson C. Chow. A theoretical framework for gene induction and experimental comparisons. PNAS 200911095; published ahead of print March 29, 2010, doi:10.1073/pnas.0911095107

This is an open access article so it can be downloaded directly from the PNAS website here.

This is a paper where group theory appears unexpectedly.  The project grew out of a chance conversation between Stoney Simons and myself in 2004.  I had recently arrived at the NIH and was invited to give a presentation at the NIDDK retreat.  I spoke about how mathematics could be applied to obesity research, and I’ll talk about the culmination of that effort in my invited plenary talk at the joint SIAM Life Sciences and Annual meeting this summer.  Stoney gave a summary of his research on steroid-mediated gene induction.  He showed some amazing data on the dose response curve of the amount of gene product induced for a given amount of steroid.  In these experiments, different concentrations of steroid are added to cells in a dish, and after waiting a while, the total amount of gene product in the form of luciferase is measured.  Between steroid and gene product is a lot of messy and complicated biology, starting with steroid binding to a steroid receptor, which translocates to the nucleus while interacting with many molecules on the way. The steroid-receptor complex then binds to DNA, which triggers a transcription cascade involving many other factors, which gives rise to mRNA, which is then translated into luciferase and measured as photons.  Amazingly, the dose response curve after all of this was fit almost perfectly (R^2 > 95%) by a Michaelis-Menten or first order Hill function

P = \frac{A_{max} [S]}{EC_{50} + [S]}

where P is the amount of gene product, [S] is the concentration of steroid, Amax is the maximum possible amount of product, and EC50 is the concentration of steroid giving half the maximum amount of product. Stoney also showed that Amax and EC50 could be moved in various directions by the addition of various cofactors.  I remember thinking to myself during his talk that there must be a nice mathematical explanation for this.  After my talk, Stoney came up to me and asked if I thought I could model his data.  We’ve been in sync like this ever since.
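Here is what that fit looks like in practice, with synthetic stand-ins for the luciferase readouts (the values are invented; only the functional form comes from the work described above):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(S, Amax, EC50):
    """First-order Hill (Michaelis-Menten) dose-response curve."""
    return Amax * S / (EC50 + S)

steroid = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # doses, made up
rng = np.random.default_rng(5)
product = hill(steroid, 100.0, 5.0) * (1 + 0.03 * rng.standard_normal(7))

(amax, ec50), _ = curve_fit(hill, steroid, product, p0=[80.0, 1.0])
resid = product - hill(steroid, amax, ec50)
print(amax, ec50, 1 - resid.var() / product.var())   # R^2 typically > 0.95
```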

(more…)

Paper now in print

February 3, 2010

M.A. Buice, J.D. Cowan, and C.C. Chow, Systematic Fluctuation Expansion for Neural Network Activity Equations, Neural Comp. 22:377-426 (2010) is now in print.  The summary of the paper is here and a PDF can be obtained here.

