New paper on population genetics

James Lee and I just published a paper entitled “The causal meaning of Fisher’s average effect” in the journal Genetics Research. The paper can be obtained here. This paper is the brainchild of James; I just helped him out with some of the proofs. James’s take on the paper can be read here. The paper resolves a puzzle, noted by population geneticist D.S. Falconer three decades ago, about the incommensurability of Ronald Fisher’s two definitions of the average effect.

Fisher was well known for both brilliance and obscurity, and people have long puzzled over the meaning of some of his work. The concept of the average effect is extremely important for population genetics, but it is not very well understood. The field of population genetics was invented in the early twentieth century by luminaries such as Fisher, Sewall Wright, and J.B.S. Haldane to reconcile Darwin’s theory of evolution with Mendelian genetics. It is a very rich field that has been somewhat forgotten, and people in mathematical, systems, computational, and quantitative biology really should be fully acquainted with it.

For those who are unacquainted with genetics, here is a quick primer for understanding the paper. Certain traits, like eye colour or the ability to roll your tongue, are affected by your genes. Prior to the discovery of the structure of DNA, it was not clear what genes were, except that they were the primary discrete unit of genetic inheritance. These days the term usually refers to some region on the genome. Mendel’s great insight was that genes come in pairs, which we now know correspond to the two copies of each of our 23 chromosomes. A variant of a particular gene is called an allele. Traits can depend on genes (or more accurately genetic loci) linearly or nonlinearly. Consider a quantitative trait that depends on a single genetic locus with two alleles, which we will call a and A. This means that a person will have one of three possible genotypes: 1) homozygous in A (i.e. two A alleles), 2) heterozygous (one of each), or 3) homozygous in a (i.e. no A alleles). If the locus is linear, then a plot of the trait measure (e.g. height) against the number of A alleles will be a straight line. For example, suppose allele A contributes a tenth of a centimetre to height. Then people with one A allele will be on average one tenth of a centimetre taller than those with no A alleles, and those with two A alleles will be two tenths taller. The familiar notion of dominance is a nonlinear effect. For example, the ability to roll your tongue is controlled by a single gene. There is a dominant rolling allele and a recessive nonrolling allele: if you have at least one rolling allele, you can roll your tongue.
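The linear and dominant cases can be sketched with a toy genotype–phenotype map, indexed by the number of copies of the relevant allele. The 170 cm baseline and the 0.1 cm per-allele effect are hypothetical values for illustration:

```python
def additive_height(n_A, baseline=170.0, effect=0.1):
    """Linear (additive) locus: each A allele adds `effect` cm to height."""
    return baseline + effect * n_A

def can_roll_tongue(n_rolling):
    """Dominant locus: a single rolling allele is enough to roll your tongue."""
    return n_rolling >= 1

# The three genotypes correspond to carrying 0, 1, or 2 copies of the allele.
for n in range(3):
    print(n, additive_height(n), can_roll_tongue(n))
```

The additive map is a straight line in the allele count; the dominance map is a step function, which is the simplest kind of nonlinearity.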

The average effect of a gene substitution is the average change in a trait when one allele is substituted for another. A crucial part of population genetics is that you always need to consider averages. This is because genes are rarely completely deterministic; they can be influenced by the environment or by other genes. Thus, in order to define the effect of a gene, you need to average over these other influences. This leads to a somewhat ambiguous definition of the average effect, and Fisher actually came up with two. The first, and as James would argue the primary, definition is a causal one: we want to measure the average effect of a gene if we experimentally substituted one allele for another prior to development and influence by the environment. A second, correlational definition would simply be to plot the trait against the number of alleles, as in the example above; the slope would then be the average effect. This second definition looks at the correlation between the gene and the trait, but as the old saying goes, “correlation does not imply causation”. For example, a genetic locus may have no effect on the trait at all but happen to be strongly correlated with a true causal locus in the population you happen to be examining. Distinguishing genes that are merely associated with a trait from ones that are actually causal remains an open problem in genome-wide association studies.
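A toy simulation can illustrate how the correlational definition can mislead. Below, locus B is causal while locus A merely travels with it in the sampled population; the regression slope at A is nonetheless clearly nonzero, even though substituting A alleles would change nothing. All numbers are invented for illustration:

```python
import random
random.seed(0)

# Locus B is causal (0.1 cm per allele); locus A has no effect
# but tends to co-occur with B in this particular population.
N = 10_000
pop = []
for _ in range(N):
    n_B = random.choice([0, 1, 2])
    n_A = n_B if random.random() < 0.8 else random.choice([0, 1, 2])
    trait = 170.0 + 0.1 * n_B + random.gauss(0, 0.5)
    pop.append((n_A, trait))

# Correlational definition: least-squares slope of trait on A-allele count.
mean_a = sum(a for a, _ in pop) / N
mean_t = sum(t for _, t in pop) / N
cov = sum((a - mean_a) * (t - mean_t) for a, t in pop) / N
var = sum((a - mean_a) ** 2 for a, _ in pop) / N
slope = cov / var
print(f"regression slope at the non-causal locus A: {slope:.3f}")

# Causal definition: substituting an A allele changes the trait by zero,
# because the trait does not depend on n_A at all.
```

The regression attributes part of B's effect to A purely through their correlation, which is exactly the gap between the two definitions.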

Our paper goes over some of the history and philosophy of the tension between these two definitions. We wrote the paper because the two definitions do not always agree, and we show under what conditions they do. The main reason they can disagree is that averages depend on the background over which you average. For a biallelic gene, there are two alleles but three genotypes, so the distribution of genotypes in a population is governed by two parameters: it is not enough to specify the frequency of one allele; you also need to know the correlation between alleles. The regression definition matches the causal definition if a particular function representing this correlation is held fixed while the experimental allele substitutions under the causal definition are carried out. We also considered the multi-allele and multi-locus cases in as much generality as we could.
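One standard way to write the two degrees of freedom in the biallelic case is with the allele frequency p of A together with Wright’s inbreeding coefficient F, which measures the correlation between the two alleles an individual carries (this notation is illustrative and may differ from the paper’s):

```latex
\begin{aligned}
P(AA) &= p^{2} + F\,p(1-p),\\
P(Aa) &= 2p(1-p)(1-F),\\
P(aa) &= (1-p)^{2} + F\,p(1-p).
\end{aligned}
```

Setting F = 0 recovers the uncorrelated Hardy–Weinberg frequencies; varying F shifts weight between heterozygotes and homozygotes while leaving the allele frequency p fixed, which is why specifying p alone does not pin down the genotype distribution.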

The problem with democracy

Winston Churchill once said that “Democracy is the worst form of government, except for all those other forms that have been tried from time to time.” The current effectiveness of the US government does make one wonder whether even that is true. The principle behind democracy is essentially utilitarian – a majority, or at least a plurality, decides on the course of the state. However, implicit in this assumption is that each individual’s utility function matches their participation function.

For example, consider environmental regulation. For most people, the utility function for the amount of allowable emissions of some harmful pollutant like mercury will be downward sloping – their utility increases the less of the pollutant is emitted. However, for a small minority of polluters it will be upward sloping, with a much steeper slope. Let’s say that the total utility gained by the bulk of the population from strong regulation is greater than that gained by the few polluters from weak regulation. If the democratic voice one has in affecting policy were proportional to the utility at stake, then the smaller gain for the many would outweigh the larger gain for the few. Unfortunately, this is not usually the case. More often, the translation of utility into legislation and regulation is not proportional but passes through a very nonlinear participation function with a sharp threshold. The bulk of the population is below the threshold, so they provide little or no voice on the issue. The minority’s utility is above the threshold and provides a very loud voice, which dominates the result. Our laws are thus systematically biased toward protecting the interests of special interest groups.
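A toy numerical version of this argument, with all utilities and thresholds invented for illustration:

```python
# Many people gain a little from regulation; a few polluters lose a lot,
# but less in total.
majority_gain_each = 1.0
n_majority = 1000            # total utility for regulation: +1000
polluter_loss_each = 200.0
n_polluters = 4              # total utility against regulation: -800

# Ideal utilitarian outcome: voice proportional to utility at stake.
net_utility = n_majority * majority_gain_each - n_polluters * polluter_loss_each
print("net utility of regulating:", net_utility)   # positive, so regulate

# Threshold participation: only those with enough at stake bother
# to lobby or vote on the issue.
threshold = 10.0
def voice(stake):
    return stake if stake > threshold else 0.0

pro = n_majority * voice(majority_gain_each)
con = n_polluters * voice(polluter_loss_each)
print("voices heard:", pro, "for vs", con, "against")  # minority wins
```

The utilitarian sum favors regulation, but once voice passes through the sharp threshold, the majority contributes nothing and the minority dominates.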

The way out of this trap is either to align everyone’s utility functions or to linearize the participation functions. We could use regulation to dampen the effectiveness of minority participation functions, use public information campaigns to change utility functions, or increase the participation of the silent majority. Variations of these methods have been tried with varying degrees of success. Then there is always the old standby of installing a benevolent dictator who respects the views of the majority. That one really doesn’t have a good track record, though.

Beware the vampire squid

Before you take that job programming at an investment bank or hedge fund, you may want to read Felix Salmon’s post and Michael Lewis’s article on the case of Sergey Aleynikov. He was a top programmer at Goldman Sachs who was prosecuted and convicted of stealing proprietary computer code. The conviction was eventually overturned, but he has now been charged again for the same crime under a different law. According to Lewis, the code was mostly modified open source material that Aleynikov emailed to himself as a reference for what he had done, and it had little value outside of Goldman. Salmon thinks that Goldman pursued the case aggressively because, in order for the directors of the programming division to justify their bonuses, they need to make the code, which they don’t understand, look important. If Goldman Sachs had a public relations problem before, the Lewis article will really put it over the top. This case certainly makes me think that we should change the criminal code and leave cases of intellectual theft by employees to the civil courts rather than force the taxpayer to pick up the tab. Also, what is the point of putting a harmless, nonviolent programmer in jail for eight years? We could at least have him serve his sentence doing something useful, like writing code to improve city traffic flow. Finally, the open software foundation may have a case against Goldman and other firms that use open source code and then violate the open source license agreement. I’m sure it wouldn’t be too hard to find a backer with deep pockets to pursue it.

New paper on childhood growth and obesity

Kevin D Hall, Nancy F Butte, Boyd A Swinburn, Carson C Chow. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model. Lancet Diabetes & Endocrinology, 2013.

You can read the press release here.

In order to curb childhood obesity, we need a good measure of how much food kids should eat. Although people like Claire Wang have proposed plausible quantitative models in the past, Kevin Hall and I have insisted that this is a hard problem because we don’t fully understand childhood growth. Unlike adults, who are more or less in steady state, growing children are a moving target. After a few fits and starts, we finally came up with a satisfactory model that modifies our two-compartment adult body composition model to incorporate growth. That previous model partitioned excess energy intake into fat and lean compartments according to the Forbes rule, which basically says that the ratio of added fat to added lean is proportional to how much fat you already have, so the more fat you have, the more of the excess Calories go to fat. The odd consequence of that model is that the steady-state body weight is not unique but falls on a one-dimensional curve. Thus there is a whole continuum of possible body weights for a fixed diet and lifestyle. I actually don’t believe this and have a modification to fix it, but that is a future story.
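Here is a minimal sketch of how a Forbes-style partition produces a continuum of steady states. This is an illustration with invented parameter values and a simplified expenditure function, not the published model: two bodies on the same fixed intake settle into energy balance at different compositions, so the steady states form a curve rather than a point.

```python
# Invented illustrative parameters (not the paper's fitted values).
RHO_F, RHO_L = 9400.0, 1800.0    # energy density of fat / lean, kcal per kg
GAMMA_F, GAMMA_L = 3.2, 22.0     # maintenance costs, kcal per kg per day
K = 500.0                        # other expenditure, kcal per day
C = 10.4                         # Forbes-style constant, kg

def simulate(F, L, intake, days=20000, dt=1.0):
    """Euler-step the two-compartment model to (near) energy balance."""
    for _ in range(int(days / dt)):
        imbalance = intake - (K + GAMMA_F * F + GAMMA_L * L)
        p = C / (C + F)                  # fraction of imbalance going to lean:
        F += (1 - p) * imbalance / RHO_F * dt   # the fatter you are,
        L += p * imbalance / RHO_L * dt         # the more goes to fat
    return F, L

# Same fixed intake, two different starting compositions (fat kg, lean kg):
F1, L1 = simulate(F=20.0, L=50.0, intake=2500.0)
F2, L2 = simulate(F=40.0, L=45.0, intake=2500.0)
# Both end in energy balance, but at different body compositions.
```

Steady state requires only that intake equal expenditure, which is one equation in the two unknowns F and L; the partition rule then selects which point on that line a given initial condition flows to.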

What puzzled me about childhood growth was: how do we know how much more to eat as we grow? After some thought, I realized that we could simply eat enough to maintain the fraction of body fat at some level, perhaps using leptin as a signal, and then tap the energy stored in fat when we need to grow. Just as we know how much gasoline (petrol) to add by simply filling the tank when it’s empty, we eat to keep our fat reserves at some level. In terms of the model, this is a symmetry-breaking term that transfers energy from the fat compartment to the lean compartment. In my original model, I made this term a constant, had food intake increase to maintain the fat to lean ratio, and showed using singular perturbation theory that this would yield growth qualitatively similar to the real thing. The idea then sat languishing until Kevin had the brilliant idea of making the growth term time dependent and fitting it to actual data that Nancy Butte and Boyd Swinburn had taken. We could then fit the model to normal weight and obese kids to quantify how much more obese kids eat, which turns out to be more than previously believed. Another nice thing is that when the child stops growing, the model automatically reduces to the adult model!
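A hedged sketch of the constant-growth-term idea described above: a fixed rate of lean growth, with intake adjusted each day to cover expenditure plus new tissue while holding the body-fat fraction constant. All parameters and functional forms are invented for illustration; this is not the paper’s fitted model.

```python
# Invented illustrative parameters (not the paper's fitted values).
RHO_F, RHO_L = 9400.0, 1800.0          # energy density of tissue, kcal per kg
GAMMA_F, GAMMA_L, K = 3.2, 22.0, 300.0 # maintenance costs and baseline, kcal/day
PHI = 0.25                             # target fat fraction of body weight

def grow(L0=10.0, years=15, g=0.003):
    """g: constant lean growth term, kg per day (the 'symmetry-breaking' term)."""
    L = L0
    F = PHI / (1 - PHI) * L            # start at the target fat fraction
    intakes = []
    for _ in range(365 * years):
        dL = g                         # growth term builds lean tissue
        dF = PHI / (1 - PHI) * dL      # keep the fat fraction fixed
        expend = K + GAMMA_F * F + GAMMA_L * L
        intake = expend + RHO_L * dL + RHO_F * dF  # eat to cover both
        L += dL
        F += dF
        intakes.append(intake)
    return F, L, intakes

F, L, intakes = grow()
# Intake rises automatically as the child grows, and when the growth
# term is switched off the equations reduce to the adult model.
```

The point of the sketch is the feedback: the child never needs to know how much more to eat, only to keep the fat gauge at its set point, and the required intake increase falls out of the bookkeeping.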