CO2 and the return of the dinosaurs

The dinosaurs lived during the Mesozoic Era, which is divided into the Triassic, Jurassic, and Cretaceous Periods. Many of the iconic dinosaurs that we know and love, such as Tyrannosaurus rex and Triceratops, lived at the end of the Cretaceous, while others, such as Stegosaurus and Apatosaurus (formerly known as Brontosaurus), lived 80 or so million years earlier in the Jurassic. I used to picture all the dinosaurs co-existing, but the time span separating Stegosaurus from T. rex is larger than that between T. rex and the present! Dinosaurs also weren’t the only creatures alive at that time, just the most dominant ones. Technically, the term dinosaur only applies to land-based reptiles with certain features. The flying reptiles, such as Pteranodon, and the marine ones, such as the plesiosaurs, resembled dinosaurs but were not classified as such. Aside from those dinosaur-like animals, there were also invertebrates, fish, sharks, and a class of animals called Synapsids, defined by an opening in the skull behind the eyes, from which all mammals are descended.

Synapsids were small, marginal creatures during the Mesozoic but came to dominate the land after the dinosaurs went extinct at the end of the Cretaceous (the KT extinction event). The consensus theory is that a large asteroid or comet strike in the Yucatan set off firestorms, seismic events, and a cloud that blocked sunlight for up to a year. This caused plants to die globally, which collapsed the food chain. The only survivors were creatures that could go deep underwater or burrow underground and survive long periods with little or no food. Survivors of the KT extinction include some fish, small sharks, small crocodiles and other cold-blooded reptiles, small bipedal theropod dinosaurs (the group to which T. rex belonged), and small rodent-like Synapsids.

If the KT extinction event was a transient perturbation, then it is reasonable to expect that whatever allowed dinosaurs to become dominant would still be in place and the surviving theropods would come to dominate again. But that is not what happened. Theropods did survive as modern birds but, aside from a few exceptions, they are small and have remained in the avian niche rather than retaking the land. Instead, the Synapsids came to dominate, and the largest creature ever to live, namely the blue whale, is a Synapsid. Now, this could be purely due to random chance: if we played out the KT event over and over, there would be some distribution of outcomes in which either Synapsids or dinosaurs become dominant. However, it could also be that global conditions after the Cretaceous changed to favour Synapsids over dinosaurs.

One possible change is the atmospheric level of carbon dioxide. CO2 levels were higher than they are today for much of the past 500 million years, even with the recent rapid increase. The levels were particularly high in the Triassic and Jurassic but began to decline during the Cretaceous (e.g. see here) and continued to decrease until the industrial revolution, when they turned upwards again. Average global temperatures were also higher in the Mesozoic. The only other time that CO2 levels and global temperatures have been as low as they are now was in the Permian, before the Great Dying. During the Permian, the ancestor of dinosaurs was a small insectivore that had the ability to run on two legs, while the dominant creatures were none other than the Synapsids! So mammal-like creatures were dominant before and after the dinosaurs, when CO2 levels and temperatures were low.

Perhaps this is just a coincidence, but there is one more interesting fact to this story: the amount of stored carbon (i.e. fossil fuels) has been very high twice over the past 500 million years – in the Permian and now. It had been believed that the rise in CO2 at the end of the Permian was due to increased volcanism, but a paper from 2014 (see here) speculated that a horizontal gene transfer event allowed an archaeal microbe to become efficient at exploiting the buried carbon, and this led to an exponential increase in methane and CO2 production. The active volcanoes provided the necessary nickel to catalyze the reactions. Maybe it was simply a matter of time before some creature found a way to exploit all the stored energy conveniently buried underground and release the carbon back into the atmosphere. The accompanying rise in temperatures and increased acidification of the oceans may also spell the end of the current reign of Synapsids and start a new era. While the smart (rich?) money seems to be on some sort of trans-human cyborg being the future, I am betting that some insignificant bird out there will be the progenitor of the next dominant age of dinosaurs.

Are humans successful because they are cruel?

According to Wikipedia, the class Mammalia has 29 orders (e.g. Carnivora), 156 families (e.g. Ursidae), 1258 genera (e.g. Ursus), and nearly 6000 species (e.g. the polar bear). Some orders, like Chiroptera (bats) and Rodentia, are very large, with many families, genera, and species. Some are really small, like the family Orycteropodidae, which has only one species – the aardvark. Humans are in the order Primates, of which there are quite a few families and genera. Almost all primates live in tropical or subtropical areas, and almost all have small populations, many of them endangered. The exception, of course, is humans, the only remaining species of the genus Homo. The other genera in the great ape family Hominidae – gorillas, orangutans, chimpanzees, and bonobos – are all in big trouble.

I think most people would attribute the incomparable success of humans to their resilience, intelligence, and ingenuity. However, another important factor could be their bottomless capacity for intentional cruelty. Although there seems to have been a decline in violence throughout history, as documented in Steven Pinker’s recent book, there is still no shortage of examples. Take a listen to this recent EconTalk podcast with Mike Munger on how the South rationalized slavery. It could very well be that what let modern humans dominate the earth and wipe out all the other Homo species along the way was not that they were more intelligent but that they were more cruel and rapacious. Neanderthals and Denisovans may have been happy sitting around the campfire after a hunt, while humans needed to raid every nearby tribe and kill them.

The blurry line between human and ape

Primate researcher extraordinaire Frans de Waal pens an excellent commentary in the New York Times on the recent discovery of Homo naledi. His thesis, that the distinction between human and nonhuman is not clear-cut, is something I wholeheartedly subscribe to. No matter what we look at, the difference between humans and other species is almost always quantitative and not qualitative.

Here are some excerpts and I recommend you read the whole thing:

The fabulous find, named Homo naledi, has rightly been celebrated for both the number of fossils and their completeness. It has australopithecine-like hips and an ape-size brain, yet its feet and teeth are typical of the genus Homo.

The mixed features of these prehistoric remains upset the received human origin story, according to which bipedalism ushered in technology, dietary change and high intelligence. Part of the new species’ physique lags behind this scenario, while another part is ahead. It is aptly called a mosaic species.

We like the new better than the old, though, and treat every fossil as if it must fit somewhere on a timeline leading to the crown of creation. Chris Stringer, a prominent British paleoanthropologist who was not involved in the study, told BBC News: “What we are seeing is more and more species of creatures that suggests that nature was experimenting with how to evolve humans, thus giving rise to several different types of humanlike creatures originating in parallel in different parts of Africa.”

This represents a shockingly teleological view, as if natural selection is seeking certain outcomes, which it is not. It doesn’t do so any more than a river seeks to reach the ocean.

News reports spoke of a “new ancestor,” even a “new human species,” assuming a ladder heading our way, whereas what we are actually facing when we investigate our ancestry is a tangle of branches. There is no good reason to put Homo naledi on the branch that produced us. Nor does this make the discovery any less interesting…

…The problem is that we keep assuming that there is a point at which we became human. This is about as unlikely as there being a precise wavelength at which the color spectrum turns from orange into red. The typical proposition of how this happened is that of a mental breakthrough — a miraculous spark — that made us radically different. But if we have learned anything from more than 50 years of research on chimpanzees and other intelligent animals, it is that the wall between human and animal cognition is like a Swiss cheese…

… It is an odd coincidence that “naledi” is an anagram of “denial.” We are trying way too hard to deny that we are modified apes. The discovery of these fossils is a major paleontological breakthrough. Why not seize this moment to overcome our anthropocentrism and recognize the fuzziness of the distinctions within our extended family? We are one rich collection of mosaics, not only genetically and anatomically, but also mentally.

Brave New World

Read Steve Hsu’s Nautilus article on Super-Intelligence. If so-called IQ-related genetic variants are truly additive, then his estimates are probably correct. His postulated being could possibly understand the fine details of any topic in a day or less. Instead of taking several years to learn enough differential geometry to develop general relativity (which is what it took Einstein), a super-intelligence could perhaps do it in an afternoon or during a coffee break. Personally, I believe that nothing is free and that there will always be tradeoffs. I’m not sure what the cost of super-intelligence will be, but there will likely be something. Variability in a population is always good for the population, although not so great for each individual. An effective way to make a species go extinct is to remove its variability: if pests had no genetic variability, it would be a simple matter to eliminate them with some toxin. Perhaps humans will be able to innovate fast enough to buffer themselves against environmental changes. Maybe cognitive variability can compensate for genetic variability. I really don’t know.

Journal Club

Here is the paper I’ll be covering in the Laboratory of Biological Modeling, NIDDK, Journal Club tomorrow

Morphological and population genomic evidence that human faces have evolved to signal individual identity

Michael J. Sheehan & Michael W. Nachman

Abstract: Facial recognition plays a key role in human interactions, and there has been great interest in understanding the evolution of human abilities for individual recognition and tracking social relationships. Individual recognition requires sufficient cognitive abilities and phenotypic diversity within a population for discrimination to be possible. Despite the importance of facial recognition in humans, the evolution of facial identity has received little attention. Here we demonstrate that faces evolved to signal individual identity under negative frequency-dependent selection. Faces show elevated phenotypic variation and lower between-trait correlations compared with other traits. Regions surrounding face-associated single nucleotide polymorphisms show elevated diversity consistent with frequency-dependent selection. Genetic variation maintained by identity signalling tends to be shared across populations and, for some loci, predates the origin of Homo sapiens. Studies of human social evolution tend to emphasize cognitive adaptations, but we show that social evolution has shaped patterns of human phenotypic and genetic diversity as well.

Incompetence is the norm

People have been justly anguished by the recent gross mishandling of the Ebola patients in Texas and Spain and the risible lapse in security at the White House. The conventional wisdom is that these demonstrations of incompetence are a recent phenomenon signifying a breakdown in governmental competence. However, I think that incompetence has always been the norm; any semblance of competence in the past is due mostly to luck and to the fact that people do not exploit incompetent governance, because of a general tendency towards docile cooperativity (as well as the incompetence of bad actors). In many ways, it is quite amazing how reliably citizens of the US and other OECD members respect traffic laws, pay their bills, and service their debts on time. This is a huge boon to an economy, since excessive resources do not need to be spent on enforcing rules. This does not hold in some if not many developing nations, where corruption is a major problem (cf. this op-ed in the Times today). In fact, it is still an evolutionary puzzle why agents cooperate for the benefit of the group even though it is an advantage for an individual to defect. Cooperativity is also not likely to be all genetic, since immigrants tend to follow the social norms of their adopted country, although there could be a self-selection effect here. However, the social pressure to cooperate could evaporate quickly if there is a perception of lax enforcement, as evidenced by looting following natural disasters or the abundance of insider trading in the finance industry. Perhaps, as suggested by the work of Karl Sigmund and other evolutionary theorists, cooperativity is a transient phenomenon and will eventually be replaced by the evolutionarily more stable state of noncooperativity. In that sense, perceived incompetence could be rising not because we are less able but because we are less cooperative.

Linear and nonlinear thinking

A linear system is one where the whole is precisely the sum of its parts. You can know how the different parts will act together simply by knowing how they act in isolation. A nonlinear system lacks this nice property. For example, a linear function f(x) satisfies f(a x + b y) = a f(x) + b f(y): the function of a sum is the sum of the functions. One important point to note is that what is often considered the paragon of linearity, namely a line on a graph, i.e. f(x) = mx + b, is not linear, since f(x + y) = m (x + y) + b while f(x) + f(y) = m(x + y) + 2b. The y-intercept b destroys the linearity of the line. A line is instead affine, which is to say a linear function shifted by a constant. A linear differential equation has the form

\frac{dx}{dt} = M x

where x can be a vector of any dimension and M is a matrix. Solutions of a linear differential equation can be multiplied by any constant and added together to give new solutions.
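
As a concrete check of both points, here is a minimal Python sketch (my own illustration, using numpy and scipy, not anything from the original post): the affine map f(x) = mx + b fails additivity, while solutions of dx/dt = M x can be scaled and added to give another solution.

```python
import numpy as np
from scipy.linalg import expm

# An affine map f(x) = m*x + b is not linear: f(x+y) != f(x) + f(y) when b != 0.
m, b = 2.0, 1.0
f = lambda x: m * x + b
print(f(3.0 + 5.0), f(3.0) + f(5.0))   # 17.0 vs 18.0 -- the intercept b breaks additivity

# Superposition for the linear ODE dx/dt = M x, whose solution is x(t) = expm(M t) x0.
M = np.array([[0.0, 1.0],
              [-1.0, -0.1]])
t = 2.5
propagate = lambda x0: expm(M * t) @ x0

x1 = propagate(np.array([1.0, 0.0]))
x2 = propagate(np.array([0.0, 1.0]))
x3 = propagate(2.0 * np.array([1.0, 0.0]) - 3.0 * np.array([0.0, 1.0]))
print(np.allclose(x3, 2.0 * x1 - 3.0 * x2))   # True: a linear combination of solutions is a solution
```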

Linearity is thus essential for engineering. If you are designing a bridge, you simply add as many struts as you need to support the predicted load. Electronic circuit design is also linear in the sense that you combine as many logic circuits as you need to achieve your end. Imagine if bridge mechanics were completely nonlinear, so that you had no way to predict how a bunch of struts would behave when assembled together; you would then have to test each combination to see how it worked. Now, real bridges are not entirely linear, but the deviations from pure linearity are mild enough that you can make predictions and have rules of thumb for what will work and what will not.

Chemistry is an example of a system that is highly nonlinear. You can’t know how a compound will act just based on the properties of its components. For example, you can’t simply mix glass and steel together to get a strong, hard, transparent material. You need to be clever in coming up with something like the Gorilla Glass used in iPhones. This is why engineering new drugs is so hard. Although organic chemistry is quite sophisticated in its ability to synthesize various compounds, there is no systematic way to generate molecules of a given shape or potency. We really don’t know how molecules will behave until we create them. Hence, what is usually done in drug discovery is to screen a large number of molecules against specific targets and hope. I was at a computer-aided drug design Gordon Conference a few years ago and you could cut the despair and angst with a knife.

That is not to say that engineering is completely hopeless for nonlinear systems. Most nonlinear systems act linearly if you perturb them gently enough. That is why linear regression is so useful and prevalent. Hence, even though the global climate system is highly nonlinear, it probably acts close to linearly for small changes. Thus I feel confident that we can predict the increase in temperature for a 5% or 10% change in the concentration of greenhouse gases, but I am much less confident in what will happen if we double or treble them. How linearly a system acts depends on how close it is to a critical or bifurcation point. If the climate is very far from a bifurcation, then it could act linearly over a large range, but if we’re near a bifurcation then who knows what will happen if we cross it.
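
The reason gentle perturbations look linear is just the leading term of a Taylor expansion about the operating point x_0:

f(x_0 + \delta x) \approx f(x_0) + f'(x_0) \delta x

The response to a small change \delta x is proportional to \delta x, and the neglected higher-order terms only start to matter as the perturbation grows or as the system approaches a bifurcation, where the local slope itself changes character.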

I think biology is an example of a nonlinear system with a wide linear range. Recent research has found that many complex traits and diseases, like height and type 2 diabetes, depend on a large number of linearly acting genes (see here). Their genetic effects are additive. Any nonlinear interactions they have with other genes (i.e. epistasis) are tiny. That is not to say that there are no nonlinear interactions between genes; it only suggests that common variants act mostly linearly. This makes sense from an engineering and evolutionary perspective. It is hard to do either in a highly nonlinear regime. You need some predictability when you make a small change. If changing an allele had completely different effects depending on what other genes were present, then natural selection would be hard pressed to act on it.

However, you also can’t have a perfectly linear system, because then you can’t make complex things. An exclusive OR (XOR) logic circuit cannot be constructed without a threshold nonlinearity. Hence, biology and engineering must involve “the linear combination of nonlinear gadgets”. A bridge is the linear combination of highly nonlinear steel struts and cables. A computer is the linear combination of nonlinear logic gates. This occurs at all scales as well. In biology, you have nonlinear molecules forming a linear genetic code. Two nonlinear mitochondria may combine mostly linearly in a cell, and two liver cells may combine mostly linearly in a liver. This effective linearity is why organisms can have a wide range of scales. A mouse liver is thousands of times smaller than a human one, but their functions are mostly the same. You also don’t need very many nonlinear gadgets to have extreme complexity. The genes between organisms can be mostly conserved while the phenotypes are widely divergent.
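
To make the XOR point concrete, here is a small Python sketch (my own illustration, not from the post): no single weighted sum followed by a threshold reproduces XOR, because XOR is not linearly separable, but a linear combination of two nonlinear threshold gates (OR minus AND) does the job.

```python
import itertools
import numpy as np

step = lambda z: int(z > 0)                      # the threshold nonlinearity
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor = [0, 1, 1, 0]

# Brute-force search over a grid of weights: no single linear threshold unit
# w1*x + w2*y + b > 0 reproduces XOR (it is not linearly separable).
grid = np.linspace(-2, 2, 21)
single_unit_works = any(
    [step(w1 * x + w2 * y + b) for x, y in inputs] == xor
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("single threshold unit can do XOR:", single_unit_works)   # False

# A linear combination of two nonlinear gates succeeds: XOR = OR minus AND.
or_gate = lambda x, y: step(x + y - 0.5)
and_gate = lambda x, y: step(x + y - 1.5)
print([or_gate(x, y) - and_gate(x, y) for x, y in inputs])       # [0, 1, 1, 0]
```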

Did microbes cause the Great Dying?

In one of my very first posts, almost a decade ago, I wrote about the end-Permian extinction 250 million years ago, which was the greatest mass extinction thus far. In that post I covered research that had ruled out an asteroid impact and found evidence of global warming, possibly due to volcanoes, as a cause. Now, a recent paper in PNAS proposes that a horizontal gene transfer event from bacteria to archaea may have been the main cause of the increase in methane and CO2. This paper is one of the best papers I have read in a long time, combining geological fieldwork, mathematical modeling, biochemistry, metabolism, and evolutionary phylogenetic analysis to make a compelling argument for its hypothesis.

Their case hinges on several pieces of evidence. The first comes from well-dated carbon isotopic records from China. The data show a steep plunge in the isotopic ratio (i.e. the ratio between the less abundant but heavier carbon 13 and the lighter, more abundant carbon 12) in the inorganic carbonate reservoir, with a moderate increase in the organic reservoir. In the earth’s carbon cycle, the organic reservoir comes from the conversion of atmospheric CO2 into carbohydrates via photosynthesis, which prefers carbon 12 to carbon 13. Organic carbon is returned to inorganic form through oxidation, by animals eating photosynthetic organisms or by the burning of stored carbon like trees or coal. A steep drop in the isotopic ratio means that there was an extra surge of carbon 12 into the inorganic reservoir. Using a mathematical model, the authors show that in order to explain the steep drop, the inorganic reservoir must have grown superexponentially (faster than exponential). This requires some runaway positive feedback loop that is difficult to explain by geological processes such as volcanic activity, but is something that life is really good at.

The increased methane would have been oxidized to CO2 by other microbes, which would have lowered the oxygen concentration. This would allow for more efficient fermentation and thus more acetate fuel for the archaea to make more methane. The authors showed in another simple mathematical model how this positive feedback loop could lead to superexponential growth. Methane and CO2 are both greenhouse gases and their increase would have caused significant global warming. Anaerobic methane oxidation could also lead to the release of poisonous hydrogen sulfide.
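
To get a feel for how a positive feedback loop produces superexponential growth, consider an illustrative toy model (my own, not the authors’ actual equations) in which the rate of carbon release grows with the amount already released:

\frac{dC}{dt} = r C^2

The solution,

C(t) = \frac{C_0}{1 - r C_0 t}

grows faster than any exponential and in fact blows up in finite time at t = 1/(r C_0). Any feedback in which the product of a process accelerates its own production pushes growth in this runaway direction.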

They then considered what microbe could have been responsible. They realized that during the late Permian, a lot of organic material was being deposited in the sediment. The organic reservoir (i.e. fossil fuels, methane hydrates, soil organic matter, peat, etc.) was much larger back then than it is today, as if someone or something used it up at some point. One of the end products of fermentation of this matter would be acetate, and that is something archaea like to eat and convert to methane. There are two types of archaea that can do this, and one is much more efficient than the other at high acetate concentrations. This increased efficiency was also shown recently to have arisen by a horizontal gene transfer event from a bacterium. A phylogenetic analysis of all known archaea showed that the progenitor of the efficient methanogen likely arose around 250 million years ago.

The final piece of evidence is that the archaea need nickel to make methane. The authors looked at the nickel concentrations in their Chinese geological samples and found a sharp increase in nickel immediately before the steep drop in the isotopic ratio. They postulate that the source of the nickel was the massive Siberian volcanic eruptions at that time (which had previously been proposed as the cause of the increased methane and CO2).

This scenario required the unlikely coincidence of several events – lots of excess organic fuel, low oxygen (and sulfate), increased nickel, and a horizontal gene transfer event. If any of these had been missing, the Great Dying might not have taken place. However, given that there have been only five mass extinctions, although we may currently be inducing the sixth, such calamities may well require low-probability coincidences. This paper should also give us some pause about introducing genetically modified organisms into the environment. While most will probably be harmless, you never know when one will be the match that lights the fire.

The myth of maladaptation

A fairly common presumption among biologists and adherents of paleo diets is that since humans evolved on the African savannah hundreds of thousands of years ago, we are not well adapted to the modern world. For example, Harvard biologist Daniel Lieberman has a new book out called “The Story of the Human Body”, explaining through evolution why our bodies are the way they are. You can hear him speak about the book on Quirks and Quarks here. He talks about how our maladaptation to the modern world has led to widespread myopia, back pain, an obesity epidemic, and so forth.

This may all be true, but the irony is that we as a species have never been more fit and adapted from an evolutionary point of view. In evolutionary theory, fitness is measured by the number of children or grandchildren we have. Thus, the faster a population grows, the more fit, and hence the better adapted to its environment, it is. Since our population is growing the fastest it ever has (technically, we may have been fitter a few decades ago, since our growth rate may actually be slowing slightly), we are the most fit we have ever been. In the developed world we certainly live longer and are healthier than we have ever been, even when you account for the steep decline in infant death rates. It is true that heart disease and cancer have increased substantially, but that is only because we (meaning the well-off in the developed world) no longer die young from infectious diseases, parasites, accidents, and violence.

One could claim that we are perfectly adapted to our modern world because we invented it. Those who live in warm houses are in better health than those sitting in a damp cave. Those who get food at the local supermarket live longer than hunter-gatherers. The average life expectancy of a sedentary individual is not different from that of an active person. One reason we have an obesity epidemic is that obesity isn’t very effective at killing people. An overweight or even obese person can live a long life. So even though I type this peering through corrective eyewear while nursing a sore back, I can confidently say I am better adapted to my environment than my ancestors were thousands of years ago.

Heritability and additive genetic variance

Most people have an intuitive notion of heritability as the genetic component of why close relatives tend to resemble each other more than strangers do. More technically, heritability is the fraction of the variance of a trait within a population that is due to genetic factors. This is the pedagogical post on heritability that I promised in a previous post on estimating heritability from genome-wide association studies (GWAS).

One of the most important facts about uncertainty, and something that everyone should know but often doesn’t, is that when you add two imprecise quantities together, the average of the sum is the sum of the averages, but the total error (i.e. standard deviation) is not the sum of the standard deviations; it is the square root of the sum of the variances (the squares of the standard deviations). In other words, when you add two uncorrelated noisy variables, the variance of the sum is the sum of the variances. Hence, the error grows as the square root of the number of quantities you add and not linearly, as had been assumed for centuries. There is a great article in the American Scientist from 2007 called The Most Dangerous Equation giving a history of some calamities that resulted from not knowing how variances sum. The variance of a trait can thus be expressed as the sum of the genetic variance and the environmental variance, where environment just means everything that is not correlated with genetics. The heritability is the ratio of the genetic variance to the trait variance.
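
In symbols, for two uncorrelated quantities X and Y,

\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)

so the standard deviation of a sum of n equally noisy, uncorrelated terms grows like \sqrt{n}\,\sigma rather than n\sigma. Writing the trait (phenotypic) variance as \sigma_P^2 = \sigma_G^2 + \sigma_E^2, the heritability is

h^2 = \frac{\sigma_G^2}{\sigma_P^2} = \frac{\sigma_G^2}{\sigma_G^2 + \sigma_E^2}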

New paper on population genetics

James Lee and I just published a paper entitled “The causal meaning of Fisher’s average effect” in the journal Genetics Research. The paper can be obtained here. This paper is the brainchild of James, and I just helped him out with some of the proofs. James’s take on the paper can be read here. The paper resolves a puzzle about the incommensurability of Ronald Fisher’s two definitions of the average effect, noted by population geneticist D.S. Falconer three decades ago.

Fisher was well known for both brilliance and obscurity, and people have long puzzled over the meaning of some of his work. The concept of the average effect is extremely important for population genetics, but it is not very well understood. The field of population genetics was invented in the early twentieth century by luminaries such as Fisher, Sewall Wright, and JBS Haldane to reconcile Darwin’s theory of evolution with Mendelian genetics. This is a very rich field that has been somewhat forgotten. People in mathematical, systems, computational, and quantitative biology really should be fully acquainted with it.

For those who are unacquainted with genetics, here is a quick primer to understand the paper. Certain traits, like eye colour or the ability to roll your tongue, are affected by your genes. Prior to the discovery of the structure of DNA, it was not clear what genes were, except that they were the primary discrete unit of genetic inheritance. These days the term usually refers to some region on the genome. Mendel’s great insight was that genes come in pairs, which we now know correspond to the two copies of each of our 23 chromosomes. A variant of a particular gene is called an allele. Traits can depend on genes (or more accurately genetic loci) linearly or nonlinearly. Consider a quantitative trait that depends on a single genetic locus with two alleles, which we will call a and A. A person will then have one of three possible genotypes: 1) homozygous in A (i.e. two A alleles), 2) heterozygous (one of each), or 3) homozygous in a (i.e. no A alleles). If the locus acts linearly, then a plot of the trait (e.g. height) against the number of A alleles will be a straight line. For example, suppose allele A contributes a tenth of a centimetre to height. Then people with one A allele will be on average one tenth of a centimetre taller than those with no A alleles, and those with two A alleles will be two tenths taller. The familiar notion of dominance is a nonlinear effect. For example, the ability to roll your tongue is controlled by a single gene. There is a dominant rolling allele and a recessive nonrolling allele. If you have at least one rolling allele, you can roll your tongue.

The average effect of a gene substitution is the average change in a trait if one allele is substituted for another. A crucial part of population genetics is that you always need to consider averages. This is because genes are rarely completely deterministic; they can be influenced by the environment or by other genes. Thus, in order to define the effect of a gene, you need to average over these other influences. This leads to a somewhat ambiguous definition of the average effect, and Fisher actually came up with two. The first, and as James would argue the primary definition, is a causal one: we want to measure the average effect of a gene if we experimentally substituted one allele for another prior to development and influence by the environment. A second, correlational definition would simply be to plot the trait against the number of alleles, as in the example above; the slope would then be the average effect. This second definition looks at the correlation between the gene and the trait, but as the old saying goes, “correlation does not imply causation”. For example, a genetic locus may have no effect on the trait but happen to be strongly correlated with a true causal locus (in the population you happen to be examining). Distinguishing genes that are merely associated with a trait from ones that are actually causal remains an open problem in genome-wide association studies.
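
Here is a small, purely illustrative Python simulation (mine, not from the paper) of how the correlational definition can mislead: a measured “tag” locus has no causal effect on the trait, but because its allele count is correlated with a truly causal locus in the sampled population, the regression slope of trait on allele count at the tag locus is still nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Allele counts (0, 1, or 2) at a causal locus; each copy of the allele adds 0.1 cm.
causal = rng.binomial(2, 0.5, n)

# A non-causal "tag" locus whose allele count is correlated with the causal one
# (as would happen if the two loci were in linkage disequilibrium in this population).
tag = np.where(rng.random(n) < 0.8, causal, rng.binomial(2, 0.5, n))

# The trait depends only on the causal locus plus environmental noise.
height = 170.0 + 0.1 * causal + rng.normal(0.0, 1.0, n)

slope = lambda g: np.polyfit(g, height, 1)[0]
print("regression slope at causal locus:", round(slope(causal), 3))   # about 0.1
print("regression slope at tag locus:   ", round(slope(tag), 3))      # nonzero, yet the locus does nothing
```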

Our paper goes over some of the history and philosophy of the tension between these two definitions. We wrote the paper because the two definitions do not always agree, and we show under what conditions they do. The main reason they don’t agree is that averages depend on the background over which you average. For a biallelic gene, there are 2 alleles but 3 genotypes, so the distribution of genotypes in a population is governed by two parameters: it’s not enough to specify the frequency of one allele; you also need to know the correlation between alleles. The regression definition matches the causal definition if a particular function representing this correlation is held fixed while the experimental allele substitutions under the causal definition are carried out. We also considered the multi-allele and multi-locus cases in as much generality as we could.

The myth of the single explanation

I think one of the things that tends to lead us astray when we try to understand complex phenomena like evolution, disease, or the economy is the idea that they must have a single explanation. For example, two papers were recently published in high-profile journals trying to explain mammalian monogamy. Although monogamy is quite common in birds, it only occurs in about 5% of mammals. Here is Carl Zimmer’s summary. The study in Science, which surveyed 2545 mammal species, argued that monogamy arises when females are solitary and sparse; males must then commit to one since dates are so hard to find. The study in PNAS examined 230 primate species, for which monogamy occurs at the higher rate of 27%, and used Bayesian inference to argue that monogamy arises to prevent male infanticide: it’s better to help out at home than to go around killing other males’ babies. Although both of these arguments are plausible, there need not be a single universal explanation. Each species could have its own set of circumstances that led to monogamy, involving these two explanations and others. However, while we should not be biased towards a single explanation, we also shouldn’t throw up our hands like Hayek and argue that no complex phenomenon can be understood. Some phenomena will have simpler explanations than others, but since Kolmogorov complexity is not computable, there is no algorithm that can tell you which is which. We will just have to struggle with each problem as it comes.

The problem with “just deserts”

The blogosphere is aflutter over “Defending the One Percent“, the recent article by Greg Mankiw, Harvard economist and former chairman of the Council of Economic Advisers under Bush 43. Mankiw’s paper mostly argues against the classic utilitarian reason for redistribution – that a dollar is more useful to a poor person than to a rich one. However, near the end of the paper he proposes that an alternative basis for a fair income distribution should be the just deserts principle, where everyone is compensated according to how much they contribute. Mankiw believes that the recent surge in income inequality is due to changes in technology that favour superstars who create much more value for the economy than the rest. He then argues that the superstars are superstars because of heritable innate qualities like IQ, and not because the economy is rigged in their favour.

The problem with this idea is that genetic ability is a shared natural resource that came about through a long process of evolution to which everyone who has ever lived has contributed. In many ways, we’re like a huge Monte Carlo simulation in which lots of different gene variants are randomly tried out to see what works best. Mankiw’s superstars are the Monte Carlo trials that happen to be successful in our current system. However, the world could change, and other qualities could become more important, just as physical strength was more important in the pre-information age. The ninety-nine percent are reservoirs of genetic variability that we all need to prosper. Some impoverished person alive today may possess the genetic variant to resist some future plague and save humanity. She is providing immense uncompensated economic value. The just deserts world is really nothing more than a random world: one where you are handed a lottery ticket and hope you win. This would be fine, but one shouldn’t couch it in terms of some deeper rationale. A world with a more equitable distribution is one where we compensate the less successful for their contribution to economic progress. However, that doesn’t mean we should have a world with a completely equal income distribution. Unfortunately, the human mind needs incentives to try hard, so for maximal economic growth the lottery winners must always get at least a small bonus.

Is irrationality necessary?

Much has been made lately of the anti-science stance of a large segment of the US population. (See for example Chris Mooney’s book.) The acceptance of anthropogenic climate change or the theory of evolution is starkly divided along political lines. However, as I have argued in the past, seemingly irrational behavior can actually make sense from an evolutionary perspective. As I have posted before, one of the best ways to find an optimal solution to a problem is to search randomly, the Markov Chain Monte Carlo method being the quintessential example. Randomness is useful for searching in places you wouldn’t normally go and for overcoming unwanted correlations, which I recently argued underlie most of our current problems (see here). Thus, we may have been evolutionarily selected to have diverse viewpoints and degrees of rational thinking. Given some situation, there is only one rationally optimal response, and in the case of incomplete information, which is almost always the case, it could be wrong. Thus, when a group of individuals is presented with a challenge, it may be better for the group if multiple strategies, including irrational ones, are tried rather than putting all the eggs into one rational basket. I truly doubt that Australia could have been discovered 60 thousand years ago without some irrationally risky decisions. Even within science, people pursue ideas based on tenuous hunches all the time. Many great discoveries were made because people ignored conventional rational wisdom and did something irrational. Many have failed as a result as well. However, society as a whole is arguably better off, since success generally goes global while failure stays local.
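
As a toy illustration of why a little randomness helps (a sketch of my own, with an arbitrary bumpy landscape), here is a Metropolis-style random search: at zero “temperature” the search is strictly rational and only accepts uphill moves, so it tends to get stuck on a nearby local peak, while a nonzero temperature occasionally accepts downhill moves and is far more likely to find the global peak.

```python
import math
import random

def f(x):
    # A bumpy landscape: many local maxima, global maximum near x = 0.
    return math.cos(3.0 * x) - 0.1 * x * x

def search(temperature, steps=20_000, seed=1):
    rng = random.Random(seed)
    x = 4.0                                   # start far from the global peak
    best = x
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.3)
        delta = f(x_new) - f(x)
        # Always accept uphill moves; accept downhill moves with probability exp(delta / T).
        if delta > 0 or (temperature > 0 and rng.random() < math.exp(delta / temperature)):
            x = x_new
            if f(x) > f(best):
                best = x
    return best

print("greedy (T = 0)   best x:", round(search(0.0), 2))   # tends to stay near the local peak where it started
print("random (T = 0.5) best x:", round(search(0.5), 2))   # much more likely to reach the global peak near x = 0
```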

It is not even necessary to have great differences in cognitive abilities to produce a wide range in rationality. One only needs a reward system that is stimulated by a wide range of signals. So while some children are strongly rewarded by finding self-consistent explanations to questions, others are rewarded by acting rashly. Small initial differences would then amplify over time as the children seek environments that maximize their rewards. Sam Wang and Sandra Aamodt covered this in their book, Welcome to Your Brain. Thus you would end up with a society with a wide range of rationality.

A new strategy for the iterated prisoner’s dilemma game

The game theory world was stunned recently when Bill Press and Freeman Dyson found a new strategy for the iterated prisoner’s dilemma (IPD) game. They show how you can extort an opponent such that the only way they can maximize their payoff is to give you an even higher payoff. The paper, published in PNAS (link here) with a commentary (link here), is so clever and brilliant that I thought it would be worthwhile to write a pedagogical summary for those who are unfamiliar with some of the methods and concepts it uses. This paper shows how knowing a little bit of linear algebra can go a really long way toward exploring deep ideas.

In the classic prisoner’s dilemma, two prisoners are interrogated separately. They have two choices. If they both stay silent (cooperate), they each get a year in prison. If one confesses (defects) while the other stays silent, then the defector is released while the cooperator gets 5 years. If both defect, then they both get 3 years in prison. Hence, even though the highest combined utility is for both to cooperate, the only logical thing to do is to defect. You can watch this played out on the British television show Golden Balls (see example here). Usually the payoff is expressed as a reward: if they both cooperate they each get 3 points, if one defects and the other cooperates then the defector gets 5 points and the cooperator gets zero, and if they both defect they each get 1 point. Thus, the combined reward is higher if they both cooperate, but since they can’t trust their opponent it is only logical to defect and get at least 1 point.

The prisoner’s dilemma changes if you play the game repeatedly, because you can now adjust to your opponent and it is not immediately obvious what the best strategy is. Robert Axelrod brought the IPD to public attention when he organized a tournament three decades ago. The results are published in his 1984 book The Evolution of Cooperation. I first learned about them in Douglas Hofstadter’s Metamagical Themas column in Scientific American in the early 1980s. Axelrod invited a number of game theorists to submit strategies to play the IPD, and the winner, submitted by Anatol Rapoport, was called tit-for-tat: cooperate on the first round and thereafter do whatever your opponent did on the previous round. Since this was a cooperative strategy with retribution, people have been using it ever since as an example of how cooperation could evolve. Press and Dyson now show that you can win by being nasty. Details of the calculations are below the fold.
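
As a quick illustration of the setup (a sketch of my own using the standard 3/5/1/0 payoffs described above, not Press and Dyson’s extortion strategy), here is a tiny Python simulation of the iterated game pitting tit-for-tat against always-defect and against itself.

```python
# Payoffs (row player, column player) for cooperate 'C' and defect 'D'.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)               # each strategy sees the opponent's past moves
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # (99, 104): tit-for-tat is exploited only on the first round
print(play(tit_for_tat, tit_for_tat))     # (300, 300): mutual cooperation every round
```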

New paper on GPCRs

New paper in PLoS ONE:

Fatakia SN, Costanzi S, Chow CC (2011) Molecular Evolution of the Transmembrane Domains of G Protein-Coupled Receptors. PLoS ONE 6(11): e27813. doi:10.1371/journal.pone.0027813

Abstract

G protein-coupled receptors (GPCRs) are a superfamily of integral membrane proteins vital for signaling and are important targets for pharmaceutical intervention in humans. Previously, we identified a group of ten amino acid positions (called key positions), within the seven transmembrane domain (7TM) interhelical region, which had high mutual information with each other and many other positions in the 7TM. Here, we estimated the evolutionary selection pressure at those key positions. We found that the key positions of receptors for small molecule natural ligands were under strong negative selection. Receptors naturally activated by lipids had weaker negative selection in general when compared to small molecule-activated receptors. Selection pressure varied widely in peptide-activated receptors. We used this observation to predict that a subgroup of orphan GPCRs not under strong selection may not possess a natural small-molecule ligand. In the subgroup of MRGX1-type GPCRs, we identified a key position, along with two non-key positions, under statistically significant positive selection.

Action on a whim

One of the big news stories last week was the publication in Science of the genomic sequence of a hundred-year-old Aboriginal Australian. The analysis finds that Aboriginal Australians are descendants of an early migration to Asia between 62,000 and 75,000 years ago, a migration distinct from the one that gave rise to modern Asians 25,000 to 38,000 years ago. I have often been amazed that humans were able to traverse harsh terrain and open water into the complete unknown. However, I briefly watched a documentary on CNBC last night about Apocalypse 2012 that made me understand this much better. Evidently, there is a fairly large group of people who believe the world will end in 2012. (This is independent of the group that thought the world would end earlier this year.) The prediction is based on the fact that a large cycle in the Mayan calendar will supposedly end in 2012. According to some of the believers, the earth’s rotation will reverse and that will cause massive earthquakes and tsunamis. These believers have thus managed to recruit followers and start building colonies in the mountains to try to survive. People are taking this extremely seriously. I think this ability to change the course of one’s entire life on the flimsiest of evidence is what led our ancestors to leave Africa and head into the unknown. People will get ideas in their heads and nothing will stop them from pursuing them. It’s what led us to populate every corner of the world and reshape much of its surface. It also suggests that the best optimization algorithms for seeking a global maximum may be ones that have some ‘momentum’, so that they can leave local maxima and head downhill to find higher peaks elsewhere.