Our shared inheritance: the human genome

CBC Radio’s science program Quirks and Quarks opens with brief audio snippets of scientists speaking. One of them is Francis Collins, of the Human Genome Project, saying “The human genome is our shared inheritance.” On the eve of the new year, I thought I would reflect on what this means for us.

When the human genome was first published at the beginning of the century, it was proclaimed that humans were 99.9% identical and that there was no genetic basis for race. Since then, the estimate of our similarity has been revised downwards to about 99.5%, and it could drop further. The reason is that the first estimate was based on patching together the genomes of about 100 different people, so differences were underestimated. In the last few years, the genomes of individuals like Craig Venter and Jim Watson have been sequenced, and the differences appear to be far larger. One of the more recent and unexpected findings is that copy number variants, in which pieces of the genome, including entire genes, are repeated, are far more numerous than SNPs, in which just a single nucleotide differs. Additionally, it now appears that members of different races can be distinguished genetically (see here for the argument). It was once claimed that the variance within races swamped the differences between the means of the races. Now it appears that while this is true in most directions in genome space, there are directions where it is not. It is still not known whether the differences that are observable are meaningful.

Everting a sphere

To evert a sphere means to turn it inside out with a continuous transformation in which the sphere can pass through itself without puncturing, tearing, ripping, creasing, or pinching. To see how nontrivial this is, just imagine trying to push the north pole through the south pole; a crease will develop along the equator. Stephen Smale proved that an eversion was possible in 1957, but there wasn’t a concrete example until a few years later. The video shown below uses a technique of corrugation developed by Bill Thurston, whose geometrization conjecture, which implies the Poincaré conjecture, was recently proved by Grigori Perelman.

The above video is an excerpt from a 20-minute video called “Outside In”, which involved Bill Thurston’s input. The video can be found on YouTube in two parts. Part 1 is here and part 2 is here.

The Fredholm Alternative

One of the most useful theorems in applied mathematics is the Fredholm Alternative.  However, because the theorem has several parts and gets expressed in different ways, many people don’t know why it has “alternative” in the name.  For them, the theorem is a means of constructing solvability conditions for linear equations used in perturbation theory.

The Fredholm Alternative Theorem is easily understood by considering solutions to the matrix equation A v = b, for a matrix A and vectors v and b. Everything that applies to matrices can then be generalized to the infinite dimensional linear operators that occur in differential or integral equations. The theorem is: exactly one of the following two alternatives holds

  1. A v = b has one and only one solution for every b
  2. A^* w = 0 has a nontrivial solution

where A^* is the transpose or adjoint of A. However, this is not the form of the theorem that is usually used in applied math. A corollary of the above theorem is that if the second alternative holds, then A v = b has a solution if and only if the inner product (b,w)=0 for every w in the nullspace of the adjoint of A, i.e. every w satisfying A^* w=0. The condition (b,w)=0 is then a solvability condition for an operator equation (e.g. a differential or integral equation) that can be used in perturbation theory. One can clearly see that when the theorem is stated this way, the “alternative” is obscured.
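To make this concrete, here is a small numerical sketch in Python (the matrix and vectors are invented for illustration). A is singular, so the second alternative holds, and solvability of A v = b hinges on whether b is orthogonal to the nullspace of A^*:

    import numpy as np

    # A singular matrix (symmetric here, so A^* = A^T = A):
    # the second alternative holds.
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])

    # w spans the nullspace of A^*: A^T w = 0.
    w = np.array([2.0, -1.0])
    print(A.T @ w)                      # [0. 0.]

    # (b, w) != 0: A v = b has no solution.
    b1 = np.array([1.0, 0.0])
    print(np.dot(b1, w))                # 2.0

    # (b, w) == 0: solutions exist (infinitely many).
    b2 = np.array([1.0, 2.0])
    print(np.dot(b2, w))                # 0.0
    v = np.linalg.lstsq(A, b2, rcond=None)[0]
    print(np.allclose(A @ v, b2))       # True

The same orthogonality check, applied to a linear operator instead of a matrix, is exactly the solvability condition used in perturbation theory.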

Nonlinearity

I think people generally view nonlinear effects in one of two ways. They either 1) do not think of them at all and basically view everything through a linear lens, or 2) think of them in terms of a lack of predictability, as in chaos and the butterfly effect. Both are somewhat dangerous viewpoints. Given that the twenty-first century is shaping up to be one where complex systems, such as the economy and climate, directly influence our lives, it is important that the general public, and especially scientists, have a more precise understanding of what nonlinearity can and cannot do. Although Stan Ulam once remarked, to the effect, that the term “nonlinear science” was about as meaningful as calling the bulk of zoology the “study of non-elephant animals”, I actually think there are some concrete notions attached to the term that can give valuable insight. In particular, nonlinearity does not only imply a lack of predictability; in some cases it can make things more predictable.

The first thing to note is that for most applications there are basically two important effects of nonlinearity, namely “threshold” and “saturation”. In fact, saturation can be thought of as negative feedback with a threshold, so threshold is really the only effect to keep in mind, although separating the two is sometimes useful conceptually. By threshold, I mean that some variable has no effect until it crosses a threshold; by saturation, I mean that the effect of some variable does not change much beyond some point. Here I’ll give some examples of how saturation and thresholds can go a long way in understanding complex phenomena.
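As a minimal illustration (the functional forms here are just convenient choices, not canonical definitions), a threshold can be modeled as a rectified linear response and saturation as a function that levels off:

    import numpy as np

    def threshold(x, theta=1.0):
        # No response at all until x crosses theta.
        return np.maximum(x - theta, 0.0)

    def saturate(x, xmax=1.0):
        # Grows with x at first, then levels off near xmax.
        return xmax * np.tanh(x / xmax)

    x = np.array([0.5, 1.0, 2.0, 10.0, 100.0])
    print(threshold(x))   # [ 0.  0.  1.  9. 99.] -- silent below 1
    print(saturate(x))    # roughly [0.46 0.76 0.96 1. 1.] -- capped near 1

Composing the two gives the familiar sigmoidal response that shows up throughout biology and neuroscience.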

Plausible Reasoning

The seeds of the modern era could arguably be traced to the Enlightenment and the invention of rationality. I say invention because although we may be universal computers and we are certainly capable of applying the rules of logic, it is not what we naturally do. What we actually use is what E.T. Jaynes, in his iconic book Probability Theory: The Logic of Science, called plausible reasoning. Jaynes is famous for being a major proponent of Bayesian inference during most of the second half of the last century. However, to call Jaynes’s book a book about Bayesian statistics is to wholly miss Jaynes’s point, which is that probability theory is not about measures on sample spaces but is a generalization of logical inference. In the Jaynes view, probabilities measure degrees of plausibility.

I think a perfect example of how unnatural the rules of formal logic are is to consider the simple implication

A \rightarrow B

which means: if A is true then B is true. By the rules of formal logic, if A is false then B can be true or false (i.e. a false premise can prove anything). Conversely, if B is true, then A can be true or false. The only valid conclusion you can deduce from A \rightarrow B is that if B is false then A is false. Implication is equivalent to the logical statement (\neg A) \vee B, where \neg means negation and \vee means logical OR.
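Since the truth table of implication trips up so many people, here is a short Python check, enumerating all four cases, that A \rightarrow B is the same as (\neg A) \vee B:

    from itertools import product

    for A, B in product([True, False], repeat=2):
        implies = (not A) or B      # material implication
        print(f"A={A!s:5} B={B!s:5} A->B={implies}")

The two rows with A false both print True: a false premise makes the implication vacuously true, which is exactly the part people find unnatural.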

Why ugly is sometimes beautiful

When Stravinsky’s ballet “The Rite of Spring” debuted in Paris in 1913 it caused a riot. The music was so complex and novel that the audience didn’t know how to react. They became agitated, jeered, argued amongst themselves and eventually became violent. However, by the 1920s The Rite of Spring was well accepted, and now it is considered one of the greatest works of the 20th century. When Impressionism was introduced in the late 19th century it was not well received; the term itself was coined as a derisive label for the movement. These days, the Impressionist rooms are often the most popular and crowded at art museums. There was strong opposition to Maya Lin’s design for the Vietnam Veterans Memorial in 1981. She had to defend it before the US Congress and fought to keep it from being changed. Now it is considered one of the most beautiful monuments in Washington, D.C. There are countless other examples of icons of beauty that were initially considered offensive or ugly. I think this is perfectly consistent with what we know about neuroscience.

New Location for Scientific Clearing House

Scientific Clearing House has a new web address: sciencehouse.wordpress.com. The reason I’m moving is that WordPress supports LaTeX commands. I’ve been wanting the option of putting equations and mathematical symbols into my blog posts, but it is impossible to do so in Blogger. I tried using images of equations in my last post, but it was extremely painful and didn’t work all that well. I have started to migrate my previous posts to the new address and will eventually move all of them over.

Unwinding CDSs

As Steve Hsu has pointed out several times recently, a major instigator of our current financial crisis is the credit default swap (CDS). As far as financial instruments go, this one is almost understandable. Basically, it is an insurance policy exchanged between two firms. Say you just loaned a bunch of money and you want to insure it. You can buy a CDS for some fee from someone who will pay you some agreed-upon amount if the loan goes bad. It is a zero sum game, which is why the amounts insured (the notional amount) can be larger than the GDP of the entire world. Although the notional amounts of these CDSs can be large, the big banks and hedge funds that trade them are often hedged so that the net gain or loss is manageable. The problem was that when Lehman Brothers went down, the whole network became unbalanced and some parties were exposed to huge losses. However, given that there is no market for these things, no one knows who is holding what. Steve drew a complex graph and gave an example to demonstrate how difficult it would be to unwind everything.

I like to visualize this as a problem of flux balance. For example, let C_{ij} be the CDS payout from party j to party i. Then the total net gain or loss (i.e. flux) for party i is given by

F_i=\sum_j C_{ij} - \sum_j C_{ji}

We see that the sum over F_i is zero, verifying that it is a zero sum game. It would actually be a simple matter to unwind all the obligations if everyone agreed to do it all at once. This could even be done without anyone disclosing any of their trades, which parties might want so that others wouldn’t know how weak they were. The way it would work is for everyone to compute their net flux F_i and disclose this amount to a central clearinghouse. The clearinghouse then checks whether the sum of all the fluxes is zero. If it is not, then either someone made a mistake or someone tried to cheat. If it sums to zero, then those with a negative flux deposit that amount with the clearinghouse and those with a positive flux withdraw from it. It is possible to cheat if you collude with someone else so that the net change to the sum of your two fluxes is zero. However, that would not affect anyone else, so it would amount to a private deal between two parties.
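Here is a toy version of that clearing procedure in Python, with an invented three-party payout matrix (the numbers mean nothing; they just illustrate the bookkeeping):

    import numpy as np

    # C[i, j] is the CDS payout owed to party i by party j.
    C = np.array([[0., 5., 2.],
                  [1., 0., 4.],
                  [3., 6., 0.]])

    # Net flux for each party: what it is owed minus what it owes.
    F = C.sum(axis=1) - C.sum(axis=0)
    print(F)         # [ 3. -6.  3.]
    print(F.sum())   # 0.0 -- confirms the zero sum

    # Settlement: negative-flux parties pay the clearinghouse,
    # positive-flux parties withdraw; the books balance exactly.
    print(-F[F < 0].sum() == F[F > 0].sum())   # True

Note that only the three numbers in F are ever disclosed, not the individual trades in C.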

Formal Proof

This month’s Notices of the American Mathematical Society highlights formal proof, a proof in which every step is justified with respect to the axioms of mathematics so there is no chance of error. Generally, this would be extremely tedious to do by hand, so people have been developing computerized proof assistants to help do the job. My good friend and former colleague at the University of Pittsburgh, Tom Hales, penned one of the articles. I actually lured Tom from Michigan to Pittsburgh several years ago. At that time, Tom had recently finished his proof of the Kepler conjecture, which asserts that the densest arrangement of balls in three dimensional space is face-centered cubic (FCC) packing. He had been battling with the editors of the Annals of Mathematics to get his proof published. The problem was that Tom’s proof was quite long (300 pages) and used a computer. Tom’s strategy was to show that all possible packings in infinite space could be reduced to a finite number of arrangements, which needed to be tested individually. Tom and a student then wrote a program of about forty thousand lines that used interval arithmetic to show that none of these other arrangements was ever denser than FCC.

A panel of experts worked on reviewing the proof and essentially gave up. They were pretty sure it was right but were unwilling to certify it. Eventually, the Annals published the proof with an asterisk noting that it was not fully checked by referees. I may have played a small part in helping Tom finally get the proof published. I remember going down to Tom’s office one day. He was slumped in his chair and declared that he was giving up on the Annals and writing a book instead. I convinced him not to give up and gave him some tips on strategy for dealing with the editors and referees. I’m not sure exactly what happened, but the next time I saw Tom he was beaming that the work would finally be published.
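The interval arithmetic mentioned above is worth a brief illustration. The idea is that every computed quantity is replaced by an interval guaranteed to contain the true value, so floating point error can never produce a false conclusion. A minimal sketch in Python (not Hales’s actual code; a real implementation would also control the rounding direction at each step):

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

        def __lt__(self, other):
            # Certified only if the entire interval lies below the other.
            return self.hi < other.lo

    candidate = Interval(0.72, 0.73)    # rigorous bounds on some arrangement
    fcc = Interval(0.7404, 0.7405)      # pi/sqrt(18), the FCC density
    print(candidate < fcc)              # True: certified strictly less dense

Run over every one of the finitely many candidate arrangements, comparisons like this yield a computer-assisted but rigorous inequality.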

However, Tom was so frustrated with the whole episode that he decided he needed to prove the Kepler conjecture formally, so that the asterisk could be removed. He began what he calls the Flyspeck project, a stylized acronym for Formal Proof of Kepler. He has already recruited a number of mathematicians around the world to work on Flyspeck, and nearly half of the computer code used to prove the Kepler conjecture is now certified. He estimates that it may take as long as twenty person-years of work to complete the project, so if he gets enough people interested it could be done quite quickly in real time.

A formal proof essentially involves translating mathematics into symbol manipulation encapsulated within a foundational system. Tom uses a system called HOL Light (a lightweight implementation of Higher Order Logic), which I’ll summarize here. The details are fairly technical; they involve an axiomatic and logical system built on types (similar to what is used in computer languages like C), which differs slightly from the Zermelo-Fraenkel-Choice axioms of set theory used in traditional mathematics. The use of types means that certain incongruous operations that any mathematician would deem nonsensical (like taking the union of a real number and a function) are automatically disallowed. The system consists of mathematical statements or objects called terms, built from symbols, together with logical operations or inference rules. A theorem is expressed as a set of terms, called the sequent, that imply the truth (or more accurately the provability) of another term called the conclusion. A proof is a demonstration that, using the allowed inference rules and axioms, it is possible to arrive at the conclusion.
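To get a feel for what this looks like, here is a complete machine-checked example in Lean, a different proof assistant from the HOL Light system Tom uses, but with the same flavor: the kernel accepts the theorem only because every inference step is justified.

    -- The contrapositive rule discussed in the plausible reasoning
    -- post above, stated and proved formally in Lean 4.
    theorem contrapositive (A B : Prop) (h : A → B) : ¬B → ¬A :=
      fun hnb ha => hnb (h ha)

The proof term is the whole justification; there is no prose for a referee to puzzle over, and no asterisk.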

What I like about formal proofs is that they reduce mathematics to a dynamical system. Each term is a point in theorem space, the space of all terms (whatever that means). A proof is then a trajectory between the initial condition (the sequent) and the conclusion. Technically, a formal proof is not a true dynamical system because the next step is not uniquely specified by the current state. The fact that there are multiple choices at each step is why theorem proving is hard. Interestingly, this is connected to the famous computer science problem of whether or not P = NP. Theorem proving is in the complexity class NP because any proof can be verified in time polynomial in its length. The question is whether a proof can also be found in polynomial time. If you could show that it can, you would have a proof of P = NP and get a million dollars from the Clay Mathematics Institute. It would also mean that you could prove the Riemann Hypothesis and all the other Clay Millennium Problems and collect for those as well. In fact, if P = NP you could find the proof of any theorem in time polynomial in the length of its shortest proof. This is one of the reasons why most people (including myself) don’t think that P = NP.

I think it would be very interesting to analyze the dynamics of formal proof. So many questions immediately come to mind. For example, what are the properties of theorem space? Both the set of all theorems and the set of possible terms are countable, since terms are finite strings over a countable alphabet, but only a small fraction of terms are provable. A formal system consists of the space of all points reachable from the axioms. What does this look like? Can we define a topology on the space of all terms? I suppose a metric could be the fewest number of steps needed to get from one term to another, though computing it might be undecidable. Do people even think about these questions?

Explanation versus narration

Noted Harvard economist Greg Mankiw wrote an op-ed last week for the New York Times, also posted to his blog, in the form of a letter to the president-elect. One of his recommendations was to listen to economists. Following up on Steve Hsu’s post on intellectual honesty, I think this exhibits an element of hubris, since the majority of economists did not foresee the current financial meltdown. There were certainly those who warned about the collapse of the housing bubble, like Robert Shiller, but other than Nouriel Roubini, I didn’t hear much warning that it would cause the worst crisis since the Great Depression. Even Paul Krugman admits that this took him by surprise. I could see that there was a housing bubble back in 2004, which is why I haven’t bought a house yet, but I had no idea that the bursting of a bubble could cause so much damage. The bursting of the internet bubble caused a lot of pain to some people but did not destroy the financial system.

The current crisis first became public knowledge when Bear Stearns went under in March of this year. Federal Reserve Chairman Ben Bernanke quickly engineered a buyout of Bear by JPMorgan and the market calmed for a while. Then in quick succession starting in September came the bailout of Fannie Mae and Freddie Mac, the sale of Merrill Lynch to Bank of America, the bankruptcy of Lehman Brothers, and the bailout of A.I.G. Shortly afterwards, Treasury Secretary Hank Paulson went to Congress to announce that the entire financial system was in jeopardy and requested 700 billion dollars for a bailout. The thinking was that banks and financial institutions had stopped lending to each other because they weren’t sure which banks were sound and which were on the verge of collapse. The money was originally intended to purchase suspect financial instruments in an attempt to restore confidence. The plan has changed since then, and you can read Steve Hsu’s blog for the details.

What I want to point out here is that a narrative of what happened is not the same as an understanding of the system. There were certainly many key events and circumstances starting in the 1980s that may have contributed to this collapse. There was the gradual deregulation of the financial industry, including the Gramm-Leach-Bliley Act in 1999 that allowed investment banks and commercial banks to merge and the Commodity Futures Modernization Act in 2000 that ensured that financial derivatives remained unregulated. There was the rise of hedge funds and the use of massive amounts of leverage by financial institutions. There were the low interest rates following the internet bubble that fueled the housing bubble. There was the immense trade deficit with China (and China’s interest in keeping the US dollar high) that allowed low interest rates to persist. There was the general world savings glut that allowed so much capital to flow to the US. There was the flood of physicists and mathematicians to Wall Street, and so on.

Anyone can create a nice story about what happened, and depending on their prior beliefs the stories can be dramatically different: compare George Soros to Phil Gramm. We really would like to understand in general how the economy and financial markets operate, but we only have one data point. We can never rerun history and obtain a distribution of outcomes. Thus, although we may be able to construct a plausible and consistent story for why an event happened, we can never know whether it is correct, and more importantly we don’t know whether it can tell us how to prevent the event from happening again. It could be that no matter what we had done, a crisis would still have ensued. My father warned of a collapse of the capitalist system his entire life. I’m not sure how he would have felt had he lived to see the current crisis, but he probably would have said it was inevitable. Or perhaps, if interest rates had been a few points higher, nothing would have happened. The truth is probably somewhere in between.

Another way of saying this is that we have a very large complex dynamical system and only one trajectory of it. What we want to know about are the attractors, the basins of attraction, and the structural stability of the system. These are difficult to determine even with full knowledge of the underlying dynamical system. We are trying to reconstruct the dynamical system and infer all of these properties from the observation of a single trajectory. I’m not sure if this task is impossible (i.e. undecidable), but it is certainly intractable. I don’t know how we should proceed, but I do know that conventional economic dogma about efficient markets needs to be updated. Theorems are only as good as their axioms, and we definitely don’t know what the axioms are. I think the sooner economists own up to the fact that they really don’t know, and can’t know, what is going on, the better off we will be.

Dead Zones

Marine dead zones are regions of the ocean, usually near the mouths of rivers or waterways, that receive a large amount of nutrient (phosphorus and nitrogen) runoff, mostly from fertilizer. This causes a great phytoplankton and algae bloom that takes up carbon dioxide. However, when these organisms die they sink to the ocean floor, where aerobic bacteria break them down with such vigor that they deplete the oxygen supply, leaving an anoxic zone that cannot support marine life. The environmental movement is striving to curb fertilizer use in an attempt to mitigate these dead zones. There are also theories that the increase in the number of these dead zones is related to global warming.

I have a heretical thought in this regard, and I haven’t been able to find any information on it, so if anyone knows, please enlighten me. The earth’s oxygen was originally created by cyanobacteria, the same organisms that make up the algae causing the dead zones. So could these dead zones actually be removing and sequestering carbon dioxide? Once the ocean bottom becomes anoxic, would the phytoplankton and algae fecal matter and remains pile up on the ocean floor and turn into fossil fuels in a few hundred million years? I don’t think we should necessarily encourage dead zones, but is there any data out there on whether they could be mitigating global warming? I don’t want to be another one of those staunchly leftist youths turning conservative in old age, so please set me straight if I’m wrong.

Rationality and politics

I’ve been trying to make sense of the current political environment in terms of a consistent framework. In particular, I’ve been interested in dissecting how issues have been divided between the so-called left and right in the United States. My premise is based on ideas set down in my previous post on the genetic basis of political orientation. In that post I proposed that the political thesis of the right is that wealth should be distributed according to a person’s direct contribution, while the left’s premise is that wealth should be distributed equitably regardless of standing in the community. I think these are fair definitions based on historical notions of the right and left. What I want to do now is see how current issues would be divided between these positions in a perfectly rational world.

Let me first summarize some positions currently held by the US right: 1) low taxes, 2) small government, 3) deregulation of industries, 4) free trade, 5) gun rights, 6) strong military, 7) anti-abortion, 8) anti-gay rights, and 9) anti-immigration. I would say positions 1) through 5) seem consistent with the historical notion of the right (although regulation can be consistent with the right if it makes markets more transparent), position 6) is debatable, while positions 7) through 9) seem dissonant. The left generally, but not always, takes the opposite positions, except possibly on point 6), which is mixed. A strong military was understood as a right-wing position during the Cold War because of the opposition to communism. The rationale for a strong military waned after the fall of the Soviet Union, but 9/11 changed the game again and now the military is justified as a bulwark against terrorism.

The question is how to reconcile positions 7) through 9), and we could add pro-death penalty and anti-evolution to the mix as well. These positions are aligned on the right because of several historical circumstances. The first is that many of the early settlers of the United States came to escape religious persecution at home, which is why there is a significant US Christian fundamentalist population. The second is slavery and the Civil Rights movement. The third is that middle class whites fled the cities for the suburbs in the 50s and 60s. These people were probably religious but a genetic mix between left and right.

In the early 20th century, the Republicans were an economically right-wing party while the Democrats under FDR veered to the left, although they were mostly Keynesian rather than socialist. The South had been Democratic because Lincoln was a Republican. The Civil Rights movement in the 60s angered and scared many suburban and southern whites, and this was exploited by Nixon’s “Southern Strategy”, which flipped the South to the Republicans. This was also a time of economic prosperity for the middle class, so they were more influenced by issues regarding crime, safety, religion, and keeping their communities “intact”. Hence, as long as economic growth continued, there could be a coalition between the economic right and the religious right, since the beliefs of the two sides didn’t really infringe on each other. Culturally liberal New York bankers could coexist with culturally conservative southern factory workers.

Let me now go through each point and see if we can parse them rationally. I will not try to ascribe any moral or normative value to the positions, only ask whether each would be consistent with a right or left worldview. I think low taxes, small government, deregulation and free trade certainly belong on the right without much argument. Gun rights seem consistent with the right since they reflect an anti-regulatory sentiment. A strong military is not so clear-cut to me. It certainly helps to ensure that foreign markets remain open, which would help the right. However, it can also enforce rules on other people, which is more left. It is also a big government program, which is not so right. So my sense is that a strong military is neither right nor left. Abortion is quite difficult. From the point of view of the woman, I think being pro-choice is consistent with being on the right. Even if you believe that life begins at conception, and I’ve argued before that defining when life begins is problematic, the fetus is also a part of the woman’s body. From the point of view of the fetus, I think being pro-life is actually a left-wing position. Gay rights seem to be clearly a right-wing position, in that there should not be any regulation of personal choice between consenting adults. However, if you view gay behavior as being very detrimental to society, then as a left-winger you could possibly justify disallowing it. So interestingly, I think opposing gay rights is only viable from a left-wing point of view. Anti-immigration is probably more consistent with the left, since immigrants could be a competitive threat to one’s job; a right-winger should encourage immigration and more competition. Interestingly, anti-evolution used to be a left-wing position. William Jennings Bryan, who argued against evolution in the Scopes Monkey Trial, was a populist Democrat. He worried that evolutionary theory would be used to justify why some people have more than others. Survival of the fittest is a very right-wing concept.

I doubt that political parties will ever be completely self-consistent in their positions, given the accidents of history. However, the current economic crisis is forcing people to make economic considerations more of a priority. I think what will happen is that there will be a growth in socially conservative economic populism, which as I argued is probably more self-consistent. Republican presidential candidate Mike Huckabee is an example of someone in that category. The backlash against Johnson’s Great Society and anti-poverty measures was largely racially motivated. However, as the generation that lived through the Cold War and the Civil Rights era shrinks in influence, I think a slow, rationalizing realignment of the issues among the political parties may take place.

Addendum (11/4/2008)
I think it is appropriate to add on Election Day that I don’t think either of the two major American political parties falls into the right or left camp as I’ve defined them. There are elements of both right and left (as well as a royalist bent) in both parties’ platforms.

Living in a simulation – Part 2

Suppose you are living in a simulation and you wanted to discover the theory of everything. What would that theory be? Probably, in your (simulated) mind, it would be the set of laws that govern all physical phenomena in your (simulated) observable universe. You would also want to understand how your universe came about and where it will end up. Let’s suppose that the programmer of your universe came up with a set of physical laws and let it run. As I discussed before, the programmer really can’t be sure what will happen in his simulation, but let’s say he was inspired or lucky and hit upon something that led to a universe that produced an inhabitant who could ask about the theory of everything.

Absolute versus relative wealth

There is a debate among sociologists, political scientists and economists over whether absolute wealth or relative wealth is more important. Recent findings suggest that happiness is linked more to relative wealth than to absolute wealth. The number of people who say they are happy has not gone up with the rise in the standard of living, and may even have come down. Also, a paper in the journal Science last year reported that activation in brain areas related to reward responded more to relative differences in wealth than to absolute amounts. I recall reading an article recently about Silicon Valley millionaires feeling poor and unsatisfied because of the billionaires in their neighbourhood. There was a difference between being rich and being “plane”-rich.

However, the current economic turmoil is uncovering a more complex (or maybe obvious) interaction at play. The anti-correlation between the performance of the economy and the likelihood of electing a Democratic US president seems to indicate that there is a threshold effect for wealth. Happiness does not go up appreciably above this threshold but certainly goes down a great deal below it. For people above this threshold, other factors start to play a role in their political decisions and sense of well-being. However, when you are below this threshold, the economy is the dominant issue.

Genetic basis for political orientation

I was listening to a podcast of Quirks and Quarks yesterday that featured an interview with political scientist James Fowler on his recent work showing that the likelihood of voting is partially genetic. (Fowler is the same person who recently argued, in a New England Journal of Medicine paper, that obese people tend to have obese friends.) The possibility that genes play a role in politics has come up before, most notably in a 2005 paper by John Alford, Carolyn Funk and John Hibbing arguing that political leanings are heritable. That study looked at identical and fraternal twins and found that the heritability of political ideology was about 50%. The work didn’t claim that genes could predict party affiliation, only how a person stood on the left-right divide on a number of issues. Fowler hypothesized that the reason politics has a genetic basis is that back in our hunter-gatherer days, figuring out how to divide the spoils of a hunt was important to the survival of the troop.

Following Fowler, I can imagine early humans taking two approaches to dividing up a downed mastodon. The paleo-leftists would argue that the meat should be shared equally among everyone in the tribe. The right-wingers would argue that each tribe member’s share should be based solely on how much they contributed to that hunt. My guess is that any ancient group with approximately equal representation of these two opposing views would outcompete groups with unanimous agreement on either viewpoint. In the right-wing society, the weaker members of the group simply wouldn’t eat as much and hence would have a lesser chance of survival, reducing the population and diversity of the group. The result might be a group of excellent hunters who are not so good at adapting to changing circumstances. In the proto-socialist group, the incentive to go out and hunt would be reduced, since everyone would eat no matter what. This might make hunts less frequent and again weaken the group. The group with political tension might compromise on a solution where everyone gets some share of the spoils but there are incentives or peer pressure to contribute. This may be why genes for both left and right leanings have persisted.

If this is true, then it would imply that we may always have political disagreement and the pendulum will continuously swing back and forth between left and right. However, this doesn’t imply that progress can’t take place. No one in a modern society tolerates slavery, even though that was the central debate a hundred and fifty years ago. Hence, progress is made by moving the center, and arguments between the left and the right lead to fluctuations around this center. A shrewd politician can take advantage of this fact by focusing on how to frame an issue instead of trying to win an argument. If she can create a situation where the two sides argue about a matter tangential to the pertinent issue, then the goal can still be achieved. For example, suppose a policy maker wanted to do something about global warming. The strategy should not be to go out and try to convince people of what to do. Instead, it may be better to find a person on the opposite end of the political spectrum (who also wants to do something about global warming) and then stage debates on their policy differences. One side could argue for strict regulations and the other for tax incentives. They then achieve their aim by getting the country to take sides on how to deal with global warming, instead of arguing about whether or not it exists.

Complexity of art and science

There seems to be a consensus that art cannot be compressed. A plot summary of Hamlet is not the same as Hamlet. A photo of Picasso’s Guernica is not the same as the actual painting in Madrid. Music is a particularly interesting case. A Bach partita can be written down in a few thousand bits, but reading the music is not the same as hearing it played by Heifetz or Menuhin. One could even argue that one performance by an artist is not the same as a recording, or even as another performance by the same artist.

Modeling the financial crisis

There is an interesting op-ed piece in the New York Times this week by physicist and science writer Mark Buchanan on predicting the current financial crisis. His argument is that traditional economists were unable to predict or handle the current situation (Nouriel Roubini notwithstanding) because their worldviews are shaped by equilibrium theorems, which unfortunately are either incomplete or wrong. Buchanan writes:

Living in a simulation

There has been a lot of press lately (summarized here) on the possibility that we are living inside a computer simulation. Much of the attention has focused on whether you could know if you lived in a simulation. Here, I will focus on what you could know (or compute) if you were running the simulation. Before I proceed, I’ll briefly summarize some points about the theory of computation. I’ve alluded to these ideas in several of my recent posts but have never formally introduced them. Obviously, this is a very deep area, so I’ll just touch on some important points.

Complexity of the brain

The Kolmogorov complexity of an object is the length of the minimal description of that object. For the brain, it would correspond to the length of the smallest computer program that could reproduce the brain; it can also be thought of as the amount of information necessary to model the brain. Computing the Kolmogorov complexity exactly is not possible, since it is an undecidable problem, but we can estimate an upper bound. If we presume that molecular biology is computable, then one estimate of the Kolmogorov complexity of the brain is given by the length of the genome, which at 3 billion base pairs (2 bits per base) is 6 billion bits. To be conservative, we could also include the genome of the mother as well as the baby’s, which gives 12 billion bits. This corresponds to less than two billion bytes and easily fits on a DVD. Hence, in principle, we could grow a brain with less than 12 billion bits of information, and this is probably an upper bound.
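The arithmetic is easy to check (using the naive encoding of 2 bits per base, since there are four bases):

    base_pairs = 3_000_000_000
    genome_bits = 2 * base_pairs          # log2(4) = 2 bits per base
    total_bits = 2 * genome_bits          # mother's genome plus baby's
    total_bytes = total_bits // 8
    print(genome_bits)                    # 6000000000
    print(total_bytes / 1e9)              # 1.5 -- GB, well under a DVD's ~4.7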