Our shared inheritance: the human genome

At the beginning of CBC Radio’s science program Quirks and Quarks are brief audio snippets of scientists speaking.  One of them is Francis Collins, of the Human Genome Project, saying “The human genome is our shared inheritance.”  On the eve of the new year, I thought I would reflect on what this means for us.

When the human genome was first published at the beginning of the century, it was proclaimed that humans were 99.9% identical and that there was no genetic basis for race.  Since then, the estimate of our similarity has been revised downwards to about 99.5%, and it could drop further.  The reason is that the first estimate was based on patching together the genomes of about 100 different people, so differences were underestimated.  In the last few years, the genomes of individuals like Craig Venter and Jim Watson have been sequenced, and the differences appear to be far larger.  One of the more recent and unexpected findings is that copy number variants, where pieces of the genome, including entire genes, are repeated, are far more numerous than SNPs, where just a single nucleotide differs.  Additionally, it now appears that members of different races can be distinguished genetically (see here for the argument). It was once claimed that the variance within races swamped the differences between the means of the races.  It now appears that while this is true in most directions in genome space, there are directions where it is not, as the toy simulation below illustrates.  It is still not known whether the observable differences are meaningful.
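To see how both of those statements can hold at once, here is a toy simulation in Python (with made-up numbers that have nothing to do with real genomic data): at any single locus the two simulated groups overlap almost completely, yet along the one direction in “genome space” that joins the group means, they separate quite cleanly.

```python
# Toy simulation, not real genomic data: two groups that overlap almost
# completely at every single "locus" can still be well separated along one
# particular direction when thousands of loci are combined.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_loci = 500, 5000
shift = 0.05  # tiny mean difference per locus, in units of one standard deviation

group_a = rng.normal(0.0, 1.0, size=(n_individuals, n_loci))
group_b = rng.normal(shift, 1.0, size=(n_individuals, n_loci))

# The direction in "genome space" along which the group means differ
# (known exactly here only because we built the simulation).
direction = np.full(n_loci, 1.0 / np.sqrt(n_loci))

proj_a = group_a @ direction
proj_b = group_b @ direction

separation = (proj_b.mean() - proj_a.mean()) / np.sqrt(0.5 * (proj_a.var() + proj_b.var()))
print(f"per-locus mean difference: {shift} standard deviations")
print(f"separation along the mean-difference direction: {separation:.1f} standard deviations")
# About 3.5 standard deviations: the groups barely differ at any one locus,
# yet are nearly distinguishable once all loci are projected onto this direction.
```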


Everting a sphere

To evert a sphere means to turn it inside out with a continuous transformation in which the sphere may pass through itself but must not be punctured, torn, creased, or pinched.  To see how nontrivial this is, just imagine trying to push the north pole through the south pole; a crease will develop along the equator.  Stephen Smale proved that an eversion was possible in 1957, but there wasn’t a concrete example until a few years later.  The video shown below uses a technique of corrugation developed by Bill Thurston, whose geometrization conjecture, which implies the Poincaré conjecture, was recently proved by Grigori Perelman.

The above video is an excerpt from a 20-minute video called “Outside In”, which was made with Bill Thurston’s input.  The full video can be found on YouTube in two parts.  Part 1 is here and part 2 is here.

The Fredholm Alternative

One of the most useful theorems in applied mathematics is the Fredholm Alternative.  However, because the theorem has several parts and gets expressed in different ways, many people don’t know why it has “alternative” in the name.  For them, the theorem is a means of constructing solvability conditions for linear equations used in perturbation theory.

The Fredholm Alternative Theorem can be easily understood by considering solutions of the matrix equation  A v = b, for a matrix A and vectors v and b.  Everything that applies to matrices can then be generalized to the infinite dimensional linear operators that occur in differential or integral equations.  The theorem states that exactly one of the following two alternatives holds:

  1. A v = b has one and only one solution
  2. A^* w = 0 has a nontrivial solution

where A^* is the transpose or adjoint of A.  However, this is not the form of the theorem that is usually used in applied math.  A corollary of the above theorem is that if the second alternative holds, then A v = b has a solution if and only if the inner product (b,w)=0 for every w in the nullspace of the adjoint of A, i.e. every w satisfying A^* w=0.  The condition (b,w)=0 is then a solvability condition for an operator equation (e.g. a differential or integral equation) that can be used in perturbation theory.  One can clearly see that when the theorem is stated this way, the “alternative” is obscured.
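Here is a minimal numerical check of that corollary, using a rank-deficient 2x2 matrix as a stand-in for a more interesting operator: the right-hand side b leads to a solvable system exactly when it is orthogonal to the nullspace of the adjoint.

```python
# A small numerical illustration of the solvability condition, assuming a
# singular 2x2 matrix (so the second alternative holds).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank 1, so A v = b is not always solvable

# A nontrivial solution of A^* w = 0 (here A^* is just the transpose).
w = np.array([2.0, -1.0])
assert np.allclose(A.T @ w, 0)

def solvable(b, tol=1e-10):
    """A v = b has a solution iff b lies in the column space of A."""
    v, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ v - b) < tol

b_good = np.array([1.0, 2.0])     # (b_good, w) = 0, so a solution should exist
b_bad = np.array([1.0, 0.0])      # (b_bad, w) = 2 != 0, so no solution

for b in (b_good, b_bad):
    print(f"b = {b},  (b, w) = {b @ w:+.0f},  solvable: {solvable(b)}")
```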

Nonlinearity

I think people generally view nonlinear effects in one of two ways.  They either 1) do not think about them at all and basically view everything through a linear lens, or 2) think of them in terms of a lack of predictability, as in chaos and the butterfly effect. I think both are somewhat dangerous viewpoints.  Given that the twenty-first century is shaping up to be one where complex systems, such as the economy and the climate, directly influence our lives, it is important that the general public, and especially scientists, have a more precise understanding of what nonlinearity can and cannot do.  Although Stan Ulam once remarked something to the effect that the term “nonlinear science” was about as meaningful as calling the bulk of zoology the “study of non-elephant animals”, I actually think there are some concrete notions attached to the term that can give valuable insight. In particular, nonlinearity does not only imply a lack of predictability; in some cases it can make things more predictable.

The first thing to note is that for most applications there are basically two important effects of nonlinearity, namely “threshold” and “saturation”.  In fact, saturation can be thought of as negative feedback with a threshold, so threshold is really the only effect to keep in mind, although separating the two is sometimes useful conceptually. By threshold, I mean that some variable does nothing until it crosses a threshold, and by saturation, I mean that the effect of some variable does not change much beyond some point.  Here, I’ll give some examples of how saturation and thresholds can go a long way in understanding complex phenomena; a small sketch of both effects is shown below.
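As a purely illustrative sketch, here are the two effects written as simple Python functions (a threshold-linear ramp and a saturating tanh, both stand-ins for whatever nonlinearity a real system has).  The point of the last few lines is the one made above: once a saturating nonlinearity is driven hard, large fluctuations in the input produce almost no fluctuation in the output, which is one sense in which nonlinearity can make things more predictable.

```python
# A sketch of the two nonlinear effects described above, using simple
# stand-in functions (a threshold-linear ramp and a saturating sigmoid).
import numpy as np

def threshold_linear(x, theta=1.0):
    """No response below the threshold theta, linear response above it."""
    return np.maximum(x - theta, 0.0)

def saturating(x, scale=1.0):
    """Response grows with x but levels off (saturates) for large x."""
    return np.tanh(x / scale)

# Threshold: inputs just below and just above the threshold give
# qualitatively different responses.
print(threshold_linear(np.array([0.9, 1.1])))   # -> [0.  0.1]

# Saturation as a source of predictability: large fluctuations in the input
# barely move the output once the nonlinearity has saturated.
noisy_inputs = 5.0 + np.random.default_rng(1).normal(0.0, 1.0, size=10_000)
print("input  std:", noisy_inputs.std())              # about 1.0
print("output std:", saturating(noisy_inputs).std())  # tiny: output pinned near 1
```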

Plausible Reasoning

The seeds of the modern era could arguably be traced to the Enlightenment and the invention of rationality. I say invention because, although we may be universal computers and we are certainly capable of applying the rules of logic, it is not what we naturally do. What we actually use, in the term coined by E.T. Jaynes in his iconic book Probability Theory: The Logic of Science, is plausible reasoning. Jaynes is famous for being a major proponent of Bayesian inference during most of the second half of the last century. However, to call Jaynes’s book a book about Bayesian statistics is to wholly miss his point, which is that probability theory is not about measures on sample spaces but a generalization of logical inference.  In Jaynes’s view, probabilities measure degrees of plausibility.

I think a perfect example of how unnatural the rules of formal logic are is to consider  the simple implication

A \rightarrow B

which means: if A is true then B is true.  By the rules of formal logic, if A is false then B can be either true or false (i.e. a false premise can prove anything). Conversely, if B is true, then A can be either true or false.  The only other valid conclusion you can deduce from A \rightarrow B is that if B is false then A must be false.  Implication is equivalent to the logical statement (\neg A) \vee B, where \neg means negation and \vee means logical OR.
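A truth table makes this concrete.  Here is a short Python sketch that tabulates the implication as (not A) or B and shows why the only deduction it licenses beyond the statement itself is that B false forces A false.

```python
# Truth table verifying that "A implies B" is the same as (not A) or B.
from itertools import product

def implies(a, b):
    return (not a) or b

print(f"{'A':<6} {'B':<6} A -> B")
for a, b in product([True, False], repeat=2):
    print(f"{a!s:<6} {b!s:<6} {implies(a, b)}")

# Output:
# A      B      A -> B
# True   True   True     <- A true forces B true
# True   False  False    <- the only row where the implication fails
# False  True   True     <- A false, B can still be true ...
# False  False  True     <- ... or false: a false premise rules nothing out
```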


Why ugly is sometimes beautiful

When Stravinsky’s ballet “The Rite of Spring” debuted in Paris in 1913, it caused a riot.  The music was so complex and novel that the audience didn’t know how to react.  They became agitated, jeered, argued amongst themselves, and eventually became violent.  However, by the 1920s The Rite of Spring was well accepted, and now it is considered one of the greatest works of the 20th century.  When Impressionism was introduced in the late 19th century, it too was not well received.  The term was actually meant as a derisive label for the movement.  These days, the Impressionist rooms are often the most popular and crowded at art museums.  There was strong opposition to Maya Lin’s design for the Vietnam Memorial in 1981; she had to defend it before the US Congress and fight to keep it from being changed.  Now it is considered one of the most beautiful monuments in Washington, D.C.  There are countless other examples of icons of beauty that were initially considered offensive or ugly.  I think this is perfectly consistent with what we know about neuroscience.