The ontological argument

An ontological argument is an attempt to prove the existence of God using logic alone. It is one of those things that I had come across a few times in the past but never took the time to understand. So, when a blog post by Nathan Schneider on this topic appeared last week in the New York Times, I finally tried to follow the argument. There have been many attempts at an ontological proof throughout history. Schneider wrote about an early one from the eleventh century by St. Anselm, who became Archbishop of Canterbury. I couldn’t really understand Schneider’s somewhat poetic version of the proof, but after reading several other versions on the web I think I finally get it. It is actually quite clever, although it obviously must make some assumptions that are not self-evident.

Here’s the proof: The first assumption is that God is the greatest possible being that can be conceived. The second assumption is that a being that is conceived and exists is greater than one that is conceived but does not exist. Hence, God must exist, for if God existed only in the mind, we could conceive of a greater being, namely one that also exists in reality, contradicting the first assumption. Another way of saying this is that the fact that I can think of a God implies that she must exist. In some sense, this is a precursor to modal realism.
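
To make the structure explicit, here is one way to write the argument down formally. This is my own schematic notation, not Anselm’s wording:

```latex
% Schematic formalization (my own notation, not Anselm's wording).
% C = the set of conceivable beings, \succ = "is greater than",
% E(x) = "x exists in reality", g = God.
\begin{align*}
\textbf{A1:}\quad & g \in C \quad\text{and}\quad \forall x \in C:\ g \succeq x \\
\textbf{A2:}\quad & \forall x \in C:\ \neg E(x) \;\Rightarrow\; \exists x' \in C \ \text{with}\ E(x')\ \text{and}\ x' \succ x \\
\textbf{Claim:}\quad & E(g) \\
\textbf{Proof:}\quad & \text{Suppose } \neg E(g). \text{ Then by A2 there is a conceivable } x' \succ g, \text{ contradicting A1.}
\end{align*}
```

Written this way, it is also easier to see where the criticisms below attach: Kant’s objection denies that E can be treated as a property at all, and the other weakness is that A1 simply presupposes that the greatness ordering has a greatest conceivable member.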

There have been many historical criticisms of this proof. Kant’s critique is that existence is not a predicate. By this he means that existence is not a property of an object. For example, a unicorn has properties such as having a horn, looking like a horse, and so forth. But a unicorn could exist or not exist, and that doesn’t change what a unicorn is. So existence is something that must be inferred empirically and can never be deduced a priori. I think Kant effectively kills Anselm’s proof. However, another weakness in the argument is that Anselm assumes that the set of all conceivable beings actually contains a greatest member. This may not be true either; the natural numbers, for instance, have no largest element. There could be an infinite hierarchy of Gods, a possibility that has interesting philosophical consequences as well.

Life without free will

Given that a materialistic theory of mind is becoming more and more mainstream, we must face the prospect of living our lives without free will. That is not to say that our lives will be predictable or even determined. Given what we know about dynamical systems, computer science and quantum mechanics, it is almost certain that life is completely unpredictable and undetermined. However, there is no “you” or “me” to make decisions about what we do. Results from neuroscience (e.g. Bill Newsome’s lab at Stanford) show that there are neurons in the cortex that fire before a monkey makes a decision, and that stimulating some of these neurons can influence the monkey’s choice. We too are at the mercy of our neurons.

So the question I have is: once a large fraction of the population believes that free will does not exist, will that change society? Strictly speaking, this is a dynamical systems question: the belief in free will is one aspect of the state of the system, and what I am asking is how the system evolves after it reaches a state in which there is no belief in free will. Still, I will address it using language that connotes some sense of agency or directed action, since that is more convenient. Keep in mind, though, that everything I say is with respect to how society will evolve after it attains a state where there is no longer a belief in free will.
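
To make the dynamical systems framing concrete, here is a toy sketch. The model, the variables and the parameters are entirely invented; the only point is to show belief being treated as one coordinate of a state vector whose evolution we can then follow:

```python
# Toy sketch: belief in free will as one coordinate of a societal state vector.
# The model and parameters are made up purely to illustrate the framing.
import numpy as np

def step(state, dt=0.01):
    b, c = state                     # b: fraction believing in free will, c: some behavioral norm
    db = -0.5 * b * (1 - b)          # belief erodes once doubt takes hold (logistic decay)
    dc = 0.2 * (b - c)               # the norm relaxes toward the current level of belief
    return np.array([b + dt * db, c + dt * dc])

state = np.array([0.9, 0.9])         # start with widespread belief
for _ in range(5000):
    state = step(state)

print(state)                         # society's state after belief has largely decayed away
```

In this language, “what happens after belief disappears” just becomes a question about trajectories starting from a particular region of state space.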

Continue reading

Being wrong may be rational

This past Sunday, economist Paul Krugman was lamenting in a review of Justin Fox’s book “The Myth of the Rational Market” (which he liked very much) that despite the current financial crisis and previous crises, like the collapse of the hedge fund Long-Term Capital Management, people still believe in efficient markets as strongly as ever. The efficient market hypothesis is the basis of most of modern finance and holds that the price of a security is always correct, reflecting all available information, so that you can never systematically beat the market and artificial bubbles should never occur. Krugman wonders what it will take to ever change people’s minds.

I want to show here that there might be no amount of evidence that will ever change their minds, and yet they can still be perfectly rational in the Bayesian sense. The argument also applies to other controversial topics. I think it is generally believed in intellectual circles that the reason there is so much disagreement on these issues is that the other side is either stupid, deluded or irrational. I want to point out that believing in something completely wrong, even in the face of overwhelming evidence, can arise in perfectly rational beings. That is not to say that faulty reasoning does not exist or that it cannot be dangerous. It just explains why two perfectly reasonable and intelligent people can disagree so alarmingly.
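
Here is a sketch of the kind of calculation I have in mind. Everything in it, the hypotheses, the crash probabilities, the priors, is invented for illustration; the point is only that both observers apply Bayes’ rule to exactly the same data:

```python
# Toy illustration (invented numbers): two Bayesians see the same evidence but
# start from very different priors, so the same data never brings them together.

# Hypotheses about markets: H0 = "markets are efficient", H1 = "bubbles happen".
# Assumed probability of observing a crash in a given year under each hypothesis.
p_crash = {"H0": 0.02, "H1": 0.10}

def update(prior_h1, observations):
    """Posterior probability of H1 after a sequence of crash (1) / no-crash (0) years."""
    post = prior_h1
    for x in observations:
        like_h1 = p_crash["H1"] if x else 1 - p_crash["H1"]
        like_h0 = p_crash["H0"] if x else 1 - p_crash["H0"]
        post = post * like_h1 / (post * like_h1 + (1 - post) * like_h0)
    return post

data = [0] * 18 + [1, 1]        # twenty years of data with two crashes at the end

print(update(0.5, data))        # an open-minded prior moves substantially toward H1
print(update(1e-9, data))       # a near-zero prior on H1 barely budges
```

The second observer is updating by exactly the same rule; the evidence is simply impotent against a sufficiently extreme prior.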

Continue reading

Ecosystem ghosts

Olivia Judson’s blog post in the New York Times today is about the fragility and robustness of ecosystems. She points out that we really don’t know what happens to an ecosystem when a single species goes extinct. Can that species be restored, or has another species taken over its niche? Also, when an invasive species arrives, it may or may not thrive. Mathematical models have found that the perturbation induced by such an event can cause another species to go extinct even if the invasive species itself eventually dies out. These transient invaders are called ghosts. She also discusses experimental ecosystems built from single-celled organisms. In these artificial settings, the equilibrium states are generally composed of a small number of species, and ghosts can cause established species to disappear.

Now, this brings me to something that has always puzzled me: why are natural ecosystems so varied and relatively robust when they are at the same time so susceptible to invasive species? Examples include rabbits and cane toads ravaging Australia, zebra mussels clogging up the North American Great Lakes, and kudzu taking over the American Southeast. Clearly, these examples show that there were niches in the ecosystems that were not being exploited. My guess is that if we wait long enough, these invaded ecosystems will eventually adjust and become varied again. After all, the invasive species are held in check in their native habitats. Thus, ecosystems may tend to evolve toward a state with wide variety, but one that also leaves them vulnerable to attack. Can we prove this mathematically? The really interesting thing is that this fragile stability seems to require a large number of species, since experiments with small numbers tend to evolve to small communities. Why is that? What is the difference between a large system and a small system? Is there a bifurcation or phase transition as you increase the size of the ecosystem? Is there an analogy to economics or the brain? This is why I’m so interested in large but finite interacting systems. There seems to be something there that I just don’t understand.
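
As a toy version of the kind of experiment I have in mind, one can take a small generalized Lotka-Volterra community, let the residents settle, drop in a small invader, and then ask which species are still standing. The parameters below are made up, and I make no claim that these particular numbers produce a ghost; the sketch only sets up the experiment:

```python
# Toy generalized Lotka-Volterra invasion experiment (invented parameters).
# A transient invader ("ghost") may or may not leave a lasting mark; the point
# is just to set up the simulation one would scan over.
import numpy as np

r = np.array([1.0, 1.0, 1.0, 0.8])           # intrinsic growth rates (last entry: invader)
A = np.array([[1.0, 0.6, 0.4, 0.9],          # competition coefficients A[i, j]:
              [0.5, 1.0, 0.6, 0.3],          # effect of species j on species i
              [0.4, 0.5, 1.0, 0.3],
              [1.1, 0.4, 0.4, 1.0]])

def simulate(x0, t_max=200.0, dt=0.01, threshold=1e-3):
    x = np.array(x0, dtype=float)
    for _ in range(int(t_max / dt)):
        x += dt * x * r * (1.0 - A @ x)      # dx_i/dt = r_i x_i (1 - sum_j A_ij x_j)
        x[x < threshold] = 0.0               # treat tiny populations as extinct
    return x

residents = simulate([0.3, 0.3, 0.3, 0.0])             # let the resident community settle
invaded = simulate(np.append(residents[:3], 0.05))     # then drop in a small invader

print(residents)
print(invaded)    # did the invader persist, and did any resident disappear?
```

Repeating this over many randomly drawn communities of different sizes would be one crude way to start probing the large-versus-small question.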

Information theory

It took me a very long time to develop an intuitive feel for information theory and Shannon entropy. In fact, it was only when I had to use it for an application that it finally fell into place and made visceral sense. Now I fully understand why people, especially in the neural coding field, are so focused on it. I also think that the concepts are very natural and highly useful in a wide variety of applications. I now think in terms of entropy and mutual information all the time.

Our specific application was to develop a scheme to find important amino acid positions in families of proteins called GPCRs, which I wrote about recently.  Essentially, we have a matrix of letters from an alphabet of twenty letters.  Each row of the matrix corresponds to a protein and each column of the matrix corresponds to a specific position on the protein.  Each column is a vector of letters.  Conserved columns are those which have very little variation in the letters and these positions are thought to be functionally important.   They are also relatively easy to find.
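
For concreteness, here is roughly what the conservation part looks like as a calculation. The toy alignment below is made up and far smaller than a real GPCR family; the idea is just that the Shannon entropy of a column is low when the column is conserved:

```python
# Sketch: Shannon entropy of each alignment column as a conservation score.
# The toy alignment is invented; low entropy = highly conserved position.
import numpy as np
from collections import Counter

alignment = ["MKTAW",
             "MKSAW",
             "MRTGW",
             "MKTAW"]           # rows = proteins, columns = positions

def column_entropy(column):
    """Shannon entropy (in bits) of the letter distribution in one column."""
    counts = Counter(column)
    p = np.array(list(counts.values()), dtype=float) / len(column)
    return float(-np.sum(p * np.log2(p)))

for j in range(len(alignment[0])):
    col = [row[j] for row in alignment]
    print(j, col, round(column_entropy(col), 3))
```

A column of identical letters has entropy zero, while a column in which all twenty amino acids are equally likely has the maximum of log2(20), about 4.3 bits.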

What we were interested in was finding pairs of columns that are correlated in some way. For example, suppose whenever column 1 had the letter A (in a given row), column 2 was more likely to have the letter W, whereas when column 1 had the letter C, column 2 was less likely to have the letter P but more likely to have R and D.  You can appreciate how hard it would be to spot these correlations visually.  This is confounded by the fact that if the two columns have high variability then there will be random coincidences from time to time.  So what we really want to know is the amount of correlation that exceeds randomness. As I show below, this is exactly what mutual information quantifies.
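
In the same toy format, the mutual information between two columns, I(X;Y) = sum over letter pairs of p(x,y) log2[ p(x,y) / (p(x) p(y)) ], is zero when the columns are independent and grows with the amount of association. Here is a sketch; in practice one also has to correct for the random coincidences mentioned above, for instance by comparing against shuffled columns, which is omitted here:

```python
# Sketch: mutual information (in bits) between two alignment columns.
from collections import Counter
from math import log2

def mutual_information(col_x, col_y):
    """I(X;Y) = sum_xy p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    n = len(col_x)
    p_x = Counter(col_x)
    p_y = Counter(col_y)
    p_xy = Counter(zip(col_x, col_y))
    mi = 0.0
    for (x, y), c in p_xy.items():
        pxy = c / n
        mi += pxy * log2(pxy * n * n / (p_x[x] * p_y[y]))
    return mi

# Toy example loosely mimicking the text: when column 1 has A, column 2 tends to have W.
col1 = list("AAAACCCC")
col2 = list("WWWVPPPR")
print(mutual_information(col1, col2))              # high: the columns are correlated
print(mutual_information(col1, list("WPWPWPWP")))  # zero: no association
```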

Continue reading

Incentives in health care

The American health care system relies on a “fee for service” model, in which physicians are reimbursed for the procedures they perform. I think this is a perfect example of how organizational structure, and in particular incentives, can affect outcomes. Free market proponents argue that the only system that can optimally distribute goods and services is a free market. I tangentially posted on efficient markets a short time ago. However, even with a free market, the rules of the game determine what it means to win. For example, when physicians are reimbursed for procedures, it makes sense for them to perform as many procedures as possible. If the choice is between an inexpensive therapy and an expensive one, and there is no clear-cut evidence for the benefit of either, then why choose the inexpensive option? A provocative article in the New Yorker by Atul Gawande shows what can happen when this line of thought is taken to the extreme. Another unintended consequence of the fee for service model may be that there is no incentive to recruit individuals for clinical studies, as detailed in this article by Gina Kolata in the New York Times. The interesting thing about both of these examples is that they are independent of whether health insurance is private, public or single payer. Gawande’s article was mostly about Medicare, which is government run.

An alternative to fee for service is “fee for outcome”, where physicians are rewarded for having healthier patients. Gawande favours the Mayo Clinic model, where the physicians are on fixed salaries and focus on maximizing patient care. There must be a host of different compensation models, which I’m sure economists have explored. However, perhaps this is also a (critically important) problem where ideas from physics and applied math might be useful.
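
As a cartoon of the incentive argument, with entirely made-up functions and numbers, one can let a physician choose how many procedures to perform and compare the payoff-maximizing choice under fee for service with that under a fee-for-outcome scheme:

```python
# Cartoon of the incentive problem (made-up numbers): the payoff-maximizing
# number of procedures differs sharply between the two compensation schemes.
import numpy as np

n = np.arange(0, 21)                        # number of procedures performed
health = 10 * n / (1 + 0.5 * n) - 0.4 * n   # benefit saturates, overtreatment hurts
cost = 1.0 * n                              # physician's time and effort per procedure

fee_for_service = 3.0 * n - cost            # paid per procedure
fee_for_outcome = 2.0 * health - cost       # paid in proportion to patient health

print("fee for service -> optimal n =", n[np.argmax(fee_for_service)])
print("fee for outcome -> optimal n =", n[np.argmax(fee_for_outcome)])
print("patient health at those choices:",
      round(health[np.argmax(fee_for_service)], 2),
      round(health[np.argmax(fee_for_outcome)], 2))
```

In this cartoon the fee-for-service optimum is simply as many procedures as possible, while the fee-for-outcome optimum stops well short of that and, with these toy numbers, leaves the patient healthier.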