The Scientific Worldview

An article that has been making the rounds on the twitter/blogosphere is The Science of Why We Don’t Believe Science by Chris Mooney in Mother Jones.  The article asks why people cling to old beliefs even in the face of overwhelming data against them.  It argues that we basically use values to evaluate scientific facts: if the facts go against a value system built over a lifetime, we will find ways to rationalize the facts away.  This is particularly true for climate change and the claim that vaccines cause autism.  The scientific evidence is pretty strong that our climate is changing and that vaccines don’t cause autism, but adherents to the contrary beliefs simply will not change their minds.

I mostly agree with the article, but I would add that the idea that the scientific belief system is somehow more compelling than an alternative belief system may not be on as solid ground as scientists think.  The concept of rationality and the scientific method was a great invention that has improved the human condition dramatically.  However, I think one of the things that people trained in science forget is how much we trust the scientific process and other scientists.  Often when I watch a science show like NOVA on paleontology, I am simply amazed that archeologists can determine that a piece of bone, which looks like some random rock to me, is a fragment of a finger bone of a primate that lived two million years ago.  However, I trust them because they are scientists, and I presume that they have undergone the same rigorous training and constant scrutiny I have.  I know that their conclusions are based on empirical evidence and a line of thought that I could follow if I took the time.  But if I grew up in a tradition where a community elder prescribed truths from a pulpit, why would I take the word of a scientist over someone I know and trust?  To someone not trained in or exposed to science, it would just be the word of one person over another.

Thus, I think it would be prudent for scientists to realize that they possess a belief system that in many ways is no more self-evident than any other system.  Sure, our system has proven to be more useful over the years, but ancient cultures managed to build massive architectural structures like the pyramids and invented agriculture without the help of modern science and engineering.  What science prizes is parsimony of explanation, but at the risk of being called a post-modern relativist, this is mostly an aesthetic judgement.  The worldview that everything is the way it is because a creator insisted on it is as self-consistent as the scientific view.  The rational scientific worldview takes a lot of hard work and time to master, and some (many?) people are just not willing to put in the effort it takes to learn it.  We may need to accept that a scientific worldview may not be palatable to everyone.  Understanding this may help us devise better strategies for conveying scientific ideas.

Does the cosmos know you exist?

After a year-long public battle with cancer, the writer and cultural critic Christopher Hitchens died this Thursday.  Contemplating his early death, Hitchens reportedly told NPR that he was “dealt a pretty good hand by the cosmos, which doesn’t know I’m here and won’t know when I’m gone.”  Hitchens made this comment because he was a fervent atheist.  However, the statement could be valid even if the universe has a creator.  It all depends on whether you think the universe is computable or not.  (By all accounts, it is at least well approximated by a computable universe.)  If the universe is computable, then in principle it is equivalent to one (or many) of the countably infinite possible computer programs.  This implies that someone could have written the program that generated our universe, and this person would in fact be the Creator.  However, depending on the cardinality of the Creator (by cardinality I mean the size of a set and not a reference to Catholicism), the Creator may or may not know that you, or anything at all, exists in her universe.

Let’s take a specific example to make this more concrete.  It has been shown that simple cellular automata (CA) like Rule 110 are universal computers.  A CA is a discrete dynamical system on a grid where each grid point can be either 1 or 0 (i.e. a bit) and there is an update rule whereby the bits stay the same or flip on the next time step depending on the current state of the bits.  (Rule 110 is a one-dimensional CA where a bit is updated depending on the state of its two nearest neighbours and itself.)  Since Rule 110 is universal, every possible computation can be generated by running it on the appropriate bit string as an initial state.  So the entire history of our universe is encoded by a single string of binary digits together with the bits that encode Rule 110.  Note that it doesn’t matter if our universe is quantum mechanical, since any quantum mechanical system can be simulated on a classical computer (albeit perhaps much more slowly).  Hence, all the Creator needed to do was write down some string of digits and let the CA run.
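To make the Rule 110 description concrete, here is a minimal sketch of the update rule in Python.  The grid width, initial condition, and periodic boundary are illustrative assumptions on my part; the encodings needed for actual universality are far more involved than this.

```python
# Rule 110: each cell's new value depends on (left, center, right).
# The number 110 in binary, 01101110, is exactly the output column of
# the rule table read over neighborhoods 111, 110, ..., 000.
RULE = 110

def step(cells):
    """Advance one time step with periodic boundary conditions (an assumption)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a number 0..7
        out.append((RULE >> neighborhood) & 1)              # look up the rule table
    return out

# Start from a single 1 bit and watch the characteristic pattern grow.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[b] for b in row))
    row = step(row)
```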

Now, what constitutes “you”, or any macroscopic thing in the universe, is a collection of bits.  These bits need not be contiguous, since nothing says that you have to be local at the level of the bits.  Thus you would be one of the possible subsets of the bits of a binary string.  The set of all these subsets is called the power set.  Since any bit can either be in a given subset or not, a string of N bits has 2^N sets in its power set.  Thus, you are one of an exponential number of possible bit combinations in a finite universe, and if the universe is infinitely large then you are one of an uncountably infinite number of possible combinations.  Hence, in order for the Creator to know you exist she has to a) know which subset corresponds to you and be able to find you, and b) know when that subset will appear in the universe.  Thanks to the brilliance of Georg Cantor and Alan Turing, we can prove that even if a Creator can solve a) (which is no easy task), she cannot solve b) unless she is more powerful than a classical computer.  The reason is that solving b) requires predicting when a given set of symbols will appear in a computation, and this is equivalent to solving the Halting Problem (see here for a recent post I wrote introducing the concepts of computability).  Hence, knowing whether “you” will ever exist is undecidable.  In a completely self-consistent world where every being is computable, no being can systematically determine if another being exists in their own creation.  In such a universe, Hitchens is right.  However, the converse also holds: if there is a universe where a Creator knows about “you”, then that Creator must be computationally more powerful than you.
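To spell out the reduction in the argument above, here is a hedged Python sketch.  The oracle `pattern_appears` is hypothetical (it cannot exist for a classical computer, which is the point); wrapping a program so that a marker pattern appears exactly when it halts shows that deciding b) would solve the Halting Problem.

```python
def pattern_appears(program, pattern):
    """Hypothetical oracle: decides whether `pattern` ever appears
    when `program` runs. Turing's theorem says this cannot exist."""
    raise NotImplementedError("undecidable on a classical computer")

def halts(program):
    """If the oracle existed, we could decide the Halting Problem with it."""
    MARKER = "__HALTED__"

    def wrapped():
        program()        # simulate the program in question...
        return MARKER    # ...this pattern is produced only if it halts

    # Asking whether MARKER ever appears answers whether `program` halts,
    # contradicting Turing's theorem; hence the oracle is impossible.
    return pattern_appears(wrapped, MARKER)
```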

 

Two talks

Last week I gave a talk on obesity at Georgia State University in Atlanta, GA. Tomorrow, I will be giving a talk on the kinetic theory of coupled oscillators at George Mason University in Fairfax, VA. Both talks are variations of ones I have given before, so instead of uploading my slides I’ll just point to previous talks, papers, and posts on the topics.  For obesity, see here, and for kinetic theory, see here, here and here.

In the Times

The New York Times has some interesting articles online right now.  There is a series of interesting essays on the Future of Computing in the Science section, and the philosophy blog The Stone has a very nice post by Alva Noe on Art and Neuroscience.  I think Noe’s piece eloquently expresses an idea I have tried to get across recently: while mind may arise exclusively from brain, this doesn’t mean that looking at the brain alone will explain everything that the mind does.  Neuroscience will not make psychology or art history obsolete.  The reason is simply a matter of computational complexity, or even more simply, combinatorics.  It goes back to Philip Anderson’s famous article More is Different (e.g. see here), where he argued that each field has its own set of fundamental laws and rules, and thinking at a lower level isn’t always useful.

For example, suppose that what makes me enjoy or like a piece of art is set by a hundred or so on-off neural switches.  Then there are 2^{100} different ways I could appreciate art.  Now, I have no idea if a hundred is correct, but suffice it to say that anything above 50 or so makes the number of combinations so large that it will take Moore’s law a long time to catch up, and anything above 300 makes it virtually impossible to handle computationally with a classical computer in our universe.  Thus, if art appreciation is sufficiently complex, meaning that it involves a few hundred or more neural parameters, then Big Data on the brain alone will not be sufficient to obtain insight into what makes a piece of art special.  Some sort of reduced description would be necessary, and that already exists in the form of art history.  That is not to say that mining data on how people respond to art cannot provide some statistical information on what constitutes a masterpiece.  After all, Netflix is pretty successful at predicting what movies you will like based on what you have liked before and what other people like.  However, there will always be room for the art critic.
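As a quick back-of-the-envelope check on those numbers, here is a short sketch.  The figure of roughly 10^80 atoms in the observable universe is a standard rough estimate, not a precise one.

```python
# Count the combinations for n binary switches and compare to the number
# of atoms in the observable universe (~10^80, a rough standard estimate).
ATOMS_IN_UNIVERSE = 10 ** 80

for n in (50, 100, 300):
    print(f"n = {n}: {2 ** n:.2e} combinations")

# 2^300 is about 2e90, which already exceeds the atom count, so you could
# not even store one bit per atom, let alone enumerate the states.
print(2 ** 300 > ATOMS_IN_UNIVERSE)  # True
```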

New paper on GPCRs

New paper in PloS One:

Fatakia SN, Costanzi S, Chow CC (2011) Molecular Evolution of the Transmembrane Domains of G Protein-Coupled Receptors. PLoS ONE 6(11): e27813. doi:10.1371/journal.pone.0027813

Abstract

G protein-coupled receptors (GPCRs) are a superfamily of integral membrane proteins vital for signaling and are important targets for pharmaceutical intervention in humans. Previously, we identified a group of ten amino acid positions (called key positions), within the seven transmembrane domain (7TM) interhelical region, which had high mutual information with each other and many other positions in the 7TM. Here, we estimated the evolutionary selection pressure at those key positions. We found that the key positions of receptors for small molecule natural ligands were under strong negative selection. Receptors naturally activated by lipids had weaker negative selection in general when compared to small molecule-activated receptors. Selection pressure varied widely in peptide-activated receptors. We used this observation to predict that a subgroup of orphan GPCRs not under strong selection may not possess a natural small-molecule ligand. In the subgroup of MRGX1-type GPCRs, we identified a key position, along with two non-key positions, under statistically significant positive selection.
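As a side note for readers unfamiliar with the method: the mutual information mentioned in the abstract measures how strongly residues at two alignment positions co-vary.  Here is a minimal sketch with a toy alignment; the plug-in estimator and the example columns are my own illustration, not the paper’s actual pipeline.

```python
from collections import Counter
from math import log2

def mutual_information(col_a, col_b):
    """Plug-in MI (in bits) between two alignment columns of equal length."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)   # marginal residue counts
    pab = Counter(zip(col_a, col_b))          # joint residue-pair counts
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# Toy columns: residues that co-vary perfectly give high MI (1 bit here).
print(mutual_information(list("AAAAGGGG"), list("LLLLVVVV")))  # -> 1.0
```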

The pitfalls of obesity and cancer drugs

The big medical news last week was that the US FDA revoked the approval of the drug Avastin for the treatment of breast cancer.  The reason was that any potential efficacy did not outweigh the side effects.  Avastin is an anti-angiogenesis drug that blocks the formation of blood vessels by inhibiting the vascular endothelial growth factor VEGF-A.  This class of drugs is a big money-maker for the biotechnology firm Genentech and has been used in cancer treatments and, in a closely related form called Lucentis, for macular degeneration.  Avastin will still be allowed for colorectal and lung cancer, and physicians can still prescribe it off-label for breast cancer.  Targeting blood delivery as an anti-tumour strategy was pioneered by Judah Folkman.  He and collaborators also showed that adipose tissue mass (i.e. fat cells) can be regulated by controlling blood vessel growth (Rupnick et al., 2002), and this has been proposed as a potential therapy for obesity (e.g. Kolonin et al., 2004; Barnhart et al., 2011).  However, the idea will probably not go very far because of potentially severe side effects.

I think this episode illustrates a major problem in developing any type of drug for obesity and, to some degree, cancer.  I’ve posted on the basic physiology and physics of weight change multiple times before (see here), so I won’t go into details here, but suffice it to say that we get fat because we eat more than we burn.  Consider this silly analogy: suppose we have a car with an expandable gas tank and we seem to be overfilling it all the time, so that it’s getting really big and heavy.  What should we do to lighten the car?  Well, there are three basic strategies: 1) we can put a hole in the gas tank so that as we fill the tank gas just leaks out, 2) we can make the engine more inefficient so it burns gas faster, or 3) we can put less gas in the car.  If you look at it this way, the first two strategies seem completely absurd, but they are pursued all the time in obesity research.  The drug Orlistat blocks absorption of fat in the intestines, which basically makes the gas tank (and your bowels) leaky.  One of the most celebrated recent discoveries in obesity research was that human adults have brown fat.  This is a type of adipocyte that converts food energy directly into heat.  It is abundant in small mammals like rodents and in babies (that’s why your newborn is nice and warm) but was thought to disappear in adults.  Now various labs are trying to develop drugs that activate brown fat.  In essence, they want to make us less efficient and turn us into heaters.  The third strategy of reducing input has also been tried and has failed various times.  Stimulants such as methamphetamines were found very early on to suppress appetite, but turning people into speed addicts wasn’t a viable strategy.  A recent grand failure was Rimonabant, a blocker of the cannabinoid receptor CB-1.  It worked on the principle that since cannabis seems to enhance appetite, blocking the receptor should suppress appetite.  It did work, but it also caused severe depression and suicidal thoughts.  Also, given that CB-1 is important in governing synaptic strengths, I’m sure there would have been bad long-term effects as well.  I won’t bother telling the story of fen-phen.
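For a flavor of the “eat more than we burn” bookkeeping, here is a minimal sketch of a one-compartment energy-balance model.  The energy density RHO and the maintenance cost EPS are illustrative round numbers I am assuming for the example, not fitted values from any of my papers.

```python
# dW/dt = (intake - expenditure) / energy_density, with expenditure
# assumed proportional to body weight (a deliberate simplification).
RHO = 7700.0   # kcal stored per kg of tissue (rough figure, assumed)
EPS = 22.0     # maintenance cost in kcal per kg per day (assumed)

def simulate(weight_kg, intake_kcal_per_day, days, dt=1.0):
    """Euler-step the energy balance forward for `days` days."""
    for _ in range(int(days / dt)):
        weight_kg += dt * (intake_kcal_per_day - EPS * weight_kg) / RHO
    return weight_kg

# An 80 kg person eating 2500 kcal/day drifts toward 2500/22 ~ 114 kg.
print(round(simulate(80.0, 2500.0, days=5 * 365), 1))
```

The point of the toy model is the steady state: weight settles where intake equals expenditure, which is why strategies 1) and 2) amount to tinkering with the burn side of the ledger while strategy 3) changes the input.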

It’s easy to see why almost all obesity drug therapies will fail: they must target some important component of metabolism or neural function.  While we seem to have some unconscious controls of appetite and satiety, we can also easily override them (as I plan to do tomorrow for Thanksgiving).  Hence, any drug that targets some mechanism will likely either cause bad side effects or be compensated for by other mechanisms.  This also applies to some degree to cancer drugs, which must kill cancer cells while sparing healthy cells.  This is why I tend not to get overly excited whenever another new discovery in obesity research is announced.

New paper

A new paper in Physical Review E is now available online here.  In this paper, Michael Buice and I show how you can derive an effective stochastic differential (Langevin) equation for a single element (e.g. a neuron) embedded in a network by averaging over the unknown dynamics of the other elements.  This implies that, given measurements from a single neuron, one might be able to infer properties of the network that it lives in.  We hope to show this in the future.  In this paper, we perform the calculation explicitly for the Kuramoto model of coupled oscillators (e.g. see here), but it can be generalized to any network of coupled elements.  The calculation relies on the path or functional integral formalism Michael developed in his thesis and generalized at the NIH.  It is a nice application of what is called “effective field theory”, where new dynamics (i.e. a new action) are obtained by marginalizing, or integrating out, unwanted degrees of freedom.  The path integral formalism gives a nice platform to perform this averaging.  The resulting Langevin equation has a noise term that is non-white, non-Gaussian, and multiplicative.  It is probably not something you would have guessed a priori.
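To give a feel for the setup, here is a minimal simulation sketch: a Kuramoto network in which we record only one oscillator, treating the other N-1 as the unobserved “bath”.  The parameter values are illustrative assumptions, and this brute-force simulation is of course not the paper’s path integral calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 200, 1.5, 0.01, 5000
omega = rng.standard_normal(N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)    # initial phases

trace = np.empty(steps)                   # the single observed oscillator
for t in range(steps):
    # Mean-field form of the Kuramoto coupling:
    # dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
    z = np.exp(1j * theta).mean()         # order parameter r * exp(i * psi)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    trace[t] = theta[0]

# From oscillator 0's point of view, the rest of the network acts like an
# effective noise source -- the quantity the paper characterizes analytically.
print(trace[-5:])
```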

Michael A. Buice (1,2) and Carson C. Chow (1)

(1) Laboratory of Biological Modeling, NIDDK, NIH, Bethesda, Maryland 20892, USA
(2) Center for Learning and Memory, University of Texas at Austin, Austin, Texas, USA

Received 25 July 2011; revised 12 September 2011; published 17 November 2011

Complex systems are generally analytically intractable and difficult to simulate. We introduce a method for deriving an effective stochastic equation for a high-dimensional deterministic dynamical system for which some portion of the configuration is not precisely specified. We use a response function path integral to construct an equivalent distribution for the stochastic dynamics from the distribution of the incomplete information. We apply this method to the Kuramoto model of coupled oscillators to derive an effective stochastic equation for a single oscillator interacting with a bath of oscillators and also outline the procedure for other systems.

Published by the American Physical Society

URL: http://link.aps.org/doi/10.1103/PhysRevE.84.051120
DOI: 10.1103/PhysRevE.84.051120
PACS: 05.40.-a, 05.45.Xt, 05.20.Dd, 05.70.Ln