Parallel worlds

I’m not referring to counterfactual worlds from science fiction movies or even the many-worlds hypothesis of Hugh Everett (although that is the most plausible resolution of the quantum measurement problem we have).  What I’m talking about is the possibility of an uncountable number of overlapping macroscopic existences on top of a single microscopic substrate.

As I discussed previously,  from any given set, finite or infinite, we can always construct a set with more elements.  For example, if I have a list of objects, I can always take combinations of objects and combinations of combinations and so forth to make as many meta-objects as I like.  This is how we can infer the existence of real numbers just starting from the integers and as I’ll argue, how we can have multiple universes overlapping on the same substrate.
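As a toy illustration (mine, not from the original post) of how quickly “combinations of combinations” outgrow what you start with, here is a short sketch that iterates the power-set construction:

```python
# Each round replaces a collection of n objects by the collection of all
# 2**n of its subsets ("combinations of combinations").
from itertools import combinations

def power_set(objects):
    objects = list(objects)
    return [c for r in range(len(objects) + 1)
            for c in combinations(objects, r)]

level = [1, 2, 3]
sizes = [len(level)]
for _ in range(2):              # two rounds are already enough to see the blow-up
    level = power_set(level)
    sizes.append(len(level))
print(sizes)                    # [3, 8, 256]; the next round would have 2**256 members
```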


Nonlinear saturation

The classic Malthus argument is that populations will grow exponentially until they are nonlinearly suppressed by famine or disease due to overcrowding.  However, the lesson of the twentieth century is that populations can be checked for other reasons as well.  This is not necessarily a refutation of Malthus per se, but rather an indication that the quantity constraining a population need not be restricted to food or health.  There seems to be a threshold of economic prosperity beyond which family income or personal freedom becomes the rate-limiting factor for a bigger family.  Developed nations such as Japan and Germany are approaching zero population growth and trending towards negative growth.  Russia currently has negative growth.
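For concreteness, here is a minimal sketch of the textbook logistic form of Malthusian growth with nonlinear saturation; the numbers are purely illustrative and not tied to any country:

```python
# Logistic growth: exponential at low numbers, nonlinearly saturated as the
# population N approaches the carrying capacity K.
#   dN/dt = r * N * (1 - N / K)
r, K = 0.02, 1e7      # illustrative: 2% annual growth, carrying capacity 10 million
N, dt = 1e5, 1.0      # initial population and a one-year time step

for year in range(601):
    if year % 100 == 0:
        print(year, int(N))           # growth is exponential early, then flattens
    N += dt * r * N * (1 - N / K)
```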

Hence, we can slow down population growth by increasing economic growth.  China is starting to see a very steep decline in population growth in big cities like Shanghai that is independent of the one-child policy.  The emerging middle class now takes into account the cost of raising a child and how it would affect their lifestyle.  In a very poor country, the cost of raising a child is not really an issue.  In fact, if the probability of a child making it to adulthood is low and help from children is the only way for the elderly to survive, then it is logical to have as many children as possible.  In this case, the classic Malthus argument, with food and health (and aid) as the rate-limiting quantities, applies.

I bring this point up now because it is crucial for the current debate about what to do about climate change.  One way of mitigating human impact on the environment is to slow down population growth.  However, the most humane and effective method of doing that is to increase economic growth, which will then lead to an increase in emissions.  For example, in 2005 the US produced 23.5 tonnes of CO2 equivalents per person (which, incidentally, is not the highest in the world and is less than half that of world leader Qatar), while China produced about 5.5 tonnes and Niger 0.1 tonnes.  (This does not account for the extra emissions due to changes in land use.)  In absolute terms, China already produces more greenhouse gases than the US, and India is not far behind.  On the other hand, the population growth rate of Niger is 3.6%, India’s is 1.5% and dropping, China’s is 0.6%, and the US’s is 1%.  So, when we increase economic prosperity, we can reduce population growth and presumably suffering, but we will also increase greenhouse gas emissions.
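A quick back-of-the-envelope check of the absolute numbers; the per-capita figures are those quoted above, while the 2005-era populations are my own rough values:

```python
# Total emissions = per-capita emissions x population.
per_capita = {"US": 23.5, "China": 5.5, "Niger": 0.1}          # tonnes CO2-eq per person
population = {"US": 0.30e9, "China": 1.3e9, "Niger": 14e6}     # rough 2005 populations

for country in per_capita:
    total = per_capita[country] * population[country]
    print(f"{country}: ~{total / 1e9:.1f} billion tonnes CO2-eq per year")
# US ~7.1, China ~7.2, Niger ~0.0: China's total edges past the US even though
# its per-capita figure is less than a quarter as large.
```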

Given that the world economy and agricultural system are entirely based on fossil fuels, it is also true that, at least in the short term, a restriction on carbon emissions will slow or reduce economic growth.  Thus, even though climate change could have a catastrophic outcome for the future, curbing economic growth could also be bad.  For a developing nation, and even some developed nations, the choice may therefore not be so clear cut.  It is no wonder that the current debate in Copenhagen is so contentious.  I think that unless the developed world can demonstrate that viable economic growth and prosperity can be achieved with reduced carbon emissions, much of the rest of the world will remain skeptical.  I don’t think the leaders in the climate change community realize that this skepticism about carbon restrictions may not be entirely irrational.

Was the stimulus too slow?

Paul Krugman seems to think so.  As I posted five months ago, if we use the analogy of persistent activity in the brain for the economy, then the total amount of stimulus is only one of the variables that matter for knocking the economy out of a recession and into a new equilibrium point.  Another is how fast the money is spent.  Thus far, only 30% of the stimulus has been disbursed, and Krugman thinks its impact has already peaked.  The rest of the money will then be dissipated without a stimulatory effect.


Two talks at University of Toronto

I’m currently at the University of Toronto to give two talks in a series that is jointly hosted by the Physics department and the Fields Institute.  The Fields Institute is like the Canadian version of the Mathematical Sciences Research Institute in the US and is named in honour of Canadian mathematician J.C. Fields, who established the Fields Medal (considered the most prestigious prize in mathematics).  The abstracts for my talks are here.

The talk today was a variation on my kinetic theory of coupled oscillators talk.  The slides are here.  I tried to be more pedagogical in this version and because it was to be only 45 minutes long, I also shortened it quite a bit.   However, in many ways I felt that this talk was much less successful than the previous versions.  In simplifying the story, I left out much of the history behind the topic and thus the results probably seemed somewhat disembodied.  I didn’t really get across why a kinetic theory of coupled oscillators is interesting and useful.   Here is the post giving more of the backstory on the topic, which has a link to an older version of the talk as well.  Tomorrow, I’ll talk about my obesity work.

More news stories on food waste

www.economist.com/sciencetechnology/displayStory.cfm?story_id=14960159

www.cbc.ca/consumer/story/2009/11/24/tech-environment-food-waste.html

www.cbc.ca/news/yourview/2009/11/food-waste-and-climate-change-will-you-change-your-habits.html

sciencenow.sciencemag.org/cgi/content/full/2009/1125/1

http://www.voanews.com/mp3/voa/english/ourw/ourw0305a.mp3

journalwatch.conservationmagazine.org/2009/11/26/america-clean-your-plate/

www.livescience.com/culture/091126-food-waste.html

en.greenplanet.net/food/research/1172-wasteful-eating-habits-sharpens-world-hunger.html

news.mongabay.com/2009/1125-hance_foodwaste.html

www.greenbang.com/rising-food-waste-also-wastes-oil-and-water-researchers-find_12708.html

www.digitaljournal.com/article/282792#tab=comments&sc=0&contribute=&local=

www.deakin.edu.au/deakin-speaking/node/77

www.tribalinsight.com/

blog.friendseat.com/forty-percent-of-food-supply-in-usa-wasted/

www.scoop.co.nz/stories/SC0911/S00054.htm

timesofindia.indiatimes.com/world/us/Chew-on-it-Americans-throw-away-40-of-food/articleshow/5277225.cms

www.foodproductiondaily.com/Processing/US-food-waste-impacts-climate-say-scientists?utm_source=RSS_text_news

http://www.theygaveusarepublic.com/diary/4285/pig-nation-americans-waste-40-percent-of-food-produced-here

www.independent.co.uk/life-style/health-and-families/health-news/clear-your-plate-and-spare-the-planet-1829914.html

roguepundit.typepad.com/roguepundit/2009/11/food-wastelines.html

trueslant.com/daviddisalvo/2009/11/27/study-finds-that-americans-throw-away-40-of-all-food/

healthystealthy.wordpress.com/2009/11/27/does-global-warming-make-my-ass-look-fat-americans-waste-1400-calories-per-person-enough-to-feed-another-whole-person-its-negatively-impacting-the-environment/

www.myfoxdc.com/dpp/news/dpgo-Study-US-Wastes-40-Percent-of-Its-Food-mb-200911291259516017848

www.greenlivingtips.com/blogs/456/Food-waste-epidemic.html

www.foodnavigator-usa.com/Science-Nutrition/US-food-waste-impacts-climate-say-scientists?utm_source=RSS_text_news

www.stuffedandstarved.org/drupal/node/528

news.mongabay.com/2009/1129-hance_foodwastetwo.html

New paper on food waste

Hall KD, Guo J, Dore M, Chow CC (2009) The Progressive Increase of Food Waste in America and Its Environmental Impact. PLoS ONE 4(11): e7940. doi:10.1371/journal.pone.0007940

This paper started out as a way to understand the obesity epidemic.  Kevin Hall and I developed a reduced model of how food intake is translated into body weight change [1].  We then decided to apply the model to the entire US population.  For the past thirty years, an ongoing study (NHANES) has been drawing representative samples of the US population and recording anthropometric measurements such as body weight and height.  The UN Food and Agriculture Organization and the USDA have also kept track of how much food is available to the population.  We thought it would be interesting to see whether the available food accounted for the increase in body weight over the past thirty years.

What we found was that the available food more than accounted for all of the weight gain.  In fact, our calculations showed that the gap between the food available and the intake implied by body weight change has grown linearly over time.  This “energy gap” could be due to two things: 1) people were actually burning more energy than our model indicated because they were more physically active than we assumed (we took physical activity to be constant over the last thirty years), or 2) there has been a progressive increase in food waste.  Given that most people have argued that physical activity has gone down, which would make the energy gap even larger, we opted for conclusion 2).  Our estimate is also on the conservative side because we didn’t account for the fact that children eat less than adults on average.

I didn’t want to believe the result at first, but the numbers were the numbers.  We have gone from wasting about 900 kcal per person per day in 1974 to 1400 kcal in 2003.  It takes about 3 kcal of energy to produce 1 kcal of food, so the energy in the wasted food amounts to about 4% of total US oil consumption.  The wasted food also accounts for about 25% of all fresh water use.  Ten percent of it could feed Canada.  The press has taken some interest in the result: our paper was covered by CBC News, Kevin and I were interviewed by Science, and Kevin was interviewed on Voice of America.
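A rough check of the oil figure: the per-person waste and the 3-to-1 energy ratio are from the paragraph above, while the US population and oil consumption are my own approximate 2003-era numbers:

```python
waste_kcal_per_person_day = 1400
energy_per_food_kcal = 3                     # kcal of energy to produce 1 kcal of food
us_population = 290e6                        # approximate
us_oil_kcal_per_day = 20e6 * 1.46e6          # ~20 million barrels/day x ~1.46e6 kcal/barrel

waste_energy = waste_kcal_per_person_day * energy_per_food_kcal * us_population
print(waste_energy / us_oil_kcal_per_day)    # ~0.04, i.e. roughly 4% of oil consumption
```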

[1] Chow CC, Hall KD (2008) The Dynamics of Human Body Weight Change. PLoS Comput Biol 4(3): e1000045. doi:10.1371/journal.pcbi.1000045

New paper on transients

A new paper, Competition Between Transients in the Rate of Approach to a Fixed Point, SIAM J. Appl. Dyn. Syst. 8, 1523 (2009), by Judy Day, Jonathan Rubin and myself appears today. The official journal link to the paper is here and the PDF can be obtained here.  This paper came about because of a biological phenomenon known as tolerance.  When the body is exposed to a pathogen or toxin, there is an inflammatory response.  This makes you feel ill and prompts the immune system to mount a defense.  In some cases, a second dose of the toxin produces a heightened response.  However, there are situations where the response to a second dose is decreased, and that is called tolerance.

Judy Day was my last graduate student at Pitt.  When I left for NIH, Jon Rubin stepped in to advise her.  Her first project on tolerance was to simulate a reduced four-dimensional model of the immune system and see if tolerance could be observed in the model [1]. She found that it did occur in certain parameter regimes. What she showed was that if you watch a particular inflammatory marker, its response can be damped if a preconditioning dose is first administered.

The next step was to understand mathematically how and why it occurred. The result after several starts and stops was this paper. We realized that tolerance boiled down to a question regarding the behavior of transients, i.e. how fast does an orbit get to a stable fixed point starting from different initial conditions. For example, consider two orbits with initial conditions (x1,y1) and (x2,y2) with x1 > x2, where y represents all the other coordinates. Tolerance occurs if the x coordinate of orbit 1 ever becomes smaller than the x coordinate of orbit 2 independent of what the other coordinates do. From continuity arguments, you can show that if tolerance occurs at a single point in space or time it must occur in a neighbourhood around those points. In our paper, we showed that tolerance could be understood geometrically and that for linear and nonlinear systems with certain general properties, tolerance is always possible although the theorems don’t say which orbits in particular will exhibit it.   However, regions of tolerance can be calculated explicitly in two dimensional linear systems and estimated for nonlinear planar systems.
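As a toy illustration of the crossing condition (a hypothetical planar linear system, not the model in the paper), the sketch below follows two orbits decaying to a stable fixed point at the origin and checks whether the orbit that starts with the larger x value ever falls below the other:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable linear system u' = A u with eigenvalues -1 and -0.2.
A = np.array([[-1.0, -1.0],
              [ 0.0, -0.2]])

u0 = np.array([1.0, 0.8])   # orbit 1: larger initial x, nonzero second coordinate
v0 = np.array([0.9, 0.0])   # orbit 2: smaller initial x

ts = np.linspace(0.0, 10.0, 2001)
x1 = np.array([(expm(A * t) @ u0)[0] for t in ts])
x2 = np.array([(expm(A * t) @ v0)[0] for t in ts])

crossed = x1 < x2           # tolerance: orbit 1's x falls below orbit 2's x
if crossed.any():
    print("tolerance occurs, first at t ~", ts[crossed.argmax()])
else:
    print("no tolerance for these initial conditions")
```

Both orbits head to the origin, but the one that started ahead is transiently overtaken; this is the crossing behaviour that the paper characterizes geometrically.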

[1] Day J, Rubin J, Vodovotz Y, Chow CC, Reynolds A, Clermont G, A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration, Journal of theoretical biology, 242(1):237-56 2006.

Screening for terrorists

The recent tragedy at Fort Hood has people griping about missed signals that could have been used to prevent the attack.  However, I will argue that it is likely impossible to ever have a system that can screen out all terrorists without also flagging a lot of innocent people.  The calculation is a simple exercise in probability theory that is often given in first-year statistics classes.

Suppose we have a system in place that gives a yes (Y) or no response as to whether a person is a terrorist (T).  Let P(T) be the probability that a given person is a terrorist and P(T|Y) be the probability that a person is a terrorist given that the test said yes.  Thus P(~T|Y)=1-P(T|Y) is the probability that a person is not a terrorist even though the test flagged them.  Using Bayes’ theorem we have that

P(~T|Y)=P(Y|~T) P(~T)/P(Y)  (*)

where P(Y)=P(Y|T)P(T) + P(Y|~T)P(~T) is the probability of getting a yes result.  Now, the probability of being a terrorist is very low.  Out of the 300 million or so people in the US, only a small number are probably potential terrorists.  The US military has over a million people on active service.  Hence, the probability of not being a terrorist is very high.

From (*), we see that in order to have a low probability of flagging an innocent person we need P(Y|~T)P(~T) << P(Y|T)P(T), or P(Y|~T) << P(Y|T) P(T)/P(~T).  Since P(T) is very small, P(T)/P(~T) ~ P(T), so if the true positive probability P(Y|T) were near one (i.e. a test that catches all terrorists), we would need the false positive probability P(Y|~T) to be much smaller than the probability of being a terrorist, which means a test that gives false positives at a rate of less than about 1 in a million.  The problem is that the true positive and false positive probabilities are correlated.  The more sensitive the test, the more likely it is to produce false positives.  So if you set the threshold very low so that P(Y|T) is very high (i.e. you never miss a terrorist), P(Y|~T) will almost certainly be high as well.  I doubt you’ll ever have a test where P(Y|T) is near one while P(Y|~T) is less than one in a million.  So basically, if we want to catch all the terrorists, we’ll also have to flag a lot of innocent people.
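To put hypothetical numbers on this, here is equation (*) evaluated for an illustrative scenario (the values are mine, not data):

```python
def prob_innocent_given_flag(p_t, p_y_given_t, p_y_given_not_t):
    """P(~T|Y) via Bayes' theorem, equation (*) above."""
    p_not_t = 1.0 - p_t
    p_y = p_y_given_t * p_t + p_y_given_not_t * p_not_t
    return p_y_given_not_t * p_not_t / p_y

# Say 1 in 10,000 screened people is a potential terrorist, the test catches
# every one of them, and the false positive rate is a seemingly modest 1%.
print(prob_innocent_given_flag(1e-4, 1.0, 0.01))   # ~0.99: nearly every flag is innocent
```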

Evidence based medicine

The New York Times Magazine’s headline story this Sunday is on evidence-based medicine.  It talks about how a physician, Brent James, has been developing objective empirical means to measure outcomes and using the data to design medical protocols.  This is a perfect example of the migration from a highly skilled and highly paid profession (what I called an NP job in a recent post) to a more algorithmic and mechanical one (a P job).  Here are some excerpts from the story:

For the past decade or so, a loose group of reformers has been trying to do precisely this. They have been trying to figure out how to improve health care while also holding down the growth in costs. The group includes Dr. John Wennberg and his protégés at Dartmouth, whose research about geographic variation in care has received a lot of attention lately, as well as Dr. Mark McClellan, who ran Medicare in the Bush administration, and Dr. Donald Berwick, a Boston pediatrician who has become a leading advocate for patient safety. These reformers tend to be an optimistic bunch. It’s probably a necessary trait for anyone trying to overturn an entrenched status quo. When I have asked them whether they have any hope that medicine will change, they have tended to say yes. When I have asked them whether anybody has already begun to succeed, they have tended to mention the same name: Brent James.

…In the late 1980s, a pulmonologist at Intermountain named Alan Morris received a research grant to study whether a new approach to ventilator care could help treat a condition called acute respiratory distress syndrome. The condition, which is known as ARDS, kills thousands of Americans each year, many of them young men. (It can be a complication of swine flu.) As Morris thought about the research, he became concerned that the trial might be undermined by the fact that doctors would set ventilators at different levels for similar patients. He knew that he himself sometimes did so. Given all the things that the pulmonologists were trying to manage, it seemed they just could not set the ventilator consistently.

Working with James, Morris began to write a protocol for treating ARDS. Some of the recommendations were based on solid evidence. Many were educated guesses. The final document ran to 50 pages and was left at the patients’ bedsides in loose-leaf binders. Morris’s colleagues were naturally wary of it. “I thought there wasn’t anybody better in the world at twiddling the knobs than I was,” Jim Orme, a critical-care doctor, told me later, “so I was skeptical that any protocol generated by a group of people could do better.” Morris helped overcome this skepticism in part by inviting his colleagues to depart from the protocol whenever they wanted. He was merely giving them a set of defaults, which, he emphasized, were for the sake of a research trial.

… While the pulmonologists were working off of the protocol, Intermountain’s computerized records system was tracking patient outcomes. A pulmonology team met each week to talk about the outcomes and to rewrite the protocol when it seemed to be wrong. In the first few months, the team made dozens of changes. Just as the pulmonologists predicted, the initial protocol was deeply flawed. But it seemed to be successful anyway. One widely circulated national study overseen by doctors at Massachusetts General Hospital had found an ARDS survival rate of about 10 percent. For those in Intermountain’s study, the rate was 40 percent.


Hurdles for mathematical thinking

From my years as both a math professor and an observer of people, I’ve come up with a list of hurdles for mathematical thinking.  These are what I believe to be the essential skills a person must have if they want to understand and do mathematics.  They don’t need all of these skills to use mathematics, but they would need most of them to progress far in mathematics.  Identifying the conceptual barriers people have could help in improving mathematics education.

I’ll first give the list and then explain what I mean by them.

1. Context dependent rules

2. Equivalence classes

3. Limits and infinitesimals

4. Formal  logic

5. Abstraction


Darts and Diophantine equations

Darts is a popular spectator sport in the UK. I had access to cable television recently so I was able to watch a few games.  What I find interesting about professional darts is that the players must solve a Diophantine equation to win.  For those who know nothing of the game, it involves throwing a small pointed projectile at an enumerated target board that looks like this:

[image: dartboard]

A dart that lands on a given sector of the board obtains that score.  The centre circle of the board, called the bullseye, is worth 50 points.  The ring around the bullseye is worth 25 points.  The wedges are worth the score given by the number on the perimeter.  However, if you land in the inner ring you get triple the score of the wedge, and if you land in the outer ring you get double the score.  Hence, the maximum for a single dart is the triple twenty, worth 60 points.
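The Diophantine flavour is easy to see in code.  This sketch (my own illustration) enumerates three-dart finishes for a given remaining score, assuming the standard rule, not described above, that a leg must end on a double, with the 50-point bull counting as a double:

```python
# Every scoring throw: singles 1-20 and the 25 ring, doubles 2-40 and the
# bull (50), trebles 3-60.
throws = ([(n, f"S{n}") for n in range(1, 21)] + [(25, "25")] +
          [(2 * n, f"D{n}") for n in range(1, 21)] + [(50, "Bull")] +
          [(3 * n, f"T{n}") for n in range(1, 21)])
finishers = [(2 * n, f"D{n}") for n in range(1, 21)] + [(50, "Bull")]

def checkouts(score):
    """All ways to finish `score` in at most three darts, ending on a double."""
    ways = []
    for v3, d3 in finishers:
        if v3 == score:
            ways.append((d3,))
        for v1, d1 in throws:
            if v1 + v3 == score:
                ways.append((d1, d3))
            for v2, d2 in throws:
                if v1 + v2 + v3 == score:
                    ways.append((d1, d2, d3))
    return ways

print(checkouts(170))   # [('T20', 'T20', 'Bull')]: the maximum three-dart finish
```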


The NP economy

I used to believe that one day all human labour would be automated (e.g. see here).  Upon further reflection, I realize that I was wrong.  The question of whether machines will someday replace all humans depends crucially on whether P is equal to NP.  The jobs that will eventually be automated are the ones that can be solved easily with an algorithm.  In computer science parlance, these are problems in the computational complexity class P (solvable in polynomial time).  For example, traditional travel agents have disappeared faster than bluefin tuna because their task is pretty simple to automate.  However, not all travel agents will disappear.  The ones that survive will be more like concierges who put together complex travel arrangements or negotiate with many parties.

Eventually, the jobs that humans hold (barring a collapse of civilization as we know it) will involve solving problems in the complexity class NP (or harder).  That is not to say that machines won’t be doing some of these jobs, only that the advantage of machines over humans will not be as clear cut.  It is true that if we could fully reproduce a human and make it faster and bigger, it could do everything a human can do, only better; but as I blogged about before, I think it will be difficult to exactly reproduce humans.  Additionally, for some very hard problems that don’t even have good approximation schemes, blind luck will play an important role in coming up with solutions.  Balancing different human-centric priorities will also be important, and that may be best left for humans to do.  Even if it turns out that P=NP, there could still be some jobs for humans, such as working on undecidable problems.

So what are some jobs that will be around in the NP economy?  Well, I think mathematicians will still be employed. Proofs can be verified in polynomial time, but there are no known polynomial-time algorithms to generate them.  That is not to say that there won’t be robot mathematicians, and mathematicians will certainly use automated theorem proving programs to help them (e.g. see here). However, I think the human touch will always have some use.  Artists and comedians will also have jobs in the future.  These are professions that require intimate knowledge of what it is like to be human.  Again, there will be machine comics and artists, but they won’t fully replace humans.  I also think that craftsmen like carpenters, stone masons and basket weavers could make a comeback.  They will have to exhibit some exceptional artistry to survive, but the demand for them could increase, since some people will always long for the human touch in their furniture and houses.

The question then is whether there will be enough NP jobs to go around and whether everyone is able and willing to hold one.  To some, an NP economy will be almost utopian – everyone will have interesting jobs.  However, there may be some people who simply don’t want, or can’t do, an NP job.  What will happen to them?  I think that will be a big (probably undecidable) problem facing society in the not too distant future, provided we make it that far.

Arts and Crafts

There is an opinion piece by Denis Dutton in the New York Times today on Conceptual Art, which presents some views that I am very sympathetic to.  All creative endeavours involve some inspiration and perspiration: there is the idea, and then there is the execution of that idea.  Conceptual art essentially removes the execution aspect of art and makes it a pure exercise in cleverness.  In some sense it does crystallize the essence of art, but I’ve always found it lacking.  I just can’t get that inspired by a medicine cabinet.  I’ve always found the craft of a work of art to be as compelling as (if not more than) the idea itself.  In many cases the two are inseparable.  Dutton argues that the craft aspect of art will never disappear because people intrinsically enjoy witnessing virtuosity.  I’m inclined to agree.  So while Vermeer and Caravaggio will remain timeless, Damien Hirst may just fade away in time.

 

Corrected the spelling of Damien Hirst’s name on May 15, 2012.

Retire the Nobel Prize

I’ve felt for some time now that perhaps we should retire the Nobel Prize.  The money could be used to fund grants, set up an institute for peace and science, or even hold a Nobel conference like TED.  The prize puts too much emphasis on individual achievement, and in many instances the emphasis is misplaced.  The old view of science as a lone explorer seeking truth in the wilderness needs to be updated to a new metaphor: the sandpile, as used to describe self-organized criticality by Per Bak, Chao Tang, and Kurt Wiesenfeld.  In the sandpile model, individual grains of sand are dropped on the pile and every once in a while there are “avalanches” in which a bunch of grains cascade down.  The distribution of avalanche sizes is a power law.  Hence, there is no characteristic scale to avalanches and no grain is more special than any other.

This is just like science.  The contributions of scientists and nonscientists are like grains of sand dropping on the sandpile of knowledge, and every once in a while a big scientific avalanche is triggered.  The answer to the question of who triggered the avalanche is that everyone contributed to it.  The Nobel Prize rewards a few of the grains that happened to be located near some specific avalanche (and sometimes not even that), but the rewarded work always depended on everything that came before.
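For anyone who wants to play with the metaphor, here is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile on a square grid (a standard textbook version, not tied to any particular implementation):

```python
import random

N = 30
grid = [[0] * N for _ in range(N)]     # grains currently held at each site

def drop_grain():
    """Drop one grain at a random site and return the avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        a, b = unstable.pop()
        if grid[a][b] < 4:
            continue
        grid[a][b] -= 4                # site topples, one grain to each neighbour
        size += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nb = a + da, b + db
            if 0 <= na < N and 0 <= nb < N:   # grains at the edge fall off the pile
                grid[na][nb] += 1
                unstable.append((na, nb))
    return size

sizes = [drop_grain() for _ in range(100_000)]
print("largest avalanche:", max(sizes))
print("fraction of drops with no avalanche:", sizes.count(0) / len(sizes))
```

Binning the recorded sizes on a logarithmic scale shows the scale-free avalanche distribution mentioned above.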


Talk at MBI

I’m currently at the Mathematical Biosciences Institute for a workshop on Computational challenges in integrative biological modeling.  The slides for my talk on using Bayesian methods for parameter estimation and model comparison are here.

Title: Bayesian approaches for parameter estimation and model evaluation of dynamical systems

Abstract: Differential equation models are often used to model biological systems. An important and difficult problem is how to estimate parameters and decide which model among possible models is the best. I will argue that Bayesian inference provides a self-consistent framework to do both tasks. In particular, Bayesian parameter estimation provides a natural measure of parameter sensitivity and Bayesian model comparison automatically evaluates models by rewarding fit to the data while penalizing the number of parameters. I will give examples of employing these approaches on ODE and PDE models.
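The slides have the details, but as a flavour of the approach, here is a minimal sketch (a toy example of mine, not from the talk) of Bayesian parameter estimation for the simplest ODE model, dx/dt = -kx, using a random-walk Metropolis sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: exponential decay observed with Gaussian noise.
k_true, x0, sigma = 0.7, 10.0, 0.3
t = np.linspace(0.0, 5.0, 20)
data = x0 * np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_posterior(k):
    if k <= 0.0:                                   # flat prior on k > 0
        return -np.inf
    residual = data - x0 * np.exp(-k * t)          # solution of dx/dt = -k x
    return -0.5 * np.sum(residual ** 2) / sigma ** 2

k, log_p, samples = 1.0, log_posterior(1.0), []
for _ in range(20_000):
    k_prop = k + rng.normal(0.0, 0.05)             # random-walk proposal
    log_p_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < log_p_prop - log_p: # Metropolis accept/reject
        k, log_p = k_prop, log_p_prop
    samples.append(k)

post = np.array(samples[5_000:])                   # discard burn-in
print(f"posterior for k: {post.mean():.3f} +/- {post.std():.3f}")
```

The width of the posterior is the natural sensitivity measure mentioned in the abstract: a narrow posterior means the data tightly constrain that parameter, a broad one means they do not.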

Human scale

I’ve always been intrigued by how long we live compared to the age of the universe.  At 14 billion years, the universe is only a factor of 10^8 older than a long-lived human.  In contrast, it is immensely bigger than we are.  The nearest star is 4 light years away, which is a factor of 10^{16} larger than a human, and the observable universe is about 25 billion times bigger than that.  The size of the universe is partly dictated by the speed of light, which at 3 \times 10^8 m/s is, coincidentally or not, faster than we can move by roughly the same factor by which the universe is older than we live.

Although we are small compared to the universe, we are also exceedingly big compared to our constituents. We are composed of about 10^{13} cells, each of which is about 10^{-5} m in diameter.  If we assume that the density of a cell is about that of water (1 {\rm g/ cm}^3), then that roughly amounts to 10^{14} molecules per cell.  So a human is composed of something like 10^{27} molecules, most of them water, which has a molecular weight of 18.  Given that proteins and other organic molecules are much larger than that, a lower bound on the number of atoms in the body is 10^{28}.

The speed at which we can move is governed by the reaction rates of metabolism.  Neurons fire at an average of approximately 10 Hz, which is why awareness operates on a time scale of a few hundred milliseconds.  You could think of a human moment as being one tenth of a second.  There are 86,400 seconds in a day, so we have close to a million moments in a day, although we are asleep for about a third of them.  That leads to about 20 billion moments in a lifetime. Neural activity also sets the scale for how fast we can move our muscles, which is a few metres per second.  If we consider a movement every second, then that implies a few billion twitches per lifetime.  Our hearts beat about once a second, so that is also the number of heart beats in a lifetime.
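The arithmetic behind those counts, assuming a rounded 80-year lifetime (my own rounding):

```python
moments_per_second = 10                      # ~10 Hz
seconds_per_day = 86_400
seconds_per_lifetime = 80 * 365 * seconds_per_day          # ~2.5e9 s

print(moments_per_second * seconds_per_day)                # 864,000 moments per day
print(moments_per_second * seconds_per_lifetime * 2 / 3)   # ~1.7e10 waking moments, ~20 billion
print(seconds_per_lifetime)                                # ~2.5e9 once-per-second twitches or heart beats
```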

The average thermal energy at body temperature is about 4 \times 10^{-21} Joules, which is not too far below the binding energies of the protein-DNA and protein-protein interactions required for life.  Each ribosome in our cells can add about 5 amino acids per second to a growing protein, which amounts to a lot of protein in a lifetime.  I find it completely amazing that a bag of 10^{28} or more things, incessantly buffeted by noise, can stay coherent for a hundred years.  There is no question that evolution is the world’s greatest engineer.  However, for those who are interested in artificial life, this huge expanse of scale does pose a question: what is the minimal computational requirement to simulate life, and in particular something as complex as a mammal?  Even if you could do a simulation with, say, 10^{32} or more objects, how would you even know that there was something living in it?

The numbers came from Wolfram Alpha and Bionumbers.

Are mass extinctions inevitable?

It is well known from the fossil record that there have been a large number of extinction events of various magnitudes.  Some famous examples include the Cretaceous-Tertiary extinction that killed off the dinosaurs 65 million years ago and the Great Dying 250 million years ago, in which almost everything died.  It has been postulated that mass extinctions occur every ~30 or ~60 million years.  Most explanations for these events are exogenous – some external astrophysical or geological cataclysm, like an asteroid slamming into the Yucatan 65 million years ago or large-scale volcanic eruptions.  However, as I watch the news every night, I’m beginning to wonder if life itself is unstable and prone to wild fluctuations.  We are currently in the midst of a mass extinction and it is being caused by us.  However, we are not separate from the ecosystem, so in effect the system is causing its own extinction.

I listen to a number of podcasts of science radio shows (e.g. CBC’s Quirks and Quarks, ABC’s The Science Show, BBC’s The Naked Scientists, …) on my long drive home from work each day.  Each week I hear stories and interviews of scientists finding that climate change is worse than they predicted and that we’re nearing a point of no return.  (Acidification of the oceans is what scares me the most.)  However, in all of these shows there is always an optimistic undertone that implores us to do something about this, under the assumption that we have a choice in what we do.  It is at this point that I can’t help but smirk, because we really don’t have a choice. We’re just a big dynamical (probably stochastic) system that is plunging along.  We may have the capability to experience and witness what is happening (a mystery that I actually have the privilege to think about for a living), but we don’t have control per se, as I wrote about recently.


Seeing red

This week’s Nature has a fascinating article in which gene therapy was used to reverse colour blindness in monkeys.  The remarkable thing is that the monkeys were red-green colour blind from birth because they lacked a long-wavelength (L-opsin) gene.   A virus containing the human L-opsin gene was injected into the monkeys’ eyes.  The virus inserted the gene into some of the medium-wavelength cones.  It took about 20 weeks for the inserted gene to be expressed robustly. The amazing thing is that almost immediately after robust expression, the treated monkeys were able to discern, in behavioural tests, the frequencies that were missing before.  In essence, they could now see the colour red when they couldn’t before.

The rapidity with which the behavioural effect occurred implies that the neural plasticity required to adopt a new colour was minor.  It is possible that the neural mechanisms for the missing colours already exist, since only the males of the species are colour blind (the females are not), and could thus be tapped into immediately.  However, the gene was inserted randomly into the cones, and developmentally it takes a few months before babies can distinguish colours, so it is not at all obvious how the circuits could be idle for so long and suddenly be activated.

I think understanding how a new colour can suddenly pop into existence may be an avenue for investigating the neural basis of qualia.  The researchers of the study are now conducting human trials on patients who have retinal degeneration.  If it works, then it is only a matter of time before they try it on healthy humans with colour blindness.  We can then ask them what they actually experience when they see red for the first time.

Energy efficiency and boiling water

I’ve noticed that my last few posts have been veering towards the metaphysical, so I thought today I would talk about some kitchen science, literally. The question is: what is the most efficient way to boil water?  Should one turn the stove to maximum heat, or is there some intermediate setting that should be used?  I didn’t know the answer, so I tried to calculate it.  The answer turned out to be more subtle than I anticipated.
