New fuel for the calorie debate

The big news in obesity this week is the publication of the paper “Effects of Dietary Composition on Energy Expenditure During Weight-Loss Maintenance”, by Ebbeling et al. in JAMA. The study examines the effects of three types of diets – low fat, low carbohydrate, and low glycemic index – on energy expenditure and weight loss maintenance. It was a cross-over study on 21 obese young adults, where all three diets were given to each subject consecutively. The basic result was that subjects on the low carbohydrate (i.e. Atkins) diet had the highest total energy expenditure (TEE), followed by the low glycemic index diet, with the low fat diet coming in last. The study certainly bolsters the claims of those on the “a calorie is not a calorie” and “carbs are bad” side, the implication being that you can eat more on a low carb diet than on a low fat diet. While the study was carefully done, there are some discrepancies that may call some of the results into question. Here is my colleague Kevin Hall’s take:

The resting RQ values in Table 3 seem too high. Generally, resting RQ should be lower than 24hr RQ, which should match the daily food quotient (FQ) when the subjects are in macronutrient balance. My rough calculation of the FQ values for the test diets gives about 0.9, 0.84, and 0.76 for the LF, LGI, and VLC diets, respectively. The reported resting RQ values are 0.9, 0.86, and 0.83. This would usually suggest a degree of overfeeding during the weight loss maintenance phases since RQ generally exceeds FQ during positive energy balance. However, the reported energy intake during the test diet phase was 2626 +/- 686 kcal/d, which is lower than the TEE in all 3 test diets reported in Table 3. Something is odd here.

For those not up on metabolism lingo, the RQ is the respiratory quotient, the ratio of carbon dioxide expired to oxygen inhaled, which gives a measure of what types of fuel the body is burning. The RQ works because carbs, fat, and protein are all oxidized slightly differently. Carbs have an RQ of 1, meaning every mole of oxygen consumed produces a mole of carbon dioxide. Protein has an RQ between 0.8 and 0.9, and fat has an RQ around 0.7. Different types of proteins and fats will have slightly different RQs. The FQ is the expected RQ given the diet. The resting RQ values were measured after an overnight fast by wearing a mask that samples the air you breathe for twenty minutes. Generally, when you fast, your body switches from burning carbs to burning fat. Hence, fasting RQs are usually lower than FQs. However, in this study the opposite was true. Fasting RQs can be higher than FQs if the person is not in energy balance and is gaining weight. However, the study also reports that energy intake was lower than the TEE. Their estimates of TEE are based on doubly labelled water measurements, which use the reported RQs as one of the input parameters. Hence, the differences in TEE that they report could be explained by experimental error.
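
To make the FQ calculation concrete, here is a rough sketch of how one might estimate it as the ratio of total CO2 produced to total O2 consumed when the diet is fully oxidized. The per-gram gas exchange volumes are approximate textbook-style values, and the two example diet compositions are my own illustrative choices loosely matching the low fat and very low carbohydrate arms; none of these numbers are taken from the paper.

```python
# Approximate liters of gas exchanged per gram of each macronutrient oxidized.
# These are rounded textbook values, used here only for illustration.
GAS = {  # macronutrient: (O2 consumed L/g, CO2 produced L/g)
    "carb":    (0.83, 0.83),   # RQ = 1.0
    "fat":     (2.02, 1.43),   # RQ ~ 0.71
    "protein": (0.97, 0.78),   # RQ ~ 0.80
}

def food_quotient(grams):
    """grams: dict mapping macronutrient -> grams eaten per day."""
    o2  = sum(g * GAS[m][0] for m, g in grams.items())
    co2 = sum(g * GAS[m][1] for m, g in grams.items())
    return co2 / o2

# Hypothetical ~2600 kcal/d diets: roughly 60/20/20 and 10/60/30
# percent of energy from carbs/fat/protein, respectively.
low_fat       = {"carb": 390, "fat": 58,  "protein": 130}
very_low_carb = {"carb": 65,  "fat": 173, "protein": 195}
print(round(food_quotient(low_fat), 2))        # ~0.9, like the LF value above
print(round(food_quotient(very_low_carb), 2))  # ~0.77, near the VLC value
```

The exact numbers depend on which per-gram gas volumes you assume, but the qualitative point survives: shifting energy from carbs to fat pulls the FQ down toward 0.7.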


In my post on panpsychism, a commenter, Matt Sigl, made a valiant defense of the ideas of Koch and Tononi about consciousness. I claimed in my post that panpsychism, where some or all of the constituents of a system possess some elementary form of consciousness, is no different from dualism, which says that mind and body are separate entities. Our discussion, which can be found in the comment thread, made me think more about what it means for a theory to be monistic and understandable. I have now revised my claim to be that panpsychism is either dualist or superfluous. Tononi’s idea of integrated information may be completely correct, but panpsychism would not add anything to it. In my view, a monistic theory is one where all the properties of a system can be explained by the fundamental governing rules. Most importantly, there can only be a finite set of rules. A system with an infinite set of rules is not understandable since every situation has its own consequence. There would be no predictability; there would be no science. There would only be history, where we could write down each rule whenever we observed it.

Consider a system of identical particles that can move around in three dimensional space and interact with each other in a pairwise fashion. Let the motion of these particles obey Newton’s laws, where their acceleration is determined by a force given by an interaction rule or potential. The proportionality constant between acceleration and force is the mass, which is assigned to each particle. The particles are then given an initial position and velocity. All of these rules can be specified in absolutely precise terms mathematically. Space can be discrete, so that the particles can only occupy a finite or countably infinite number of points, or continuous, so that the particles can occupy an uncountably infinite number of points.

Depending on how I define the interactions, select the masses, and specify the initial conditions, various things could happen. For example, I could have an attractive interaction and start all the particles at the same point with no velocity, and they would stay clumped together. This clumped state is a fixed point of the system. If I move one of the particles slightly away from the point and it falls back to the clump, then the fixed point is stable. However, even a stable fixed point doesn’t mean all initial conditions will end up clumped. For example, if I have an inverse square law attraction like gravity, then particles can orbit one another or scatter off of each other. For many initial conditions, the particles could just bounce around indefinitely and never settle into a fixed point. For more than two particles, the fate of all initial conditions is generally impossible to predict. However, I claim that the configuration of the system at any given time is explainable or understandable because I could, in principle, simulate the system from a given initial condition and determine its trajectory for any amount of time. For a continuous system, where positions require an infinite amount of information to specify, an understandable system would be one where one could prove that for any arbitrary initial condition there is always an initial condition, specifiable with a finite amount of information, whose trajectory remains close to it.
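
The kind of simulation I have in mind can be sketched in a few lines. This is a minimal illustration, not anything from a real project: identical unit-mass particles in three dimensions with a pairwise inverse square attraction, integrated forward from a specified initial condition. The constants, and in particular the softening parameter EPS that keeps the force finite during close encounters, are illustrative choices.

```python
import numpy as np

G, EPS, DT = 1.0, 1e-2, 1e-3  # coupling, force softening, time step

def accelerations(pos):
    """Pairwise inverse square attraction on N unit-mass particles.

    pos has shape (N, 3); returns the (N, 3) array of accelerations.
    """
    diff = pos[None, :, :] - pos[:, None, :]            # diff[i, j] = r_j - r_i
    dist3 = (np.sum(diff**2, axis=-1) + EPS) ** 1.5     # softened |r|^3
    np.fill_diagonal(dist3, np.inf)                      # no self-force
    return G * np.sum(diff / dist3[:, :, None], axis=1)

def simulate(pos, vel, steps):
    """Velocity-Verlet integration of Newton's laws."""
    acc = accelerations(pos)
    for _ in range(steps):
        pos = pos + vel * DT + 0.5 * acc * DT**2
        new_acc = accelerations(pos)
        vel = vel + 0.5 * (acc + new_acc) * DT
        acc = new_acc
    return pos, vel

# Three particles started at rest fall toward one another.
rng = np.random.default_rng(0)
pos0 = rng.normal(size=(3, 3))
pos, vel = simulate(pos0, np.zeros((3, 3)), 1000)
```

Because the pairwise forces are equal and opposite and the particles start at rest, the center of mass stays fixed while the configuration evolves; the point is just that given the rules and an initial condition, the trajectory is mechanically computable.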

If I make the dynamics sufficiently complex then there could be some form of basic chemistry and even biology. This need not be fully quantum mechanical;  Bohr-like atoms may be enough. If the system can form sufficiently complex molecules then evolution could take over and generate multi-cellular life forms. At some point, animals with brains could arise.  These animals could possess memory and enough computational capability to strategize and plan for the future.  There could be an entire ecosystem of plants and animals at multiple scales interacting in highly complex ways. All of this could be understandable in the sense that all of the observed dynamics could be simulated on a big enough computer if you knew the rules and the initial conditions. You may even be lucky enough that almost all initial conditions will lead to complex life.

At this point, all the properties of the system can be completely specified by an outside observer. Understandable means that all of these properties can be shown to arise from a finite set of rules and initial conditions. Now, suppose that some of the animals are also conscious in the sense that they have a subjective experience. The panpsychic hypothesis is that consciousness is a property of some or all of the particles. However, proponents must then explain why even the biggest rock does not seem conscious, or why human consciousness disappears when we are in deep sleep. Tononi and Koch try to finesse this problem by saying that only when a system has enough integrated information does one notice the effect of the accumulated consciousness. However, bringing in this secondary criterion obviates the panpsychic hypothesis, because there is now a systematic way to identify consciousness that is completely consistent with an emergent theory of consciousness. This doesn’t dispel the mystery of the “hard problem” of consciousness: what exactly happens when the threshold is crossed to give subjective experience. However, the resolution is either that consciousness can be described by the finite set of rules of the constituent particles, or there is a dualistic explanation where the brain “taps” into some other system that generates consciousness. Panpsychism does not help in resolving this dilemma. Finally, it might be that the question of whether or not a system has sufficient integrated information to exhibit noticeable consciousness is undecidable, in which case there would be no algorithm to test for consciousness. The best one could do is point to specific cases. If this were true, then panpsychism does not solve any problem at all. We would never have a theory of consciousness. We would only have examples.

Indistinguishability and transporters

I have a memory of being in a bookstore and picking up a book with the title “The Philosophy of Star Trek”. I am not sure how long ago this was or even what city it was in. However, I cannot seem to find any evidence of this book on the web. There is a book entitled “Star Trek and Philosophy: The Wrath of Kant“, but that is not the one I recall. I bring this up because in this book that may or may not exist, I remember reading a chapter on the philosophy of transporters. For those who have never watched the television show Star Trek, a transporter is a machine that can dematerialize something and rematerialize it somewhere else, presumably at the speed of light. Supposedly, the writers of the original show invented the device so that they could move people to planet surfaces from the starship without having to use a shuttle craft, for which they did not have the budget to build the required sets.

What the author was wondering was whether the particles of a transported person were the same particles as those in the pre-transported person, or whether people were reassembled from stock particles lying around at the new location. The implication is that the answer would illuminate the question of whether what constitutes “you” depends on your constituent particles or just on the information on how to organize the particles. I remember thinking that this is a perfect example of how physics can render questions of philosophy obsolete. What we know from quantum mechanics is that particles are indistinguishable. This means that it makes no sense to ask whether a particle in one location is the same as a particle at a different location or time. A particle is only specified by its quantum properties, like its mass, charge, and spin. All electrons are identical. All protons are identical, and so forth. Now, they could be in different quantum states, so a more valid question is whether a transporter transports all the quantum information of a person or just the classical information, which is much smaller. However, this question is really only relevant for the brain, since we know we can transplant all the other organs from one person to another. The neuroscience enterprise, Roger Penrose notwithstanding, implicitly operates on the principle that classical information is sufficient to characterize a brain.

Transit of Venus

Don’t forget to catch the Transit of Venus tomorrow (June 5) if you can. The next one won’t be until 2117. NASA will be broadcasting it live from Mauna Kea, Hawaii here. It will start around 6 PM US east coast time and end about 7 hours later, so only those in the Pacific will catch all of it. Check your local science museum, planetarium, or university astronomy department for information on where telescopes will be available to see it. Venus will be a tiny dot moving slowly across the face of the sun.

Andrew Huxley 1917-2012

Andrew Huxley died last week at the age of 94. Huxley, with his research advisor Alan Hodgkin, proved that the mechanism for action potential propagation in nerve cells was the passage of ions through voltage-gated ion channels in the cell membrane. In the course of their work, they developed the Hodgkin-Huxley model of the neuron and launched the field of computational neuroscience. While their work was a monumental achievement and deserved a Nobel prize, it still built upon the work of many others. I recommend reading the Nobel addresses of both Huxley (see here) and Hodgkin (see here) for the story of their discovery.

Physical activity

Several people have questioned my assertion in the New York Times interview that physical activity has not changed much in the past thirty years. My claim is partially based on work by Klaas Westerterp and John Speakman, who are two highly respected researchers in the field. Klaas gave a very nice talk on the topic at a metabolism workshop at NIMBIOS in 2011. His slides are here. What they basically did was to compare total daily energy expenditure (DEE) measurements to basal energy expenditure (BEE) over time. The ratio of DEE to BEE is called the physical activity level (PAL). The higher the PAL, the greater the fraction of the energy you burn every day that is due to physical activity. Klaas and John showed that PAL has not changed significantly since 1980, and if you squint hard enough at the plots in the slides, it looks like it may even have increased a little. While we seem to be very sedentary now, people tend to forget that we were also very sedentary thirty years ago.
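
To make the ratio concrete, here is a trivial calculation. The expenditure numbers are hypothetical round figures of my own, not values from Westerterp and Speakman’s data.

```python
def pal(dee_kcal, bee_kcal):
    """Physical activity level: total daily over basal energy expenditure."""
    return dee_kcal / bee_kcal

# Hypothetical numbers for a sedentary person vs. someone very active,
# both with the same basal expenditure.
print(pal(2400, 1600))  # 1.5: most daily energy is basal metabolism
print(pal(3600, 1600))  # 2.25: a large share goes to physical activity
```

A PAL near 1.5 is typical of a sedentary lifestyle, and the Westerterp-Speakman point is that the population distribution of this ratio has been roughly flat since 1980.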