Miraculous technologies

This month’s Scientific American magazine has a story on 7 Radical Energy Solutions. The link is here, although you need a subscription to access the full article. The 7 solutions are:

1) Fusion-triggered fission – using lasers to trigger fusion in small pellets, producing neutrons that ignite fission; the advantage is that no chain reaction is necessary, so nuclear waste can be used as fuel.
2) Solar gasoline – converting solar energy directly into a carbon-based liquid fuel.
3) Quantum photovoltaics – using quantum dots to increase the efficiency of solar cells by trapping the hot electrons that are lost with existing technology.
4) Heat engines – generating power by capturing waste heat using shape-memory alloys.
5) Shock-wave auto engine – a new internal combustion engine that uses shock waves to propel a turbine.
6) Magnetic air conditioners – making a fridge with no moving parts by using special magnets to replace the refrigerant and pumps.
7) Clean coal – using an ionic liquid to pull CO2 out of coal plant exhaust; the CO2 would then have to be sequestered underground.

See here for descriptions of projects funded by the US Department of Energy.

The article made me think of technology we use today that seems miraculous. The first thing that comes to mind is the airplane. People had dreamed of flight for centuries if not millennia, but it wasn’t until technology matured enough that the dream was realized in 1903 by the Wright brothers. The mobile phone was just a science fiction dream to me when I was a child. The refrigerator has always seemed miraculous to me. Even after thermodynamics and the heat cycle were understood, it is still amazing that actual substances that could act as refrigerants were discovered. I find bulletproof glass kind of astounding. All of our electronic technology is based on silicon, which is made from sand. Water itself is kind of magical. The fact that it is so abundant and takes on three phases in a human-accessible range of temperatures is astonishing (or maybe not – cf. the Anthropic Principle). I could go on and on. As Arthur C. Clarke once wrote: “Any sufficiently advanced technology is indistinguishable from magic.” At any moment, there could be a technological breakthrough that changes history.

Jump Math

David Bornstein wrote a very interesting opinion article in the New York Times this week. He tells the story of a new way of teaching math called Jump Math. The basic concept is that you teach math by breaking it down into the smallest steps and getting the students to understand each step. Here is an excerpt of the article:

New York Times:  Children come into school with differences in background knowledge, confidence, ability to stay on task and, in the case of math, quickness. In school, those advantages can get multiplied rather than evened out. One reason, says Mighton, is that teaching methods are not aligned with what cognitive science tells us about the brain and how learning happens.

In particular, math teachers often fail to make sufficient allowances for the limitations of working memory and the fact that we all need extensive practice to gain mastery in just about anything. Children who struggle in math usually have difficulty remembering math facts, handling word problems and doing multi-step arithmetic (pdf). Despite the widespread support for “problem-based” or “discovery-based” learning, studies indicate that current teaching approaches underestimate the amount of explicit guidance, “scaffolding” and practice children need to consolidate new concepts. Asking children to make their own discoveries before they solidify the basics is like asking them to compose songs on guitar before they can form a C chord.

Mighton, who is also an award-winning playwright and author of a fascinating book called “The Myth of Ability,” developed Jump over more than a decade while working as a math tutor in Toronto, where he gained a reputation as a kind of math miracle worker. Many students were sent to him because they had severe learning disabilities (a number have gone on to do university-level math). Mighton found that to be effective he often had to break things down into minute steps and assess each student’s understanding at each micro-level before moving on.

Take the example of positive and negative integers, which confuse many kids. Given a seemingly straightforward question like, “What is -7 + 5?”, many will end up guessing. One way to break it down, explains Mighton, would be to say: “Imagine you’re playing a game for money and you lost seven dollars and gained five. Don’t give me a number. Just tell me: Is that a good day or a bad day?”
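Carried to completion (my illustration, not Mighton’s words), the two questions pin down first the sign (losing seven and gaining five is a bad day, so the answer is negative) and then the size (you are down by two dollars):

\[
-7 + 5 \;=\; -(7 - 5) \;=\; -2.
\]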

I completely agree. I’ve always felt that we should teach math like we teach sports. If you want to be a good golfer, then you should go to the range and hit thousands of golf balls. Almost everyone accepts that they could improve at golf, or at any sport, if they practiced more. Well, the same is true for math. If you want to get better, you should practice. I’ve always felt that the idea that we need to make math more pertinent to students’ lives to motivate them to study it is completely misguided. From my experience as a former math professor, I found that most students liked to do math for its own sake and didn’t really care whether it was useful in their lives (even though it is).

Cognitive toolkit

The World Question Center recently posed the question: What scientific concept would improve everyone’s cognitive toolkit? The list of contributions is here. The contributors were mostly public intellectuals, which generally means scientists or science writers who have written books for the general public. I’ve only casually browsed through them, and there are some interesting entries. I liked the contributions of Nick Bostrom, Robert Sapolsky and Nigel Goldenfeld. Unfortunately, or perhaps intentionally, they are not listed in alphabetical order. However, I think a lot of the entries were rather sophisticated and philosophical. I would propose something simple that everyone should know but doesn’t, like the classical logical constructions known as modus ponens and modus tollens. Together they say that if A implies B, then the only conclusions that can be drawn are that if A is true then B is true, and if B is not true then A is not true. More pertinently, knowing that A is not true tells you nothing about B. I would venture that most people, including scientists, cannot keep that straight.
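In symbols (a standard summary, with \(\neg\) for “not” and \(\vdash\) for “validly concludes”):

\[
\begin{aligned}
\text{modus ponens:}\quad & A \to B,\ A \ \vdash\ B\\
\text{modus tollens:}\quad & A \to B,\ \neg B \ \vdash\ \neg A\\
\text{denying the antecedent (invalid):}\quad & A \to B,\ \neg A \ \not\vdash\ \neg B
\end{aligned}
\]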

The perils of sugar

Science writer Gary Taubes has a provocative article in the forthcoming New York Times Magazine on whether sugar is toxic. Taubes has recently penned two well-received books on metabolism and obesity – Good Calories, Bad Calories and Why We Get Fat. In the context of the article, sugar is defined to be either sucrose, which is composed of 50% glucose and 50% fructose, or high fructose corn syrup (HFCS), which is composed of 55% fructose and 45% glucose. The fact that sucrose, which is what you put in your coffee, and HFCS, which until recently had replaced sucrose in many products like soft drinks, are so similar in composition (gram for gram, HFCS delivers only about 10% more fructose) has always been sufficient evidence for me that if one of them is bad for you then the other must be as well.

Understanding why sugar could be unhealthful requires some background in human metabolism. The energetic portions of food consist of carbohydrates, fat or protein, which are used by the cells of our body for fuel. However, the brain only utilizes glucose (or ketone bodies when glucose is not available), and very little glucose is stored in the body (about half a kilogram, in the form of glycogen). Hence, glucose is tightly regulated in the blood. When we eat glucose, it enters the blood stream fairly rapidly. The body responds by secreting insulin, which activates transporters in non-brain cells to take up glucose. The cells will either burn the glucose or use it to replace depleted glycogen stores. Glucose can also be utilized by muscle cells during intense exercise. Any excess glucose will be taken up by the liver and converted into fat in the form of triglycerides.
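The paragraph above describes a priority order: burn glucose first, then refill glycogen, then convert the remainder to fat. Here is a toy sketch of that partition in Python; the function name and the numbers are my own invention, and this is a schematic illustration, not a physiological model:

```python
def partition_glucose(intake_g, burned_g, glycogen_g, glycogen_cap_g=500):
    """Schematic partition of dietary glucose: fuel cells first, then top up
    glycogen (roughly half a kilogram of total storage), then convert any
    remainder to fat (triglycerides) in the liver. Quantities in grams."""
    remaining = max(0, intake_g - burned_g)                # left after fueling cells
    to_glycogen = min(remaining, max(0, glycogen_cap_g - glycogen_g))  # refill stores
    to_fat = remaining - to_glycogen                       # excess becomes fat
    return {"burned": min(intake_g, burned_g),
            "to_glycogen": to_glycogen,
            "to_fat": to_fat}

# Example: eat 300 g of carbs, burn 200 g, glycogen stores nearly full (450 of 500 g)
print(partition_glucose(300, 200, glycogen_g=450))
# -> {'burned': 200, 'to_glycogen': 50, 'to_fat': 50}
```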

The reason fructose is considered bad is that it is only metabolized in the liver, where it is converted into glucose or fat. The fat is either stored, which is thought to be bad, or secreted into the blood inside VLDL particles, the precursors of LDL particles, which are considered to be the bad cholesterol. This line of reasoning is highly plausible, but I think it is incomplete. The argument is: since fructose leads to fatty liver, I must avoid fructose. However, what you really want to avoid is excess fat in the liver and high LDL, and these can be caused by things other than fructose. So in addition to avoiding fructose, you must also make sure you don’t replace that fructose with something that is equally or almost as bad. For instance, if you decided to replace that fructose with lots of glucose, it could still end up getting converted into fat in the liver.

To me, the real problem is not fructose per se but the imbalance between glycogen use and carbohydrate input, because glucose or fructose will first replace depleted glycogen before getting converted to fat. Hence, the amount of exercise one engages in cannot be ignored. Basically, any amount of carbohydrate consumed above your glucose/glycogen utilization level will necessarily be converted into fat. Thus, if you believe this hypothesis, what you really should believe is that the increase in obesity, insulin resistance and Type II diabetes is not just due to any specific dietary element like fructose but to a general imbalance between carbs burned and carbs eaten.
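Put schematically (my notation, not from the article), the claim is a simple balance:

\[
F \;\approx\; \max\bigl(0,\ C_{\text{eaten}} - C_{\text{burned}} - \Delta G\bigr),
\]

where \(F\) is the carbohydrate converted to fat, \(C_{\text{eaten}}\) is the carbohydrate consumed, \(C_{\text{burned}}\) is the glucose oxidized for fuel, and \(\Delta G\) is the room left in glycogen stores.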

The long view

Physicist and Nobel Laureate Robert Laughlin was on EconTalk this past summer. The link to the podcast is here. Laughlin, who likes to take contrarian positions and is always entertaining, talks about the future of carbon and his forthcoming book “When Coal Is Gone”. Chapter two of his book, originally entitled “Geological Time”, was excerpted in The American Scholar under the title “What the Earth Knows” and can be obtained here. In that chapter and on the podcast, Laughlin argues that the human age of fossil fuels and its effect on climate is but a blink of an eye in geological time. The earth has endured much larger perturbations than humans will ever inflict. He claims that we’ll run out of oil in about 60 years but that we will still use carbon-based liquid fuels because their energy densities are without peer. (You can’t fly an airplane without them.) However, instead of getting them out of the ground, we will manufacture them using coal or natural gas as feedstock. In about 200 years we’ll run out of coal, but we’ll still want to make fuels. At that point, we’ll have to extract carbon out of the air or the ocean, most likely using plants. Laughlin tries to avoid taking political positions, and he does acknowledge that climate change could be bad for this and the next generation of humans even if it won’t matter much in the long term. He’s confident the earth and humans will survive this crisis. The one thing he does worry about is biodiversity loss, which is permanent. There is a switch in topic to Laughlin’s previous book, The Crime of Reason, 50 minutes into the podcast. In that book, Laughlin argues that the US switch from a manufacturing economy to an information economy will stifle learning and the dissemination of knowledge, because if information becomes a commodity, its value depends on its scarcity. Thus, the rate of innovation will decrease, not increase, and we will become more secretive in general.

Talk and SIAM news story

I gave a talk on obesity yesterday at Montclair State University.  The talk was mostly the same as the plenary talk I gave at the SIAM Annual and Life Sciences meeting in 2010, which I summarized here.  Science and math writer Barry Cipra also wrote a piece about the talk in SIAM News.   I find it amusing that he called me a mathematical obesity expert.   I presented the “push hypothesis” for the obesity epidemic in more detail here.

Does NIH need to change?

Michael Crow, president of Arizona State University, has an opinion piece in Nature this week arguing that the NIH needs to be revamped. He points out that although the NIH budget is $30 billion a year, there have been relatively few recent benefits for public health. He argues that the problem is that there is no emphasis on promoting outcomes beyond basic science. Right now the NIH consists of 27 separate institutes (I’m in NIDDK) with little coordination between them and great redundancy in their missions. For an intramural principal investigator such as myself, the walls are invisible when it comes to science and collaborations but very apparent when it comes to regulations and navigating the bureaucracy. Crow uses the obesity pandemic as an example of the NIH’s ineffectiveness in combating a health problem. This point really hits home, since the NIH Obesity Research Task Force, which is spread over 27 NIH components, is largely unaware of the novel work coming out of our group, the Laboratory of Biological Modeling. Crow’s solution is to drastically reorganize the NIH. An excerpt of his article is below.

Nature: What if the NIH were reconfigured to reflect what we know about the drivers of innovation and progress in health care?

This new NIH should be structured around three institutes. A fundamental biomedical systems research institute could focus on the core questions deemed most crucial to understanding human health in all its complexity — from behavioural, biological, physical, environmental and sociological perspectives.

Take, for instance, the ‘obesity pandemic’. In the United States, medical costs related to obesity (currently around $160 billion a year) are projected to double within the decade. And by some estimates, indirect spending associated with obesity by individuals, employers and insurance payers — for example, on absenteeism, decreased productivity or short-term disability — exceeds direct medical costs by nearly threefold. The NIH conducts and supports leading research on numerous factors relevant to obesity, but efforts are fragmented: 27 NIH components are associated with the NIH Obesity Research Task Force, a programme established to speed up progress in obesity research.

Within a systems research institute, scientists could better integrate investigations of drivers as diverse as genetics, psychological forces, sedentary lifestyles and the lack of availability of fresh fruit and vegetables in socioeconomically disadvantaged neighbourhoods.

A second institute should be devoted to research on health outcomes, that is, on measurable improvements to people’s health. This should draw on behavioural sciences, economics, technology, communications and education as well as on fundamental biomedical research. Existing NIH research in areas associated with outcomes could serve as the basis for expanded programmes that operate within a purpose-built organization. If the aim is to reduce national obesity levels — currently around 30% of the US population is obese — to less than 10% or 15% of the population, for example, project leaders would measure progress against that goal rather than according to some scientific milestone such as the discovery of a genetic or microbial driver of obesity.

The third institute, a ‘health transformation’ institute, should develop more sustainable cost models by integrating science, technology, clinical practice, economics and demographics. This is what corporations have to do to be successful in a competitive high-tech world. Rather than be rewarded for maximizing knowledge production, this institute would receive funding based on its success at producing cost-effective public-health improvements.

This kind of tripartite reorganization would limit the inevitable Balkanization that has come from having separate NIH units dedicated to particular diseases. Indeed, such a change would reflect today’s scientific culture, which is moving towards convergence — especially in the life sciences, where collaboration across disciplines is becoming the norm, advances in one field influence research in others, and emerging technologies are frequently relevant across different fields.