Archive for March, 2010

New paper on gene induction

March 30, 2010

Karen M. Ong, John A. Blackford, Jr., Benjamin L. Kagan, S. Stoney Simons, Jr., and Carson C. Chow. A theoretical framework for gene induction and experimental comparisons. PNAS 200911095; published ahead of print March 29, 2010, doi:10.1073/pnas.0911095107

This is an open access article so it can be downloaded directly from the PNAS website here.

This is a paper where group theory appears unexpectedly. The project grew out of a chance conversation between Stoney Simons and myself in 2004. I had recently arrived at the NIH and was invited to give a presentation at the NIDDK retreat. I spoke about how mathematics could be applied to obesity research, and I’ll talk about the culmination of that effort in my invited plenary talk at the joint SIAM Life Sciences and Annual Meeting this summer. Stoney gave a summary of his research on steroid-mediated gene induction. He showed some amazing data on the dose response curve of the amount of gene product induced for a given amount of steroid. In these experiments, different concentrations of steroid are added to cells in a dish and, after waiting a while, the total amount of gene product in the form of luciferase is measured. Between steroid and gene product lies a lot of messy and complicated biology, starting with steroid binding to a steroid receptor, which translocates to the nucleus while interacting with many molecules on the way. The steroid-receptor complex then binds to DNA, which triggers a transcription cascade involving many other factors binding to DNA, which gives rise to mRNA, which is then translated into luciferase and measured as photons. Amazingly, the dose response curve after all of this was fit almost perfectly (R^2 > 95%) by a Michaelis-Menten or first order Hill function

P = \frac{A_{max} [S]}{EC_{50} + [S]}

where P is the amount of gene product, [S] is the concentration of steroid, Amax is the maximum possible amount of product, and EC50 is the concentration of steroid giving half the maximum amount of product. Stoney also showed that Amax and EC50 could be moved in various directions by the addition of various cofactors. I remember thinking to myself during his talk that there must be a nice mathematical explanation for this. After my talk, Stoney came up and asked if I thought I could model his data. We’ve been in sync like this ever since.
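For concreteness, here is a minimal sketch in Python of the first order Hill function above, with made-up parameter values; it just makes explicit that at [S] = EC50 the response is half of Amax by construction, and that the response saturates at Amax for large [S]:

```python
def hill(S, Amax, EC50):
    """First order Hill (Michaelis-Menten) dose-response: P = Amax*S/(EC50 + S)."""
    return Amax * S / (EC50 + S)

# Illustrative parameters, not fit to any real data:
# Amax = 100 units of product, EC50 = 10 nM of steroid.
print(hill(10.0, 100.0, 10.0))   # at S = EC50, exactly half-maximal: 50.0
print(hill(1e6, 100.0, 10.0))    # at very large S, the response saturates near Amax
```

Fitting such a curve to measured dose-response data is then a two-parameter regression, which is part of what made the near-perfect fits so striking.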

(more…)

Arnall Patz 1920-2010

March 22, 2010

We’ve recently lost a great physician who really did make a difference. The disease whose cause he discovered is what made Stevie Wonder lose his vision, and although Patz’s discovery came too late to save Stevie’s eyesight, it benefited the lives of many children after him. Patz probably should have received the Nobel Prize. Below is an excerpt from an email sent by the chairman of the Wilmer Eye Institute at Johns Hopkins University.

Dr. Patz was born in 1920 in rural Georgia. After completing his undergraduate degree at Emory University, he graduated from Emory School of Medicine in Atlanta in 1945. He and his wife Ellen were loving parents of five children and eight grandchildren.

After World War II he served at the Walter Reed Army Medical Center and trained at D.C. General Hospital. It was there, beginning in 1950, that Dr. Patz noticed an association between incubators and retinopathy of prematurity (known then as retrolental fibroplasia), a leading cause of infant blindness. In one of the first clinical trials in all of medicine, he followed premature babies who were routinely given high concentrations of oxygen and others who were given lower doses. Rebuffed by a funding agency, which thought the proposal unscientific and possibly dangerous, he conducted the clinical trial without federal funding. For this discovery and the subsequent saving of vision in thousands of premature infants he was given the Albert Lasker Medical Research Award, one of the most prestigious honors in American medicine. Helen Keller presented him with the award in 1956.

In 1970 he joined the full time faculty at Johns Hopkins and pioneered the management of diabetic retinopathy. During this time he made important discoveries about diseases caused by abnormal growth of blood vessels in the eye and helped to develop one of the first argon lasers for treating diabetic retinopathy and age-related macular degeneration. In 1979 he became the Director of the Wilmer Eye Institute and as Director he enlarged the clinical and research facilities and programs in his typical visionary fashion. His colleagues at Hopkins praise him for serving as mentor for more than five decades to scores of today’s leading eye specialists.  As one of his former residents, I can personally attest to his interest in and support for his young trainees.

He has been referred to by one of his colleagues as “one of the greatest ophthalmologists of the 20th century.” He received honorary degrees from the University of Pennsylvania, Emory University, Thomas Jefferson University and Johns Hopkins University.

A past president of the American Academy of Ophthalmology, Dr. Patz authored more than 250 scientific publications and four textbooks. He received many distinguished awards including the Friedenwald Research Award in 1980, the inaugural Isaac C. Michaelson Medal in 1986, the first Helen Keller prize for Vision Research in 1994, the Pisart International Vision Award from the Lighthouse International in 2001, and the Presidential Medal of Freedom, the nation’s highest civilian honor, in 2004. In 2005 he received the Lions Humanitarian Award, the Lester S. Levy Humanitarian Award, and the Laureate Recognition Award from The American Academy of Ophthalmology.

Aligning incentives

March 19, 2010

A free market does optimize, but what it optimizes may not necessarily be what you want. For example, when it comes to building homes, making toys, or preparing food, we want to make sure that there is an incentive for the producer to make safe products. Milton Friedman would argue that if a company made shoddy or unsafe products, the market would eventually punish that company. This might be true, but who wants to be the parent of the first child that gets lead poisoning from a toy, or the first occupant of a house that burns down because of an electrical fault? Sure, the company may go out of business after it becomes known that it makes dangerous products, but some people have to be sacrificed to obtain this knowledge. In some cases, like cigarettes or asbestos, the health consequences of a product may not be known for decades, so millions of people could be affected before the market corrects. That is why we have regulations for toys, buildings and food.

In terms of health insurance, it would seem that the most sensible business model is to do your best not to pay claims. What you are taught in business school is that to increase profits you must decrease costs, and paying out claims is the main cost. Hence, it is not surprising that you hear all sorts of horror stories about health insurers denying coverage. Paul Krugman gives some examples in today’s New York Times. This is perfectly logical and a perfect example of a misaligned incentive. The goal of the customer (which is to get health care) and that of the insurer (which is to not pay for health care) are diametrically opposed. This is not true of all industries. In violin making, for example, the goals of the customer and the producer, which are to get/make the best quality violin for the lowest cost, are aligned. Hence, it is necessary to regulate insurers. However, in many ways this is an unstable situation because the goal of the insurer will always be to find loopholes around the regulations.

The misalignment in incentives is further compounded because health care providers are reimbursed according to the number of procedures they perform, so their incentive is to provide as much health care as possible. Thus, on the one hand the providers are doing their best to maximize the cost of a visit, and on the other hand the insurers are doing their best not to honor the claims. It’s not hard to see why the result is sometimes an astronomical bill for an unsuspecting patient whose insurance was just retroactively terminated. There is also a conflict between the insurers and the providers because it is in the interest of the insurer not to reimburse the provider. Hence, we have a three-way battle between insurers, providers and consumers. The ideal situation would be to align the incentives of the health care providers and the insurers with those of the consumers. After all, the receiver of health care doesn’t really want more health care; she just wants to be healthy. I don’t know what the optimal system would be, but I do know that our current system is pretty far from it.

The state of software

March 12, 2010

It seems to me that software obeys a Peter Principle-like law in that it will always lie on the border of just-tolerable functionality. It will mostly do what you need it to do, but there will always be frustrating glitches that never get corrected. Newer versions may add more features, but the user experience does not improve. For example, in order to read all the email I receive I need to use two programs – the Mac Mail program and Thunderbird. The reason is that some email attachments are scrambled by Thunderbird, while other attachments do not show up in Mail at all, without even an indication that there was an attachment. I have no idea when this will happen, so I keep both programs open and switch between them. Sometimes entire mail messages will appear in one program but not the other. I haven’t tried very hard to alleviate these problems, but a cursory look at discussion groups indicates that the Mac Mail problem has been around for at least four years. It appears that the problem is not frequent enough to be worth correcting (although frequent enough to be a nuisance). Another example is that even though computers double in speed every year and a half, it still takes about the same amount of time for my computer to boot now as it did a decade or more ago. The 20 or 30 seconds it takes to boot up is probably the maximal time a human will tolerate, so every time computers get faster, they just get loaded with more things to do until they hit this tolerability threshold. What I can do on a computer does increase with time, but these new capabilities tend to saturate at a given level of effectiveness.

This issue becomes a serious problem when your life depends on the software. My guess is that Toyota will never fully understand what causes unintended acceleration in their vehicles. The events are infrequent enough that they may never be reproduced exactly, and so corrections may be impossible. I’m also certain that similar problems will appear for other car makers and will only increase in frequency as more and more of the car becomes automated. I remember reading a few years ago that the Navy used Windows NT to run the computers on its warships. That wasn’t exactly a comforting thought. However, the public does need to be aware that simply chastising executives at Toyota won’t solve this problem. They could have been as responsive as possible to owner complaints and still never have solved the problem. To really solve it we would probably need a fundamental change in the philosophy of system design. We may need to give up on the notion of complex comprehensive systems and instead rely on very simple finite state automata that can be proven to be reliable. To be proven reliable, the response of the system to all possible inputs in all possible internal states must be checked, and this is only possible if the set of state-input pairs is finite and small enough to enumerate. This means that “intelligent” systems that can adapt to a multitude of contingencies and respond in kind must be scrapped, or at least always be backed up with a simple provable system, where the conditions under which the backup takes over are also proven to be reliable. I think there will always be a trade-off: if we insist on complete reliability we must give up some features. We must decide what risk we are willing to accept.
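To make “checking all responses” concrete, here is a toy sketch in Python. The states, inputs, and transition table are invented purely for illustration and have nothing to do with any real vehicle controller; the point is only that when both sets are small and finite, exhaustive verification is a short loop:

```python
# A toy controller as a finite state machine. Because the state and
# input sets are finite, every possible response can be enumerated.
STATES = {"idle", "accelerating", "braking"}
INPUTS = {"press_gas", "press_brake", "release"}

# Transition table: (current state, input) -> next state.
# The "brake always wins" rule is the kind of safety property one would want.
TRANSITIONS = {
    ("idle", "press_gas"): "accelerating",
    ("idle", "press_brake"): "braking",
    ("idle", "release"): "idle",
    ("accelerating", "press_gas"): "accelerating",
    ("accelerating", "press_brake"): "braking",
    ("accelerating", "release"): "idle",
    ("braking", "press_gas"): "braking",
    ("braking", "press_brake"): "braking",
    ("braking", "release"): "idle",
}

def verify(transitions, states, inputs):
    """Check that every (state, input) pair has a defined, valid response."""
    for s in states:
        for i in inputs:
            assert (s, i) in transitions, f"undefined response for {(s, i)}"
            assert transitions[(s, i)] in states, f"invalid target from {(s, i)}"
    return True

print(verify(TRANSITIONS, STATES, INPUTS))  # True: all 9 cases check out
```

For a 3-state, 3-input machine that is 9 checks; the difficulty with “intelligent” adaptive systems is precisely that their effective state space is too large, or not even well defined, for this kind of enumeration.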

Obesity, weight gain and a cookie

March 4, 2010

Many books and articles on dieting, weight gain, and obesity quote the rule that “3500 Calories is a pound”. By this, they mean that for every 3500 “extra” Calories consumed, you will gain one pound. So if you ate 100 extra Calories a day (e.g. a cookie), you would gain a pound in 35 days. In metric units, this is equivalent to about 7700 Calories per kilogram. As I will show below, this rule is confusing and wrong on two counts, although by sheer coincidence it can be used as a mnemonic to estimate how extra food will lead to increased body weight. The basis of this rule is that the metabolizable energy density of fat is about 9300 Calories per kilogram and that of protein is about 4000 Calories per kilogram, so if most of your weight gain is in fat then you can persuade yourself that 7700 Calories per kilogram is a reasonable number. I should note that the dietary Calorie is equivalent to one kilocalorie, or about 4.2 kilojoules of energy.
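The unit conversion and the fat/protein bookkeeping above can be checked in a few lines; the 75/25 fat-to-protein split below is purely an illustrative assumption, chosen to show how a mostly-fat gain lands near the quoted number:

```python
# Unit check: 3500 Cal/lb expressed in Cal/kg.
LB_PER_KG = 2.2046  # pounds per kilogram
print(3500 * LB_PER_KG)  # about 7716 Cal/kg, i.e. the quoted ~7700

# Weighted energy density for a hypothetical gain that is mostly fat.
# The 0.75 / 0.25 split is an illustrative assumption, not a measured value.
fat_density, protein_density = 9300, 4000  # Cal/kg, from the text
mix = 0.75 * fat_density + 0.25 * protein_density
print(mix)  # 7975.0 Cal/kg, in the same ballpark as 7700
```

So the rule is internally consistent as arithmetic; the problems with it, discussed next, are physiological, not numerical.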

The first reason this rule is wrong is that the energy density of deposited tissue is not fixed at 3500 Calories per pound, because the composition of that tissue is not fixed. It depends on how much you weigh and possibly on diet composition. Kevin Hall has a nice article on this. The second problem is that the rule completely ignores the fact that as you gain weight your energetic needs change, so not all of the extra 100 Calories can go to new tissue. If you apply the rule, it says that if you eat an extra 100 Calories per day, or one cookie, then in 35 days you’ll have gained a pound and in one year you’ll have gained 10 pounds. This is actually reasonable and close to being correct. However, if you continue to apply the rule then you’ll also predict that in 10 years you’ll gain 100 pounds and that your weight will increase linearly forever. This is clearly not correct. What really happens is that as you gain weight you also burn more energy, and eventually you reach a new steady state where you burn what you eat.
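A minimal sketch of this saturation: suppose each kilogram gained burns an extra epsilon Calories per day, so the gain obeys d(Δw)/dt = (extra − epsilon·Δw)/rho. The parameter values below are illustrative guesses, not the ones from Kevin Hall’s work:

```python
def simulate(extra_intake, days, epsilon=25.0, rho=7700.0):
    """Euler integration of d(dw)/dt = (extra_intake - epsilon*dw) / rho.

    extra_intake : extra Calories eaten per day (e.g. 100, one cookie)
    epsilon      : extra Calories burned per day per kg gained (assumed value)
    rho          : energy density of gained tissue, Cal/kg (the rule's ~7700)
    Returns weight gained in kg after `days`.
    """
    dw = 0.0
    for _ in range(days):
        dw += (extra_intake - epsilon * dw) / rho
    return dw

# Year-one gain is already below the rule's linear estimate of ~4.5 kg,
# and the ten-year gain has saturated near extra/epsilon = 100/25 = 4 kg
# rather than growing without bound.
print(round(simulate(100, 365), 2))
print(round(simulate(100, 3650), 2))
```

The steady state is extra/epsilon, independent of rho; rho only sets how long (here, roughly a year) it takes to get most of the way there.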

The rule is also confusing because there are at least three ways of interpreting “eating an extra cookie per day”. What it was originally intended to address is the situation where everything else in your life is constant (i.e. you eat the same thing, do the same amount of physical activity, and have no change in health), and then you eat an extra cookie. The presumption is that previously you were burning everything you ate and now you have an extra 100 Calories per day that you store as new body tissue. A second interpretation is that you are always exactly one cookie, or 100 Calories, out of energy balance every day. So, whatever amount of energy you are burning at this moment, you eat 100 Calories extra. This would imply that you are always out of energy balance and your weight would indeed increase linearly in time. The third interpretation is that you eat an extra cookie per day over what you ate the previous day: one cookie today, two tomorrow, three the next day, and so forth. Because the rule is inherently wrong and can be interpreted in multiple ways, it has led to great confusion and myths about dieting and how much you need to eat to lose or maintain weight. Below, I will try to make all of these concepts precise.

(more…)

