## Archive for the ‘Medicine’ Category

### The Stephanie Event

January 14, 2014

You should read this article in Esquire about the advent of personalized cancer treatment for a heroic patient named Stephanie Lee. Here is Steve Hsu’s blog post. The cost of sequencing is almost at the point where everyone can have their normal and tumor cells completely sequenced to look for mutations, as Stephanie did. The Mt. Sinai Hospital team in New York described in the article inserted some of her mutations into fruit flies and then screened to see which drugs killed them. The Stephanie Event was the oncology board meeting at Sinai where the treatment for Stephanie Lee’s colon cancer, which had spread to the liver, was discussed. They decided on a standard protocol but would use the individualized therapy based on the fly experiments if the standard treatments failed. The article is beautifully written, combining a compelling human story with science.

### Fred Sanger 1918 – 2013

November 21, 2013

Perhaps the greatest biologist of the twentieth century and two-time Nobel prize winner, Fred Sanger, has died at the age of 95. He won his first Nobel in 1958 for determining the amino acid sequence of insulin and his second in 1980 for developing a method to sequence DNA.  An obituary can be found here.

### TB, streptomycin, and who gets credit

September 4, 2013

The Science Show recently ran a feature story about the discovery of streptomycin, the first antibiotic to treat tuberculosis, which had killed 2 billion people in the 18th and 19th centuries. Streptomycin was discovered in 1943 by Albert Schatz, a graduate student in the lab of Professor Selman Waksman at Rutgers. Waksman was the sole winner of the 1952 Nobel Prize for this work. The story is narrated by the author of the book Experiment Eleven, who paints Waksman as the villain and Schatz as the victim. Evidently, Waksman convinced Schatz to sign away his patent rights to Rutgers but secretly negotiated a deal to obtain 20% of the royalties. When Schatz discovered this, he sued Waksman and obtained a settlement. However, this turned the scientific community against him, and he was forced out of microbiology into science education. To me, this is just more evidence that prizes and patents are incentives for malfeasance.

### New paper on childhood growth and obesity

August 1, 2013

Kevin D Hall, Nancy F Butte, Boyd A Swinburn, Carson C Chow. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model. Lancet Diabetes and Endocrinology 2013.

You can read the press release here.

In order to curb childhood obesity, we need a good measure of how much food kids should eat. Although people like Claire Wang have proposed plausible quantitative models in the past, Kevin Hall and I have insisted that this is a hard problem because we don’t fully understand childhood growth. Unlike adults, who are more or less in steady state, growing children are a moving target. After a few fits and starts we finally came up with a satisfactory model that modifies our two-compartment adult body composition model to incorporate growth. That previous model partitioned excess energy intake into fat and lean compartments according to the Forbes rule, which basically says that the ratio of added fat to lean is proportional to how much fat you already have: the more fat you have, the more of the excess Calories go to fat. The odd consequence of that model is that the steady-state body weight is not unique but falls on a one-dimensional curve. Thus there is a whole continuum of possible body weights for a fixed diet and lifestyle. I actually don’t believe this and have a modification to fix it, but that is a future story.

What puzzled me about childhood growth was how we know how much more to eat as we grow. After some thought, I realized that what we could do is eat enough to maintain the fraction of body fat at some level, using leptin as a signal perhaps, and then tap off the energy stored in fat when we need to grow. Just as we know how much gasoline (petrol) to add by simply filling the tank when it’s empty, we simply eat to keep our fat reserves at some level. In terms of the model, this is a symmetry-breaking term that transfers energy from the fat compartment to the lean compartment. In my original model, I made this term a constant and had food intake increase to maintain the fat to lean ratio, and I showed using singular perturbation theory that this would yield growth that was qualitatively similar to the real thing. This then sat languishing until Kevin had the brilliant idea to make the growth term time dependent and fit it to actual data that Nancy Butte and Boyd Swinburn had taken. We could then fit the model to normal weight and obese kids to quantify how much more obese kids eat, which is more than previously believed. Another nice thing is that when the child stops growing the model automatically becomes the adult model!
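For the curious, here is a minimal numerical sketch of the idea (my own toy version with assumed parameters, not the exact model from the paper): excess energy is split between fat and lean compartments by a Forbes-type rule, while a growth term g(t) transfers energy from fat to lean.

```python
# Toy two-compartment growth model (illustrative parameters, not the
# paper's): excess intake is partitioned by a Forbes-type rule and a
# growth term g transfers energy from the fat to the lean compartment.
RHO_F = 9400.0   # energy density of fat tissue (kcal/kg), assumed
RHO_L = 1800.0   # energy density of lean tissue (kcal/kg), assumed
C = 10.4         # Forbes constant (kg)

def simulate(F0, L0, surplus, growth, days, dt=1.0):
    """Euler-integrate fat mass F and lean mass L (kg).
    surplus(t): excess intake (kcal/day); growth(t): fat-to-lean transfer (kcal/day)."""
    F, L = F0, L0
    for step in range(int(days / dt)):
        t = step * dt
        p = C / (C + F)                       # fraction of surplus going to lean
        dE, g = surplus(t), growth(t)
        F += dt * ((1 - p) * dE - g) / RHO_F  # fat stores buffer the growth term
        L += dt * (p * dE + g) / RHO_L        # lean tissue grows
    return F, L

# A year of growth with a small constant surplus and growth transfer
F, L = simulate(F0=4.0, L0=16.0, surplus=lambda t: 15.0,
                growth=lambda t: 8.0, days=365)
```

With these made-up numbers the lean compartment gains a few kilograms over the year while fat is drawn down slightly, which is qualitatively the behavior described above; making g time dependent and fitting it to data is what turned this kind of sketch into the published model.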

### New paper on measuring gastric acid output

July 16, 2013

This paper started many years ago when Steve Wank, of the Digestive Diseases Branch of NIDDK, had the idea of using the new wireless pH-sensing SmartPill, which you swallow, to determine how much acid your stomach was producing. There really was no noninvasive way to monitor how well medications would work for certain reflux diseases. What he wanted was a model of gastric acid output based on the dynamics of pH when a buffer was added, in order to design a protocol for the experiment. I came up with a simple mass-action model of acid buffering and made some graphs for him. We then tested the model out in a beaker. He thought the model worked better than I did, but it was somewhat useful to him in designing the experiment.
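To give a flavor of the kind of model involved, here is a toy mass-action sketch (my own illustration with assumed parameters, not the model in the paper): secreted acid is neutralized by a buffer bolus, and once the buffer is spent the pH falls again at a rate set by the secretion rate, which is the handle that lets you infer acid output from the pH trace.

```python
import math

# Toy model: acid H (M) secreted at rate S is neutralized by buffer B
# via mass action with rate constant k. All parameters are assumed.
def simulate_ph(S=1e-4, k=50.0, B0=5e-2, H0=1e-2, t_end=600.0, dt=0.1):
    """Return a list of (time, pH) after a buffer bolus B0 at t = 0."""
    H, B = H0, B0
    out, t = [], 0.0
    while t <= t_end:
        out.append((t, -math.log10(max(H, 1e-14))))
        rate = k * H * B          # neutralization flux
        H += dt * (S - rate)      # secretion minus neutralization
        B += dt * (-rate)         # buffer is consumed
        t += dt
    return out

trace = simulate_ph()
# pH jumps up after the buffer is added, then drifts back down as
# secretion exhausts the buffer; the re-acidification rate reflects
# the acid output.
```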

Weinstein et al.  A new method for determining gastric acid output using a wireless pH-sensing capsule.  Aliment Pharmacol Ther 37: 1198 (2013)

Abstract:

BACKGROUND: Gastro-oesophageal reflux disease (GERD) and gastric acid hypersecretion respond well to suppression of gastric acid secretion. However, clinical management and research in diseases of acid secretion have been hindered by the lack of a non-invasive, accurate and reproducible tool to measure gastric acid output (GAO). Thus, symptoms or, in refractory cases, invasive testing may guide acid suppression therapy.

AIM: To present and validate a novel, non-invasive method of GAO analysis in healthy subjects using a wireless pH sensor, SmartPill (SP) (SmartPill Corporation, Buffalo, NY, USA).

METHODS: Twenty healthy subjects underwent conventional GAO studies with a nasogastric tube. Variables impacting liquid meal-stimulated GAO analysis were assessed by modelling and in vitro verification. Buffering capacity of Ensure Plus was empirically determined. SP GAO was calculated using the rate of acidification of the Ensure Plus meal. Gastric emptying scintigraphy and GAO studies with radiolabelled Ensure Plus and SP assessed emptying time, acidification rate and mixing. Twelve subjects had a second SP GAO study to assess reproducibility.

RESULTS: Meal-stimulated SP GAO analysis was dependent on acid secretion rate and meal-buffering capacity, but not on gastric emptying time. On repeated studies, SP GAO strongly correlated with conventional basal acid output (BAO) (r = 0.51, P = 0.02), maximal acid output (MAO) (r = 0.72, P = 0.0004) and peak acid output (PAO) (r = 0.60, P = 0.006). The SP sampled the stomach well during meal acidification.

CONCLUSIONS: SP GAO analysis is a non-invasive, accurate and reproducible method for the quantitative measurement of GAO in healthy subjects. SP GAO analysis could facilitate research and clinical management of GERD and other disorders of gastric acid secretion.

### Houghton opines on the unfairness of prizes

July 12, 2013

I recently wrote about Michael Houghton declining the prestigious Gairdner prize because it left out two critical contributors to the discovery of the Hepatitis C virus. Houghton has now written an opinion piece in Nature Medicine arguing that prizes should relax the restriction to three awardees, an arbitrary number I’ve never understood. After all, one could argue that Freeman Dyson had a reasonable claim on the Nobel Prize awarded to Feynman, Schwinger, and Tomonaga for QED. I’ve quoted the entire piece below.

Nature Medicine: Earlier this year, I was greatly honored with the offer of a 2013 Canada Gairdner International Award for my contributions to the discovery of the hepatitis C virus (HCV). I was selected along with Harvey Alter, chief of clinical studies in the Department of Transfusion Medicine at the US National Institutes of Health’s Clinical Center in Bethesda, Maryland, and Daniel Bradley, a consultant at the US Centers for Disease Control and Prevention in Atlanta, both of whom had a vital role in the research that eventually led to the identification and characterization of the virus.

My colleagues accepted their awards. However, I declined my C\$100,000 (\$98,000) prize because it excluded two other key contributors who worked with me closely to successfully isolate the viral genome for the first time. I felt that given their crucial inputs, it would be wrong of me to keep accepting major prizes just ‘on their behalf’, a situation that has developed because major award foundations and committees around the world insist that prizes be limited to no more than three recipients per topic.

HCV was identified in 1989 in my laboratory at the Chiron Corporation, a California biotechnology firm since purchased by the Swiss drug company Novartis. The discovery was the result of seven years of research in which I worked closely, both intellectually and experimentally, with Qui-Lim Choo, a member of my own laboratory, and George Kuo, who had his own laboratory next door to mine at Chiron. We finally identified the virus using a technically risky DNA-expression screening technique through which we isolated a single small nucleic acid clone from among many millions of such clones from different recombinant libraries. This was achieved without the aid of the still-evolving PCR technology to amplify the miniscule amounts of viral nucleic acid present in blood. We ultimately proved that this clone derived from a positive-stranded viral RNA genome intimately associated with hepatitis, but one not linked to either the hepatitis A or B viruses. The finding represented the first time any virus had been identified without either prior visualization of the virus itself, characterization of its antigens or viral propagation in cell culture.

The high-titer infectious chimpanzee plasma used for our molecular analyses at Chiron was provided in 1985 by Bradley, an expert in chimpanzee transmission of HCV and in the virus’s basic properties and cellular responses, with whom I had an active collaboration since 1982. The proposed aim of the collaboration was for my laboratory to apply contemporary molecular cloning methodologies to a problem that had proven intractable since the mid-1970s, when Alter and his colleagues first demonstrated the existence of non-A, non-B hepatitis (NANBH), as it was then known. Alter’s team went on to define the high incidence and medical importance of NANBH, including the virus’s propensity to cause liver fibrosis, cirrhosis and cancer. They also identified high-titer infectious human plasma in 1980 and were instrumental in promoting the adoption of surrogate tests for NANBH by blood banks to reduce the incidence of post-transfusion infection.

With regrets to the Gairdner Foundation—a generous and altruistic organization—I felt compelled to decline the International Gairdner Award without the addition of Kuo and Choo to the trio of scientists offered the award. In 1992, all five of us received the Karl Landsteiner Memorial Award from the American Association of Blood Banks. But subsequent accolades given in honor of HCV’s discovery have omitted key members of the group: only Bradley and I received the 1993 Robert Koch Prize, and only Alter and I won the 2000 Albert Lasker Award for Clinical Medical Research—in both cases, despite my repeated requests that the other scientists involved in the discovery be recognized. With the exclusion once more of Kuo and Choo from this year’s Gairdner Award, I decided that I should not continue to accept major awards without them. In doing so, I became the first person since the Gairdner’s inception in 1959 to turn down the prize.

I hope that my decision helps bring attention to a fundamental problem with many scientific prizes today. Although some awards, such as the Landsteiner, are inclusionary and emphasize outstanding team accomplishments, the majority of the world’s prestigious scientific awards—including the Gairdner, Lasker and Shaw prizes, which all seem to be modeled on the Nobel Prize and indeed are sometimes known as the ‘baby Nobels’—are usually restricted to at most three individuals per discovery. Unsurprisingly, this limitation often leads to controversy, when one or more worthy recipients are omitted from the winners list.

Perhaps what may help this situation is for awards committees to solicit, and then be responsive to, input from potential recipients themselves prior to making their final decisions. Some of the recipients are best placed to know the full and often intricate history of the discovery and collaborative efforts, and such input should help committees better understand the size of the contributing team from which they can then choose recipients according to each award’s particular policy.

With this information in hand, award organizers should be willing to award more than three researchers. As knowledge and technology grows exponentially around the world and with an increasing need for multidisciplinary collaborations to address complex questions and problems, there is a case to be made for award committees adjusting to this changing paradigm. Moreover, it is inherently unfair to exclude individuals who played a key part in the discovery. Why should they and their families suffer such great disappointment after contributing such crucial input? Some award restructuring could also be inspirational to young scientists, encouraging them to be highly interactive and collaborative in the knowledge that when a novel, long-shot idea or approach actually translates to scientific success, all key parties will be acknowledged appropriately.

In this vein, I am happy to note that the inaugural Queen Elizabeth Prize for Engineering, a new £1 million (\$1.6 million) prize from the UK government, was awarded at a formal ceremony last month to five individuals who helped create the internet and the World Wide Web, even though the original guidelines stipulated a maximum of three recipients. If the Queen of England—the very emblem of tradition—can cast protocol aside, clearly other institutions can too. I hope more awards committees will follow Her Majesty’s lead.

### Body weight simulator iPhone app

June 19, 2013

The body weight simulator, originally a web-based Java application, is now also an iPhone app (see here in iTunes). The simulator is based on the human metabolism model developed by Kevin Hall, myself, and collaborators. The exact model is given in detail in our Lancet paper, which is listed here along with other related references. The app predicts the time course of your body weight given your baseline parameters and your new diet and/or new physical activity. It will also give a suggested daily caloric intake to attain a new weight over a specified period of time, along with the diet required to maintain that weight. The model uses parameters calibrated to the average American, so your own mileage will vary. Also, I basically wrote the app in my spare time over the past year, so it is pretty primitive as far as apps go, but it does the job. Please try it out and give me feedback.

### Genes can no longer be patented

June 13, 2013

The US Supreme Court ruled today that human genes cannot be patented. Here is the link to the New York Times article. The specific case regards Myriad Genetics, which held a patent that controlled the rights to all tests for the BRCA1 and BRCA2 genes implicated in breast cancer. The patent essentially blocked most research on the BRCA genes. The immediate effect will be that genetic testing will become cheaper and more widespread. People will argue that not allowing genes to be patented will discourage further innovation. I doubt it. Most discoveries, like genes, come from basic federally funded research. Any company can now develop a test for any newly discovered gene. Patent law has been broken for decades and this is just one small step to correcting it.

### The demise of Arbaclofen for Fragile X

June 7, 2013

Seaside Therapeutics recently announced that it is withdrawing its Fragile X drug Arbaclofen (STX209) from further clinical trials (see here). The drug had already reached Phase 3 and showed promise in some patients, but probably not enough to secure further funding or guarantee FDA approval. See the New York Times story for some personal accounts of the impact of this decision. The drug is a GABA-B agonist similar to Baclofen, which is used to treat muscle spasms. Fragile X syndrome, which has symptoms similar to autism, is caused by a mutation of the FMR1 gene that silences production of the FMRP protein. As with most proteins, it is not exactly clear what FMRP does, except that it may be involved in protein translation and affects synaptic plasticity in mouse models. One hypothesis for the cause of autism and Fragile X is that there is an overabundance of synaptic excitation.

My paper with Shashaank Vattikuti explored the effects of such imbalances on a cortical circuit model and showed that it could reproduce some psychophysical experiments (see here for a summary of the paper). It is thus a plausible hypothesis that a GABA-B agonist, which enhances inhibitory transmission, may alleviate some of the symptoms, and I believe that it does in some patients. However, such a blunt instrument would probably not work in all patients. One reason is that not all imbalances between excitation and inhibition are necessarily equal, i.e. too much excitation may not be the same as too little inhibition. A neural circuit with very high excitation and inhibition balancing each other could behave very differently from one with low amounts of each balancing each other. The high circuit would have “high gain” and be very responsive to perturbations, while the low circuit would have low gain. It is also not clear that simply increasing inhibition everywhere will result in a net increase in inhibition, because of the multiple feedback loops in the system. Increasing inhibition between inhibitory neurons could decrease the net inhibition on excitatory neurons.
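The gain argument can be made concrete with a toy linear rate model (my own illustration with made-up weights, not the circuit from our paper). The steady state of dr/dt = -r + W r + h is r = (I - W)^{-1} h, and two networks with the same relative excitatory-inhibitory balance but different overall synaptic strengths respond very differently to the same input:

```python
import numpy as np

# Toy linear E-I rate model with made-up weights: rows/columns are
# (excitatory, inhibitory). Steady state of dr/dt = -r + W r + h
# is r = (I - W)^{-1} h.
def gain(W):
    """Steady-state response of the excitatory unit to a unit input to E."""
    return np.linalg.inv(np.eye(2) - W)[0, 0]

W_high = np.array([[6.0, -6.5],   # strong excitation...
                   [6.0, -6.2]])  # ...balanced by strong inhibition
W_low = 0.1 * W_high              # same balance, weak synapses

# The strongly coupled but balanced circuit amplifies perturbations
# far more than the weakly coupled one.
print(gain(W_high), gain(W_low))
```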

The trials seem to show that about a third of the patients improved with Arbaclofen. They are probably the ones that have too little inhibition and increasing inhibition helps. I think this case suggests that we may need a new model for FDA approval of drugs. Perhaps we should not insist that drugs only treat specific illnesses but should also be approved if they are shown to have some biological effect and do not cause harm. I believe that there are many drugs that have failed to obtain FDA approval that actually do work and could help some patients. Instead of waiting until we can figure out ahead of time which patients will benefit from a given drug before we approve of it, we can just approve of it for restricted use and try it on patients to see what happens. The danger of course is that it may be difficult to know if a drug works and desperate patients and especially parents will insist on using a drug even if the physician believes it has no effect. This could cause harm and increase the cost of medical care. One of the things we could do is to have the government or nonprofit companies take over failed but safe drugs and provide them at low cost under some regulation. Actually, I think we need to completely revamp how drugs are developed but I need to leave that to a future post.

### Most of neuroscience is wrong

May 20, 2013

John Ioannidis has a recent paper in Nature Reviews Neuroscience arguing that many results in neuroscience are wrong. The argument follows his previous papers on why most published results are wrong (see here and here) but emphasizes the abundance of studies with small sample sizes in neuroscience. Small samples both reduce the chances of finding true positives and increase the chances of obtaining false positives. Underpowered studies are also susceptible to what is called the “winner’s curse”, where the effect sizes of true positives are artificially amplified. My take is that any phenomenon with a small effect should be treated with caution even if it is real. If you really want to find what causes a given disease, then you probably want to find something that is associated with all cases, not just a small percentage of them.
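A quick simulation shows the winner's curse in action (a generic illustration, not from the paper): with a small true effect and small samples, the studies that happen to reach significance overestimate the effect several-fold.

```python
import numpy as np

# Simulate many small two-group studies of a weak true effect and look
# at the effect sizes among the "significant" ones.
rng = np.random.default_rng(0)
true_effect, n, trials = 0.2, 10, 20000     # effect in SD units, n per group

a = rng.normal(true_effect, 1.0, size=(trials, n))
b = rng.normal(0.0, 1.0, size=(trials, n))
diff = a.mean(axis=1) - b.mean(axis=1)
se = np.sqrt(2.0 / n)                       # known unit variance for simplicity
sig = diff / se > 1.96                      # one-sided z-test

print(sig.mean())        # power: only a small fraction of studies "work"
print(diff[sig].mean())  # significant studies report an inflated effect
```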

### New paper on fat

April 19, 2013

Sex-Associated Differences in Free Fatty Acid Flux of Obese Adolescents.

Section on Growth and Obesity (D.C.A.-W., A.H.A., S.J.R.M., G.I.U., M.T.-K., J.A.Y.), Program in Developmental Endocrinology and Genetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development; Mathematical Cell Modeling Section (V.P., C.C.C.), Division of Extramural Activities (C.G.S.), Division of Nutrition Research Coordination (V.S.H.), and Laboratory of Endocrinology and Receptor Biology (A.E.S.), National Institute of Diabetes and Digestive and Kidney Diseases; and Nuclear Medicine Department (J.C.R.), Hatfield Clinical Research Center, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, Maryland 20892.

The Journal of Clinical Endocrinology and Metabolism (impact factor: 6.5). 02/2013; DOI:10.1210/jc.2012-3817

ABSTRACT Context: In obesity, increases in free fatty acid (FFA) flux can predict development of insulin resistance. Adult women release more FFA relative to resting energy expenditure (REE) and have greater FFA clearance rates than men. In adolescents, it is unknown whether sex differences in FFA flux occur. Objective: Our objective was to determine the associations of sex, REE, and body composition with FFA kinetics in obese adolescents. Participants: Participants were from a convenience sample of 112 non-Hispanic white and black adolescents (31% male; age range, 12-18 years; body mass index SD score range, 1.6-3.1) studied before initiating obesity treatment. Main Outcome Measures: Glucose, insulin, and FFA were measured during insulin-modified frequently sampled iv glucose tolerance tests. Minimal models for glucose and FFA calculated insulin sensitivity index (SI) and FFA kinetics, including maximum (l0 + l2) and insulin-suppressed (l2) lipolysis rates, clearance rate constant (cf), and insulin concentration for 50% lipolysis suppression (ED50). Relationships of FFA measures to sex, REE, fat mass (FM), lean body mass (LBM) and visceral adipose tissue (VAT) were examined. Results: In models accounting for age, race, pubertal status, height, FM, and LBM, we found sex, pubertal status, age, and REE independently contributed to the prediction of l2 and l0 + l2 (P < .05). Sex and REE independently predicted ED50 (P < .05). Sex, FM/VAT, and LBM were independent predictors of cf. Girls had greater l2, l0 + l2 and ED50 (P < .05, adjusted for REE) and greater cf (P < .05, adjusted for FM or VAT) than boys. Conclusion: Independent of the effects of REE and FM, FFA kinetics differ significantly in obese adolescent girls and boys, suggesting greater FFA flux among girls.

### Slides for ACP talk

April 9, 2013

I just gave a talk on obesity at a diabetes course at the American College of Physicians meeting in San Francisco.  My slides are here.

### Hepatitis C and the folly of prizes

April 3, 2013

The scientific world was set slightly aflutter when Michael Houghton turned down the prestigious Gairdner Award for the discovery of Hepatitis C. Harvey Alter and Daniel Bradley were the two other recipients. Houghton, who had previously received the Lasker Award with Alter, felt he could not accept one more award because two colleagues, Qui-Lim Choo and George Kuo, did not receive either of these awards, even though their contributions were equally important.

Hepatitis, which literally means inflammation of the liver, was characterized by Hippocrates and known to be infectious since the 8th century. The disease had been postulated to be viral at the beginning of the 20th century, and by the 1960s two viruses, termed Hepatitis A and Hepatitis B, had been established. However, there still seemed to be another unidentified infectious agent, which was termed Non-A, Non-B Hepatitis (NANBH).

Michael Houghton, George Kuo and Qui-Lim Choo were all working at the Chiron Corporation in the early 1980s. Houghton started a project to discover the cause of NANBH in 1982, with Choo joining a short time later. They made significant progress in generating mouse monoclonal antibodies with some specificity to NANBH-infected materials from chimpanzee samples received from Daniel Bradley at the CDC. They used the antibodies to screen cDNA libraries from infected materials but had not isolated an agent. George Kuo had his own lab at Chiron working on other projects but would interact with Houghton and Choo. Kuo suggested that they try blind cDNA immunoscreening on serum derived from actual NANBH patients. This approach was felt to be too risky, but Kuo made a quantitative assessment that showed it was viable. After two years of intensive and heroic screening by the three of them, they identified one clone that was clearly derived from the NANBH genome and not from human or chimp DNA. This was definitive proof that NANBH was a virus, now called Hepatitis C. Kuo then developed a prototype of a clinical Hepatitis C antibody detection kit and used it to screen a panel of NANBH blood provided by Harvey Alter of the NIH. Kuo’s test was a resounding success, and the blood test that came out of that work has probably saved 300 million or more people from Hepatitis C infection.

The question then is who deserves the prizes. Is it Bradley and Alter, who did careful and diligent work obtaining samples or is it Houghton, Choo, and Kuo, who did the heroic experiments that isolated the virus? For completely unknown reasons, the Lasker was awarded to just Houghton and Alter, which primed the pump for more prizes to these two. Now that the Lasker and Gairdner prizes have been cleared, that leaves just the Nobel Prize. The scientific community could get it right this time and award it to Kuo, Choo, and Houghton.

Addendum added 2013-5-2:  I should add that many labs from around the world were also trying to isolate the infective agent of NANBH and all failed to identify the correct samples from Alter’s panel.  It is not clear how long it would have been and how many more people would have been infected if Kuo, Choo, and Houghton had not succeeded when they did.

### Epipheo video

February 1, 2013

The narration comes from an interview with me.

### A meal for a day

January 17, 2013

The Center for Science in the Public Interest has some examples of meals in restaurants that contain the caloric requirements for a whole day.  And you doubted the push hypothesis for the obesity epidemic.

### The Land Sub Experiment

January 2, 2013

Gary Taubes penned a column in Nature last month arguing for a rigorous test of the energy balance hypothesis versus what he calls the hormonal hypothesis for the cause of obesity.  Taubes writes

Before the Second World War, European investigators believed that obesity was a hormonal or regulatory disorder. Gustav von Bergmann, a German authority on internal medicine, proposed this hypothesis in the early 1900s.

The theory evaporated with the war. After the lingua franca of science switched from German to English, the German-language literature on obesity was rarely cited. (Imagine the world today if physicists had chosen to ignore the thinking that emerged from Germany and Austria before the war.)

Instead, physicians embraced the ideas of the University of Michigan physician Louis Newburgh, who argued that obese individuals had a “perverted appetite” that failed to match the calories that they consumed with their bodies’ metabolic needs. “All obese persons are alike in one fundamental respect,” Newburgh insisted, “they literally overeat.” This paradigm of energy balance/overeating/gluttony/sloth became the conventional, unquestioned explanation for why we get fat. It is, as Bernard would say, the fixed idea.

This history would be no more than an interesting footnote in obesity science if there were not compelling reason to believe that the overeating hypothesis has failed. In the United States, and elsewhere, obesity and diabetes rates have climbed to crisis levels in the time that Newburgh’s energy-balance idea has held sway, despite the ubiquity of the advice based on it: if we want to lose fat, we have to eat less and/or move more. Yet rather than blame the advice, we have taken to blaming individuals for not following it ‘properly’.

The alternative hypothesis — that obesity is a hormonal, regulatory defect — leads to a different prescription. In this paradigm, it is not excess calories that cause obesity, but the quantity and quality of carbohydrates consumed. The carbohydrate content of the diet must be rectified to restore health.

As I have argued before (see here and here), these two hypotheses are not conflicting.  The question of whether or not carbs make you fat is not an either-or issue but a quantitative one.  I also agree that we don’t yet know the answer and a definitive carefully controlled experiment is required.  I call this the “Land Sub Experiment” because what we need to do is to completely sequester individuals from the outside world for up to a year or more so that we can precisely measure everything they eat and how much energy they expend.  We can then compare a group that consumes mostly carbs to one that doesn’t.  The NIH will actually be involved in the NuSi study that Taubes describes and Kevin Hall is directly involved in the planning.  I anxiously await the outcome.  On a side note, a recent meta-analysis (see here) reports that being overweight actually lowers your mortality rate.

### Using formal logic in biology

October 10, 2012

The 2012 Nobel Prize in physiology or medicine went to John Gurdon and Shinya Yamanaka for turning mature cells into stem cells. Yamanaka shook the world just six years ago in a Cell paper (it can be obtained here) that showed how to reprogram adult fibroblast cells into pluripotent stem cells (iPS cells) by simply inducing four genes – Oct3/4, Sox2, c-Myc, and Klf4.  Although he may not frame it this way, Yamanaka arrived at these four genes by applying a simple theorem of formal logic, which is that a set of AND conditions is equivalent to negations of OR conditions.  For example, the statement A AND B is True is the same as Not A OR Not B is False.  In formal logic notation you would write $A \wedge B = \neg(\neg A \vee \neg B)$.  The problem then is: given that we have about 20,000 genes, what subset of them will turn an adult cell into an embryonic-like stem cell? Yamanaka first chose 24 genes that are known to be expressed in stem cells and inserted them into an adult cell. He found that this made the cell pluripotent. He then wanted to find a smaller subset that would do the same. This is where knowing a little formal logic goes a long way. There are $2^{24}$ possible subsets that can be made out of 24 genes, so trying all combinations is impossible. What he did instead was to run 24 experiments in which each gene was removed in turn, and then check which removals failed to produce pluripotent cells. These would be the necessary genes for pluripotency. He found that pluripotent stem cells never arose when any of Oct3/4, Sox2, c-Myc or Klf4 was missing. Hence, a pluripotent cell needed all four genes, and when he induced them, it worked. It was a positively brilliant idea, and although I have spoken out against the Nobel Prize (see here), this one is surely deserved.
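The leave-one-out logic is easy to see in code (a caricature in which a hypothetical viability function stands in for the actual reprogramming experiments, and the dummy gene names are made up): dropping each candidate in turn finds the necessary genes in 24 experiments instead of 2^24.

```python
# Caricature of the leave-one-out screen; pluripotent() is a stand-in
# for the actual reprogramming experiment.
CANDIDATES = [f"gene{i}" for i in range(20)] + ["Oct3/4", "Sox2", "c-Myc", "Klf4"]
REQUIRED = {"Oct3/4", "Sox2", "c-Myc", "Klf4"}   # ground truth, unknown to the screen

def pluripotent(genes):
    """Cells reprogram iff every required gene is present (the AND condition)."""
    return REQUIRED <= set(genes)

# 24 experiments: remove one gene at a time; a failure flags a necessary gene.
necessary = {g for g in CANDIDATES
             if not pluripotent([x for x in CANDIDATES if x != g])}
print(sorted(necessary))  # exactly the four required genes
```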

### More on health care costs

October 1, 2012

I posted previously that the rising cost of health care may not be a bad thing if it ends up providing jobs for the bulk of the population.  The Economist magazine blog Free Exchange had an interesting piece on how health care can become both more expensive and more affordable simultaneously. The argument comes from William Baumol of Baumol’s cost disease (which I posted on previously here). In simple terms, Baumol’s argument is that as society gets more productive and richer, everyone’s salary goes up, including those in professions, like art and health care, where productivity does not increase. Now, given that the bulk of the costs in most sectors are salaries, productivity increases generally imply decreases in the number of people in that economic sector. At current rates of growth, health care expenditures will be 60% of US GDP by 2105. However, as long as the economy as a whole grows faster than the rate of increase in health care costs, we will still have plenty left over to buy more of everything else. If we make the simple assumption that contribution to GDP is proportional to population, then an increase in health care’s share of GDP simply means that the share of the population working in health care is also increasing. Basically, at current rates of growth, we will all become health care workers. I don’t think there is anything intrinsically wrong with this.  How a nation’s wealth is distributed among its population is more important than how it is distributed among sectors.
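The arithmetic behind “more expensive yet more affordable” is easy to check. Here is a minimal sketch with made-up growth rates (3% for GDP, 4.5% for health care spending — both assumptions, not data from the post): health care’s share of GDP rises, yet absolute spending on everything else still grows.

```python
# Toy projection of Baumol's argument with assumed growth rates.
gdp, health = 100.0, 100.0 / 6           # start: health care is 1/6 of the economy
gdp_growth, health_growth = 0.03, 0.045  # hypothetical annual growth rates

for year in range(50):
    gdp *= 1 + gdp_growth
    health *= 1 + health_growth

share = health / gdp          # health care's share of GDP has roughly doubled...
other = gdp - health          # ...yet non-health output has still grown severalfold
print(f"after 50 years: health share = {share:.0%}, non-health output = {other:.1f}")
```

So a growing share and growing affordability are perfectly compatible, as long as overall growth stays ahead of the cost increase.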

### Eating less won’t make you live longer

August 29, 2012

Calorie restriction has been shown to extend life in several types of animals including worms, flies and mice. The question was whether it would work on humans. Given that such an experiment would take decades to conduct and carry a lot of ethical baggage, scientists turned to monkeys and started experiments in the late eighties.  Well, the results are in and the answer is probably not. A study just out in Nature (link here) finds that rhesus monkeys given a reduced-calorie diet did not live longer. This contradicts a previous study from the University of Wisconsin published in Science in 2009 (link here), which did show that caloric restriction extended the lives of monkeys.  However, in that study, monkeys that died of causes deemed unrelated to aging were not included.  Here is a New York Times summary. There are some people who are currently limiting their diets in an attempt to live longer, so perhaps we will have some unofficial data points on humans in a few decades.  My guess is that there will not be a net increase in human lifespan, since lifespan is a complicated thing and there are always tradeoffs.  Caloric restriction may be protective in some respects but detrimental in others.

### The flipside of medicare efficiency

August 26, 2012

The selection of Paul Ryan as the Republican vice presidential candidate for the upcoming US federal election has brought health care reform back into the spotlight.  While the debate has been highly acrimonious, the one point that everyone seems to agree on is that the rate of increase in health care, and in particular medicare, spending is unsustainable.  Health care is currently one sixth of the economy and it will take up an increasing share if the growth is not reduced.  I think that a really expensive health care system may actually be a good thing.  What people tend to forget is that there are two sides to a cost.  When we pay for health care, that money goes to someone.  Making something more efficient means producing the same amount of stuff with fewer people.

The official unemployment rate is currently about 8%, but the actual fraction of people who don’t work or who wish they had more work is much higher.  Efficiency eliminates jobs.  People like Tom Friedman of the New York Times think (e.g. see here) that this will just free us up to do “creative” jobs.  However, what if you are a person who doesn’t want, or is unable, to do a “creative” job?  My guess is that as we become more efficient, more and more people will be left with nothing to do.  The solution is either to have a massive welfare system or to become less efficient.

However, not all inefficiencies are equal. We wouldn’t want monopolies where all the money flows to a small number of individuals.  What we need is a highly stochastic form of inefficiency that involves lots of people. Healthcare may be just what we need. It’s something that is highly decentralized and affects everyone.  It can’t be easily outsourced. I’ve argued before that having 80% of the economy be devoted to healthcare doesn’t seem that outlandish.  After all, how many flat screen TVs do you need?