The rise and fall of Jonah Lehrer

Jonah Lehrer, a staff writer for the New Yorker and a best-selling science author, resigned in disgrace today. He admitted to fabricating quotes from Bob Dylan in his most recent book:

New York Times: An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. Only last month, Mr. Lehrer had publicly apologized for taking some of his previous work from The Wall Street Journal, Wired and other publications and recycling it in blog posts for The New Yorker, acts of recycling that his editor called “a mistake.”

…Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.

Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”

Lehrer was a regular on Radiolab and always seemed to really know his science. I have linked to his articles in the past (see here). His publisher is withdrawing his book and giving refunds to anyone returning it. I haven’t read the book, but from the excerpts and his interviews about it, I think the science is probably accurate. I don’t really know what he was thinking, but my guess is that he was just trying to spice up the book and imagined a quote that Dylan might have said. The fabricated quote above is pretty innocuous. He probably didn’t think anyone would notice. Maybe he felt pressure to write a best seller. Maybe he was overconfident. In any case, he definitely shouldn’t have done it. It is unfortunate because he was a gifted writer and a boon to neuroscience and science in general.

New paper on repetition priming and suppression

A new paper by Steve Gotts, myself, and Alex Martin has officially been published in the journal Cognitive Neuroscience:

Stephen J. Gotts, Carson C. Chow & Alex Martin (2012): Repetition priming and repetition suppression: Multiple mechanisms in need of testing, Cognitive Neuroscience, 3:3-4, 250-259 [PDF]

This paper is a review of the topic but is partially based on the PhD thesis work of Steve Gotts when we were both in Pittsburgh over a decade ago. Steve was a CNBC graduate student at Carnegie Mellon University and came to visit me one day to tell me about his research project to reconcile the psychological phenomenon of repetition priming with a neurophysiological phenomenon called repetition suppression. It is well known that performance improves when you repeat a task. For example, you will respond faster to words on a random list if you have seen the word before. This is called repetition priming. The priming effect can occur over time scales ranging from a few seconds to your lifetime. Steve was focused on the short-time effect. A naive explanation for priming is that the pool of neurons that code for the word becomes slightly more active, so when the word reappears they fire more readily. This hypothesis could only be tested when electrophysiological recordings of cells in awake behaving monkeys and functional magnetic resonance imaging data in humans finally became available in the mid-nineties. As is often the case in science, the opposite was observed: neural responses actually decreased, and this was called repetition suppression. So an interesting question arose: how do you get priming with suppression? Steve had a hypothesis, and it involved work I had done, so he came to see if I wanted to collaborate.

I joined the math department at Pitt in the fall of 1998 (the webpage has a nice picture of Bard Ermentrout, Rodica Curtu and Pranay Goel standing at a white board). I had just come from doing a postdoc with Nancy Kopell at BU. At that time, the computational neuroscience community was interested in how a population of spiking neurons would become synchronous. The history of synchrony and coupled oscillators is long with many threads, but I got into the game because of the weekly meetings Nancy organized at BU, which we dubbed “N-group”. People from all over the Boston area would participate. It was quite exciting at that time. One day Xiao-Jing Wang, who was at Brandeis at the time, came to give a seminar on his joint work with Gyorgy Buzsaki on gamma oscillations in the hippocampus, which resulted in this highly cited paper. What the paper was really about was how inhibition could induce synchrony in a network with heterogeneous connections. It had already been shown by a number of people that inhibitory synapses could synchronize a network of spiking neurons. This was somewhat counterintuitive because the conventional wisdom was that inhibition would lead to anti-synchrony. The key ingredient was that the inhibition had to be slow. Xiao-Jing argued from his simulations that the hippocampus had a sweet spot for synchronization in the gamma band (i.e. frequencies around 40 Hz). I was highly intrigued by his result and spent the next two years trying to understand the simulations mathematically. This resulted in four papers:

C.C. Chow, J.A. White, J. Ritt, and N. Kopell, `Frequency control in synchronized networks of inhibitory neurons’, J. Comp. Neurosci. 5, 407-420 (1998). [PDF]

J.A. White, C.C. Chow, J. Ritt, C. Soto-Trevino, and N. Kopell, `Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons’, J. Comp. Neurosci. 5, 5-16 (1998). [PDF]

C.C. Chow, `Phase-locking in weakly heterogeneous neuronal networks’, Physica D 118, 343-370 (1998). [PDF]

C.C. Chow and N. Kopell, `Dynamics of spiking neurons with electrical coupling’, Neural Comp. 12, 1643-1678 (2000). [PDF]

In a nutshell, these papers showed that in a heterogeneous network, neurons will tend to synchronize around the time scale of the synaptic inhibition, which in the case of the inhibitory neurotransmitter receptor GABA_A is around 25 ms, or 40 Hz. When the firing frequency is too high, the neurons tend to fire asynchronously, and when the frequency is too slow, neurons tend to stop firing altogether.
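The link between the 25 ms decay constant and the 40 Hz gamma band is just the reciprocal relation between a time scale and a frequency. A minimal sketch of this back-of-the-envelope estimate (the one-over-tau heuristic is my simplification here, not a result taken from the papers above):

```python
# Rough estimate of the preferred synchronization frequency set by the
# inhibitory synaptic decay time: about one cycle per decay constant.
# The 25 ms GABA_A value is the one quoted in the text.

def preferred_frequency_hz(tau_decay_s):
    """Heuristic network frequency (Hz) for a synaptic decay time (s)."""
    return 1.0 / tau_decay_s

f_gamma = preferred_frequency_hz(0.025)  # GABA_A decay ~25 ms
print(f_gamma)  # 40.0 Hz -- the gamma band
```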

Steve read my papers (and practically everything else) and thought that this might be the resolution of his question. Now, it had also been known for a while that when neurons fire they tend to slow down. This is due to both spike-frequency adaptation and synaptic depression, so repetition suppression is not entirely surprising, since when neurons are stimulated they will tend to fire slower. What is surprising is that slowing down makes you respond faster. Steve thought that maybe suppression synchronized neurons and made them more effective at getting downstream neurons to fire. In essence, what he needed to find was a mechanism that increases the gain of a neuron for a decrease in input, and synchrony was a solution. I helped him work out some technical details and he wrote a very nice thesis showing how this could work and match the data. He then went on to work with Bob Desimone and Alex Martin at NIH. However, we never wrote the theoretical paper from his thesis because of a critique that we never got around to answering. The issue was that if a lowering of network frequency can elicit priming, then why does a reduction in contrast in the primed stimulus, which also reduces network frequency, not do the same? This came up after Steve had left and I turned my attention to other things. The answer is probably that not all frequency reductions are equal. A reduction in contrast lowers the total input to the early part of the visual system, while synaptic depression will have the largest effect on the most active neurons. The ensuing dynamics will likely be different, but we never had the time to fully flesh this out. Although I always wanted to get back to this, the project sat idle for me for about eight years until Steve sent me an email one day saying that he was writing a review with Alex on the topic and wanted to know if I wanted to be included. I was delighted.
The paper covers all the current theories for priming and suppression and is accompanied by commentaries from many of the key players in the field. I’ve just covered a small part of the many interesting issues brought up in the review.
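The gain-from-synchrony idea described above can be caricatured in a few lines: a downstream neuron summing exponential postsynaptic potentials is driven harder by a few synchronous spikes than by more spikes spread out in time. This is an illustrative toy with invented numbers, not the model from Steve's thesis:

```python
import numpy as np

def peak_depolarization(spike_times, tau=5.0, amp=1.0, t_max=60.0, dt=0.01):
    """Peak of the summed exponential PSPs (arbitrary units, times in ms)."""
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for ts in spike_times:
        # Each presynaptic spike adds a decaying exponential PSP after ts.
        v += amp * np.exp(-(t - ts) / tau) * (t >= ts)
    return v.max()

# Fewer spikes, but arriving together:
sync_peak = peak_depolarization([10.0] * 8)
# More spikes, but spread over 50 ms:
async_peak = peak_depolarization(list(np.linspace(0.0, 50.0, 12)))

# The synchronous volley depolarizes the downstream cell more, even
# though it contains fewer spikes: a gain increase despite less input.
print(sync_peak, async_peak)
```

The asynchronous PSPs decay before they can stack up, so their peak stays low, which is the sense in which synchrony can raise downstream gain while total activity falls.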

We can all be above average

I was listening to physicist and science writer Leonard Mlodinow on an All in the Mind podcast this morning. He was talking about his new book, Subliminal, which is about recent neuroscience results on neural processes that operate in the absence of conscious awareness. During the podcast, which was quite good, Mlodinow quoted a result that said 95% of all professors think they are above average, and then went on to say that we all know only 50% can be. It’s unfortunate that Mlodinow, who wrote an excellent book on probability theory, would make such a statement. I am sure that he knows that only 50% of all professors can be better than the median, but any number short of all of them could be greater than the average (i.e. the mean). He used average in the colloquial sense, but knowing the difference between the median and the mean is crucial for the average, or should I say median, person to make informed decisions.

It could be that on a professor performance scale, 5% of all professors are phenomenally bad, while the rest are better but clumped together. In this case, it would be absolutely true that 95% of all professors are better than the mean. Also, if the professors actually obeyed such a distribution, then comparing to the mean would be more informative than the median, because what every student should do is simply avoid the really bad professors. However, for something like income, which is broadly distributed with a fat tail, comparing yourself to the median is probably more informative because it will tell you where you place in society. The mean salary of a lecture hall filled with mathematicians would increase perhaps a hundredfold if James Simons (of Chern-Simons theory, and CEO of one of the most successful hedge funds, Renaissance Technologies) were to suddenly walk into the room. However, the median would hardly budge. (Almost) All the children in Lake Wobegon could really be above average.
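The professor example is easy to make concrete: in a skewed distribution almost everyone can sit above the mean. A toy sketch with invented scores:

```python
# 100 professors: 5 are phenomenally bad, the other 95 are clumped together.
scores = [10.0] * 5 + [90.0] * 95  # invented performance scores

mean = sum(scores) / len(scores)           # (5*10 + 95*90) / 100 = 86.0
median = sorted(scores)[len(scores) // 2]  # 90.0

# 95 of the 100 professors score above the mean...
above_mean = sum(s > mean for s in scores)
print(above_mean)  # 95

# ...but, as always, at most half can score strictly above the median.
above_median = sum(s > median for s in scores)
print(above_median)  # 0 here, since the top 95 all tie at the median
```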

Jun 18, 2013: I adjusted the last sentence to avoid confusion.

Double dose

I highly recommend listening to this Radiolab short podcast on the story of Tsutomu Yamaguchi, who worked in Hiroshima and lived in Nagasaki in August of 1945. He survived both blasts and lived to tell about it. A remarkable fact is that there seems to be no statistically significant increase in birth defects or cancer rates among the children of Japanese atomic bomb survivors.

3D printing the brain

It is not at all clear which technology will attain “human-level” intelligence first. Robin Hanson proposes brain emulation (e.g. see here). I’ve been skeptical of emulation and am leaning towards machine learning (e.g. see here). However, given the recent technological advances in connectomics and 3D printing, brain emulation, or rather replication, might not be as distant as I thought. 3D printing is a technology to manufacture any three-dimensional object by sequentially depositing two-dimensional layers. You can find out more about it here, including how to build your own 3D printer. People now regularly use open source software to take any object they may want, slice it into two-dimensional layers, and then print it. The technology has reached the point where you can print with any material that can be squirted, including biological material (see video here). People in the field are currently gearing up to print complete organs like kidneys and the liver. It is not overly far-fetched that they could print out an entire brain in the future. Recent progress in connectomics can be tracked here. The current state of the art involves taking electron microscope images of thin slices of neural tissue. The hard part is to reassemble these 2D slices back into a 3D brain, the reverse of 3D printing. However, perhaps what we can do is 3D print the images first to obtain a faithful 3D reconstruction of the brain and then use the model to assist in the software reconstruction. If you had molecular-level image resolution, you could even try to print out a functioning brain, complete with docked synaptic vesicles ready to be released!

Nobel dilemma

Now that the Higgs boson has been discovered, the question is who gets the Nobel Prize. This will be tricky because the discovery was made by two detector teams with hundreds of scientists using the CERN LHC accelerator, involving hundreds more, and although Higgs gets the eponymous credit for the prediction, there were actually three papers published almost simultaneously on the topic, with five of the authors still alive. In fact, we only call it the Higgs boson because of a citation error by Nobel Laureate Steven Weinberg (see here). One could even argue further that the Higgs boson is really just a variant of the Goldstone boson, discovered by Yoichiro Nambu (Nobel Laureate) and Jeffrey Goldstone (not a Laureate). This is a perfect example of why, as I argued before (see here), discoveries are rarely made by three or fewer people. Whatever they decide, there will be plenty of disappointed people.

Quantifying the calorie debate

I wanted to clarify that I am not attacking the “carbs are bad” idea per se. Personally, I’m agnostic. What I was trying to do in my previous posts was to point out that the question is a quantitative one and cannot be settled by a qualitative examination of the underlying physiology. Gary Taubes would be the first to agree, and he is actively working towards testing his hypothesis experimentally. I also wanted to point out that single experiments are not definitive. The recent JAMA paper suggests that low carb diets speed up metabolism relative to low fat diets, but like many clinical experiments, the effect could have been due to a statistical anomaly or systematic error. As I have posted previously (e.g. see here), clinical and epidemiological results are as likely to be wrong (if not more) as correct. Unlike mathematics, where a theorem is either right or wrong, clinical and epidemiological results are more like imperfect snapshots of the truth. Only through a long process of accumulating evidence will the answer be revealed.
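The claim that published positive results can be as likely wrong as right can be quantified with a standard Bayesian back-of-the-envelope calculation (in the spirit of Ioannidis's "why most published research findings are false" argument). The prior and power below are assumptions chosen purely for illustration, not estimates from any particular literature:

```python
# Fraction of statistically significant findings that are actually true,
# given an assumed prior probability that a tested hypothesis is true,
# the study's statistical power, and the false-positive rate alpha.
# All numbers are illustrative assumptions.

def positive_predictive_value(prior, power, alpha):
    true_pos = prior * power            # true hypotheses correctly detected
    false_pos = (1.0 - prior) * alpha   # false hypotheses passing the test
    return true_pos / (true_pos + false_pos)

# An underpowered field testing mostly long-shot hypotheses:
ppv = positive_predictive_value(prior=0.1, power=0.35, alpha=0.05)
print(ppv)  # 0.4375: most "significant" findings are false
```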

The false dichotomy of carbs and obesity

The law of the excluded middle is one of the foundations of logic. It says that a proposition is either true or false: if a proposition is false, then its negation must be true, with no room for a middle ground in classical logic. However, one must be extremely careful when applying the law to biology, where hypotheses are generally situational and rest on many assumptions. In order to apply the law of the excluded middle, one must have only two alternatives, and this is seldom true in biology and in particular human metabolism. Gary Taubes argued quite successfully in his book Good Calories, Bad Calories that fat probably doesn’t cause heart disease and in some cases may even be beneficial. A major theme of that book was that scientists can become irrationally attached to hypotheses and willfully ignore any evidence to the contrary. He recently penned a New York Times opinion piece arguing that the medical establishment is equally misguided in asserting that salt is unhealthy. One of the hypotheses that Taubes dislikes the most is that “a calorie is a calorie”, which proposes that what you eat is not as important as how much you eat when it comes to weight gain and obesity. Taubes thinks that carbs, and especially sugar, are what make you fat (and cause heart disease). This is summarized in his Times opinion piece today, which covers the recent JAMA result that I posted about recently (see here).

It may very well be true that a calorie is not a calorie, but that still may not mean carbs are the cause of the US obesity epidemic. I’ve posted on this a few times before (e.g. see here and here), but I thought it was important enough to reiterate and simplify the points here. In short, the carbs-are-bad argument is that 1) carbs induce insulin and insulin sequesters fat, and 2) carbs are metabolically more efficient, so you burn fewer calories when you eat them compared to fat and protein. Even if this is true (and it may not all be), that still doesn’t mean that calories are unimportant. I don’t care how metabolically efficient carbs may be, you would starve to death if you only ate one sugar cube each day. Conversely, no matter how many excess calories you may burn eating fat, you will become obese if you eat two pounds of butter each day. Hence, even if a calorie is not a calorie, calories still matter. It is then a matter of degree. If you manage to burn everything you eat, then your body won’t change. This is true whether you eat a high carb or a low carb diet. Now it could be true that you end up with a different amount of body fat and weight for the same calorie intake depending on diet composition. So a plausible hypothesis for the cause of the obesity epidemic is that we switched from a high fat diet to a low fat diet and everyone became fatter as a result. This is something that I’m planning to test using the same data that we used to show that the increase in food production is sufficient to explain the obesity epidemic. Ultimately though, the brain is what decides how much we eat, and one of the biggest things we don’t understand is how diet composition affects food intake. It could be that low carb diets do make you thinner, but the reason is that we tend to eat less when we’re on them.
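The "calories still matter" point can be sketched with a minimal energy-balance model: weight drifts until expenditure, which grows with body weight, matches intake, regardless of diet composition. This is a toy linear model with invented parameters (the 7700 kcal/kg figure is a common rule of thumb), not the model from the food-production work:

```python
# Toy energy-balance model: daily weight change is the energy imbalance
# divided by the energy density of body tissue, and expenditure rises
# linearly with weight. Parameter values are invented for illustration.

RHO = 7700.0  # kcal stored per kg of body tissue (rule-of-thumb value)
EPS = 25.0    # kcal/day of expenditure per kg of body weight (assumed)

def simulate_weight(intake_kcal_per_day, w0=70.0, days=3650):
    """Body weight (kg) after `days` of constant daily intake."""
    w = w0
    for _ in range(days):
        w += (intake_kcal_per_day - EPS * w) / RHO
    return w

# Two constant intakes, composition ignored: the steady state is set by
# calories alone, at intake / EPS.
w_low  = simulate_weight(1750.0)  # settles near 1750 / 25 = 70 kg
w_high = simulate_weight(2000.0)  # settles near 2000 / 25 = 80 kg
print(round(w_low), round(w_high))  # 70 80
```

A composition effect, in this caricature, would show up as a different effective EPS or RHO for different diets, shifting the steady state for the same intake, which is exactly the kind of quantitative question the text says cannot be settled qualitatively.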

2012-7-2: changed fat to carb in the last sentence.