The liquidity trap

The monetary base (i.e. currency in circulation plus bank reserves) has risen dramatically since the financial crisis and ensuing recession.

[Figure: FRED graph of the US monetary base]

Immediately following the plunge in the economy in 2008, credit markets seized up and no one could secure loans. The immediate response of the US Federal Reserve was to lower the interest rate it charges large banks. Between January and December of 2008, the Fed discount rate dropped from around 4% to near zero, but the economy kept on tanking. The next move was to use unconventional monetary policy. The Fed implemented several programs of quantitative easing, in which it bought bonds of all sorts. When it does so, it creates money out of thin air and trades it for the bonds. This increases the money supply and is how the Fed “prints money.”

In the quantity theory of money, increasing the money supply should do nothing more than increase prices, and people have been screaming about looming inflation for the past five years. However, inflation has remained remarkably low. The famous bond trader Bill Gross of Pimco essentially lost his job by betting on inflation and losing a lot of money. Keynesian theory predicts that increasing the money supply can cause a short-term surge in production because it takes time for prices to adjust (sticky prices), but not when interest rates are at zero (the zero lower bound). This situation is called a liquidity trap, and in it there will be neither economic stimulus nor inflation. The reason is spelled out in the IS-LM model, invented by John Hicks to quantify Keynes’s theory. The Khan Academy actually has a nice set of tutorials, too. The idea is quite simple once you penetrate the economics jargon.
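In symbols, the quantity theory is usually summarized by the equation of exchange (a standard textbook identity, not spelled out in the original post):

M V = P Y

where M is the money supply, V is the velocity of money, P is the price level, and Y is real output. If V and Y are roughly constant, any increase in M must show up as an increase in P, which is the inflation that never arrived.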

The IS-LM model looks at the relationship between the interest rate r and aggregate output or income (Y). It’s a very high-level macroeconomic model of the entire economy. Even Hicks himself considered it to be just a toy model, but it can give some very useful insights. Much of the second half of the twentieth century was devoted to providing a microeconomic basis for macroeconomics in terms of interacting agents (microfoundations), either to support Keynesian models like IS-LM (New Keynesian models) or to refute them (Real Business Cycle models). In many ways this tension between effective high-level models and more detailed microscopic models mirrors that in biology (although it is much less contentious in biology). My take is that which model is useful depends on what question you are asking. When it comes to macroeconomics, simple effective models make sense to me.

The IS-LM model is analogous to the supply-demand model of microeconomics, where the price and quantity of a product are set by the competing interests of consumers and producers. Supply increases with increasing price while demand decreases, and the equilibrium is given by the intersection of the two curves. Instead of supply and demand curves, in the IS-LM model we have an Investment-Savings curve and a Liquidity-Preference-Money-supply curve. The IS curve specifies Y as a decreasing function of the interest rate. The rationale is that when interest rates are low, there will be more borrowing, spending, and investment, and hence more goods and services will be made and sold, which increases Y. In the LM curve, the interest rate is an increasing function of Y, because as economic activity increases there will be a greater demand for money, and this will allow banks to charge more for money (i.e. raise interest rates). The model shows how government or central bank intervention can increase Y. Increased government spending will shift the IS curve to the right and thus increase both Y and the interest rate. It is also argued that as Y increases, employment will increase. Here is the figure from Wikipedia:

[Figure: IS-LM diagram (Wikipedia)]

Likewise, increasing the money supply amounts to shifting the LM curve to the right and this also increases Y and lowers interest rates. Increasing the money supply thus increases price levels as expected.
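For concreteness, one standard textbook way to write the two curves (my notation, not from the original post) is

Y = C(Y) + I(r) + G \qquad \mbox{(IS)}

M/P = L(Y, r) \qquad \mbox{(LM)}

where C is consumption, I is investment (decreasing in r), G is government spending, M/P is the real money supply, and L is money demand (increasing in Y, decreasing in r). Raising G shifts the IS curve to the right, and raising M shifts the LM curve to the right, which are exactly the two interventions described above.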

A liquidity trap occurs when, instead of the picture above, GDP is so low that we have a situation that looks like this (from Wikipedia):

[Figure: IS-LM diagram in a liquidity trap (Wikipedia)]

Nominal interest rates cannot go below zero because otherwise people would simply hold cash instead of putting it in banks. In this case, government spending can increase GDP, but increasing the money supply will do nothing. The LM curve is horizontal at the intersection with the IS curve, so sliding it rightward will do nothing to Y. This explains why the monetary base can increase fivefold and not lead to inflation or economic improvement. However, there is a way to achieve negative real interest rates, and that is to spur inflation. Thus, in the Keynesian framework, the only way to get out of a liquidity trap is to increase government spending or induce inflation.

The IS-LM model is criticized for many things, one being that it doesn’t take dynamics into account. In economics, dynamics are termed inter-temporal effects, which is what New Keynesian models incorporate (e.g. this paper by Paul Krugman on the liquidity trap). I think that economics would be much easier to understand if it were framed in the language of ODEs and dynamical systems. The IS-LM model could then be written as

\frac{dr}{dt} = [Y - F]_+ - r

\frac{dY}{dt} = c - r - d Y

From here, we see that the IS and LM curves are just the nullclines, and monetary expansion (shifting F) will obviously do nothing when Y - F < 0, which is the condition for the liquidity trap. The course of economics might have been very different if only Poincaré had dabbled in it a century ago.
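To make this concrete, here is a minimal sketch (mine, not from the original post) that integrates these two equations with forward Euler in Julia, the platform discussed in the next post; all parameter values are purely illustrative:

```julia
# Toy IS-LM dynamics from above:  dr/dt = [Y - F]_+ - r,  dY/dt = c - r - d*Y
# Parameter values are illustrative only.

function simulate(; c = 1.0, d = 1.0, F = 2.0, r0 = 0.5, Y0 = 0.5, dt = 0.01, T = 50.0)
    r, Y = r0, Y0
    for _ in 1:round(Int, T / dt)     # forward Euler integration
        dr = max(Y - F, 0.0) - r      # LM dynamics with the zero floor on r
        dY = c - r - d * Y            # IS dynamics
        r += dt * dr
        Y += dt * dY
    end
    return (r = r, Y = Y)
end

# In the liquidity trap the equilibrium has Y = c/d = 1 < F, so r sits at zero and
# monetary expansion (increasing F, i.e. shifting the LM curve right) changes nothing.
println(simulate(F = 2.0))   # (r ≈ 0, Y ≈ 1)
println(simulate(F = 3.0))   # same equilibrium despite the LM shift

# Away from the trap (F small enough that the equilibrium has Y > F), F does matter.
println(simulate(F = 0.5))   # (r ≈ 0.25, Y ≈ 0.75)
```

Nothing deep is happening here; it just makes the nullcline picture explicit: below Y = F the LM nullcline is pinned to r = 0, and only moving the IS nullcline (increasing c, i.e. spending) changes the equilibrium output.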

2014-12-29: Fixed some typos

Code platform update

It’s a week away from 2015, and I have transitioned completely away from Matlab. Julia is winning the platform attention battle. It is very easy to code in and it is very fast. I just haven’t gotten around to learning Python, much less pyDSTool (sorry, Rob). I kind of find the syntax of Python (with all the periods between words) annoying. Wally Xie and I have also been trying to implement some of our MCMC runs in Stan, but we have had trouble making it work. Our model requires integrating ODEs, and the ODE solutions from Stan (using our own solver) do not match our Julia code or the gold standard, XPP. Maybe we are missing something obvious in the Stan syntax, but our code is really very simple. Thus, we are going back to doing our Bayesian posterior estimates in Julia. However, I do plan to revisit Stan if they (or we) can write a debugger for it.
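For readers unfamiliar with the workflow being described, here is a bare-bones sketch (entirely generic, not our model, data, or code) of what a Bayesian posterior estimate for an ODE model looks like in Julia: a random-walk Metropolis sampler whose likelihood requires integrating an ODE at every step.

```julia
# Generic sketch: random-walk Metropolis for the decay rate k of dx/dt = -k*x.
# The model, data, prior, and tuning constants are all made up for illustration.

# Forward-Euler solution of dx/dt = -k*x evaluated at observation times ts
function simulate(k, x0, ts; dt = 1e-3)
    x, t, out = x0, 0.0, Float64[]
    for tend in ts
        while t < tend
            x += dt * (-k * x)
            t += dt
        end
        push!(out, x)
    end
    return out
end

# Gaussian log-likelihood of the data given k (flat prior on k > 0)
loglike(k, ts, data; x0 = 1.0, sigma = 0.05) =
    k > 0 ? -sum((simulate(k, x0, ts) .- data) .^ 2) / (2 * sigma^2) : -Inf

function metropolis(ts, data; k0 = 0.5, nsteps = 20_000, step = 0.05)
    k, lp = k0, loglike(k0, ts, data)
    chain = Float64[]
    for _ in 1:nsteps
        kp = k + step * randn()          # propose
        lpp = loglike(kp, ts, data)
        if log(rand()) < lpp - lp        # accept or reject
            k, lp = kp, lpp
        end
        push!(chain, k)
    end
    return chain
end

ts = [0.5, 1.0, 1.5, 2.0]
data = simulate(1.2, 1.0, ts) .+ 0.05 .* randn(length(ts))   # synthetic data, true k = 1.2
chain = metropolis(ts, data)
half = div(length(chain), 2)                                  # discard burn-in
println("posterior mean of k ≈ ", sum(chain[half+1:end]) / (length(chain) - half))
```

This is far cruder than what Stan does (no gradients, no adaptive proposals, a fixed-step Euler integrator), but every piece is visible and easy to debug, which is the appeal when a black-box fit is misbehaving.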

New paper in eLife

Kinetic competition during the transcription cycle results in stochastic RNA processing

Matthew L Ferguson, Valeria de Turris, Murali Palangat, Carson C Chow, Daniel R Larson

Abstract

Synthesis of mRNA in eukaryotes involves the coordinated action of many enzymatic processes, including initiation, elongation, splicing, and cleavage. Kinetic competition between these processes has been proposed to determine RNA fate, yet such coupling has never been observed in vivo on single transcripts. In this study, we use dual-color single-molecule RNA imaging in living human cells to construct a complete kinetic profile of transcription and splicing of the β-globin gene. We find that kinetic competition results in multiple competing pathways for pre-mRNA splicing. Splicing of the terminal intron occurs stochastically both before and after transcript release, indicating there is not a strict quality control checkpoint. The majority of pre-mRNAs are spliced after release, while diffusing away from the site of transcription. A single missense point mutation (S34F) in the essential splicing factor U2AF1 which occurs in human cancers perturbs this kinetic balance and defers splicing to occur entirely post-release.

DOI: http://dx.doi.org/10.7554/eLife.03939.001

Ideas on CBC radio


One of the most intellectually stimulating radio shows (and podcasts) is Ideas with Paul Kennedy on CBC radio. It basically covers all topics. Many of the shows span several hour-long segments. One inspiring show I highly recommend is devoted to landscape architect Cornelia Hahn Oberlander. She was a pioneer in green and sustainable architecture. She is also still skiing at age 93!

NIH Stadtman Investigator

The US National Institutes of Health is divided into an Extramural Program (EP), where scientists in universities and research labs apply for grants, and an Intramural Program (IP), where investigators such as myself are provided with a budget to do research without having to write grants. Intramural investigators are reviewed fairly rigorously every four years, which affects budgets for the next four years, but this is less stressful than trying to run a lab on NIH grants. The difference between the funding models is particularly salient in the face of budget cuts, because for the IP a 10% cut is a 10% cut, whereas for the EP it means that 10% fewer grants are funded. When a lab cannot renew a grant, people lose their jobs. This problem is further exacerbated by medical schools loading up on “soft money” positions, where researchers must pay their own salaries from grants. The institutions also extract fairly large indirect costs from these grants, so in essence the investigators write grants both to pay their salaries and to fill university coffers. I often nervously joke that since the IP is about 10% of the NIH budget, an easy way to implement a 10% budget cut would be to eliminate the IP.

However, I think there is value in having something like the IP, where people have the financial security to take some risks. It is the closest thing we have these days to the old Bell Labs, where the transistor, information theory, C, and Unix were invented. The IP has produced 18 Nobel Prizes and can be credited with breaking the genetic code (Marshall Nirenberg), the discovery of fluoride to prevent tooth decay, lithium for bipolar disorder, and vaccines against multiple diseases (see here for a list of past accomplishments). What the IP needs to ensure its survival is a more rigorous and transparent procedure for entry, one in which the EP participates. An IP position should be treated like a lifetime grant to which anyone at any stage of their career can apply. Not everyone may want to be here. Research groups are generally smaller, and there are lots of rules and regulations to deal with, particularly for travel. But if someone just wants to close their door and do high-risk, high-reward research, this is a pretty good place to be, and they should get a shot at it.

The Stadtman Tenure-track Investigator program is a partial implementation of this idea. For the past five years, the IP has conducted institute-wide searches to identify young talent in a broad set of fields. I am co-chair of the Computational Biology search this year. We have invited five candidates to a “Stadtman Symposium”, which will be held tomorrow at NIH. Details are here, along with all the symposia. Candidates who strike the interest of individual scientific directors of the various institutes will be invited back for a more traditional interview. Most of the hires at NIH over the past five years have been through the Stadtman process. I think this has been a good idea and has brought some truly exceptional people to the IP. What I would do to make it even more transparent is to open up the search to people at all stages of their careers and to have EP people participate in the searches and the eventual selection of the investigators.


Race against the machine

One of my favourite museums is the National Palace Museum (Gu Gong) in Taipei, Taiwan. It houses part of the Chinese imperial collection, which was taken to Taiwan in 1948 during the Chinese civil war by Chiang Kai-shek. Beijing has its own version, but Chiang took the good stuff. He wasn’t much of a leader or military mind, but he did know good art. When I view the incredible objects in that museum and others, I am somewhat saddened that the skill and know-how required to make such beautiful things either no longer exists or is rapidly vanishing. This loss of skill is apparent just walking around American cities, let alone those of Europe and Asia. The stone masons who carved the wonderful details on the Wrigley Building in Chicago are all gone, which brings me to this moving story about passing the exceedingly stringent test to become a London cabbie (story here).

In order to be an official London black cab driver, you must know how to get between any two points in London as efficiently as possible. Aspiring cabbies ride around London on scooters memorizing every possible landmark a passenger might ask to be dropped off at, and they often take years to attain the mastery required to pass the test. Neuroimaging has found that their hippocampus, where memories are thought to be formed, is larger than normal, and it even gets larger as they study. The man profiled in the story quit his job and studied full-time for three years to pass! Currently, cabbies can outperform GPS and Google Maps (I’ve been led astray many a time by Google Maps), but it’s only a matter of time before that changes. I hope that the cabbie tradition lives on after that day, just as I hope that stone masons make a comeback.

Crawford Prize

The SIAM activity group on dynamical systems is seeking nominations for the J.D. Crawford Prize. J.D. was a marvelous applied mathematician and theoretical physicist who tragically died in his forties, the day before I started my job at the University of Pittsburgh. The deadline is November 15. Thus far, we (I’m on the committee) haven’t received many nominations, so the odds are good. Please give us more work and send in your nominations. The information on where to send them is here.

How does the cortex compute?

Gary Marcus, Adam Marblestone, and Thomas Dean have an opinion piece in Science this week challenging the notion of the “canonical cortical circuit”. They have a longer and open version here. Their claim is that the cortex is probably doing a variety of different computations, which they list in their longer paper. The piece has prompted responses by a number of people including Terry Sejnowski and Stephen Grossberg on the connectionist listserv (Check the November archive here).

What’s wrong with neuroscience

Here is a cute parable in Frontiers in Neuroscience from cognitive scientist Joshua Brown at Indiana University. It mirrors a lot of what I’ve been saying for the past few years:


The tale of the neuroscientists and the computer:  Why mechanistic theory matters

http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00349/full

A little over a decade ago, a biologist asked the question “Can a biologist fix a radio?” (Lazebnik, 2002). That question framed an amusing yet profound discussion of which methods are most appropriate to understand the inner workings of a system, such as a radio. For the engineer, the answer is straightforward: you trace out the transistors, resistors, capacitors etc., and then draw an electrical circuit diagram. At that point you have understood how the radio works and have sufficient information to reproduce its function. For the biologist, as Lazebnik suggests, the answer is more complicated. You first get a hundred radios, snip out one transistor in each, and observe what happens. Perhaps the radio will make a peculiar buzzing noise that is statistically significant across the population of radios, which indicates that the transistor is necessary to make the sound normal. Or perhaps we should snip out a resistor, and then homogenize it to find out the relative composition of silicon, carbon, etc. We might find that certain compositions correlate with louder volumes, for example, or that if we modify the composition, the radio volume decreases. In the end, we might draw a kind of neat box-and-arrow diagram, in which the antenna feeds to the circuit board, and the circuit board feeds to the speaker, and the microphone feeds to the recording circuit, and so on, based on these empirical studies. The only problem is that this does not actually show how the radio works, at least not in any way that would allow us to reproduce the function of the radio given the diagram. As Lazebnik argues, even though we could multiply experiments to add pieces of the diagram, we still won’t really understand how the radio works. To paraphrase Feynman, if we cannot recreate it, then perhaps we have not understood it (Eliasmith and Trujillo, 2014; Hawking, 2001).

Lazebnik’s argument should not be construed to disparage biological research in general. There are abundant examples of how molecular biology has led to breakthroughs, including many if not all of the pharmaceuticals currently on the market. Likewise, research in psychology has provided countless insights that have led to useful interventions, for instance in cognitive behavioral therapy (Rothbaum et al., 2000). These are valuable ends in and of themselves. Still, are we missing greater breakthroughs by not asking the right questions that would illuminate the larger picture? Within the fields of systems, cognitive, and behavioral neuroscience in particular, I fear we are in danger of losing the meaning of the Question “how does it work”? As the saying goes, if you have a hammer, everything starts to look like a nail. Having been trained in engineering as well as neuroscience and psychology, I find all of the methods of these disciplines useful. Still, many researchers are especially well-trained in psychology, and so the research questions focus predominantly on understanding which brain regions carry out which psychological or cognitive functions, following the established paradigms of psychological research. This has resulted in the question being often reframed as “what brain regions are active during what psychological processes”, or the more sophisticated “what networks are active”, instead of “what mechanisms are necessary to reproduce the essential cognitive functions and activity patterns in the system.” To illustrate the significance of this difference, consider a computer. How does it work?

**The Tale**

Once upon a time, a group of neuroscientists happened upon a computer (Carandini, 2012). Not knowing how it worked, they each decided to find out how it sensed a variety of inputs and generated the sophisticated output seen on its display. The EEG researcher quickly went to work, putting an EEG cap on the motherboard and measuring voltages at various points all over it, including on the outer case for a reference point. She found that when the hard disk was accessed, the disk controller showed higher voltages on average, and especially more power in the higher frequency bands. When there was a lot of computation, a lot of activity was seen around the CPU. Furthermore, the CPU showed increased activity in a way that is time-locked to computational demands. “See here,” the researcher declared, “we now have a fairly temporally precise picture of which regions are active, and with what frequency spectra.” But has she really understood how the computer works?

Next, the enterprising physicist and cognitive neuroscientist came along. “We don’t have enough spatial resolution to see inside the computer,” they said. So they developed a new imaging technique by which activity can be measured, called the Metabolic Radiation Imaging (MRI) camera, which now measures the heat (infrared) given off by each part of the computer in the course of its operations. At first, they found simply that lots of math operations lead to heat given off by certain parts of the CPU, and that memory storage involved the RAM, and that file operations engaged the hard disk. A flurry of papers followed, showing that the CPU and other areas are activated by a variety of applications such as word-processing, speech recognition, game play, display updating, storing new memories, retrieving from memory, etc.

Eventually, the MRI researchers gained a crucial insight, namely that none of these components can be understood properly in isolation; they must understand the network. Now the field shifts, and they begin to look at interactions among regions. Before long, a series of high profile papers emerge showing that file access does not just involve the disks. It involves a network of regions including the CPU, the RAM, the disk controller, and the disk. They know this because when they experimentally increase the file access, all of these regions show correlated increases in activity. Next, they find that the CPU is a kind of hub region, because its activity at various times correlates with activity in other regions, such as the display adapter, the disk controller, the RAM, and the USB ports, depending on what task they require the computer to perform.

Next, one of the MRI researchers has the further insight to study the computer while it is idle. He finds that there is a network involving the CPU, the memory, and the hard disk, as (unbeknownst to them) the idle computer occasionally swaps virtual memory on and off of the disk and monitors its internal temperature. This resting network is slightly different across different computers in a way that correlates with their processor speed, memory capacity, etc., and thus it is possible to predict various capacities and properties of a given computer by measuring its activity pattern when idle. Another flurry of publications results. In this way, the neuroscientists continue to refine their understanding of the network interactions among parts of the computer. They can in fact use these developments to diagnose computer problems. After studying 25 normal computers and comparing them against 25 computers with broken disk controllers, they find that the connectivity between the CPU and the disk controller is reduced in those with broken disk controllers. This allows them to use MRI to diagnose other computers with broken disk controllers. They conclude that the disk controller plays a key role in mediating disk access, and this is confirmed with a statistical mediation analysis. Someone even develops the technique of Directional Trunk Imaging (DTI) to characterize the structure of the ribbon cables (fiber tract) from the disk controller to the hard disk, and the results match the functional correlations between the hard disk and disk controller. But for all this, have they really understood how the computer works?

The neurophysiologist spoke up. “Listen here”, he said. “You have found the larger patterns, but you don’t know what the individual circuits are doing.” He then probes individual circuit points within the computer, measuring the time course of the voltage. After meticulously advancing a very fine electrode in 10 micron increments through the hard material (dura mater) covering the CPU, he finds a voltage. The particular region shows brief “bursts” of positive voltage when the CPU is carrying out math operations. As this is the math co-processor unit (unbeknownst to the neurophysiologist), the particular circuit path is only active when a certain bit of a floating point representation is active. With careful observation, the neurophysiologist identifies this “cell” as responding stochastically when certain numbers are presented for computation. The cell therefore has a relatively broad but weak receptive field for certain numbers. Similar investigations of nearby regions of the CPU yield similar results, while antidromic stimulation reveals inputs from related number-representing regions. In the end, the neurophysiologist concludes that the cells in this particular CPU region have receptive fields that respond to different kinds of numbers, so this must be a number representation area.

Finally the neuropsychologist comes along. She argues (quite reasonably) that despite all of these findings of network interactions and voltage signals, we cannot infer that a given region is necessary without lesion studies. The neuropsychologist then gathers a hundred computers that have had hammer blows to various parts of the motherboard, extension cards, and disks. After testing their abilities extensively, she carefully selects just the few that have a specific problem with the video output. She finds that among computers that don’t display video properly, there is an overlapping area of damage to the video card. This means of course that the video card is necessary for proper video monitor functioning. Other similar discoveries follow regarding the hard disks and the USB ports, and now we have a map of which regions are necessary for various functions. But for all of this, have the neuroscientists really understood how the computer works?

**The Moral**

As the above tale illustrates, despite all of our current sophisticated methods, we in neuroscience are still in a kind of early stage of scientific endeavor; we continue to discover many effects but lack a proportionally strong standard model for understanding how they all derive from mechanistic principles. There are nonetheless many individual mathematical and computational neural models. The Hodgkin-Huxley equations (Hodgkin and Huxley, 1952), Integrate-and-fire model (Izhikevich, 2003), Genesis (Bower and Beeman, 1994), SPAUN (Eliasmith et al., 2012), and Blue Brain project (Markram, 2006) are only a few examples of the models, modeling toolkits, and frameworks available, besides many others more focused on particular phenomena. Still, there are many different kinds of neuroscience models, and even many different frameworks for modeling. This means that there is no one theoretical lingua franca against which to evaluate empirical results, or to generate new predictions. Instead, there is a patchwork of models that treat some phenomena, and large gaps where there are no models relevant to existing phenomena. The moral of the story is not that the brain is a computer. The moral of the story is twofold: first, that we sorely need a foundational mechanistic, computational framework to understand how the elements of the brain work together to form functional units and ultimately generate the complex cognitive behaviors we study. Second, it is not enough for models to exist—their premises and implications must be understood by those on the front lines of empirical research.

**The Path Forward**

A more unified model shared by the community is not out of reach for neuroscience. Such exists in physics (e.g. the standard model), engineering (e.g. circuit theory), and chemistry. To move forward, we need to consider placing a similar level of value on theoretical neuroscience as for example the field of physics places on theoretical physics. We need to train neuroscientists and psychologists early in their careers in not just statistics, but also in mathematical and computational modeling, as well as dynamical systems theory and even engineering. Computational theories exist (Marr, 1982), and empirical neuroscience is advancing, but we need to develop the relationships between them. This is not to say that all neuroscientists should spend their time building computational models. Rather, every neuroscientist should at least possess literacy in modeling as no less important than, for example, anatomy. Our graduate programs generally need improvement on this front. For faculty, if one is in a soft money position or on the tenure clock and cannot afford the time to learn or develop theories, then why not collaborate with someone who can? If we really care about the question of how the brain works, we must not delude ourselves into thinking that simply collecting more empirical results will automatically tell us how the brain works any more than measuring the heat coming from computer parts will tell us how the computer works. Instead, our experiments should address the questions of what mechanisms might account for an effect, and how to test and falsify specific mechanistic hypotheses (Platt, 1964).

Selection of the week

The legendary twentieth-century violinist David Oistrakh playing Claude Debussy’s Clair de Lune, from over 50 years ago. Unfortunately, I can’t listen to this piece without being reminded of the 2001 film Ocean’s Eleven. It wasn’t a bad film per se, but I certainly don’t want it to be associated with one of the iconic pieces of the classical repertoire.


The Ebola response

The real failure of the Ebola response is not that a physician went bowling after returning from West Africa but that there are not more doctors over there containing the epidemic where containment is needed. Infected patients do not shed virus until they become symptomatic, and the virus is transmitted through bodily fluids. The New York physician monitored his temperature daily and reported immediately to a designated Ebola hospital the moment he detected a high fever. We should not be scapegoating physicians who are trying to make a real difference in containing this outbreak, thereby protecting the rest of the world. The current outbreak was identified in the spring of 2014, but there was no international response until late summer. We know how to contain Ebola – identify patients and isolate them – and this is what we should be doing instead of making emotional and unhelpful policy decisions.