## Selection of the week

February 13, 2015

Yo-Yo Ma and Itzhak Perlman play Antonin Dvorak’s Humoresque in G-flat major with Seiji Ozawa and the Boston Symphony Orchestra.

## Plants: from roots to riches

February 11, 2015

I highly recommend this podcast series from BBC on the history and science of plants, narrated by Kathy Willis, director of science at Kew Gardens. I’ve been listening to it through podcasts of The Science Show on ABC radio.

## The tragic life of Walter Pitts

February 9, 2015

Everyone in computational neuroscience knows about the McCulloch-Pitts neuron model, which forms the foundation of neural network theory. However, I never knew anything about Warren McCulloch or Walter Pitts until I read this very interesting article in Nautilus. I had no idea that Pitts was a completely self-taught genius who impressed the likes of Bertrand Russell, Norbert Wiener, and John von Neumann but was also a self-destructive alcoholic. One thing the article nicely conveys is the camaraderie and joie de vivre that intellectuals experienced in the past. Somehow that spirit seems missing now.

## Open source software for math and science

February 8, 2015

Here is a list of open source software that you may find useful. Some I use almost every day, some I have not yet used, and some may be so ubiquitous that you have forgotten it is software.

1. XPP/XPPAUT. Bard Ermentrout wrote XPP in the 1980’s as a dynamical systems tool for himself. It’s now the de facto tool for the Snowbird community.  I still find it to be the easiest and fastest way to simulate and visualize differential equations.  It includes the equally excellent bifurcation continuation software tool AUTO originally written by Eusebius Doedel with contributions from a who’s who list of mathematicians.  XPP is also available as an iPad and iPhone App.

2. Julia. I only learned about Julia this spring and now I use it for basically anything I used to use Matlab for. Its syntax is very similar to Matlab's and it's very fast. I think it is quickly gaining a large following and may someday be as comprehensive as Python.

3. Python. Python often seems more like a way of life than a software tool. I would probably be using it if not for Julia and the fact that Julia is faster. Python has packages for everything: SciPy and NumPy for scientific computing, Pandas for statistics, Matplotlib for making graphs, and many more that I don't yet know about. I must confess that I still don't know my way around Python, but my fellows all use it.

4. R. For statistics, look no further than R, which is what academic statisticians use. It's big in Big Data. So big that I heard Microsoft is planning to write a wrapper for it. I also heard that billionaire mathematician James Simons's hedge fund Renaissance Technologies uses it. For Bayesian inference there is now Stan, which implements Hamiltonian Monte Carlo. We tried using it for one of our projects and had trouble getting it to work, but it's improving very fast.

5. AMS-LaTeX. The great computer scientist Donald Knuth wrote the typesetting language TeX in 1978 and changed scientific publication forever. If you have ever struggled to put equations into MS Word, you'll realize what a genius Knuth is. Still, TeX was somewhat technical, so Leslie Lamport created LaTeX as a simplified interface to TeX with built-in environments for things that are commonly used. AMS-LaTeX is a set of extensions to LaTeX that includes commands for any mathematical symbol you'll ever need. It also has very nice equation and matrix alignment tools.

6. Maxima. Before Mathematica and Maple there was Macsyma, a symbolic mathematics system developed over many years at MIT starting in the 60's. It was written in the programming language Lisp (another great open source tool, though one I have never used) and was licensed by MIT to a company called Symbolics that made dedicated Lisp machines that ran Macsyma. My thesis advisor at MIT bought one of these machines (I think it cost him something like 20 thousand dollars, which was a lot of money back then) and I used it for my thesis. I really loved Macsyma and got quite adept at it. However, as you can imagine, the Symbolics business plan didn't pan out and Macsyma languished after the company failed. After many trials and tribulations, though, Macsyma was reborn as the open source tool Maxima, and it's great. I've been running wxMaxima and it can do everything I ever needed Mathematica for, with the bonus that I don't have to find and re-enter my license number every few months.

7. OpenOffice. I find it reprehensible that scientific journals force me to submit my papers in Microsoft Word. But MS Office is a monopoly and all my collaborators use it. Data always comes to me in Excel and talks are in PowerPoint. For my talks I use Apple Keynote, which is not open source. However, Apple likes to completely overhaul its software, so my old talks are not even compatible with the most recent version, which I also dislike. The reason I went to Keynote is that I could embed PDFs of equations made in LaTeXiT (donationware), but the new version makes this less convenient. PDFs looked terrible in PowerPoint a decade ago; I have no idea whether that has changed. I have flirted with OpenOffice for many years, but it was never quite 100% compatible with MS Office, so I could never fully dispense with Word. However, in my push to open source, I may just write my next talk in OpenOffice.

8. Plink. The standard GWAS analysis tool is Plink, originally written by Shaun Purcell. It's nice but kind of slow for some computations and was not being actively updated. It also couldn't do some of the calculations we wanted. So in stepped my collaborator Chris Chang, who took it upon himself to write a software tool that could do all the calculations we needed. His code was so fast and good that we started asking him to add more and more to it. Eventually, it did almost everything that Plink and GCTA (a tool for estimating heritability) could do, so he asked Purcell if he could just call it Plink. It's currently called Plink 1.9.

9. C/C++. We tend to forget that computer languages like C, Java, JavaScript, Ruby, etc. are all open source software tools.

10. Inkscape is a very nice drawing program, an open source Adobe Illustrator if you will.

11. GNU Project. Computer scientist Richard Stallman more or less invented the concept of free and open source software. He started the Free Software Foundation and the GNU Project, which includes GNU/Linux, the editor Emacs, and gnuplot, among many other things.
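
As a footnote to item 3, here is a minimal taste of the scientific Python stack. This sketch is entirely my own illustration, not taken from any package's documentation: NumPy for fast arrays, SciPy for numerical routines, and pandas for labeled tables.

```python
# NumPy for arrays, SciPy for numerics, pandas for tabular data.
import numpy as np
import pandas as pd
from scipy.integrate import quad

# NumPy: vectorized evaluation of exp(-t) on a grid
t = np.linspace(0.0, 1.0, 101)
y = np.exp(-t)

# SciPy: numerically integrate exp(-x) over [0, 1]; exact answer is 1 - 1/e
area, err = quad(lambda x: np.exp(-x), 0.0, 1.0)

# pandas: wrap the results in a labeled table with quick summaries
df = pd.DataFrame({"t": t, "y": y})
print(round(area, 3))      # 0.632
print(df.describe())
```

Nothing here is deep; the point is how naturally the packages compose, which is a large part of why the ecosystem is so popular.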

Probably the software tools you use most that are currently free (but may not be forever) are the browser and email. People forget how much these two ubiquitous things have completely changed our lives.  When was the last time you went to the library or wrote a letter in ink?

## Selection of the week

February 6, 2015

Yundi Li playing Frederic Chopin’s famous Fantaisie-Impromptu, Op. 66.

## Selection of the week

January 30, 2015

Here is “Siegfried’s Death and Funeral March” from Richard Wagner’s opera Gotterdammerung of the Ring Cycle played by the London Philharmonic conducted by Klaus Tennstedt.  This piece was used to great effect by director John Boorman in the movie Excalibur.

## Selection of the week

January 23, 2015

The twentieth century’s greatest pianist Vladimir Horowitz (arguments?) plays Domenico Scarlatti’s Keyboard Sonata in B minor, K. 87.  Baroque composers Scarlatti, George Frideric Handel, and JS Bach were all born in 1685.

## The demise of the American cappuccino

January 19, 2015

When I was a postdoc at BU in the nineties, I used to go to a cafe on Commonwealth Ave just down the street from my office on Cummington Street. I don't remember the name of the place, but I do remember getting a cappuccino that looked something like this:

Now, I usually get something that looks like this:

Instead of a light, delicate layer of milk with a touch of foam floating on rich espresso, I get a lump of dry foam sitting on super acidic, burnt quasi-espresso. How did this unfortunate circumstance occur? I'm not sure, but I think it was because of Starbucks. Scaling up massively means you get what the average customer wants, or what Starbucks thinks they want. This then sets a standard, and other cafes have to follow suit because of consumer expectations. Also, making a real cappuccino takes training and a lot of practice, and there is no way Starbucks could train enough baristas. Now, I'm not an anti-Starbucks person by any means. I think it is nice that there is always a fairly nice space with free wifi on every corner, but I do miss getting a real cappuccino. I believe there is a real business opportunity out there for cafes to start offering better espresso drinks.

## Selection of the week

January 16, 2015

Here is a piece by turn-of-the-last-century British composer Samuel Coleridge-Taylor, who was named after the poet who wrote The Rime of the Ancient Mariner. Coleridge-Taylor met with some racism because he was of mixed African descent, but he achieved considerable renown before dying at the young age of 37.

## Sebastian Seung and the Connectome

January 11, 2015

The New York Times Magazine has a nice profile of theoretical neuroscientist Sebastian Seung this week. I've known Sebastian since we were graduate students in Boston in the 1980's. We were both physicists then, and both of us ended up in biology, though through completely different paths. The article focuses on his quest to map all the connections in the brain, which he terms the connectome. Near the end of the article, neuroscientist Eve Marder of Brandeis comments on the endeavor with the pithy remark that "If we want to understand the brain, the connectome is absolutely necessary and completely insufficient." The article then ends with

Seung agrees but has never seen that as an argument for abandoning the enterprise. Science progresses when its practitioners find answers — this is the way of glory — but also when they make something that future generations rely on, even if they take it for granted. That, for Seung, would be more than good enough. “Necessary,” he said, “is still a pretty strong word, right?”

Personally, I am not sure if the connectome is necessary or sufficient, although I do believe it is a worthy task. However, my hesitation is not for the reason proposed in the article, which is that we exist in a fluid world while the connectome is static. Rather, like Sebastian, I do believe that memories are stored in the connectome and that "your" connectome captures much of the essence of "you". Many years ago, the CPU on my computer died. Our IT person swapped out the CPU, and when I turned my computer back on, it was as if nothing had happened. This made me realize that everything about the computer that was important to me was stored on the hard drive. The CPU didn't matter, even though everything the computer did relied on it. I think the connectome is like the hard drive, and trying to figure out how the brain works from it is like trying to reverse engineer the CPU from the hard drive. You can certainly get clues, such as that information is stored in binary form, but recreating an entire hard drive may be neither necessary nor sufficient for figuring out how a computer works. Likewise, someday we may use the connectome to recover lost memories or treat some diseases, but we may not need it to understand how a brain works.

## Implicit bias

January 7, 2015

The most dangerous form of bias is the kind you are unaware of. Most people are not overtly racist, but many have implicit biases that can affect their decisions. In this week's New York Times, Claudia Dreifus has a conversation with Stanford psychologist Jennifer Eberhardt, who has been studying implicit bias experimentally. Among her many eye-opening findings is that convicted criminals whose faces people deem more "black" are more likely to be executed than those whose faces are deemed less so. Chris Mooney has a longer article on the same topic in Mother Jones. I highly recommend reading both articles.

## Journal Club

January 7, 2015

Here is the paper I'll be covering tomorrow in the Journal Club of the Laboratory of Biological Modeling, NIDDK:

Morphological and population genomic evidence that human faces have evolved to signal individual identity

Abstract: Facial recognition plays a key role in human interactions, and there has been great interest in understanding the evolution of human abilities for individual recognition and tracking social relationships. Individual recognition requires sufficient cognitive abilities and phenotypic diversity within a population for discrimination to be possible. Despite the importance of facial recognition in humans, the evolution of facial identity has received little attention. Here we demonstrate that faces evolved to signal individual identity under negative frequency-dependent selection. Faces show elevated phenotypic variation and lower between-trait correlations compared with other traits. Regions surrounding face-associated single nucleotide polymorphisms show elevated diversity consistent with frequency-dependent selection. Genetic variation maintained by identity signalling tends to be shared across populations and, for some loci, predates the origin of Homo sapiens. Studies of human social evolution tend to emphasize cognitive adaptations, but we show that social evolution has shaped patterns of human phenotypic and genetic diversity as well.

## Selection of the week

January 2, 2015

Generally, any European music written before the age of Vivaldi and the Baroque Era is called Early Music.  It is often performed with instruments that are no longer in use in traditional symphony orchestras such as the viola da gamba, the lute, and the recorder.  Here is a nice piece performed by the Opus 5 Early Music Ensemble.

## The liquidity trap

December 28, 2014

The monetary base (i.e. currency in circulation plus bank reserves) has risen dramatically since the financial crisis and the ensuing recession.

Immediately following the plunge in the economy in 2008, credit markets seized up and no one could secure loans. The immediate response of the US Federal Reserve was to lower the interest rate it charges large banks. Between January and December of 2008, the Fed discount rate dropped from around 4% to near zero, but the economy kept on tanking. The next move was to use unconventional monetary policy. The Fed implemented several programs of quantitative easing in which it bought bonds of all sorts. When it does so, it creates money out of thin air and trades it for bonds. This increases the money supply and is how the Fed "prints money."

In the quantity theory of money, increasing the money supply should do nothing more than increase prices, and people have been screaming about looming inflation for the past five years. However, inflation has remained remarkably low. The famous bond trader Bill Gross of Pimco essentially lost his job by betting on inflation and losing a lot of money. Keynesian theory predicts that increasing the money supply can cause a short-term surge in production because it takes time for prices to adjust (sticky prices), but not when interest rates are at zero (the zero lower bound). This situation is called a liquidity trap, in which there will be neither economic stimulus nor inflation. The reason is spelled out in the IS-LM model, invented by John Hicks to quantify Keynes's theory. Khan Academy actually has a nice set of tutorials too. The idea is quite simple once you penetrate the economics jargon.

The IS-LM model looks at the relationship between the interest rate r and the overall level of economic output Y. It's a very high-level macroeconomic model of the entire economy. Even Hicks himself considered it to be just a toy model, but it can give some very useful insights. Much of the second half of the twentieth century was devoted to providing a microeconomic basis for macroeconomics in terms of interacting agents (microfoundations), either to support Keynesian models like IS-LM (New Keynesian models) or to refute them (Real Business Cycle models). In many ways this tension between effective high-level models and more detailed microscopic models mirrors that in biology (although it is much less contentious in biology). My take is that which model is useful depends on what question you are asking. When it comes to macroeconomics, simple effective models make sense to me.

The IS-LM model is analogous to the supply-demand model of microeconomics, where the price and quantity of a product are set by the competing interests of consumers and producers: supply increases with increasing price while demand decreases, and the equilibrium is given by the intersection of the two curves. Instead of supply and demand curves, in the IS-LM model we have an Investment-Savings (IS) curve and a Liquidity-preference-Money-supply (LM) curve. The IS curve specifies Y as a decreasing function of the interest rate. The rationale is that when interest rates are low, there will be more borrowing, spending, and investment, and hence more goods and services will be made and sold, which increases Y. In the LM curve, the interest rate is an increasing function of Y, because as economic activity increases there will be a greater demand for money, and this will allow banks to charge more for it (i.e. raise interest rates). The model shows how government or central bank intervention can increase Y: increased government spending will shift the IS curve to the right and thus increase both Y and the interest rate. It is also argued that as Y increases, employment will increase. Here is the figure from Wikipedia:

Likewise, increasing the money supply amounts to shifting the LM curve to the right and this also increases Y and lowers interest rates. Increasing the money supply thus increases price levels as expected.

A liquidity trap occurs if instead of the above picture, the GDP is so low that we have a situation that looks like this (from Wikipedia):

Interest rates cannot go below zero because people would then simply hold cash instead of putting it in banks. In this case, government spending can increase GDP but increasing the money supply will do nothing. The LM curve is horizontal at its intersection with the IS curve, so sliding it rightward does nothing to Y. This explains why the monetary base can increase fivefold without leading to inflation or economic improvement. However, there is a way to achieve negative real interest rates, and that is to spur inflation. Thus, in the Keynesian framework, the only way to get out of a liquidity trap is to increase government spending or induce inflation.

The IS-LM model is criticized for many things, one being that it doesn't take dynamics into account. In economics, dynamics are termed inter-temporal effects, which is what New Keynesian models incorporate (e.g. this paper by Paul Krugman on the liquidity trap). I think economics would be much easier to understand if it were framed in the language of ODEs and dynamical systems. The IS-LM model could then be written as

$\frac{dr}{dt} = [Y - F]_+ - r$

$\frac{dY}{dt} = c - r - d Y$

From here we see that the IS-LM curves are just nullclines, and obviously monetary expansion will do nothing when $Y-F < 0$, which is the condition for the liquidity trap. The course of economics might have been very different if only Poincaré had dabbled in it a century ago.
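
These toy equations are easy to integrate directly. The sketch below is purely illustrative: the parameter values and initial conditions are my own arbitrary choices, not calibrated to anything.

```python
# Forward-Euler integration of the toy IS-LM dynamics:
#   dr/dt = [Y - F]_+ - r,   dY/dt = c - r - d*Y
# All parameter values below are arbitrary illustrative choices.

def simulate(c=1.0, d=1.0, F=2.0, T=50.0, dt=0.01):
    r, Y = 0.5, 0.5                      # arbitrary initial conditions
    for _ in range(int(T / dt)):
        dr = max(Y - F, 0.0) - r         # [Y - F]_+ vanishes in the trap
        dY = c - r - d * Y
        r, Y = r + dt * dr, Y + dt * dY
    return r, Y

# In the liquidity trap (c/d < F) the equilibrium is r = 0, Y = c/d, and
# shifting the LM curve rightward (raising F) changes nothing.
r1, Y1 = simulate(F=2.0)
r2, Y2 = simulate(F=3.0)   # "monetary expansion"
print(r1, Y1)              # r near 0, Y near 1
```

Both runs settle at exactly the same point, the nullcline intersection pinned at r = 0, which is the fivefold-monetary-base-with-no-inflation story in two lines of arithmetic.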

2014-12-29: Fixed some typos

## Selection of the week

December 26, 2014

Here is the incomparable Julia Fischer playing Vivaldi’s Winter from the Four Seasons (with bonus encore at the end).

## Code platform update

December 23, 2014

It’s a week away from 2015, and I have transitioned completely away from Matlab. Julia is winning the platform attention battle. It is very easy to code in and it is very fast. I just haven't gotten around to learning Python, much less PyDSTool (sorry Rob). I kind of find the syntax of Python (with all the periods between words) annoying. Wally Xie and I have also been trying to implement some of our MCMC runs in Stan, but we have had trouble making it work. Our model requires integrating ODEs, and the ODE solutions from Stan (using our own solver) do not match our Julia code or the gold standard XPP. Maybe we are missing something obvious in the Stan syntax, but our code is really very simple. Thus, we are going back to doing our Bayesian posterior estimates in Julia. However, I do plan to revisit Stan if they (or we) can write a debugger for it.
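
For those curious what is under Stan's hood, its workhorse is Hamiltonian Monte Carlo. Here is a toy sketch, my own illustration in Python with nothing to do with Stan's actual code, that samples a standard normal by simulating Hamiltonian dynamics with a leapfrog integrator and a Metropolis correction.

```python
import math
import random

random.seed(0)

def grad_U(q):
    # Target density: standard normal, so U(q) = q^2/2 and dU/dq = q
    return q

def hmc_step(q, step=0.2, n_leap=20):
    p = random.gauss(0.0, 1.0)                 # resample the momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * step * grad_U(q_new)
    for _ in range(n_leap - 1):
        q_new += step * p_new
        p_new -= step * grad_U(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_U(q_new)
    # Metropolis accept/reject on the total energy H = U + p^2/2
    h_old = 0.5 * q * q + 0.5 * p * p
    h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new
    return q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With this many samples the mean comes out near 0 and the variance near 1, as they should. Stan layers automatic differentiation and adaptive step-size tuning on top of this basic recipe, which is exactly where the finicky parts live.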

## New paper in eLife

December 19, 2014

Kinetic competition during the transcription cycle results in stochastic RNA processing

Matthew L Ferguson, Valeria de Turris, Murali Palangat, Carson C Chow, Daniel R Larson

Abstract

Synthesis of mRNA in eukaryotes involves the coordinated action of many enzymatic processes, including initiation, elongation, splicing, and cleavage. Kinetic competition between these processes has been proposed to determine RNA fate, yet such coupling has never been observed in vivo on single transcripts. In this study, we use dual-color single-molecule RNA imaging in living human cells to construct a complete kinetic profile of transcription and splicing of the β-globin gene. We find that kinetic competition results in multiple competing pathways for pre-mRNA splicing. Splicing of the terminal intron occurs stochastically both before and after transcript release, indicating there is not a strict quality control checkpoint. The majority of pre-mRNAs are spliced after release, while diffusing away from the site of transcription. A single missense point mutation (S34F) in the essential splicing factor U2AF1 which occurs in human cancers perturbs this kinetic balance and defers splicing to occur entirely post-release.

## Selection of the week

December 19, 2014

How about the Hallelujah Chorus from Handel’s Messiah?  Here is the Royal Choral Society.  Happy Holidays!