## Analytic continuation

February 21, 2014

I have received some skepticism about whether the sum of the natural numbers could possibly be assigned a value other than -1/12, so I will try to be more precise. I thought it would also be useful to derive the analytic continuation of the zeta function, which I will do in a future post. I will first give a simpler example to motivate the notion of analytic continuation. Consider the geometric series $1+s+s^2+s^3+\dots$. If $|s| < 1$ then we know that this series is equal to

$\frac{1}{1-s}$                (1)

Now, while the geometric series is only convergent, and thus analytic, inside the unit circle, (1) is defined everywhere in the complex plane except at $s=1$. So even though the sum doesn’t really exist outside of the domain of convergence, we can assign a number to it based on (1). For example, if we set $s=2$ we can make the assignment $1 + 2 + 4 + 8 + \dots = -1$. So again, the sum of the powers of two doesn’t really equal -1; it’s just that (1) is defined at $s=2$ and the geometric series and (1) are the same function inside the domain of convergence. Now, it is true that the analytic continuation of a function is unique. However, although -1 is the only value the analytic continuation of the geometric series can take at $s=2$, that doesn’t mean that the sum of the powers of 2 needs to be uniquely assigned to negative one, because the sum of the powers of 2 is not an analytic function. So if you could find some other series that is a function of some parameter $z$, is analytic in some domain of convergence, and happens to look like the sum of the powers of two for some value of $z$, and you can analytically continue the series to that value, then you would have another assignment.
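This is easy to see numerically. Here is a small Python sketch (not part of the original argument): inside the unit circle the partial sums match (1), while at $s=2$ the partial sums blow up even though (1) happily returns -1.

```python
# Partial sums of the geometric series vs. the closed form 1/(1-s).

def geometric_partial_sum(s, n_terms):
    """Sum 1 + s + s^2 + ... + s^(n_terms - 1)."""
    return sum(s**k for k in range(n_terms))

def closed_form(s):
    """The analytic continuation 1/(1-s), defined everywhere except s = 1."""
    return 1 / (1 - s)

# Inside the domain of convergence the two agree:
print(geometric_partial_sum(0.5, 60), closed_form(0.5))  # both ~2.0

# Outside it, the partial sums diverge while the continuation assigns -1:
print(geometric_partial_sum(2, 20))  # 2^20 - 1 = 1048575 and growing
print(closed_form(2))                # -1.0
```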

Now consider my example from the previous post. Consider the series

$\sum_{n=1}^\infty \frac{n-1}{n^{s+1}}$  (2)

This series is absolutely convergent when the real part of $s$ is greater than 1. Also note that if I set $s=-1$, I get

$\sum_{n=1}^\infty (n-1) = 0 + \sum_{n'=1}^\infty n' = 1 + 2 + 3 + \dots$

which is the sum of the natural numbers. Now, I can write (2) as

$\sum_{n=1}^\infty\left( \frac{1}{n^s}-\frac{1}{n^{s+1}}\right)$

and when the real part of s is greater than 1,  I can further write this as

$\sum_{n=1}^\infty\frac{1}{n^s}-\sum_{n=1}^\infty\frac{1}{n^{s+1}}=\zeta(s)-\zeta(s+1)$  (3)

All of these operations are perfectly fine as long as I’m in the domain of absolute convergence.  Now, as I will show in the next post, the analytic continuation of the zeta function to the negative integers is given by

$\zeta (-k) = -\frac{B_{k+1}}{k+1}$

where the $B_k$ are the Bernoulli numbers, which are defined by the Taylor expansion

$\frac{x}{e^x-1} = \sum_{n=0}^\infty B_n \frac{x^n}{n!}$   (4)

The first few Bernoulli numbers are $B_0=1, B_1=-1/2, B_2 = 1/6$. Using these in the formula for $\zeta(-k)$ gives $\zeta(-1)=-B_2/2=-1/12$. A similar derivation gives $\zeta(0)=-1/2$. Using these values in (3) at $s=-1$ then gives the desired result that the sum of the natural numbers is (also) 5/12.
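The arithmetic is easy to check by machine. The Python sketch below generates the Bernoulli numbers from the recurrence implied by (4) and works with exact rationals; the value $\zeta(0)=-1/2$ is put in directly, since its derivation is separate.

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_{n_max} from the recurrence implied by
    x/(e^x - 1) = sum_n B_n x^n/n! (this convention gives B_1 = -1/2)."""
    B = [Fraction(0)] * (n_max + 1)
    B[0] = Fraction(1)
    for m in range(1, n_max + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m))
    return B

B = bernoulli(2)          # B_0 = 1, B_1 = -1/2, B_2 = 1/6

zeta_m1 = -B[2] / 2       # zeta(-1) = -B_2/2 = -1/12
zeta_0 = Fraction(-1, 2)  # zeta(0) = -1/2, from its separate derivation

# xi(-1) = zeta(-1) - zeta(0) = -1/12 + 1/2
print(zeta_m1 - zeta_0)   # prints 5/12
```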

Now this is not to say that all assignments have the same physical value. I don’t know the details of how -1/12 is used in bosonic string theory but it is likely that the zeta function is crucial to the calculation.

## Nonuniqueness of -1/12

February 11, 2014

In the comments to my previous post, I was asked to give an example of how the sum of the natural numbers could lead to another value, so I thought it might be of general interest to more people. Consider again $S=1+2+3+4\dots$ to be the sum of the natural numbers. The video in the previous post gives a simple proof by combining divergent sums. In essence, the manipulation is doing renormalization by subtracting away infinities, and the leftover of this renormalization is -1/12. There is another video that gives the proof through analytic continuation of the Riemann zeta function

$\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$

The zeta function series is only convergent when the real part of $s$ is greater than 1. However, you can use analytic continuation to extend the zeta function to values of $s$ where the series is divergent. What this means is that the zeta function is no longer the “same sum” per se, but a version of the sum taken to a domain where it was not originally defined but smoothly (analytically) connected to the sum. Hence, the sum of the natural numbers is given by $\zeta(-1)$ and the infinite sum over ones by $\zeta(0)=\sum_{n=1}^\infty 1$. By analytic continuation, we obtain the values $\zeta(-1)=-1/12$ and $\zeta(0)=-1/2$.

Now notice that if I subtract the sum over ones from the sum over the natural numbers I still get the sum over the natural numbers, e.g.

$1+2+3+4\dots - (1+1+1+1\dots)=0+1+2+3+4\dots$.

Now, let me define a new function $\xi(s)=\zeta(s)-\zeta(s+1)$ so $\xi(-1)$ is the sum over the natural numbers and by analytic continuation $\xi(-1)=-1/12+1/2=5/12$ and thus the sum over the natural numbers is now 5/12. Again, if you try to do arithmetic with infinity, you can get almost anything. A fun exercise is to create some other examples.

## The sum of the natural numbers is -1/12?

February 10, 2014

This wonderfully entertaining video giving a proof of why the sum of the natural numbers is -1/12 has been viewed over 1.5 million times. It just shows that there is a hunger for interesting and well explained math and science content out there. Now, we all know that the sum of all the natural numbers is infinite, but the beauty (insidiousness) of infinite sums is that they can be assigned to virtually anything. The proof for this particular assignment subtracts the divergent oscillating sum $S_1=1-2+3-4+5 \dots$ from the divergent sum of the natural numbers $S = 1 + 2 + 3+4+5\dots$ to obtain $4S$. Then by similar trickery it assigns $S_1=1/4$. Solving for $S$ gives you the result $S = -1/12$. Hence, what you are essentially doing is dividing infinity by infinity, and that, as any schoolchild should know, can be anything you want. The most astounding thing to me about the video was learning that this assignment is used in string theory, which makes me wonder if the calculations would differ if I chose a different assignment.

Addendum: Terence Tao has a nice blog post on evaluating such sums.  In a “smoothed” version of the sum, it can be thought of as the “constant” in front of an asymptotic divergent term.  This constant is equivalent to the analytic continuation of the Riemann zeta function. Anyway, the -1/12 seems to be a natural way to assign a value to the divergent sum of the natural numbers.

## (Lack of) Progress in neuroscience

January 27, 2014

Here is what I just posted to the epic thread on Connectionists:

The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you’ll find that what the early greats actually did and believed differs from the current understanding.  I think it’s safe to say that computational neuroscience has not reached that level of maturity.  Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before.

The big question is why this is the case. This is really a search problem. It could be true that one of the approaches proposed in this thread, or some other existing idea, is optimal, but the opportunity cost of following it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply don’t believe enough in any one idea for all of us to pursue it. It takes a massive commitment to learn any one thing, much less everything on John Weng’s list. I don’t know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold, it was practically a blink of an eye to get from Kepler to Maxwell. Then again, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen.

http://mailman.srv.cs.cmu.edu/pipermail/connectionists/2014-January/subject.html

## Brain discussion

January 26, 2014

There is an epic discussion on the Connectionists mailing list right now. It started on Jan 23, when Juyang (John) Weng of Michigan State University criticized an announcement of the upcoming Workshop on Brain-like Computing, arguing that the workshop is really about neuron-like computing and that there is a wide gap between that and brain-like computing. Another thing he seemed peeved about was that people in the field are not fully conversant in the literature, which is true. People in neuroscience can be largely unaware of what people in robotics and cognitive science are doing and vice versa. The discussion really became lively when Jim Bower jumped in. It’s hard to summarize the entire thread, but the main themes include arguing about the worthiness of the big data approach to neuroscience, lamenting the lack of progress over the past thirty years, and debating what we should be doing.

## The non-cynical view

January 23, 2014

Read the 2014 Gates letter to get the non-cynical view of progress for the poor. Surprisingly, I’m actually on the more hopeful side of this issue. The data (e.g. see Hans Rosling) clearly show that health is improving in developing nations. We may have also reached “peak child,” in that there are more children alive today than there will ever be. Extreme poverty is being dramatically reduced. Foreign aid does seem to work. Here’s Bill:

By almost any measure, the world is better than it has ever been. People are living longer, healthier lives. Many nations that were aid recipients are now self-sufficient. You might think that such striking progress would be widely celebrated, but in fact, Melinda and I are struck by how many people think the world is getting worse. The belief that the world can’t solve extreme poverty and disease isn’t just mistaken. It is harmful. That’s why in this year’s letter we take apart some of the myths that slow down the work. The next time you hear these myths, we hope you will do the same.
- Bill Gates

## The Stephanie Event

January 14, 2014

You should read this article in Esquire about the advent of personalized cancer treatment for a heroic patient named Stephanie Lee. Here is Steve Hsu’s blog post. The cost of sequencing is almost at the point where everyone can have their normal and tumor cells completely sequenced to look for mutations, like Stephanie. The team at Mt. Sinai Hospital in New York described in the article inserted some of the mutations into a fruit fly and then checked to see what drugs killed it. The Stephanie Event was the oncology board meeting at Sinai where the treatment for Stephanie Lee’s colon cancer, which had spread to the liver, was discussed. They decided on a standard protocol but would use the individualized therapy based on the fly experiments if the standard treatments failed. The article was beautifully written, combining a compelling human story with science.

January 9, 2014

A fairly common presumption among biologists and adherents of paleo-diets is that since humans evolved on the African Savannah hundreds of thousands of years ago, we are not well adapted to the modern world. For example, Harvard biologist Daniel Lieberman has a new book out called “The Story of the Human Body” explaining through evolution why our bodies are the way they are. You can hear him speak about the book on Quirks and Quarks here. He talks about how our maladaptation to the modern world has led to widespread myopia, back pains, an obesity epidemic, and so forth.

This may all be true, but the irony is that we as a species have never been more fit and adapted from an evolutionary point of view. In evolutionary theory, fitness is measured by the number of children or grandchildren we have. Thus, the faster a population grows, the more fit, and hence adapted to its environment, it is. Since our population is growing the fastest it’s ever been (technically, we may have been fitter a few decades ago since our growth rate may actually be slowing slightly), we are the most fit we have ever been. In the developed world we certainly live longer and are healthier than we have ever been, even when you account for the steep decline in infant death rates. It is true that heart disease and cancer have increased substantially, but that is only because we (meaning the well-off in the developed world) no longer die young from infectious diseases, parasites, accidents, and violence.

One could claim that we are perfectly adapted to our modern world because we invented it. Those who live in warm houses are in better health than those sitting in a damp cave. Those who get food at the local supermarket live longer than hunter-gatherers. The average life expectancy of a sedentary individual is not different from that of an active person. One reason we have an obesity epidemic is that obesity isn’t very effective at killing people. An overweight or even obese person can live a long life. So even though I type this peering through corrective eyewear while nursing a sore back, I can confidently say I am better adapted to my environment than my ancestors were thousands of years ago.

## The Bitcoin economy

December 24, 2013

The electronic currency Bitcoin has been in the news quite a bit lately since its value has risen from about $10 a year ago to over $650 today, hitting a peak of over $1000 less than a month ago. I remember hearing Gavin Andresen, who is a Principal of the Bitcoin Virtual Currency Project (no single person or entity issues or governs Bitcoin), talk about Bitcoin on Econtalk two years ago and was astonished at how little he knew about basic economics, much less monetary policy. Paul Krugman criticized Bitcoin today in his New York Times column and Timothy Lee responded in the Washington Post.

The principle behind Bitcoin is actually quite simple. There is a master list, called the block chain, a shared ledger in which all transactions are kept. The system uses public-key cryptography: each Bitcoin owner holds a private key, which is used to sign transactions, and anyone can use the corresponding public key to verify that a signature is authentic. The community at large then validates transactions in a computationally intensive process called mining. The rewards for this work are Bitcoins, which are issued to the first computer to complete the computation. The intensive computations are integral to the system because they make it difficult for attackers to falsify a transaction. As long as honest participants control more computing power than attackers, the attackers can never perform computations fast enough to falsify a transaction. The difficulty of the computations is also adjusted so that new Bitcoins are only issued about every 10 minutes. Thus it does not matter how fast your computer is in absolute terms to mine Bitcoins, only that it is faster than everyone else’s computers. This article describes how people are creating businesses to mine Bitcoins.
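For the curious, the mining computation can be sketched in a few lines of Python. This toy version uses SHA-256 and a "leading hex zeros" target, which captures the idea of proof of work but none of Bitcoin's actual block format or difficulty rules.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so that sha256(block_data + nonce) starts with
    `difficulty` hex zeros -- a toy version of Bitcoin's proof of work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Finding a valid nonce takes ~16^difficulty hashes on average,
# but anyone can verify the answer with a single hash:
nonce = mine("Alice pays Bob 1 BTC", 4)
digest = hashlib.sha256(f"Alice pays Bob 1 BTC{nonce}".encode()).hexdigest()
print(nonce, digest)  # digest begins with "0000"
```

Raising the difficulty by one hex digit makes mining sixteen times harder while verification stays a single hash, which is how the network keeps the issuance rate roughly constant as more computing power joins.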

Krugman’s post was about the ironic connection between Keynesian fiscal stimulus and gold. Although gold has some industrial use, it is highly valued mostly because it is rare and difficult to dig up. Keynes’s theory of recessions and depressions is that there is a sudden collapse in aggregate demand, so the economy operates below capacity, leading to excess unemployment. This situation was thought not to occur in classical economics because prices and wages should fall until equilibrium is restored and the economy operates at full capacity again. However, Keynes proposed that prices and wages are “sticky” and do not adjust very quickly. His solution was for the government to increase spending to take up the shortfall in demand and return the economy to full employment. He then jokingly proposed that the government could get the private sector to do the spending by burying money, which people could privately finance digging out. He also noted that this was not that different from gold mining. Keynes’s point was that instead of wasting all that effort, the government could simply print money and give it away or spend it. Krugman also points out that Adam Smith, often held up as a paragon of conservative principles, felt that paper money was much better for a smoothly running economy than tying up resources in useless gold and silver. The connection between gold and Bitcoins is unmistakable. Both have no intrinsic value and are a terrible waste of resources. Lee feels that Krugman misunderstands Bitcoin in that the intensive computations are integral to the functioning of the system and, more importantly, that the main utility of Bitcoin is as a new form of payment network, which he feels is independent of monetary considerations.

Krugman and Lee have valid points but both are still slightly off the mark. I think we will definitely head towards some electronic monetary system in the future, but it certainly won’t be Bitcoin in its current form. However, Bitcoin, or at least something similar, will also remain. The main problem with Bitcoin, as well as gold, is that its supply is constrained. The supply of Bitcoins is designed to cap out at 21 million, with about half in circulation now. What this means is that the Bitcoin economy is subject to deflation. As the economy grows and costs fall, the price of goods denominated in Bitcoins must also fall. Andresen shockingly didn’t understand this important fact in the Econtalk podcast. The value of Bitcoins will always increase. Deflation is bad for economic growth because it encourages people to delay purchases and hoard Bitcoins. Of course, if you don’t believe in economic growth then Bitcoins might be a good thing. Ideally, you want money to be neutral, so the supply should grow along with the economy. This is why central banks target inflation around 2%. Hence, Bitcoin as it is currently designed will certainly fail as a currency and payment system, but it would not take too much effort to fix its flaws. It may simply play the role of the search engine Altavista to the eventual winner Google.

However, I think Bitcoin in its full inefficient glory and things like it will only proliferate. In our current era of high unemployment and slow growth, Bitcoin is serving as a small economic stimulus. As we get more economically efficient, fewer of us will be required for any particular sector of the economy. The only possible way to maintain full employment is to constantly invent new economic sectors. Bitcoin is economically useful because it is so completely useless.

## Symplectic Integrators

December 12, 2013

Dynamical systems can be divided into two basic types: conservative and dissipative. In biology, we almost always model dissipative systems, and thus if we want to simulate the system computationally, almost any numerical solver will do the job (unless the problem is stiff, which I’ll leave to another post). However, when simulating a conservative system, we must take care to preserve the conserved quantities, such as the energy. Here, I will give a very elementary review of symplectic integrators for numerically solving conservative systems.
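As a preview of why this matters, here is the simplest example in a Python sketch: the symplectic Euler method versus the ordinary explicit Euler method on a harmonic oscillator (the step size and number of steps are just illustrative choices).

```python
# Harmonic oscillator H = (p^2 + q^2)/2; exact orbits conserve the energy H.

def explicit_euler(q, p, dt):
    """Ordinary Euler: both variables updated from the old state."""
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    """Symplectic Euler: update momentum first, then position with the
    *new* momentum. This preserves the symplectic structure of the flow."""
    p = p - dt * q
    q = q + dt * p
    return q, p

def energy_after(step, n_steps, dt=0.01):
    q, p = 1.0, 0.0
    for _ in range(n_steps):
        q, p = step(q, p, dt)
    return 0.5 * (q * q + p * p)

print(energy_after(explicit_euler, 100_000))    # drifts far above 0.5
print(energy_after(symplectic_euler, 100_000))  # stays near 0.5
```

The explicit method inflates the energy by a factor of $(1+dt^2)$ every step, so the orbit spirals outward, while the symplectic method keeps the energy bounded for arbitrarily long times.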

## What counts as science?

December 10, 2013

Ever since the financial crisis of 2008 there has been some discussion about whether or not economics is a science. Some, like Russ Roberts of Econtalk, channelling Friedrich Hayek, do not believe that economics is a science. They think it’s more like history where we come up with descriptive narratives that cannot be proven. I think that one thing that could clarify this debate is to separate the goal of a field from its practice. A field could be a science although its practice is not scientific.

To me what defines a science is whether or not it strives to ask questions that have unambiguous answers. In that sense, most of economics is a science. We may never know what caused the financial crisis of 2008 but that is still a scientific question. Now, it is quite plausible that the crisis of 2008 had no particular cause just like there is no particular cause for a winter storm. It could have been just the result of a collection of random events but knowing that would be extremely useful. In this sense, parts of history can also be considered to be a science. I do agree that the practice of economics and history are not always scientific and can never be as scientific as a field like physics because controlled experiments usually cannot be performed. We will likely never find the answer for what caused World War I but there certainly was a set of conditions and events that led to it.

There are parts of economics that are clearly not science, such as what constitutes a fair system. Likewise in history, questions regarding who was the best president or military mind are certainly not science. Like art and ethics, these questions depend on value systems. I would also stress that a big part of science is figuring out what questions can be asked. If it is true that recessions are random like winter storms, then the question of when the next crisis will hit does not have an answer. There may be a short time window for some predictability but no chance of a long range forecast. However, we could possibly find some necessary conditions for recessions, just like cold weather is necessary for a snow storm.

## Fred Sanger 1918 – 2013

November 21, 2013

Perhaps the greatest biologist of the twentieth century and two-time Nobel prize winner, Fred Sanger, has died at the age of 95. He won his first Nobel in 1958 for determining the amino acid sequence of insulin and his second in 1980 for developing a method to sequence DNA.  An obituary can be found here.

## Michaelis-Menten kinetics

November 17, 2013

This year is the one hundredth anniversary of the Michaelis-Menten equation, which was published in 1913 by German-born biochemist Leonor Michaelis and Canadian physician Maud Menten. Menten was one of the first women to obtain a medical degree in Canada and travelled to Berlin to work with Michaelis because women were forbidden from doing research in Canada. After spending a few years in Europe she returned to the US to obtain a PhD from the University of Chicago and spent most of her career at the University of Pittsburgh. Michaelis also eventually moved to the US and had positions at Johns Hopkins University and the Rockefeller University.

The Michaelis-Menten equation is one of the first applications of mathematics to biochemistry and perhaps the most important. These days people, including myself, throw the term Michaelis-Menten around to generally mean any function of the form

$f(x)= \frac {Vx}{K+x}$

although its original derivation was to specify the rate of an enzymatic reaction.  In 1903, it had been discovered that enzymes, which catalyze reactions, work by binding to a substrate. Michaelis took up this line of research and Menten joined him. They focused on the enzyme invertase, which catalyzes the breaking down (i.e. hydrolysis) of the substrate sucrose (i.e. table sugar) into the simple sugars fructose and glucose. They modelled this reaction as

$E + S \overset{k_f}{\underset{k_r}{\rightleftharpoons}} ES \overset{k_c}{\rightarrow }E +P$

where the enzyme E binds to a substrate S to form a complex ES which releases the enzyme and forms a product P. The goal is to calculate the rate of the appearance of P.
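Although the derivation will have to wait, the scheme is easy to simulate. The Python sketch below integrates the mass-action equations with made-up rate constants and checks that the product rate settles onto the standard quasi-steady-state form $VS/(K+S)$, with $V = k_c E_{tot}$ and Michaelis constant $K = (k_r+k_c)/k_f$.

```python
# Mass-action kinetics for E + S <-> ES -> E + P, integrated by forward
# Euler with illustrative (made-up) rate constants and concentrations.
kf, kr, kc = 10.0, 1.0, 1.0     # binding, unbinding, and catalysis rates
E_tot, S0 = 1.0, 10.0           # total enzyme and initial substrate

E, S, ES, P = E_tot, S0, 0.0, 0.0
dt = 1e-4
for _ in range(20_000):         # integrate to t = 2
    v_bind = kf * E * S - kr * ES   # net rate of complex formation
    v_cat = kc * ES                 # rate of product formation
    E += dt * (-v_bind + v_cat)
    S += dt * (-v_bind)
    ES += dt * (v_bind - v_cat)
    P += dt * v_cat

# Quasi-steady-state prediction: dP/dt ~ V*S/(K+S), with
# V = kc*E_tot and Michaelis constant K = (kr + kc)/kf
V, K = kc * E_tot, (kr + kc) / kf
rate_sim = kc * ES
rate_qss = V * S / (K + S)
print(rate_sim, rate_qss)   # nearly equal once the complex equilibrates
```

After a brief transient in which the complex ES builds up, the simulated rate tracks the hyperbolic Michaelis-Menten form closely, which is the content of the quasi-steady-state approximation.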

## Talk in Taiwan

November 1, 2013

I’m currently at the National Center for Theoretical Sciences, Math Division, on the campus of the National Tsing Hua University, Hsinchu for the 2013 Conference on Mathematical Physiology.  The NCTS is perhaps the best run institution I’ve ever visited. They have made my stay extremely comfortable and convenient.

Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks.  Here is a list of references that go with the talk

E. Hildebrand, M.A. Buice, and C.C. Chow, ‘Kinetic theory of coupled oscillators’, Physical Review Letters 98, 054101 (2007) [PRL Online] [PDF]

M.A. Buice and C.C. Chow, ‘Correlations, fluctuations and stability of a finite-size network of coupled oscillators’, Phys. Rev. E 76, 031118 (2007) [PDF]

M.A. Buice, J.D. Cowan, and C.C. Chow, ‘Systematic Fluctuation Expansion for Neural Network Activity Equations’, Neural Comp. 22:377-426 (2010) [PDF]

C.C. Chow and M.A. Buice, ‘Path integral methods for stochastic differential equations’, arXiv:1009.5966 (2010).

M.A. Buice and C.C. Chow, ‘Effective stochastic behavior in dynamical systems with incomplete information’, Phys. Rev. E 84:051120 (2011).

M.A. Buice and C.C. Chow, ‘Dynamic finite size effects in spiking neural networks’, PLoS Comp Bio 9:e1002872 (2013).

M.A. Buice and C.C. Chow, ‘Generalized activity equations for spiking neural networks’, Front. Comput. Neurosci. 7:162, doi: 10.3389/fncom.2013.00162, arXiv:1310.6934.

Here is the link to relevant posts on the topic.

## New paper on neural networks

October 28, 2013

Michael Buice and I have a new paper in Frontiers in Computational Neuroscience as well as on the arXiv (the arXiv version has fewer typos at this point). This paper partially completes the series of papers Michael and I have written about developing generalized activity equations that include the effects of correlations for spiking neural networks. It combines two separate formalisms we have pursued over the past several years. The first was a way to compute finite size effects in a network of coupled deterministic oscillators (e.g. see here, here, here and here). The second was to derive a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here). Although both formalisms utilize path integrals, they are conceptually quite different. The first adapted the kinetic theory of plasmas to coupled dynamical systems. The second used ideas from field theory (i.e. a two-particle irreducible effective action) to compute self-consistent moment hierarchies for a stochastic system. This paper merges the two ideas to generate generalized activity equations for a set of deterministic spiking neurons.

## Paper on compressed sensing and genomics

October 14, 2013

New paper on the arXiv. The next step after the completion of the Human Genome Project was the search for genes associated with diseases such as autism or diabetes. However, after spending hundreds of millions of dollars, we find that there are very few common variants of genes with large effects. This doesn’t mean that there aren’t genes with large effects. The growth hormone gene definitely has a large effect on height. It just means that variations of genes that are common among people have small effects on the phenotype. Given the results of Fisher, Wright, Haldane and colleagues, this was probably expected as the most likely scenario, and recent results measuring narrow-sense heritability directly from genetic markers (e.g. see this) confirm this view.

Current GWAS microarrays consider about a million or two markers, and this number is increasing rapidly. Narrow-sense heritability refers to the additive or linear genetic variance, which means the phenotype is given by the linear model $y= Z\beta + \eta$, where $y$ is the phenotype vector, $Z$ is the genotype matrix, $\beta$ holds all the genetic effects we want to recover, and $\eta$ contains all the nonadditive components, including environmental effects. This is a classic linear regression problem. The problem comes when the number of coefficients in $\beta$ far exceeds the number of people in your sample, which is the case for genomics. Compressed sensing is a field of high dimensional statistics that addresses this specific problem. People such as David Donoho, Emmanuel Candes and Terence Tao have proven under fairly general conditions that if the number of nonzero coefficients is small compared to the number of samples, then the effects can be completely recovered using L1-penalized optimization algorithms such as the lasso or approximate message passing. In this paper, we show that these ideas can be applied to genomics.
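The idea can be seen in miniature with a toy Python sketch. The dimensions, penalty, and the ISTA (iterative soft-thresholding) solver below are all illustrative choices, not the paper's actual pipeline: a sparse $\beta$ is recovered exactly from far fewer samples than markers.

```python
import numpy as np

# Toy sparse recovery: n people, p markers, s nonzero effects, solved
# with ISTA (iterative soft thresholding) for the lasso penalty.
rng = np.random.default_rng(0)
n, p, s = 200, 1000, 10

Z = rng.standard_normal((n, p)) / np.sqrt(n)   # toy genotype matrix
beta_true = np.zeros(p)
beta_true[:s] = rng.choice([-1.0, 1.0], s)     # sparse effect sizes
y = Z @ beta_true                               # noiseless phenotype

lam = 0.01                                      # lasso penalty strength
L = np.linalg.norm(Z, 2) ** 2                   # step scale: spectral norm squared
beta = np.zeros(p)
for _ in range(2000):
    u = beta - Z.T @ (Z @ beta - y) / L         # gradient step on least squares
    beta = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold

support = np.flatnonzero(np.abs(beta) > 0.25)
print(support)   # should list the ten true loci, indices 0 through 9
```

Note that ordinary least squares is hopeless here (1000 unknowns, 200 equations); it is the L1 penalty plus sparsity that makes the recovery possible.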

Here is Steve Hsu’s summary of the paper

Application of compressed sensing to genome wide association studies and genomic selection

(Submitted on 8 Oct 2013)

We show that the signal-processing paradigm known as compressed sensing (CS) is applicable to genome-wide association studies (GWAS) and genomic selection (GS). The aim of GWAS is to isolate trait-associated loci, whereas GS attempts to predict the phenotypic values of new individuals on the basis of training data. CS addresses a problem common to both endeavors, namely that the number of genotyped markers often greatly exceeds the sample size. We show using CS methods and theory that all loci of nonzero effect can be identified (selected) using an efficient algorithm, provided that they are sufficiently few in number (sparse) relative to sample size. For heritability h^2 = 1, there is a sharp phase transition to complete selection as the sample size is increased. For heritability values less than one, complete selection can still occur although the transition is smoothed. The transition boundary is only weakly dependent on the total number of genotyped markers. The crossing of a transition boundary provides an objective means to determine when true effects are being recovered. For h^2 = 0.5, we find that a sample size that is thirty times the number of nonzero loci is sufficient for good recovery.

Comments: Main paper (27 pages, 4 figures) and Supplement (5 figures) combined

Subjects: Genomics (q-bio.GN); Applications (stat.AP)

Cite as: arXiv:1310.2264 [q-bio.GN] (or arXiv:1310.2264v1 [q-bio.GN] for this version)

## Happiness and divisive inhibition

October 9, 2013

The Wait But Why blog has an amusing post on why Generation Y yuppies (GYPSYs) are unhappy, which I found through the blog of Michigan economist Miles Kimball. In short, it is because their expectations exceed reality and they are entitled. What caught my eye was that they defined happiness as “Reality - Expectations”. The key point is that this is a subtractive expression. My college friend Peter Lee, now Professor and Director of the University of Manchester X-ray imaging facility, used to define happiness as “desires fulfilled beyond expectations”. I always interpreted this as a divisive quantity, meaning “Reality/Expectations”.

Now, the definition does have implications if we actually try to use it as a model for how happiness would change with some quantity like money. For example, consider the model where reality and expectations are both proportional to money. Then happiness = a*money - b*money. As long as b is less than a, money always buys happiness, but if a is less than b then more money brings more unhappiness. However, if we consider the divisive model of happiness, then happiness = a*money/(b*money) = a/b, and happiness doesn’t depend on money at all.

However, the main reason I bring this up is that it is analogous to the two possible ways to model inhibition (or adaptation) in neuroscience. The neurons in the brain generally interact with each other through two types of synapses: excitatory and inhibitory. Excitatory synapses generally depolarize a neuron and bring its potential closer to threshold, whereas inhibitory synapses hyperpolarize the neuron and take it farther from threshold (although there are ways this can be violated). For neurons receiving stationary asynchronous inputs, we can consider the firing rate to be some function of the excitatory E and inhibitory I inputs. In subtractive inhibition, the firing rate would have the abstract form f(E-I), whereas for divisive inhibition it would have the form f(E)/(I+C), where f is some thresholded gain function (i.e. zero below threshold, positive above threshold) and C is a constant to prevent the firing rate from reaching infinity. There are some critical differences between subtractive and divisive inhibition. Divisive inhibition works by reducing the gain of the neuron, i.e. it makes the slope of the gain function shallower, while subtractive inhibition effectively raises the threshold. These properties have great computational significance, which I will get into in a future post.
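The difference is easy to see in a toy Python sketch with a threshold-linear gain function (all the parameter values are arbitrary illustrations).

```python
import numpy as np

def f(x, theta=2.0):
    """Thresholded-linear gain: zero below threshold theta, linear above."""
    return np.maximum(x - theta, 0.0)

E = np.linspace(0.0, 10.0, 101)   # range of excitatory drive
I, C = 3.0, 1.0                   # inhibitory input and saturation constant

subtractive = f(E - I)       # f(E - I): the turn-on point moves to theta + I
divisive = f(E) / (I + C)    # f(E)/(I + C): the slope drops by 1/(I + C)

# With these numbers, the subtractive curve rises with slope 1 starting
# at E = 5, while the divisive curve rises with slope 1/4 starting at E = 2.
```

So subtractive inhibition changes where the neuron turns on, while divisive inhibition changes how steeply it responds once it is on.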

## The cost of the shutdown and sequester

October 7, 2013

People may be wondering how the US government shutdown is affecting the NIH. I can’t speak for the rest of the institutes but I was instructed to not come to work and to not use my NIH email account or NIH resources. Two new fellows, who were supposed to begin on Oct 1, now have to wait, and they will not be compensated for the missed time even if Congress does decide to give back pay to the furloughed employees. I had really hoped for them to start in August or September, but that was pushed back because of the Sequester (have people forgotten about that?), which cut our budgets severely. In fact, because of the Sequester, I wasn’t able to hire one fellow because the salary requirements for their seniority exceeded my budget. We were just starting to get some really interesting psychophysics results on ambiguous stimuli, but that had to be put on hold because we couldn’t immediately replace fellow Phyllis Thangaraj, who was running the experiments and left this summer to start her MD/PhD degree at Columbia. Now it will be delayed even further. I have several papers in the revision process that have also been delayed by the shutdown. All travel has been cancelled, and I heard that people at conferences were ordered to return immediately, including those who were on planes on Oct 1. My quadrennial external review this week has been postponed. All the flights for the committee and ad hoc members have to be cancelled, and we now have to find another date on which 20 or more people can agree. All NIH seminars and the yearly NIH research festival have been cancelled. I was supposed to review an external NIH research proposal this week and that has been postponed indefinitely, along with all other submitted proposals awaiting review. Academic labs, students, and postdocs depending on their NIH grants this fiscal year will be without funding until the government is reopened. Personally, I will probably come out of this reasonably intact. However, I do worry how this will affect young people, who are the future.

## Heritability and additive genetic variance

October 4, 2013

Most people have an intuitive notion of heritability being the genetic component of why close relatives tend to resemble each other more than strangers. More technically, heritability is the fraction of the variance of a trait within a population that is due to genetic factors. This is the pedagogical post on heritability that I promised in a previous post on estimating heritability from genome wide association studies (GWAS).

One of the most important facts about uncertainty, and something that everyone should know but often doesn’t, is that when you add two imprecise quantities, the average of the sum is the sum of the averages, but the total error (i.e. the standard deviation) is not the sum of the standard deviations; it is the square root of the sum of their squares, i.e. of the variances. In other words, when you add two uncorrelated noisy variables, the variance of the sum is the sum of the variances. Hence, for quantities of comparable size, the error grows as the square root of the number of quantities you add and not linearly, as had been assumed for centuries. There is a great article in the American Scientist from 2007 called The Most Dangerous Equation giving a history of some calamities that resulted from not knowing how variances sum. The variance of a trait can thus be expressed as the sum of the genetic variance and the environmental variance, where environment just means everything that is not correlated with genetics. The heritability is the ratio of the genetic variance to the trait variance.
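The variance-addition rule is easy to check numerically. Here is a quick Monte Carlo sketch; the particular variances (4 for the genetic component, 1 for the environmental one) are arbitrary choices for illustration, and the last line shows the heritability as the ratio of genetic variance to trait variance:

```python
import random

# Simulate a trait as the sum of two uncorrelated components:
# a "genetic" part with variance 4 and an "environmental" part
# with variance 1 (both values are arbitrary, for illustration).
random.seed(1)
N = 200_000
genetic = [random.gauss(0, 2.0) for _ in range(N)]      # variance 4
environment = [random.gauss(0, 1.0) for _ in range(N)]  # variance 1
trait = [g + e for g, e in zip(genetic, environment)]

def var(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(trait))                 # close to 4 + 1 = 5, not (2 + 1)**2 = 9
print(var(genetic) / var(trait))  # heritability, close to 4/5
```

The trait variance comes out near 5, the sum of the variances, rather than 9, which is what you would get by (wrongly) adding the standard deviations and squaring.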

## Government shutdown

October 1, 2013

As of today, I am officially furloughed without pay, since the NIH is officially closed and nonessential employees like myself are barred by the Antideficiency Act of 1884 from working, even without pay. However, given that blogging is not considered an official duty, I can continue to post to Scientific Clearing House. Those who are not up on American politics may be wondering why the US government has shut down. The reason is that the US fiscal year begins on Oct 1 and, according to the US Constitution, only Congress can appropriate funds for the functioning of government, and it did not pass a budget for the new fiscal year by midnight of September 30. Actually, Congress has not passed a budget on time in recent years but has passed Continuing Resolutions to keep the government going. So why have they not passed a budget or a CR this year? Well, currently the US government is divided, with the Democratic party controlling the Senate and Presidency and the Republican party controlling the House of Representatives. All three entities must agree for a law to pass. Three years ago, the Democrats controlled the Congress, which includes both the House and Senate, and passed the Affordable Care Act, also known as Obamacare, which the President signed into law. The Republicans took control of the House in 2011 and have been trying to repeal the ACA ever since, but have been stopped by the Senate. This year they decided to try a new tactic, which was to pass a budget that withholds funding for the ACA. The Senate did not agree, passed a budget with ACA funding restored, and sent it back to the House, which then took out funding for the ACA again with some modifications and sent it back. This went on back and forth without converging to an agreement, and thus we are closed today.