Bruce Alberts, Marc Kirschner, Shirley Tilghman, and Harold Varmus have an opinion piece in PNAS (link here) summarizing their concerns for the future of US biomedical research and suggesting some fixes. Their major premise is that medical research is predicated on ever-continuing expansion and that we're headed for a crisis if we don't change immediately. As an NIH intramural investigator, I am shielded from the intense grant-writing requirements of those on the outside. However, I am well aware of the difficulties in obtaining grant support and more than cognizant of the fact that a simple way to resolve the recent 8% cut in NIH funding is to eliminate the NIH intramural program. I have also noticed that medical schools keep expanding and hiring faculty on "soft money", which requires them to raise their own salaries through grants. Soft-money faculty essentially run independent businesses that rent lab space from institutions. The problem is that the market is a monopsony, where the sole buyer is the NIH. In order to keep their businesses running, they need lots of low-paid labour, in the form of grad students and postdocs, many of whom have no hope of ever becoming independent investigators. One of the proposed solutions is to increase the salary of postdocs and increase the number of permanent staff scientist positions. The premise is that by increasing unit costs, a labour equilibrium can be achieved. There is much more in the article and anyone involved in science should read it.
I predicted that there would be an eventual pushback on Big Data and it seems that it has begun. Gary Marcus and Ernest Davis of NYU had an op-ed in the Times yesterday outlining nine issues with Big Data. I think one way to encapsulate many of the critiques is that you will never be able to do true prior-free data modeling. The number of combinations in a data set grows as the factorial of the number of elements, which grows faster than an exponential. Hence, Moore's law can never catch up. At some point, someone will need to exercise some judgement, in which case Big Data is not really different from the ordinary data that we deal with all the time.
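To put rough numbers on the factorial-versus-exponential point, here is a small Python sketch. Treating one Moore's-law doubling per additional data element is a stylized assumption for illustration, not a real benchmark:

```python
from math import factorial

# Number of orderings of n elements (n!) versus an exponentially growing
# compute budget (2^n, one doubling per element added). The factorial
# overtakes at n = 4 and the gap then widens super-exponentially.
for n in (10, 20, 30):
    print(n, factorial(n), 2 ** n)
```

For n = 30 the factorial is already more than twenty orders of magnitude ahead of the exponential, which is the sense in which hardware gains can never substitute for judgement about which combinations to consider.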
If civilization succumbs to a deadly pandemic, we will all know what the vector was. Every physician, nurse, dentist, hygienist, and health care worker is bound to check their smartphone sometime before, during, or after seeing a patient, and they are not sterilizing it afterwards. The fully hands-free smartphone could be the most important invention of the 21st century.
This Econtalk podcast with Frito-Lay executive Brendan O'Donohoe from 2011 gives a great account of how optimized the production and marketing system for potato chips and other salty snacks has become. The industry has a lot of very smart people trying to figure out how to ensure that you maximize food consumption, from how to peel potatoes to how to stack store shelves with bags of chips. This increased efficiency is our hypothesis (e.g. see here) for the obesity epidemic. However, unlike before, when I attributed the increase in food production to changes in agricultural policy, I now believe it is mostly due to the vastly increased efficiency of food production. This podcast shows the extent of the optimization after the produce leaves the farm, but the efficiency improvements on the farm are just as dramatic. For example, farmers now use GPS to optimally line up their crops.
As I promised in my previous post, here is a derivation of the analytic continuation of the Riemann zeta function to negative integer values. There are several ways of doing this but a particularly simple way is given by Graham Everest, Christian Rottger, and Tom Ward at this link. It starts with the observation that you can write

$$\frac{1}{s-1} = \int_1^\infty x^{-s}\,dx$$

if the real part of $s$ is greater than 1. You can then break the integral into pieces with

$$\frac{1}{s-1} = \sum_{n=1}^\infty \int_n^{n+1} x^{-s}\,dx = \sum_{n=1}^\infty \int_0^1 (n+t)^{-s}\,dt \qquad (1)$$

For $0 \le t \le 1$, you can expand the integrand in a binomial expansion

$$(n+t)^{-s} = n^{-s}\left(1+\frac{t}{n}\right)^{-s} = n^{-s} - \frac{s t}{n^{s+1}} + O\!\left(\frac{1}{n^{s+2}}\right) \qquad (2)$$

Now substitute (2) into (1) to obtain

$$\frac{1}{s-1} = \zeta(s) - \frac{s}{2}\zeta(s+1) + R(s) \qquad (3)$$

or, rearranged,

$$\zeta(s) = \frac{1}{s-1} + \frac{s}{2}\zeta(s+1) - R(s) \qquad (3')$$

where the remainder $R(s)$ is an analytic function when $\operatorname{Re} s > -1$ because the resulting series is absolutely convergent. Since the zeta function is analytic for $\operatorname{Re} s > 1$, the right hand side is a new definition of $\zeta(s)$ that is analytic for $\operatorname{Re} s > 0$ aside from a simple pole at $s = 1$. Now multiply (3) by $s-1$ and take the limit as $s \to 1$ to obtain

$$\lim_{s\to 1}\,(s-1)\,\zeta(s) = 1$$

which implies that

$$\lim_{s\to 0}\, s\,\zeta(s+1) = 1$$

Taking the limit of $s$ going to zero from the right of (3') gives

$$\zeta(0) = -1 + \lim_{s\to 0^+} \frac{s}{2}\,\zeta(s+1) = -1 + \frac{1}{2} = -\frac{1}{2}$$

Hence, the analytic continuation of the zeta function to zero is $-1/2$.
The analytic domain of $\zeta(s)$ can be pushed further into the left hand plane by extending the binomial expansion in (2) to

$$(n+t)^{-s} = n^{-s}\sum_{k=0}^{q+1} \binom{-s}{k}\left(\frac{t}{n}\right)^k + O\!\left(\frac{1}{n^{s+q+2}}\right)$$

Inserting into (1) and using $\int_0^1 t^k\,dt = \frac{1}{k+1}$ yields

$$\frac{1}{s-1} = \sum_{k=0}^{q+1} \binom{-s}{k}\frac{1}{k+1}\,\zeta(s+k) + R_q(s) \qquad (4)$$

where $R_q(s)$ is analytic for $\operatorname{Re} s > -q-1$. Now let $s = -q$ for a positive integer $q$ and extract out the last term of the sum in (4) to obtain

$$-\frac{1}{q+1} = \sum_{k=0}^{q} \binom{q}{k}\frac{1}{k+1}\,\zeta(k-q) - \frac{1}{(q+1)(q+2)} \qquad (5)$$

where I have used

$$\lim_{s\to -q}\binom{-s}{q+1}\frac{\zeta(s+q+1)}{q+2} = -\frac{1}{(q+1)(q+2)}$$

since $\binom{-s}{q+1}$ has a simple zero and $\zeta(s+q+1)$ has a simple pole with residue 1 at $s=-q$, while $R_q(-q) = 0$ because the binomial expansion of $(n+t)^{q}$ terminates. Rearranging (5) gives

$$\sum_{k=0}^{q} \binom{q}{k}\frac{1}{k+1}\,\zeta(k-q) = -\frac{1}{q+2} \qquad (6)$$

The righthand side of (6) now determines $\zeta(-q)$ recursively in terms of $\zeta(0), \zeta(-1), \dots, \zeta(-q+1)$. Rewrite (6), using $\binom{q}{k}\frac{1}{k+1} = \frac{1}{q+1}\binom{q+1}{k+1}$ and the substitution $j = q-k$, as

$$\sum_{j=0}^{q} \binom{q+1}{j}\,\zeta(-j) = -\frac{q+1}{q+2} \qquad (6')$$

Now, note that the Bernoulli numbers satisfy the condition

$$\sum_{m=0}^{q+1} \binom{q+2}{m} B_m = 0$$

Hence, let

$$\zeta(-q) = -\frac{B_{q+1}}{q+1}$$

which, using $B_0 = 1$, $B_1 = -\frac{1}{2}$, $\zeta(0) = -\frac{1}{2}$ from above, and $\binom{q+1}{j}\frac{1}{j+1} = \frac{1}{q+2}\binom{q+2}{j+1}$, turns (6') into exactly this self-consistent condition on the Bernoulli numbers. Hence $\zeta(-q) = -B_{q+1}/(q+1)$ is the analytic continuation of the zeta function for integers $q \ge 1$.
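As a check, the values of $\zeta$ at nonpositive integers satisfy the recursion $\sum_{j=0}^{q}\binom{q+1}{j}\zeta(-j) = -\frac{q+1}{q+2}$, which can be solved with exact rational arithmetic. A minimal Python sketch (the function name is mine):

```python
from fractions import Fraction
from math import comb

def zeta_nonpositive(qmax):
    """Solve sum_{j=0}^{q} C(q+1, j) * zeta(-j) = -(q+1)/(q+2)
    recursively for zeta(0), zeta(-1), ..., zeta(-qmax)."""
    z = []
    for q in range(qmax + 1):
        rhs = Fraction(-(q + 1), q + 2)
        known = sum(comb(q + 1, j) * z[j] for j in range(q))
        z.append((rhs - known) / (q + 1))  # C(q+1, q) = q+1
    return z

print(zeta_nonpositive(3))
# [Fraction(-1, 2), Fraction(-1, 12), Fraction(0, 1), Fraction(1, 120)]
```

The output reproduces $\zeta(0) = -1/2$, $\zeta(-1) = -1/12$, $\zeta(-2) = 0$, and $\zeta(-3) = 1/120 = -B_4/4$.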
I have received some skepticism that there are possibly other ways of assigning the sum of the natural numbers to a number other than $-1/12$ so I will try to be more precise. I thought it would also be useful to derive the analytic continuation of the zeta function, which I will do in a future post. I will first give a simpler example to motivate the notion of analytic continuation. Consider the geometric series $\sum_{n=0}^\infty s^n$. If $|s| < 1$ then we know that this series is equal to

$$\frac{1}{1-s} \qquad (1)$$
Now, while the geometric series is only convergent and thus analytic inside the unit circle, (1) is defined everywhere in the complex plane except at $s=1$. So even though the sum doesn't really exist outside of the domain of convergence, we can assign a number to it based on (1). For example, if we set $s=2$ we can make the assignment of

$$1 + 2 + 4 + 8 + \cdots = -1$$

So again, the sum of the powers of two doesn't really equal $-1$; it is only that (1) is defined at $s=2$. It's just that the geometric series and (1) are the same function inside the domain of convergence. Now, it is true that the analytic continuation of a function is unique. However, although $-1$ is the only value the analytic continuation of the geometric series can take at $s=2$, that doesn't mean that the sum of the powers of 2 needs to be uniquely assigned to $-1$, because the sum of the powers of 2 is not an analytic function. So if you could find some other series, analytic in some domain of convergence of a parameter, that happens to look like the sum of the powers of two at some parameter value, and you can analytically continue the series to that value, then you would have another assignment.
Now consider my example from the previous post. Consider the series

$$\sum_{n=1}^\infty \frac{n-1}{n^{s+1}} \qquad (2)$$

This series is absolutely convergent for $\operatorname{Re} s > 1$. Also note that if I set $s=-1$, I get

$$0 + 1 + 2 + 3 + \cdots$$

which is the sum of the natural numbers. Now, I can write (2) as

$$\sum_{n=1}^\infty \frac{1}{n^{s}} - \sum_{n=1}^\infty \frac{1}{n^{s+1}}$$

and when the real part of $s$ is greater than 1, I can further write this as

$$\zeta(s) - \zeta(s+1) \qquad (3)$$
All of these operations are perfectly fine as long as I'm in the domain of absolute convergence. Now, as I will show in the next post, the analytic continuation of the zeta function to the negative integers is given by

$$\zeta(-q) = -\frac{B_{q+1}}{q+1} \qquad (4)$$

where the $B_n$ are the Bernoulli numbers, which are given by the Taylor expansion of

$$\frac{t}{e^t - 1} = \sum_{n=0}^\infty B_n \frac{t^n}{n!}$$

The first few Bernoulli numbers are $B_0 = 1$, $B_1 = -\frac{1}{2}$, and $B_2 = \frac{1}{6}$. Thus using $q=1$ in (4) gives $\zeta(-1) = -\frac{B_2}{2} = -\frac{1}{12}$. A similar argument gives $\zeta(0) = -\frac{1}{2}$. Using these values in (3) then gives the desired result that the sum of the natural numbers is (also)

$$\zeta(-1) - \zeta(0) = -\frac{1}{12} + \frac{1}{2} = \frac{5}{12}$$
Now this is not to say that all assignments have the same physical value. I don’t know the details of how -1/12 is used in bosonic string theory but it is likely that the zeta function is crucial to the calculation.
I've been asked in the comments to my previous post to give an example of how the sum of the natural numbers could lead to another value, so I thought it may be of general interest to more people. Consider again the sum of the natural numbers $1 + 2 + 3 + \cdots$. The video in the previous post gives a simple proof by combining divergent sums. In essence, the manipulation is doing renormalization by subtracting away infinities and the left over of this renormalization is $-1/12$. There is another video that gives the proof through analytic continuation of the Riemann zeta function
The zeta function

$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$

is only strictly convergent when the real part of $s$ is greater than 1. However, you can use analytic continuation to extract values of the zeta function where the sum is divergent. What this means is that the zeta function is no longer the "same sum" per se, but a version of the sum taken to a domain where it was not originally defined but smoothly (analytically) connected to the sum. Hence, the sum of the natural numbers is formally $\zeta(-1)$, and $\zeta(0)$ is the infinite sum over ones. By analytic continuation, we obtain the values $\zeta(-1) = -\frac{1}{12}$ and $\zeta(0) = -\frac{1}{2}$.
Now notice that if I subtract the sum over ones from the sum over the natural numbers I still get the sum over the natural numbers, e.g.

$$(1 + 2 + 3 + \cdots) - (1 + 1 + 1 + \cdots) = 0 + 1 + 2 + 3 + \cdots$$
Now, let me define a new function

$$\eta(s) = \zeta(s) - \zeta(s+1) = \sum_{n=1}^\infty \frac{n-1}{n^{s+1}}$$

so $\eta(-1)$ is the sum over the natural numbers, and by analytic continuation

$$\eta(-1) = \zeta(-1) - \zeta(0) = -\frac{1}{12} + \frac{1}{2} = \frac{5}{12}$$

and thus the sum over the natural numbers is now $5/12$. Again, if you try to do arithmetic with infinity, you can get almost anything. A fun exercise is to create some other examples.
This wonderfully entertaining video giving a proof for why the sum of the natural numbers is $-1/12$ has been viewed over 1.5 million times. It just shows that there is a hunger for interesting and well explained math and science content out there. Now, we all know that the sum of all the natural numbers is infinite, but the beauty (insidiousness) of infinite numbers is that they can be assigned to virtually anything. The proof for this particular assignment considers the subtraction of the divergent oscillating sum $1 - 2 + 3 - 4 + \cdots$ from the divergent sum of the natural numbers $S = 1 + 2 + 3 + \cdots$ to obtain

$$S - (1 - 2 + 3 - 4 + \cdots) = 0 + 4 + 0 + 8 + \cdots = 4S$$

Then by similar trickery it assigns $1 - 2 + 3 - 4 + \cdots = \frac{1}{4}$. Solving for $S$ gives you the result $S = -\frac{1}{12}$. Hence, what you are essentially doing is dividing infinity by infinity, and that, as any school child should know, can be anything you want. The most astounding thing to me about the video was learning that this assignment is used in string theory, which makes me wonder if the calculations would differ if I chose a different assignment.
Addendum: Terence Tao has a nice blog post on evaluating such sums. In a "smoothed" version of the sum, the $-1/12$ can be thought of as the constant term in front of an asymptotic divergent term. This constant is equivalent to the value of the analytic continuation of the Riemann zeta function. Anyway, $-1/12$ seems to be a natural way to assign a value to the divergent sum of the natural numbers.
Here is what I just posted to the epic thread on Connectionists:
The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you’ll find that what the early greats actually did and believed differs from the current understanding. I think it’s safe to say that computational neuroscience has not reached that level of maturity. Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before.
The big question is why this is the case. This is really a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one? It is safer to just follow the path we already know. We simply don't believe enough in any one idea for all of us to pursue it. It takes a massive commitment to learn any one thing, much less everything on John Weng's list. I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology. There are only so many John von Neumanns, Norbert Wieners, or Terry Taos out there. The problem actually gets worse with more interest and funding because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out. But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold it was practically a blink of an eye to get from Kepler to Maxwell. Then again, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen.
Addendum: Thanks to Labrigger for finding the link to the thread:
There is an epic discussion on the Connectionists mailing list right now. It started on Jan 23, when Juyang (John) Weng of Michigan State University criticized an announcement of the upcoming Workshop on Brain-like Computing, arguing that the workshop is really about neuron-like computing and that there is a wide gap between that and brain-like computing. Another thing he seemed peeved about was that people in the field are not fully conversant in the literature, which is true. People in neuroscience can be largely unaware of what people in robotics and cognitive science are doing and vice versa. The discussion really became lively when Jim Bower jumped in. It's hard to summarize the entire thread, but the main themes include arguing about the worthiness of the big data approach to neuroscience, lamenting the lack of progress over the past thirty years, and asking what we should be doing.
Read the 2014 Gates letter to get the non-cynical view of progress for the poor. I'm actually on the more hopeful side of this issue, surprisingly. The data (e.g. see Hans Rosling) clearly show that health is improving in developing nations. We may have also reached "peak child", in that there are more children alive today than there will ever be. Extreme poverty is being dramatically reduced. Foreign aid does seem to work. Here's Bill:
By almost any measure, the world is better than it has ever been. People are living longer, healthier lives. Many nations that were aid recipients are now self-sufficient. You might think that such striking progress would be widely celebrated, but in fact, Melinda and I are struck by how many people think the world is getting worse. The belief that the world can’t solve extreme poverty and disease isn’t just mistaken. It is harmful. That’s why in this year’s letter we take apart some of the myths that slow down the work. The next time you hear these myths, we hope you will do the same.
- Bill Gates
You should read this article in Esquire about the advent of personalized cancer treatment for a heroic patient named Stephanie Lee. Here is Steve Hsu's blog post. The cost of sequencing is almost at the point where everyone can have their normal and tumor cells completely sequenced to look for mutations, as Stephanie did. The team at Mt. Sinai Hospital in New York described in the article inserted some of her mutations into a fruit fly and then checked to see what drugs killed it. The Stephanie Event was the oncology board meeting at Sinai where the treatment for Stephanie Lee's colon cancer, which had spread to the liver, was discussed. They decided on a standard protocol but would use the individualized therapy based on the fly experiments if the standard treatments failed. The article was beautifully written, combining a compelling human story with science.
A fairly common presumption among biologists and adherents of paleo-diets is that since humans evolved on the African Savannah hundreds of thousands of years ago, we are not well adapted to the modern world. For example, Harvard biologist Daniel Lieberman has a new book out called “The Story of the Human Body” explaining through evolution why our bodies are the way they are. You can hear him speak about the book on Quirks and Quarks here. He talks about how our maladaptation to the modern world has led to widespread myopia, back pains, an obesity epidemic, and so forth.
This may all be true but the irony is that we as a species have never been more fit and adapted from an evolutionary point of view. In evolutionary theory, fitness is measured by the number of children or grandchildren we have. Thus, the faster a population grows the more fit, and hence adapted to its environment, it is. Since our population is growing the fastest it's ever been (technically, we may have been fitter a few decades ago since our growth rate may actually be slowing slightly), we are the most fit we have ever been. In the developed world we certainly live longer and are healthier than we have ever been, even after you account for the steep decline in infant death rates. It is true that heart disease and cancer have increased substantially, but that is only because we (meaning the well-off in the developed world) no longer die young from infectious diseases, parasites, accidents, and violence.
One could claim that we are perfectly adapted to our modern world because we invented it. Those that live in warm houses are in better health than those sitting in a damp cave. Those that get food at the local supermarket live longer than hunter-gatherers. The average life expectancy of a sedentary individual is not different from that of an active person. One reason we have an obesity epidemic is that obesity isn't very effective at killing people. An overweight or even obese person can live a long life. So even though I type this peering through corrective eyewear while nursing a sore back, I can confidently say I am better adapted to my environment than my ancestors were thousands of years ago.
The electronic currency Bitcoin has been in the news quite a bit lately since its value has risen from about $10 a year ago to over $650 today, hitting a peak of over $1000 less than a month ago. I remember hearing Gavin Andresen, a principal of the Bitcoin Virtual Currency Project (no single person or entity issues or governs Bitcoin), talk about Bitcoin on Econtalk two years ago and being astonished at how little he knew about basic economics, much less monetary policy. Paul Krugman criticized Bitcoin today in his New York Times column and Timothy Lee responded in the Washington Post.
The principle behind Bitcoin is actually quite simple. There is a master list, called the block chain, which is a cryptographically secured shared ledger in which all transactions are kept. The system uses public-key cryptography: each Bitcoin owner has a private key, which they use to sign transactions that update the ledger, and anyone can verify a signature with the corresponding public key. The community at large then validates transactions in a computationally intensive process called mining. The rewards for this work are Bitcoins, which are issued to the first computer to complete the computation. The intensive computations are integral to the system because they make it difficult for attackers to falsify a transaction. As long as honest participants control more computing power than attackers, the attackers can never perform computations fast enough to falsify a transaction. The computations are also scaled so that new Bitcoins are only issued about every 10 minutes. Thus it does not matter how fast your computer is in absolute terms to mine Bitcoins, only that it is faster than everyone else's computer. This article describes how people are creating businesses to mine Bitcoins.
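To make the mining idea concrete, here is a toy proof-of-work sketch in Python. It is not the actual Bitcoin protocol (real mining hashes a block header against a fine-grained difficulty target), but it shows the essential asymmetry: finding a valid nonce is expensive, while verifying one takes a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of block_data + nonce
    begins with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is one hash, no matter how hard mining was."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("Alice pays Bob 1 BTC", 4)  # ~16^4 = 65536 attempts on average
print(nonce, verify("Alice pays Bob 1 BTC", nonce, 4))
```

Raising the difficulty by one hex digit multiplies the expected mining work by 16, which is the lever the real network uses to keep block issuance near its target rate.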
Krugman's post was about the ironic connection between Keynesian fiscal stimulus and gold. Although gold has some industrial use, it is highly valued mostly because it is rare and difficult to dig up. Keynes's theory of recessions and depressions is that there is a sudden collapse in aggregate demand, so the economy operates below capacity, leading to excess unemployment. This situation was thought not to occur in classical economics because prices and wages should fall until equilibrium is restored and the economy operates at full capacity again. However, Keynes proposed that prices and wages are "sticky" and do not adjust very quickly. His solution was for the government to increase spending to take up the shortfall in demand and return the economy to full employment. He then jokingly proposed that the government could get the private sector to do the spending by burying money, which people could privately finance digging out. He also noted that this was not that different from gold mining. Keynes's point was that instead of wasting all that effort the government could simply print money and give it away or spend it. Krugman also points out that Adam Smith, often held up as a paragon of conservative principles, felt that paper money was much better for an economy to run smoothly than tying up resources in useless gold and silver. The connection between gold and Bitcoins is unmissable. Both have no intrinsic value and are a terrible waste of resources. Lee feels that Krugman misunderstands Bitcoin in that the intensive computations are integral to the functioning of the system and, more importantly, that the main utility of Bitcoin is that it is a new form of payment network, which he feels is independent of monetary considerations.
Krugman and Lee have valid points but both are still slightly off the mark. I think we will definitely head towards some electronic monetary system in the future but it certainly won't be Bitcoin in its current form. However, Bitcoin or at least something similar will also remain. The main problem with Bitcoin, as well as gold, is that its supply is constrained. The supply of Bitcoins is designed to cap out at 21 million, with about half in circulation now. What this means is that the Bitcoin economy is subject to deflation. As the economy grows and costs fall, the price of goods denominated in Bitcoins must also fall, so the value of a Bitcoin will always increase. Andresen shockingly didn't understand this important fact in the Econtalk podcast. Deflation is bad for economic growth because it encourages people to delay purchases and hoard Bitcoins. Of course if you don't believe in economic growth then Bitcoins might be a good thing. Ideally, you want money to be neutral, so the supply should grow along with the economy. This is why central banks target inflation around 2%. Hence, Bitcoin as it is currently designed will certainly fail as a currency and payment system, but it would not take too much effort to fix its flaws. It may simply play the role of the search engine Altavista to the eventual winner Google.
However, I think Bitcoin in its full inefficient glory and things like it will only proliferate. In our current era of high unemployment and slow growth, Bitcoin is serving as a small economic stimulus. As we get more economically efficient, fewer of us will be required for any particular sector of the economy. The only possible way to maintain full employment is to constantly invent new economic sectors. Bitcoin is economically useful because it is so completely useless.
Dynamical systems can be divided into two basic types: conservative and dissipative. In biology, we almost always model dissipative systems and thus if we want to computationally simulate the system almost any numerical solver will do the job (unless the problem is stiff, which I’ll leave to another post). However, when simulating a conservative system, we must take care to conserve the conserved quantities. Here, I will give a very elementary review of symplectic integrators for numerically solving conservative systems.
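As a minimal illustration of why the choice of integrator matters for conservative systems, consider the harmonic oscillator with Hamiltonian $H = (p^2 + q^2)/2$. The sketch below (step size and duration are arbitrary illustrative choices) compares explicit Euler, which is not symplectic and gains energy every step, with semi-implicit (symplectic) Euler, which keeps the energy bounded for all time:

```python
def explicit_euler(q, p, dt, steps):
    # q' = p, p' = -q; both updates use the old values -> energy grows
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    # update p first, then update q with the *new* p -> a symplectic map
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

q0, p0 = 1.0, 0.0
dt, steps = 0.01, 100_000  # roughly 160 oscillation periods
print(energy(*explicit_euler(q0, p0, dt, steps)))   # blows up
print(energy(*symplectic_euler(q0, p0, dt, steps))) # stays near 0.5
```

For this linear problem one can check that explicit Euler multiplies the energy by exactly $(1 + dt^2)$ per step, while symplectic Euler exactly conserves the nearby quantity $(p^2 + q^2 - dt\,pq)/2$, so its energy error oscillates at order $dt$ but never drifts.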
Ever since the financial crisis of 2008 there has been some discussion about whether or not economics is a science. Some, like Russ Roberts of Econtalk, channelling Friedrich Hayek, do not believe that economics is a science. They think it’s more like history where we come up with descriptive narratives that cannot be proven. I think that one thing that could clarify this debate is to separate the goal of a field from its practice. A field could be a science although its practice is not scientific.
To me what defines a science is whether or not it strives to ask questions that have unambiguous answers. In that sense, most of economics is a science. We may never know what caused the financial crisis of 2008 but that is still a scientific question. Now, it is quite plausible that the crisis of 2008 had no particular cause, just like there is no particular cause for a winter storm. It could have been just the result of a collection of random events, but knowing that would be extremely useful. In this sense, parts of history can also be considered a science. I do agree that the practice of economics and history is not always scientific and can never be as scientific as a field like physics, because controlled experiments usually cannot be performed. We will likely never find the answer to what caused World War I but there certainly was a set of conditions and events that led to it.
There are parts of economics that are clearly not science such as what constitutes a fair system. Likewise in history, questions regarding who was the best president or military mind are certainly not science. Like art and ethics these questions depend on value systems. I would also stress that a big part of science is figuring out what questions can be asked. If it is true that recessions are random like winter storms then the question of when the next crisis will hit does not have an answer. There may be a short time window for some predictability but no chance of a long range forecast. However, we could possibly find some necessary conditions for recessions just like cold weather is necessary for a snow storm.
Perhaps the greatest biologist of the twentieth century and two-time Nobel prize winner, Fred Sanger, has died at the age of 95. He won his first Nobel in 1958 for determining the amino acid sequence of insulin and his second in 1980 for developing a method to sequence DNA. An obituary can be found here.
This year is the one hundredth anniversary of the Michaelis-Menten equation, which was published in 1913 by German-born biochemist Leonor Michaelis and Canadian physician Maud Menten. Menten was one of the first women to obtain a medical degree in Canada and travelled to Berlin to work with Michaelis because women were forbidden from doing research in Canada. After spending a few years in Europe she returned to the US to obtain a PhD from the University of Chicago and spent most of her career at the University of Pittsburgh. Michaelis also eventually moved to the US and had positions at Johns Hopkins University and the Rockefeller University.
The Michaelis-Menten equation is one of the first applications of mathematics to biochemistry and perhaps the most important. These days people, including myself, throw the term Michaelis-Menten around to generally mean any function of the form

$$\frac{V_{\max} S}{K_m + S}$$
although its original derivation was to specify the rate of an enzymatic reaction. In 1903, it had been discovered that enzymes, which catalyze reactions, work by binding to a substrate. Michaelis took up this line of research and Menten joined him. They focused on the enzyme invertase, which catalyzes the breaking down (i.e. hydrolysis) of the substrate sucrose (i.e. table sugar) into the simple sugars fructose and glucose. They modelled this reaction as

$$E + S \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; ES \;\overset{k_2}{\longrightarrow}\; E + P$$

where the enzyme E binds to a substrate S to form a complex ES, which releases the enzyme and forms a product P. The goal is to calculate the rate of the appearance of P.
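The quasi-steady-state logic behind the equation can be checked numerically. The sketch below integrates the mass-action kinetics of an E + S ⇌ ES → E + P scheme with forward Euler and compares the measured production rate $k_2[ES]$ against the Michaelis-Menten prediction $V_{\max}S/(K_m+S)$; the rate constants and concentrations are made-up illustrative values (chosen so total enzyme is much less than substrate), not invertase's.

```python
# Mass-action kinetics for E + S <-> ES -> E + P, forward Euler.
k1, km1, k2 = 10.0, 1.0, 1.0
E, S, ES, P = 0.01, 10.0, 0.0, 0.0   # total enzyme = 0.01 << S
dt = 1e-4
for _ in range(50_000):               # integrate to t = 5
    bind, unbind, cat = k1 * E * S, km1 * ES, k2 * ES
    E += dt * (-bind + unbind + cat)
    S += dt * (-bind + unbind)
    ES += dt * (bind - unbind - cat)
    P += dt * cat

Km = (km1 + k2) / k1                  # Michaelis constant = 0.2
Vmax = k2 * 0.01                      # k2 times total enzyme
mm_rate = Vmax * S / (Km + S)         # quasi-steady-state prediction
actual_rate = k2 * ES                 # simulated rate of appearance of P
print(actual_rate, mm_rate)           # nearly equal after the fast transient
```

After a brief transient the complex ES tracks its quasi-steady-state value, so the simulated rate and the Michaelis-Menten formula agree closely; the agreement degrades if total enzyme is made comparable to the substrate, which is exactly the regime the classical derivation excludes.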
I'm currently at the National Center for Theoretical Sciences, Math Division, on the campus of the National Tsing Hua University in Hsinchu for the 2013 Conference on Mathematical Physiology. The NCTS is perhaps the best-run institution I've ever visited. They have made my stay extremely comfortable and convenient.
Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks. Here is a list of references that go with the talk
M.A. Buice and C.C. Chow, 'Correlations, fluctuations and stability of a finite-size network of coupled oscillators', Phys. Rev. E 76:031118 (2007) [PDF]
M.A. Buice, J.D. Cowan, and C.C. Chow, 'Systematic Fluctuation Expansion for Neural Network Activity Equations', Neural Comp. 22:377-426 (2010) [PDF]
C.C. Chow and M.A. Buice, 'Path integral methods for stochastic differential equations', arXiv:1009.5966 (2010)
M.A. Buice and C.C. Chow, 'Effective stochastic behavior in dynamical systems with incomplete information', Phys. Rev. E 84:051120 (2011)
M.A. Buice and C.C. Chow, 'Dynamic finite size effects in spiking neural networks', PLoS Comp. Biol. 9:e1002872 (2013)
M.A. Buice and C.C. Chow, 'Generalized activity equations for spiking neural networks', Front. Comput. Neurosci. 7:162, doi:10.3389/fncom.2013.00162 (2013), arXiv:1310.6934
Here is the link to relevant posts on the topic.
Michael Buice and I have a new paper in Frontiers in Computational Neuroscience as well as on the arXiv (the arXiv version has fewer typos at this point). This paper partially completes the series of papers Michael and I have written about developing generalized activity equations that include the effects of correlations for spiking neural networks. It combines two separate formalisms we have pursued over the past several years. The first was a way to compute finite size effects in a network of coupled deterministic oscillators (e.g. see here, here, here and here). The second was to derive a set of generalized Wilson-Cowan equations that includes correlation dynamics (e.g. see here, here, and here ). Although both formalisms utilize path integrals, they are actually conceptually quite different. The first formalism adapted kinetic theory of plasmas to coupled dynamical systems. The second used ideas from field theory (i.e. a two-particle irreducible effective action) to compute self-consistent moment hierarchies for a stochastic system. This paper merges the two ideas to generate generalized activity equations for a set of deterministic spiking neurons.