Integrated Information Theory

June 2, 2014

Neuroscientist Giulio Tononi has proposed that consciousness is integrated information, which can be measured by a quantity called \phi that quantifies how much information involves the system as a whole rather than its parts. I have never really found this theory to be entirely compelling. While I think that consciousness probably does require some amount of integrated information, I am skeptical that it is the only relevant measure. See here and here for some of my previous thoughts on the topic. One of the reasons Tononi has proposed a single measure is that it is a way to sidestep what is known as “the hard problem of consciousness”. Instead of trying to explain how a collection of neurons could be endowed with a sense of self-awareness, he posits that consciousness is a property of information, and the more \phi a system has, the more conscious it is. So in this theory, rocks are not conscious but thermostats are minimally conscious.

Theoretical computer scientist Scott Aaronson has now weighed in on the topic (see here and here). In his inimitable style, Aaronson shows that a large grid of XOR gates could have arbitrarily large \phi and hence be even more conscious than you or me. He finds this to be highly implausible. Tononi then produced a 14-page response in which he essentially doubles down on IIT and claims that a planar array of XOR gates is indeed conscious and we should not be surprised that it is so. Aaronson also proposes that we try to solve the “pretty hard problem of consciousness” (PHPC), which is to come up with a theory or means for deciding when something is conscious. To me, the fact that we can’t come up with an empirical way to tell whether something is conscious is the best argument for dualism we have. It may even be that the PHPC is undecidable, in that solving it would entail solving the halting problem. I agree with philosopher David Chalmers (see here) that there are only two possible consistent theories of consciousness. The first is that it is an emergent property of the brain but has no “causal influence” on events; in other words, consciousness is an epiphenomenon that just allows “us” to be an audience for the dynamical evolution of the universe. The second is that we live in a dualistic world of mind and matter. It is definitely worth reading the posts and the comments, where Chalmers chimes in.
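For concreteness, here is a toy sketch in Python of the kind of system Aaronson has in mind: a grid of XOR gates in which each cell updates to the XOR of its left and upper neighbours. The update rule is my own arbitrary choice, and no attempt is made to compute \phi, which is a far more involved calculation.

import numpy as np

def step(grid):
    # each cell becomes the XOR of its left and upper neighbours (periodic boundaries)
    return np.roll(grid, 1, axis=1) ^ np.roll(grid, 1, axis=0)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(8, 8))   # random initial pattern of bits
for _ in range(5):
    grid = step(grid)
print(grid)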

Did microbes cause the Great Dying?

May 24, 2014

In one of my very first posts almost a decade ago, I wrote about the end-Permian extinction 250 million years ago, which was the greatest mass extinction thus far. In that post I covered research that had ruled out an asteroid impact and found evidence of global warming, possibly due to volcanoes, as a cause. Now, a recent paper in PNAS proposes that a horizontal gene transfer event from bacteria to archaea may have been the main cause of the increase in methane and CO2. This is one of the best papers I have read in a long time, combining geological field work, mathematical modeling, biochemistry, metabolism, and evolutionary phylogenetic analysis to make a compelling argument for their hypothesis.

Their case hinges on several pieces of evidence. The first comes from well-dated carbon isotopic records from China.  The data show a steep plunge in the isotopic ratio (i.e. the ratio of the less abundant but heavier carbon 13 to the lighter, more abundant carbon 12) in the inorganic carbonate reservoir with a moderate increase in the organic reservoir. In the earth’s carbon cycle, the organic reservoir is fed by the conversion of atmospheric CO2 into carbohydrates via photosynthesis, which prefers carbon 12 to carbon 13. Organic carbon is returned to inorganic form through oxidation, either by animals eating photosynthetic organisms or by the burning of stored carbon like trees or coal. A steep drop in the isotopic ratio means that there was an extra surge of carbon 12 into the inorganic reservoir. Using a mathematical model, the authors show that in order to explain the steep drop, the inorganic reservoir must have grown superexponentially (faster than exponential). This requires a runaway positive feedback loop, which is difficult to explain by geological processes such as volcanic activity but is something that life is really good at.

Their proposed culprit is a methane-producing archaeon that feeds on acetate (more on this below). The increased methane would have been oxidized to CO2 by other microbes, which would have lowered the oxygen concentration. Lower oxygen in turn allows for more efficient fermentation and thus more acetate fuel for the archaea to make more methane. The authors showed with another simple mathematical model how this positive feedback loop could lead to superexponential growth. Methane and CO2 are both greenhouse gases, and their increase would have caused significant global warming. Anaerobic methane oxidation could also have led to the release of poisonous hydrogen sulfide.
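To get a feel for how such a feedback loop outruns an exponential, here is a minimal toy model in Python (my own illustrative sketch, not the model in the PNAS paper): the methanogen biomass grows at a per-capita rate that increases with the carbon it has already released, so the log of the biomass curves upward in time.

import numpy as np

# Toy positive feedback: biomass M grows at a per-capita rate that rises
# with the carbon C it has already released. Parameters are arbitrary.
r, a, b = 0.05, 0.5, 1.0
dt, T = 0.01, 100.0
M, C = 1e-3, 0.0
log_M = []
for step in range(int(T / dt)):
    dM = r * (1.0 + a * C) * M   # growth rate increases as carbon accumulates
    dC = b * M                   # released carbon accumulates
    M += dM * dt
    C += dC * dt
    if step % 1000 == 0:
        log_M.append(np.log(M))

# For pure exponential growth these increments would be constant;
# here they keep increasing, i.e. growth is faster than exponential.
print(np.diff(log_M))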

They then considered what microbe could have been responsible. They realized that during the late Permian, a lot of organic material was being deposited in the sediment. The organic reservoir (i.e. fossil fuels, methane hydrates, soil organic matter, peat, etc) was much larger back then than today, as if someone or something used it up at some point. One of the end products of fermentation of this matter would be acetate and that is something archaea like to eat and convert to methane. There are two types of archaea that can do this and one is much more efficient than the other at high acetate concentrations. This increased efficiency was also shown recently to have arisen by a horizontal gene transfer event from a bacterium. A phylogenetic analysis of all known archaea showed that the progenitor of the efficient methanogenic one likely arose 250 million years ago.

The final piece of evidence is that the archaea need nickel to make methane. The authors then looked at the nickel concentrations in their Chinese geological samples and found a sharp increase in nickel immediately before the steep drop in the isotopic ratio. They postulate that the source of the nickel was the massive Siberian volcano eruptions at that time (and previously proposed as the cause of the increased methane and CO2).

This scenario required the unlikely coincidence of several events – lots of excess organic fuel, low oxygen (and sulfate), increased nickel, and a horizontal gene transfer event. If any of these had been missing, the Great Dying may not have taken place. However, given that there have been only five mass extinctions (although we may currently be inducing the sixth), perhaps such calamities require low-probability coincidences. This paper should also give us some pause about introducing genetically modified organisms into the environment. While most will probably be harmless, you never know when one will be the match that lights the fire.

 

 

What is the difference between math, science and philosophy?

May 16, 2014

I’ve been listening to the Philosophy Bites podcast recently. One episode from a few years ago consisted of answers from philosophers to a question posed on the spot and without time for deep reflection: What is Philosophy? Some managed to give precise answers, but many struggled. I think one source of conflict they faced as they answered was that they didn’t know how to separate the question of what philosophers actually do from what they should be doing. However, I think that a clear distinction between science, math and philosophy as methodologies can be specified precisely. I also think that this is important because practitioners in each subject should be aware of what methodology they are actually using and what is appropriate for whatever problem they are working on.

Here are my definitions: Math explores the consequences of rules or assumptions, science is the empirical study of measurable things, and philosophy examines things that cannot be resolved by mathematics or empiricism. With these definitions, practitioners of any discipline may use math, science, or philosophy to help answer whatever question they may be addressing. Scientists need mathematics to work out the consequences of their assumptions and philosophy to help delineate phenomena. Mathematicians need science and philosophy to provide assumptions or rules to analyze. Philosophers need mathematics to sort out arguments and science to test hypotheses experimentally.

Those skeptical of philosophy may suggest that anything that cannot be addressed by math or science has no practical value. However, with these definitions, even the most hardened mathematician or scientist may be practicing philosophy without knowing it. Atheists like Richard Dawkins should realize that part of their position is based on philosophy and not science. The only truly logical position to take with respect to God is agnosticism. It may be highly probable that there is no God who intervenes directly in our lives, but that is not a provable fact. To be an atheist is to put some cutoff on the posterior probability for the existence of God, and that cutoff is based on philosophy, not science.

While most scientists and mathematicians are cognizant that moral issues may be pertinent to their work (e.g. animal experimentation), they may be less cognizant of what I believe is an equally important philosophical issue, which is the ontological question. Ontology is a philosophical term for the study of what exists. To many pragmatically minded people, this may sound like an ethereal topic (or a less flattering adjective) that has no place in the hard sciences. However, as I pointed out in an earlier post, we can put labels on at most a countably infinite number of things out of an uncountable number of possibilities, and for most purposes our ontological list of things is finite. We thus have to choose, and although some of these choices are guided by how we as human agents interact with the world, others will be arbitrary. Determining ontology will involve aspects of philosophy, science and math.

Mathematicians face the ontological problem daily when they decide what areas to work in and what theorems to prove. The possibilities in mathematics are infinite, so it is almost certain that if we were to rerun history, some if not many fields would not be reinvented. While scientists may have fewer degrees of freedom to choose from, they are also making choices, and these choices tend to be confined by history. The ontological problem shows up any time we try to define a phenomenon. The classification of cognitive disorders is a pure exercise in ontology. The authors of the DSM-IV have attempted to be as empirical and objective as possible, but there is still plenty of philosophy in their designations of psychiatric conditions. While most string theorists accept that their discipline is mostly mathematical, they should also realize that it is very philosophical. A theory of everything includes the ontology by definition.

Subjects traditionally within the realm of philosophy also have mathematical and scientific aspects. Our morals and values have certainly been shaped by evolution and biological constraints. We should completely rethink our legal philosophy based on what we now know about neuroscience (e.g. see here). The same goes for any discussion of consciousness, the mind-body problem, and free will. To me the real problem with free will isn’t whether or not it exists but rather who or what exactly is exercising that free will and this can be looked at empirically.

So next time when you sit down to solve a problem, think about whether it is one of mathematics, science or philosophy.

The blinking-dot paradox of consciousness

May 6, 2014

Suppose you could measure the activity of every neuron in the brain of an awake and behaving person, including all sensory and motor neurons. You could then represent the firing pattern of these neurons on a screen with a hundred billion pixels (or as many as needed). Each pixel would be identified with a neuron and the activity of the brain would be represented by blinking dots of light. The question then is whether or not the array of blinking dots is conscious (provided the original person was conscious). If you believe that everything about consciousness is represented by neuronal spikes, then you would be forced to answer yes. On the other hand, you must then acknowledge that a television screen simply outputting entries from a table is also conscious.

There are several layers to this possible paradox. The first is whether or not all the information required to fully decode the brain and emulate consciousness is in the spiking patterns of the neurons. It could be that you need the information contained in all the physical processes in the brain, such as the movement of ions and water molecules, conformational changes of ion channels, receptor trafficking, blood flow, glial cells, and so forth. The question is then what resolution is required. If there is some short-distance cut-off so that you could discretize the events, then you could always construct a bigger screen with trillions of trillions of pixels and be faced with the same question. But suppose that there is no cut-off, so you need an uncountable amount of information. Then consciousness would not be a computable phenomenon and there would be no hope of ever understanding it. Also, at a small enough scale (the Planck length) you would be forced to include quantum gravity effects as well, in which case Roger Penrose may have been on to something after all.

The second issue is whether or not there is a difference between a neural computation and reading from a table. Presumably, the spiking events in the brain are due to the extremely complex dynamics of synaptically coupled neurons in the presence of environmental inputs. Is there something intrinsically different between a numerical simulation of a brain model and reading the entries of a list? Would one exhibit consciousness while the other would not? To make matters even more confusing, suppose you have a computer running a simulation of a brain. The firing of the neurons is now encoded by the states of various electronic components like transistors. Does this mean that the circuits in the computer become conscious when the simulation is running? What if the computer were simultaneously running other programs, like a web browser, or even another brain simulation?  In a computer, the execution of a program is not tied to specific electronic components.  Transistors just change states as instructions arrive, so when a computer is running multiple programs, the transistors simulating the brain are not conserved.  How then do they stay coherent to form a conscious perception?  In a normal computer operation, the results are fed to an output, which is then interpreted by us.  In a simulation of the brain, there is no output; there is just the simulation. Questions like these make me question my once unwavering faith in the monistic (i.e. not dualistic) theory of the brain.

New paper on genomics

April 22, 2014

James Lee and I have a new paper out: Lee and Chow, Conditions for the validity of SNP-based heritability estimation, Human Genetics, 2014. As I summarized earlier (e.g. see here and here), heritability is a measure of the proportion of the variance of some trait (like height or cholesterol levels) that is due to genetic factors. The classical way to estimate heritability is to regress the standardized (mean zero, standard deviation one) phenotypes of close relatives against each other. In 2010, Jian Yang, Peter Visscher and colleagues developed a way to estimate heritability directly from the data obtained in Genome Wide Association Studies (GWAS), sometimes called GREML.  Shashaank Vattikuti and I quickly adopted this method and computed the heritability of metabolic syndrome traits as well as the genetic correlations between the traits (link here). Unfortunately, our methods section has a lot of typos, but the corrected Methods with the Matlab code can be found here. However, I was puzzled by the derivation of the method provided in the Yang et al. paper.  This new paper is our resolution.  The technical details are below the fold.
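For readers who want a feel for what SNP-based heritability estimation is doing, here is a small simulation sketch in Python. It uses simple Haseman-Elston regression (regressing products of phenotypes on entries of the genetic relationship matrix) rather than the REML procedure of Yang et al., and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
N, M, h2 = 2000, 5000, 0.5                  # individuals, SNPs, true heritability

# Simulate standardized genotypes and an additive polygenic trait
p = rng.uniform(0.05, 0.5, size=M)          # allele frequencies
G = rng.binomial(2, p, size=(N, M)).astype(float)
Z = (G - 2 * p) / np.sqrt(2 * p * (1 - p))  # standardize each SNP
beta = rng.normal(0, np.sqrt(h2 / M), size=M)
y = Z @ beta + rng.normal(0, np.sqrt(1 - h2), size=N)
y = (y - y.mean()) / y.std()

# Genetic relationship matrix and Haseman-Elston regression:
# regress y_i * y_j on A_ij over the off-diagonal pairs (no intercept)
A = Z @ Z.T / M
iu = np.triu_indices(N, k=1)
h2_hat = np.sum(A[iu] * (y[iu[0]] * y[iu[1]])) / np.sum(A[iu] ** 2)
print("estimated SNP heritability (noisy):", h2_hat)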

 

Read the rest of this entry »

Saving US biomedical research

April 15, 2014

Bruce Alberts, Marc Kirschner, Shirley Tilghman, and Harold Varmus have an opinion piece in PNAS (link here) summarizing their concerns for the future of US biomedical research and suggesting some fixes. Their major premise is that medical research is predicated on an ever-continuing expansion and that we’re headed for a crisis if we don’t change immediately. As an NIH intramural investigator, I am shielded from the intense grant writing requirements of those on the outside. However, I am well aware of the difficulties in obtaining grant support and more than cognizant of the fact that a simple way to resolve the recent 8% cut in NIH funding would be to eliminate the NIH intramural program. I have also noticed that medical schools keep expanding and hiring faculty on “soft money”, which requires them to raise their own salaries through grants. Soft money faculty essentially run independent businesses that rent lab space from their institutions. The problem is that the market is a monopsony, where the sole buyer is the NIH. In order to keep their businesses running, they need lots of low-paid labour, in the form of grad students and postdocs, many of whom have no hope of ever becoming independent investigators. One of the proposed solutions is to increase the salaries of postdocs and increase the number of permanent staff scientist positions. The premise is that by increasing unit costs, a labour equilibrium can be achieved. There is much more in the article and anyone involved in science should read it.

Big Data backlash

April 7, 2014

I predicted that there would be an eventual pushback against Big Data and it seems that it has begun. Gary Marcus and Ernest Davis of NYU had an op-ed in the Times yesterday outlining nine issues with Big Data. I think one way to encapsulate many of the critiques is that you will never be able to do truly prior-free data modeling. The number of combinations in a data set grows as the factorial of the number of elements, which grows faster than an exponential. Hence, Moore’s law can never catch up. At some point, someone will need to exercise some judgement, in which case Big Data is not really different from the ordinary data that we deal with all the time.
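A quick back-of-the-envelope calculation in Python makes the point (a rough sketch, assuming the generous Moore’s-law estimate of a doubling in computing power every two years):

from math import lgamma, log

# Compare factorial growth in the number of combinations with
# Moore's-law doubling of computing power every ~2 years.
for n in (10, 20, 50, 100):
    log2_combos = lgamma(n + 1) / log(2)   # log2(n!)
    print(f"{n:4d} elements: n! is about 2^{log2_combos:.0f}, "
          f"i.e. roughly {2 * log2_combos:.0f} years of doubling to keep up")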

The ultimate pathogen vector

March 31, 2014

If civilization succumbs to a deadly pandemic, we will all know what the vector was. Every physician, nurse, dentist, hygienist, and health care worker is bound to check their smartphone at some point before, during, or after seeing a patient, and they are not sterilizing it afterwards.  The fully hands-free smartphone could be the most important invention of the 21st century.

Optimizing food delivery

March 25, 2014

This Econtalk podcast with Frito-Lay executive Brendan O’Donohoe from 2011 gives a great account of how optimized the production and marketing system for potato chips and other salty snacks has become. The industry has a lot of very smart people trying to figure out how to ensure that you maximize food consumption, from how to peel potatoes to how to stack store shelves with bags of chips. This increased efficiency is our hypothesis (e.g. see here) for the obesity epidemic. However, unlike before, when I attributed the increase in food production to changes in agricultural policy, I now believe it is mostly due to the vastly increased efficiency of food production. This podcast shows the extent of the optimization after the produce leaves the farm, but the efficiency improvements on the farm are just as dramatic. For example, farmers now use GPS to optimally line up their crops.

Analytic continuation continued

March 9, 2014

As I promised in my previous post, here is a derivation of the analytic continuation of the Riemann zeta function to negative integer values. There are several ways of doing this but a particularly simple way is given by Graham Everest, Christian Rottger, and Tom Ward at this link. It starts with the observation that you can write

\int_1^\infty x^{-s} dx = \frac{1}{s-1}

if the real part of s is greater than 1. You can then break the integral into pieces with

\frac{1}{s-1}=\int_1^\infty x^{-s} dx =\sum_{n=1}^\infty\int_n^{n+1} x^{-s} dx

=\sum_{n=1}^\infty \int_0^1(n+x)^{-s} dx=\sum_{n=1}^\infty\int_0^1 \frac{1}{n^s}\left(1+\frac{x}{n}\right)^{-s} dx      (1)

For x\in [0,1], you can expand the integrand in a binomial expansion

\left(1+\frac{x}{n}\right)^{-s} = 1 -\frac{sx}{n}+sO\left(\frac{1}{n^2}\right)   (2)

Now substitute (2) into (1) to obtain

\frac{1}{s-1}=\zeta(s) -\frac{s}{2}\zeta(s+1) - sR(s)  (3)

or

\zeta(s) =\frac{1}{s-1}+\frac{s}{2}\zeta(s+1) +sR(s)   (3′)

where the remainder R is an analytic function for Re s > -1 because the resulting series is absolutely convergent. Since the zeta function is analytic for Re s > 1, the right hand side is a new definition of \zeta that is analytic for Re s > 0 aside from a simple pole at s=1. Now multiply (3) by s-1 and take the limit as s\rightarrow 1 to obtain

\lim_{s\rightarrow 1} (s-1)\zeta(s)=1

which implies that

\lim_{s\rightarrow 0} s\zeta(s+1)=1     (4)

Taking the limit s \rightarrow 0^+ in (3′) gives

\zeta(0^+)=-1+\frac{1}{2}=-\frac{1}{2}

Hence, the analytic continuation of the zeta function to zero is -1/2.
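These steps are easy to check numerically. Here is a quick sketch in Python using mpmath (with an arbitrary choice of s in the region of absolute convergence for equation (1), and mpmath's built-in analytically continued zeta for the value at 0):

from mpmath import mp, mpf, zeta, quad, nsum, inf

mp.dps = 15
s = mpf(3)   # any s with real part greater than 1

# Check equation (1): sum_n int_0^1 (n+x)^(-s) dx equals 1/(s-1)
lhs = nsum(lambda n: quad(lambda x: (n + x) ** (-s), [0, 1]), [1, inf])
print(lhs, 1 / (s - 1))

# The analytically continued zeta function at 0
print(zeta(0))   # -0.5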

The analytic domain of \zeta can be pushed further into the left half plane by extending the binomial expansion in (2) to

\left(1+\frac{x}{n}\right)^{-s} = \sum_{r=0}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\left(\frac{x}{n}\right)^r + (s+k)O\left(\frac{1}{n^{k+2}}\right)

 Inserting into (1) yields

\frac{1}{s-1}=\zeta(s)+\sum_{r=1}^{k+1} \left(\begin{array}{c} -s\\r\end{array}\right)\frac{1}{r+1}\zeta(r+s) + (s+k)R_{k+1}(s)

where R_{k+1}(s) is analytic for Re s>-(k+1).  Now let s\rightarrow -k^+ and extract out the last term of the sum with (4) to obtain

\frac{1}{-k-1}=\zeta(-k)+\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) - \frac{1}{(k+1)(k+2)}    (5)

Rearranging (5) gives

\zeta(-k)=-\sum_{r=1}^{k} \left(\begin{array}{c} k\\r\end{array}\right)\frac{1}{r+1}\zeta(r-k) -\frac{1}{k+2}     (6)

where I have used

\left( \begin{array}{c} -s\\r\end{array}\right) = (-1)^r \left(\begin{array}{c} s+r -1\\r\end{array}\right)

The right hand side of (6) involves \zeta only at arguments greater than -k, where it has already been defined, so (6) extends \zeta to -k.  Rewrite (6) as

\zeta(-k)=-\sum_{r=1}^{k} \frac{k!}{r!(k-r)!} \frac{\zeta(r-k)(k-r+1)}{(r+1)(k-r+1)}-\frac{1}{k+2}

=-\sum_{r=1}^{k} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2}

=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \frac{\zeta(r-k)(k-r+1)}{(k+1)(k+2)}-\frac{1}{k+2} - \frac{\zeta(0)}{k+1}

Collecting terms, substituting for \zeta(0) and multiplying by (k+1)(k+2)  gives

(k+1)(k+2)\zeta(-k)=-\sum_{r=1}^{k-1} \left(\begin{array}{c} k+2\\ k-r+1\end{array}\right) \zeta(r-k)(k-r+1) - \frac{k}{2}

Reindexing gives

(k+1)(k+2)\zeta(-k)=-\sum_{r'=2}^{k} \left(\begin{array}{c} k+2\\ r'\end{array}\right) \zeta(-r'+1)r'-\frac{k}{2}

Now, note that the Bernoulli numbers satisfy the condition \sum_{r=0}^{N-1} \left(\begin{array}{c} N\\ r\end{array}\right) B_r = 0 for N\ge 2.  Hence,  let \zeta(-r'+1)=-\frac{B_{r'}}{r'}

and obtain

(k+1)(k+2)\zeta(-k)=\sum_{r'=0}^{k+1} \left(\begin{array}{c} k+2\\ r'\end{array}\right) B_{r'}-B_0-(k+2)B_1-(k+2)B_{k+1}-\frac{k}{2}

which using B_0=1 and B_1=-1/2 gives the self-consistent condition

\zeta(-k)=-\frac{B_{k+1}}{k+1},

which is the analytic continuation of the zeta function for integers k\ge 1.
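As a sanity check, this formula can be compared against the built-in analytic continuation in mpmath (a quick sketch in Python):

from mpmath import mp, zeta, bernoulli

mp.dps = 20
# Compare zeta(-k) with -B_{k+1}/(k+1) for the first few positive integers k
for k in range(1, 8):
    print(k, zeta(-k), -bernoulli(k + 1) / (k + 1))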

Analytic continuation

February 21, 2014

I have received some skepticism about my claim that there are other ways of assigning a number to the sum of the natural numbers besides -1/12, so I will try to be more precise. I thought it would also be useful to derive the analytic continuation of the zeta function, which I will do in a future post.  I will first give a simpler example to motivate the notion of analytic continuation. Consider the geometric series 1+s+s^2+s^3+\dots. If |s| < 1 then we know that this series is equal to

\frac{1}{1-s}                (1)

Now, while the geometric series is only convergent, and thus analytic, inside the unit circle, (1) is defined everywhere in the complex plane except at s=1. So even though the sum doesn’t really exist outside of the domain of convergence, we can assign a number to it based on (1). For example, if we set s=2 we can make the assignment 1 + 2 + 4 + 8 + \dots = -1. So again, the sum of the powers of two doesn’t really equal -1; it is only (1) that is defined at s=2. It’s just that the geometric series and (1) are the same function inside the domain of convergence. Now, it is true that the analytic continuation of a function is unique. However, although -1 is the only value for the analytic continuation of the geometric series at s=2, that doesn’t mean that the sum of the powers of 2 needs to be uniquely assigned to negative one, because the sum of the powers of 2 is not an analytic function. So if you could find some other series that is a function of some parameter z, is analytic in some domain of convergence, and happens to look like the sum of the powers of two for some value of z, and you can analytically continue the series to that value, then you would have another assignment.

Now consider my example from the previous post. Consider the series

\sum_{n=1}^\infty \frac{n-1}{n^{s+1}}  (2)

This series is absolutely convergent for Re s>1.  Also note that if I set s=-1, I get

\sum_{n=1}^\infty (n-1) = 0 +\sum_{n'=1}^\infty n' = 1 + 2 + 3 + \dots

which is the sum of the natural numbers. Now, I can write (2) as

\sum_{n=1}^\infty\left( \frac{1}{n^s}-\frac{1}{n^{s+1}}\right)

and when the real part of s is greater than 1,  I can further write this as

\sum_{n=1}^\infty\frac{1}{n^s}-\sum_{n=1}^\infty\frac{1}{n^{s+1}}=\zeta(s)-\zeta(s+1)  (3)

All of these operations are perfectly fine as long as I’m in the domain of absolute convergence.  Now, as I will show in the next post, the analytic continuation of the zeta function to the negative integers is given by

\zeta (-k) = -\frac{B_{k+1}}{k+1}

where B_k are the Bernoulli numbers, which are given by the Taylor expansion of

\frac{x}{e^x-1} = \sum B_n \frac{x^n}{n!}   (4)

The first few Bernoulli numbers are B_0=1, B_1=-1/2, B_2 = 1/6. Thus, using B_2 in the formula for \zeta(-k) gives \zeta(-1)=-1/12. A similar proof will give \zeta(0)=-1/2.  Using these values in (3) at s=-1 then gives the desired result that the sum of the natural numbers is (also) -1/12+1/2 = 5/12.
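The bookkeeping is easy to check numerically against mpmath's analytically continued zeta function (a minimal sketch in Python):

from mpmath import zeta

# The series (2) written via (3): xi(s) = zeta(s) - zeta(s+1)
xi = lambda s: zeta(s) - zeta(s + 1)
print(zeta(-1))   # -1/12, the usual assignment for 1 + 2 + 3 + ...
print(xi(-1))     # -1/12 + 1/2 = 5/12, the alternative assignment via (2)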

Now, this is not to say that all assignments have the same physical relevance. I don’t know the details of how -1/12 is used in bosonic string theory, but it is likely that the zeta function is crucial to the calculation.

Nonuniqueness of -1/12

February 11, 2014

In the comments to my previous post, I was asked to give an example of how the sum of the natural numbers could be assigned another value, so I thought the answer may be of general interest to more people. Consider again S=1+2+3+4\dots to be the sum of the natural numbers.  The video in the previous post gives a simple proof by combining divergent sums. In essence, the manipulation is doing renormalization by subtracting away infinities, and the leftover of this renormalization is -1/12. There is another video that gives the proof through analytic continuation of the Riemann zeta function

\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}

The series is only convergent when the real part of s is greater than 1. However, you can use analytic continuation to assign values to the zeta function where the sum is divergent. What this means is that the zeta function is no longer the “same sum” per se, but a version of the sum taken to a domain where it was not originally defined yet smoothly (analytically) connected to the sum. Hence, the sum of the natural numbers is given by \zeta(-1), and \zeta(0)=\sum_{n=1}^\infty 1 is the infinite sum over ones. By analytic continuation, we obtain the values \zeta(-1)=-1/12 and \zeta(0)=-1/2.

Now notice that if I subtract the sum over ones from the sum over the natural numbers I still get the sum over the natural numbers, e.g.

1+2+3+4\dots - (1+1+1+1\dots)=0+1+2+3+4\dots.

Now, let me define a new function \xi(s)=\zeta(s)-\zeta(s+1) so \xi(-1) is the sum over the natural numbers and by analytic continuation \xi(-1)=-1/12+1/2=5/12 and thus the sum over the natural numbers is now 5/12. Again, if you try to do arithmetic with infinity, you can get almost anything. A fun exercise is to create some other examples.

The sum of the natural numbers is -1/12?

February 10, 2014

This wonderfully entertaining video giving a proof for why the sum of the natural numbers is -1/12 has been viewed over 1.5 million times. It just shows that there is a hunger for interesting and well-explained math and science content out there. Now, we all know that the sum of all the natural numbers is infinite, but the beauty (insidiousness) of divergent sums is that they can be assigned virtually any value. The proof for this particular assignment considers the subtraction of the divergent oscillating sum S_1=1-2+3-4+5 \dots from the divergent sum of the natural numbers S = 1 + 2 + 3+4+5\dots to obtain 4S.  Then by similar trickery it assigns S_1=1/4. Solving for S gives you the result S = -1/12.  Hence, what you are essentially doing is dividing infinity by infinity, and that, as any school child should know, can be anything you want. The most astounding thing to me about the video was learning that this assignment was used in string theory, which makes me wonder if the calculations would differ if I chose a different assignment.
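One standard way to make sense of the S_1=1/4 step is Abel summation: for |x|<1 the power series \sum (-1)^{n+1} n x^{n-1} equals 1/(1+x)^2, which tends to 1/4 as x\rightarrow 1. Here is a quick numerical sketch in Python, with an arbitrary truncation of the series:

# Abel-summation view of S_1 = 1 - 2 + 3 - 4 + ...
# For |x| < 1 the series sums to 1/(1+x)^2, which approaches 1/4 as x -> 1.
for x in (0.9, 0.99, 0.999):
    s = sum((-1) ** (n + 1) * n * x ** (n - 1) for n in range(1, 100001))
    print(x, s, 1 / (1 + x) ** 2)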

Addendum: Terence Tao has a nice blog post on evaluating such sums.  In a “smoothed” version of the sum, -1/12 appears as the constant term in the asymptotic expansion, whose leading term diverges. This constant matches the value given by the analytic continuation of the Riemann zeta function. Anyway, the -1/12 seems to be a natural way to assign a value to the divergent sum of the natural numbers.

(Lack of) Progress in neuroscience

January 27, 2014

Here is what I just posted to the epic thread on Connectionists:

The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you’ll find that what the early greats actually did and believed differs from the current understanding.  I think it’s safe to say that computational neuroscience has not reached that level of maturity.  Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before.

The big question is why this is the case. This is really a search problem. It could be true that one of the proposed approaches in this thread, or some other existing idea, is optimal, but the opportunity cost to follow it is great. How do we know it is the right one?  It is safer to just follow the path we already know. We simply don’t all believe enough in any one idea for all of us to pursue it.  It takes a massive commitment to learn any one thing, much less everything on John Weng’s list. I don’t know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology.  There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out.  But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold, it was practically a blink of an eye to get from Kepler to Maxwell.  Then again, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen.

Addendum:  Thanks to Labrigger for finding the link to the thread:

http://mailman.srv.cs.cmu.edu/pipermail/connectionists/2014-January/subject.html

Brain discussion

January 26, 2014

There is an epic discussion on the Connectionists mailing list right now.  It started on Jan 23, when Juyang (John) Weng of Michigan State University criticized an announcement of the upcoming Workshop on Brain-like Computing, arguing that the workshop is really about neuron-like computing and that there is a wide gap between that and brain-like computing.  Another thing he seemed peeved about was that people in the field are not fully conversant in the literature, which is true.  People in neuroscience can be largely unaware of what people in robotics and cognitive science are doing and vice versa. The discussion really became lively when Jim Bower jumped in. It’s hard to summarize the entire thread, but the main themes include the worthiness of the big data approach to neuroscience, the lack of progress over the past thirty years, and what we should be doing.

The non-cynical view

January 23, 2014

Read the 2014 Gates letter to get the non-cynical view of progress for the poor. I’m actually on the more hopeful side of this issue, surprisingly. The data (e.g. see Hans Rosling) clearly show that health is improving in developing nations. We may have also reached “peak child”, in that there are more children alive today than there will ever be. Extreme poverty is being dramatically reduced. Foreign aid does seem to work.  Here’s Bill:

By almost any measure, the world is better than it has ever been. People are living longer, healthier lives. Many nations that were aid recipients are now self-sufficient. You might think that such striking progress would be widely celebrated, but in fact, Melinda and I are struck by how many people think the world is getting worse. The belief that the world can’t solve extreme poverty and disease isn’t just mistaken. It is harmful. That’s why in this year’s letter we take apart some of the myths that slow down the work. The next time you hear these myths, we hope you will do the same.
– Bill Gates

 

The Stephanie Event

January 14, 2014

You should read this article in Esquire about the advent of personalized cancer treatment for a heroic patient named Stephanie Lee.  Here is Steve Hsu’s blog post. The cost of sequencing is almost at the point where everyone can have their normal and tumor cells completely sequenced to look for mutations, as Stephanie did. The team at Mt. Sinai Hospital in New York described in the article inserted some of the mutations into a fruit fly and then checked to see which drugs killed it. The Stephanie Event was the oncology board meeting at Sinai where the treatment for Stephanie Lee’s colon cancer, which had spread to the liver, was discussed. They decided on a standard protocol but would use the individualized therapy based on the fly experiments if the standard treatments failed.  The article was beautifully written, combining a compelling human story with science.

The myth of maladaptation

January 9, 2014

A fairly common presumption among biologists and adherents of paleo-diets is that since humans evolved on the African Savannah hundreds of thousands of years ago, we are not well adapted to the modern world. For example, Harvard biologist Daniel Lieberman has a new book out called “The Story of the Human Body” explaining through evolution why our bodies are the way they are. You can hear him speak about the book on Quirks and Quarks here. He talks about how our maladaptation to the modern world has led to widespread myopia, back pains, an obesity epidemic, and so forth.

This may all be true, but the irony is that, from an evolutionary point of view, we as a species have never been more fit and better adapted. In evolutionary theory, fitness is measured by the number of children or grandchildren we have. Thus, the faster a population grows, the more fit and hence adapted to its environment it is. Since our population is growing faster than it ever has (technically, we may have been fitter a few decades ago since our growth rate may actually be slowing slightly), we are the most fit we have ever been. In the developed world we certainly live longer and are healthier than we have ever been, even when you account for the steep decline in infant death rates. It is true that heart disease and cancer have increased substantially, but that is only because we (meaning the well-off in the developed world) no longer die young from infectious diseases, parasites, accidents, and violence.

One could claim that we are perfectly adapted to our modern world because we invented it. Those who live in warm houses are in better health than those sitting in a damp cave. Those who get food at the local supermarket live longer than hunter-gatherers.  The average life expectancy of a sedentary individual is not different from that of an active person. One reason we have an obesity epidemic is that obesity isn’t very effective at killing people. An overweight or even obese person can live a long life.  So even though I type this peering through corrective eyewear while nursing a sore back, I can confidently say I am better adapted to my environment than my ancestors were thousands of years ago.

The Bitcoin economy

December 24, 2013

The electronic currency Bitcoin has been in the news quite a bit lately since its value has risen from about $10 a year ago to over $650 today, hitting a peak of over $1000 less than a month ago. I remember hearing Gavin Andresen, a principal of the Bitcoin Virtual Currency Project (no single person or entity issues or governs Bitcoin), talk about Bitcoin on Econtalk two years ago and being astonished at how little he knew about basic economics, much less monetary policy. Paul Krugman criticized Bitcoin today in his New York Times column and Timothy Lee responded in the Washington Post.

The principle behind Bitcoin is actually quite simple. There is a master list, called the block chain, which is a cryptographically secured shared ledger in which all transactions are recorded. The system uses public-key cryptography: each Bitcoin owner has a private key, which they use to sign transactions that update the ledger, and anyone can verify those signatures with the corresponding public key. The community at large then validates the transactions in a computationally intensive process called mining. The rewards for this work are Bitcoins, which are issued to the first computer to complete the computation. The intensive computations are integral to the system because they make it difficult for attackers to falsify a transaction. As long as honest participants control more computing power than attackers, the attackers can never perform computations fast enough to falsify a transaction. The difficulty of the computations is also scaled so that new Bitcoins are only issued about every 10 minutes. Thus it does not matter how fast your computer is in absolute terms when mining Bitcoins, only that it is faster than everyone else’s. This article describes how people are creating businesses to mine Bitcoins.
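Here is a bare-bones sketch in Python of the proof-of-work idea behind mining (illustrative only; real Bitcoin mining double-hashes an 80-byte block header and uses a far harder, dynamically adjusted difficulty target): find a nonce whose hash of the block data falls below a target.

import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Toy proof of work: find a nonce such that SHA-256(block_data + nonce)
    has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# The higher difficulty_bits is, the longer the search takes on average,
# which is how issuance can be throttled to a roughly fixed rate.
print("found nonce:", mine(b"previous-hash|list-of-transactions"))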

Krugman’s post was about the ironic connection between Keynesian fiscal stimulus and gold. Although gold has some industrial use, it is highly valued mostly because it is rare and difficult to dig up. Keynes’s theory of recessions and depressions is that there is a sudden collapse in aggregate demand, so the economy operates below capacity, leading to excess unemployment. This situation was thought not to occur in classical economics because prices and wages should fall until equilibrium is restored and the economy operates at full capacity again. However, Keynes proposed that prices and wages are “sticky” and do not adjust very quickly. His solution was for the government to increase spending to take up the shortfall in demand and return the economy to full employment. He then jokingly proposed that the government could get the private sector to do the spending by burying money, which people would privately finance digging out. He also noted that this was not that different from gold mining. Keynes’s point was that instead of wasting all that effort, the government could simply print money and give it away or spend it. Krugman also points out that Adam Smith, often held up as a paragon of conservative principles, felt that paper money was much better for a smoothly running economy than tying up resources in useless gold and silver. The connection between gold and Bitcoins is unmissable. Both lack intrinsic value and are a terrible waste of resources. Lee feels that Krugman misunderstands Bitcoin: the intensive computations are integral to the functioning of the system, and more importantly, the main utility of Bitcoin is as a new form of payment network, which he feels is independent of monetary considerations.

Krugman and Lee have valid points but both are still slightly off the mark. I think we will definitely head towards some electronic monetary system in the future, but it certainly won’t be Bitcoin in its current form. However, Bitcoin, or at least something similar, will also remain. The main problem with Bitcoin, as with gold, is that its supply is constrained. The supply of Bitcoins is designed to cap out at 21 million, with about half in circulation now. What this means is that the Bitcoin economy is subject to deflation: as the economy grows and costs fall, the price of goods denominated in Bitcoins must also fall. Andresen shockingly didn’t understand this important fact in the Econtalk podcast. The value of Bitcoins will always increase. Deflation is bad for economic growth because it encourages people to delay purchases and hoard Bitcoins. Of course, if you don’t believe in economic growth, then Bitcoins might be a good thing. Ideally, you want money to be neutral, so the supply should grow along with the economy. This is why central banks target inflation around 2%. Hence, Bitcoin as it is currently designed will certainly fail as a currency and payment system, but it would not take too much effort to fix its flaws. It may simply play the role of the search engine Altavista to the eventual winner Google.

However, I think Bitcoin in its full inefficient glory and things like it will only proliferate. In our current era of high unemployment and slow growth, Bitcoin is serving as a small economic stimulus. As we get more economically efficient, fewer of us will be required for any particular sector of the economy. The only possible way to maintain full employment is to constantly invent new economic sectors. Bitcoin is economically useful because it is so completely useless.

Symplectic Integrators

December 12, 2013

Dynamical systems can be divided into two basic types: conservative and dissipative.  In biology, we almost always model dissipative systems, so if we want to simulate the system computationally, almost any numerical solver will do the job (unless the problem is stiff, which I’ll leave to another post). However, when simulating a conservative system, we must take care to preserve the conserved quantities. Here, I will give a very elementary review of symplectic integrators for numerically solving conservative systems.
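As a quick preview of why this matters, here is a minimal sketch in Python (my own toy example, not the derivation below the fold) comparing explicit Euler with the symplectic, semi-implicit Euler scheme on a harmonic oscillator. The explicit scheme pumps energy into the system while the symplectic one keeps the energy bounded.

# Harmonic oscillator H = (p^2 + q^2)/2: explicit Euler versus symplectic
# (semi-implicit) Euler. Step size and number of steps are arbitrary.
dt, steps = 0.1, 1000
q_e, p_e = 1.0, 0.0   # explicit Euler state
q_s, p_s = 1.0, 0.0   # symplectic Euler state

for _ in range(steps):
    # explicit Euler: both updates use the old state
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e
    # symplectic Euler: update p first, then use the new p to update q
    p_s = p_s - dt * q_s
    q_s = q_s + dt * p_s

energy = lambda q, p: 0.5 * (q * q + p * p)
print("explicit Euler energy:  ", energy(q_e, p_e))   # grows without bound
print("symplectic Euler energy:", energy(q_s, p_s))   # stays near 0.5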

Read the rest of this entry »

