It has been a rather tragic week around the world. Here is the incomparable Herbert von Karajan conducting the Vienna Philharmonic Orchestra and Vienna Singverein in a 1986 rendition of Mozart’s Requiem.
Kevin Hall’s long-awaited paper on what I dubbed “the land sub” experiment, where subjects were sequestered for two months, is finally in print (see here). This was the study funded by Gary Taubes’s organization NuSI. The idea was to do a fully controlled study comparing a low-carb diet to a standard high-carb diet to test the hypothesis that high carbs lead to weight gain through increased insulin. See here for a summary of the hypothesis. The experiment showed very little effect and refutes the carbohydrate-insulin model of weight gain. Kevin was so frustrated with dealing with NuSI that he opted out of any follow-up study. Taubes did not support the conclusions of the paper and claimed that the diet used (which NuSI approved) wasn’t high enough in carbs. This essentially posits that the carb effect is purely nonlinear – it only shows up if you are just eating white bread and rice all day. Even if this were true, it would still mean that carbs could not explain the increase in average body weight over the past three decades, since there is a wide range of carb consumption across the general population. It is not as if only the super carb lovers were getting obese. There were some weird effects that warrant further study. One is that study participants seemed to burn 500 more Calories outside of a metabolic chamber than inside it. This was why the participants lost weight on the lead-in stabilizing diet. These missing Calories far swamped any effect of macronutrient composition.
I was asked by the newly formatted SIAM news website to write occasional pieces. My first post on AlphaGo just went up. Read it here.
Elon Musk, of SpaceX, Tesla, and SolarCity fame, recently mentioned that he thought the odds of us not living in a simulation were a billion to one. His reasoning was based on extrapolating the rate of improvement in video games. He suggests that soon it will be impossible to distinguish simulations from reality and that in ten thousand years there could easily be billions of simulations running. Thus there would be a billion times more simulated universes than real ones.
This simulation argument was first quantitatively formulated by philosopher Nick Bostrom. He even has an entire website devoted to the topic (see here). In his original paper, he proposed a Drake-like equation for the fraction of all “humans” living in a simulation:

$$f_{sim} = \frac{f_p f_i N}{f_p f_i N + 1}$$
where $f_p$ is the fraction of human-level civilizations that attain the capability to simulate a human-populated civilization, $f_i$ is the fraction of these civilizations interested in running civilization simulations, and $N$ is the average number of simulations running in these interested civilizations. He then argues that if $N$ is large, then either $f_{sim} \approx 1$ or $f_p f_i \approx 0$. Musk believes that it is highly likely that $N$ is large and that $f_p f_i$ is not small so, ergo, we must be in a simulation. Bostrom says his gut feeling is that $f_{sim}$ is around 20%. Steve Hsu mocks the idea (I think). Here, I will show that we have absolutely no way to estimate our probability of being in a simulation.
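As a quick numerical aside (the function name and the sample values of $f_p$, $f_i$, and $N$ below are mine, chosen purely for illustration), the formula saturates: once $f_p f_i N$ is large, $f_{sim}$ is pinned near one, and it only stays small if $f_p f_i$ is vanishingly small.

```python
# Toy illustration of the Drake-like formula f_sim = f_p*f_i*N / (f_p*f_i*N + 1).
# All parameter values below are arbitrary and only for illustration.

def f_sim(f_p, f_i, N):
    """Fraction of 'humans' living in a simulation, per the formula above."""
    x = f_p * f_i * N
    return x / (x + 1.0)

if __name__ == "__main__":
    for N in (1, 1e3, 1e9):
        print(f"f_p=0.1, f_i=0.1, N={N:g}: f_sim = {f_sim(0.1, 0.1, N):.6f}")
    # Only a vanishing f_p*f_i keeps f_sim small when N is huge:
    print(f"f_p=1e-12, f_i=0.1, N=1e9: f_sim = {f_sim(1e-12, 0.1, 1e9):.6f}")
```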
The reason is that Bostrom’s equation obscures the possibility of two possibly divergent quantities. This is more clearly seen by rewriting his equation as

$$f_{sim} = \frac{y}{x + y}$$
where $x$ is the number of non-sim civilizations and $y$ is the number of sim civilizations. (Re-labeling $x$ and $y$ as people or universes does not change the argument.) Bostrom and Musk’s observation is that once a civilization attains simulation capability, the number of sims can grow exponentially (people in sims can run sims and so forth) and thus $y$ can overwhelm $x$ and, ergo, you’re in a simulation. However, this is only true in a world where $x$ is not growing or is growing slowly. If $x$ is also growing exponentially then we can’t say anything at all about the ratio of $y$ to $x$.
I can give a simple example. Consider the following dynamics

$$\frac{dx}{dt} = a x, \qquad \frac{dy}{dt} = b x + c y$$
$y$ is being created by $x$ but both are growing exponentially. The interesting property of exponentials is that a solution to these equations for $a > c$ is

$$x(t) = e^{a t}, \qquad y(t) = \frac{b}{a - c}\, e^{a t}$$
where I have chosen convenient initial conditions that don’t affect the results. Even though $y$ is growing exponentially on top of an exponential process, the growth rates of $x$ and $y$ are the same. The probability of being in a simulation is then

$$\frac{y}{x + y} = \frac{b}{a - c + b}$$
and we have no way of knowing what this is. The analogy is a mother goose laying eggs, each of which hatches into a daughter that lays eggs of her own, and so on. It would seem that there would be more eggs from the collective progeny than from the original mother. However, if the rate of egg laying by the original mother goose is increasing exponentially, then the number of mother eggs can grow as fast as the number of daughter, granddaughter, great…, eggs. This is just another example of how thinking quantitatively can give interesting (and sometimes counterintuitive) results. Until we have a better idea about the physics underlying our universe, we can say nothing about our odds of being in a simulation.
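For anyone who wants to check this numerically, here is a minimal sketch (the rate constants are arbitrary values of mine) that integrates the two equations above with a simple Euler step and confirms that $y/(x+y)$ settles at $b/(a-c+b)$ instead of creeping toward one:

```python
# Integrate dx/dt = a*x, dy/dt = b*x + c*y with a crude Euler step and track
# the "probability of being in a simulation" y/(x+y). The rate constants are
# arbitrary; the point is only that the ratio converges to b/(a-c+b).

a, b, c = 1.0, 0.5, 0.2   # requires a > c, as in the text
x, y = 1.0, 0.0           # start with one non-sim civilization and no sims
dt, T = 1e-4, 30.0

t = 0.0
while t < T:
    dx = a * x
    dy = b * x + c * y
    x += dx * dt
    y += dy * dt
    t += dt

print("numerical y/(x+y)   =", y / (x + y))
print("predicted b/(a-c+b) =", b / (a - c + b))
```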
Addendum: One of the predictions of this simple model is that there should be lots of pre-sim universes. I have always found it interesting that the age of the universe is only about three times that of the earth. Given that the expansion rate of the universe is actually increasing, the lifetime of the universe is likely to be much longer than the current age. So, why is it that we are alive at such an early stage of our universe? Well, one reason may be that the rate of universe creation is very high and so the probability of being in a young universe is higher than being in an old one.
Addendum 2: I only gave a specific solution to the differential equation. The full solution has the form $y(t) = \frac{b}{a - c} e^{a t} + B e^{c t}$. However, as long as $a > c$, the first term will dominate.
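For completeness, the algebra behind that full solution is just standard linear-ODE bookkeeping (spelled out here; it is not in the original addendum):

```latex
% With x(t) = e^{a t} fixed, solve dy/dt = b x + c y.
% Try a particular solution y_p = K e^{a t}:
%   a K = b + c K   =>   K = b/(a - c).
% Add the homogeneous solution B e^{c t} of dy/dt = c y:
\[
  y(t) = \frac{b}{a - c}\, e^{a t} + B\, e^{c t},
\]
% so for a > c the first term dominates, y/x \to b/(a-c),
% and y/(x+y) \to b/(a - c + b).
```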
Addendum 3: I realized that I didn’t make it clear that the civilizations don’t need to be in the same universe. Multiverses with different parameters are predicted by string theory. Thus, even if there is less than one civilization per universe, universes could be created at an exponentially increasing rate.
I’ve taken a short break from posting recently but I plan to restart soon.
The third movement of Felix Mendelssohn’s violin concerto played by Swedish prodigy Daniel Lozakovitj at age 10 with the Tchaikovsky Symphony Orchestra at the Tchaikovsky Concert Hall in 2011.
Here is the version by international superstar and former violin prodigy Sarah Chang with Kurt Masur and the New York Philharmonic in 1995 when she was about 15.
I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.
I think Strawson’s argument is mostly sound but he misses the crucial open question of consciousness. It is true that we don’t have an understanding of the true essence of matter and we probably never will, but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, and computers, and to launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness and, more importantly, we do not know whether these rules are sufficient to understand consciousness or whether we need a different set of rules, or a different reality, or whatever. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Even without taking into account the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equation for macroscopic fluid flow, or how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know and are not even close to knowing. This is the hard problem of consciousness.
The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.
Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs, and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se but they can be implemented on a von Neumann computer. We may not know what the brain is doing but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable, although, as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and thus would entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.
Secondly, the brain processes information by definition. Information, in both the Shannon and Fisher senses, is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information: where and when. Before you received that information your uncertainty was huge, since there were so many possible places and times the meeting could take place. After receiving the information your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain’s job, at least for cognition, is about uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
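To make the uncertainty-reduction idea concrete, here is a toy calculation (the counts of possible days, time slots, and cafes are invented for the example): learning just the day already buys you a few bits.

```python
from math import log2

# Toy model of the coffee-meeting example: suppose the meeting could be on any
# of 7 days, in any of 24 hour-long slots, at any of 20 cafes, all equally
# likely. These counts are made up purely to make the arithmetic concrete.
days, slots, cafes = 7, 24, 20

before = log2(days * slots * cafes)   # uncertainty (entropy, in bits) before any message
after_day = log2(slots * cafes)       # uncertainty once you know it is Thursday
after_day_time = log2(cafes)          # uncertainty once you know the day and time

print(f"initial uncertainty:         {before:.2f} bits")
print(f"after learning the day:      {after_day:.2f} bits (gained {before - after_day:.2f} bits)")
print(f"after learning day and time: {after_day_time:.2f} bits")
```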