The Drake equation and the Cambrian explosion

This summer billionaire Yuri Milner announced that he would spend upwards of 100 million dollars to search for extraterrestrial intelligent life (here is the New York Times article). This quest to see if we have company started about fifty years ago when Frank Drake pointed a radio telescope at some stars. To help estimate the number of possible civilizations, N, Drake wrote down his celebrated equation,

N = R_*f_p n_e f_l f_i f_c L

where R_* is the rate of star formation, f_p is the fraction of stars with planets, n_e is the average number of planets per star that could support life, f_l is the fraction of those planets that develop life, f_i is the fraction of those that develop intelligent life, f_c is the fraction of civilizations that emit signals, and L is the length of time civilizations emit signals.
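To see how the factors multiply together, here is a minimal sketch in Python. Every numerical value below is an illustrative assumption, not a measurement; the true values of the biological and sociological factors are, as argued below, completely unknown.

```python
# Illustrative (assumed) values for the Drake equation factors
R_star = 1.5   # rate of star formation, stars per year (assumed)
f_p = 0.9      # fraction of stars with planets (assumed)
n_e = 0.5      # habitable planets per star with planets (assumed)
f_l = 0.1      # fraction of habitable planets that develop life (pure guess)
f_i = 0.01     # fraction of those that develop intelligent life (pure guess)
f_c = 0.1      # fraction of civilizations that emit signals (pure guess)
L = 10_000     # years a civilization emits signals (pure guess)

# N = R_* f_p n_e f_l f_i f_c L
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # number of communicating civilizations under these guesses
```

The point is less the particular answer than how quickly the product shrinks as the guessed fractions get small.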

The past few years have demonstrated that planets in the galaxy are likely to be plentiful, and although the technology to locate earth-like planets does not yet exist, my guess is that they will also be plentiful. So does that mean that it is just a matter of time before we find ET? I’m going to go on record here and say no. My guess is that life is rare and intelligent life may be so rare that there could be only one civilization at a time in any given galaxy.

While we are now filling in numbers for the astronomical factors of Drake’s equation, we have absolutely no idea about the biological and sociological ones. However, I have good reason to believe that their product is astronomically small, and that reason is statistical independence. Although Drake factored the probability of intelligent life into the probability of life forming times the probability that it goes on to develop extra-planetary communication capability, there are actually a lot of factors in between. One striking example is the probability of the formation of multicellular life. For the better part of three and a half billion years of earth’s history we had mostly single-celled life and maybe a smattering of multicellular experiments. Then, about half a billion years ago, came the Cambrian Explosion, when the multicellular animal life from which we are descended suddenly arrived on the scene. This implies that forming multicellular life is extremely difficult, and it is easy to envision an earth where it never formed at all.

We can continue. If it weren’t for an asteroid impact, the dinosaurs may never have gone extinct and mammals may not have developed. Even more recently, there seem to have been many species of advanced primates, yet only one invented radios. Agriculture developed only ten thousand years ago, which means that modern humans took about a hundred thousand years to discover it, and only in one place. I think it is equally plausible that humans could have gone extinct like all of our Australopithecus and Homo cousins. Life in the sea has existed much longer than life on land, and no technologically advanced sea creature has appeared, although I do think octopuses, dolphins and whales are intelligent.

We have around 100 billion stars in the galaxy, and let’s just say that each has a habitable planet. If the probability of each stage of life is one in a billion and we need, say, three stages to attain technology, then the probability of finding ET is one in 10^{16}. I would say that this is an optimistic estimate. Probabilities get small really quickly when you multiply them together. The probability of single-celled life forming will be much higher. It is possible that there could be a hundred planets in our galaxy that have life, but the chance that one of those is within a hundred light years will again be very low. However, I do think it is a worthwhile exercise to look for extraterrestrial life, especially for oxygen or other gases emitted by life in the atmospheres of exoplanets. It could tell us a lot about biology on earth.
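The arithmetic behind that estimate fits in a couple of lines; the one-in-a-billion stage probability is of course a pure assumption used for illustration.

```python
stars = 1e11     # roughly 100 billion stars, each assumed to have a habitable planet
p_stage = 1e-9   # assumed probability of life passing any one stage
stages = 3       # assumed number of hard stages needed to attain technology

expected = stars * p_stage ** stages
print(expected)  # about 1e-16, i.e. odds of one in 10**16
```

Three independent one-in-a-billion hurdles give 10^{-27}, and even 10^{11} candidate planets cannot rescue a number that small.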

2015-10-1: I corrected a factor of 10 error in some of the numbers.

The philosophy of Thomas the Tank Engine

My toddler loves to watch the television show Thomas and Friends, based on The Railway Series books by the Rev. Wilbert Awdry. The show tells the story of sentient trains on Sodor, a mythical island off the British coast. Each episode is a morality play in which one of the trains causes some problem because of a character flaw like arrogance or vanity that eventually comes to the attention of the avuncular head of the railroad, Sir Topham Hatt (called The Fat Controller in the UK). He mildly chastises the train, who becomes aware of his foolishness (it’s almost always a he) and remedies the situation.

While I think the show has some educational value for small children, it also brings up some interesting ethical and metaphysical questions that could be very relevant for our near future. For one, although the trains are sentient and seem to have full control over their actions, some of them also have human drivers. What are these drivers doing? Are they simply observers or are they complicit in the ill-judged actions of the trains? Should they be held responsible for the mistakes of the train? Who has true control, the driver or the train? Can one over-ride the other? These questions will be on everyone’s minds when the first self-driving cars hit the mass market in a few years.

An even more relevant ethical dilemma regards the place the trains have in society. Are they employees or indentured servants of the railroad company? Are they free to leave the railroad if they want? Do they own possessions? When the trains break down they are taken to the steam works, which is run by a train named Victor. However, humans effect the repairs. Do they take orders from Victor? Presumably, the humans get paid and are free to change jobs so is this a situation where free beings are supervised by slaves?

The highest praise a train can receive from Sir Topham Hatt is that he or she was “very useful.” This is not something one would say to a human employee in a modern corporation. You might say someone was very helpful or that an action was very useful, but it sounds dehumanizing to say “you are useful.” Thus Sir Topham Hatt, at least, does not seem to consider the trains to be human. Perhaps he considers them to be more like domesticated animals. However, these are animals that clearly have aspirations, goals, and feelings of self-worth. It seems to me that they should be afforded the full rights of any other citizen of Sodor. As machines become more and more integrated into our lives, it may well be useful to probe the philosophical quandaries of Thomas and Friends.

Sebastian Seung and the Connectome

The New York Times Magazine has a nice profile of theoretical neuroscientist Sebastian Seung this week. I’ve known Sebastian since we were graduate students in Boston in the 1980s. We were both physicists then and both ended up in biology, though through completely different paths. The article focuses on his quest to map all the connections in the brain, which he terms the connectome. Near the end of the article, neuroscientist Eve Marder of Brandeis comments on the endeavor with the pithy remark that “If we want to understand the brain, the connectome is absolutely necessary and completely insufficient.” The article then ends with

Seung agrees but has never seen that as an argument for abandoning the enterprise. Science progresses when its practitioners find answers — this is the way of glory — but also when they make something that future generations rely on, even if they take it for granted. That, for Seung, would be more than good enough. “Necessary,” he said, “is still a pretty strong word, right?”

Personally, I am not sure if the connectome is necessary or sufficient, although I do believe it is a worthy task. However, my hesitation is not for the reason proposed in the article, which is that we exist in a fluid world while the connectome is static. Rather, like Sebastian, I do believe that memories are stored in the connectome and that “your” connectome captures much of the essence of “you”. Many years ago, the CPU on my computer died. Our IT person swapped out the CPU, and when I turned my computer back on it was as if nothing had happened. This made me realize that everything about the computer that was important to me was stored on the hard drive. The CPU didn’t matter, even though everything the computer did relied on it. I think the connectome is like the hard drive, and trying to figure out how the brain works from it is like trying to reverse engineer the CPU from the hard drive. You can certainly get clues from it, such as the fact that information is stored in binary form, but recreating an entire hard drive is neither clearly necessary nor clearly sufficient for figuring out how a computer works. Likewise, someday we may use the connectome to recover lost memories or treat some diseases, but we may not need it to understand how a brain works.

Linear and nonlinear thinking

A linear system is one where the whole is precisely the sum of its parts. You can know how different parts will act together by simply knowing how they act in isolation. A nonlinear system lacks this nice property. Consider a linear function f(x). It satisfies the property that f(\alpha x + \beta y) = \alpha f(x) + \beta f(y) for any constants \alpha and \beta. The function of the sum is the sum of the functions. One important point to note is that what is often considered the paragon of linearity, namely a line on a graph, i.e. f(x) = mx + b, is not linear, since f(x + y) = m (x + y) + b \ne f(x) + f(y). The y-intercept b destroys the linearity of the line. A line is instead affine, which is to say a linear function shifted by a constant. A linear differential equation has the form

\frac{dx}{dt} = M x

where x can be a vector of any dimension and M a matrix. Solutions of a linear differential equation can be multiplied by any constant and added together to produce new solutions.
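Both claims above can be checked numerically. Here is a minimal sketch; the particular slope, intercept, matrix M, and step count are arbitrary choices for illustration. The affine line fails the superposition test, while any linear combination of solutions of dx/dt = Mx is again a solution.

```python
m, b = 2.0, 3.0

def line(x):
    # f(x) = m*x + b: affine, not linear, because of the intercept b
    return m * x + b

# The intercept breaks superposition: f(x + y) != f(x) + f(y)
assert line(1.0 + 4.0) != line(1.0) + line(4.0)

# Euler integration of dx/dt = M x in two dimensions
M = [[0.0, 1.0],
     [-1.0, 0.0]]  # an arbitrary matrix chosen for illustration

def solve(x0, t=1.0, n=10_000):
    x, dt = list(x0), t / n
    for _ in range(n):
        x = [x[0] + dt * (M[0][0] * x[0] + M[0][1] * x[1]),
             x[1] + dt * (M[1][0] * x[0] + M[1][1] * x[1])]
    return x

a, c = 2.0, -3.0
xa, xb = solve([1.0, 0.0]), solve([0.0, 1.0])
xc = solve([a, c])  # initial condition a*[1,0] + c*[0,1]

# Superposition: the solution for the combined initial condition
# equals the same combination of the individual solutions
for i in range(2):
    assert abs(xc[i] - (a * xa[i] + c * xb[i])) < 1e-9
```

Because every Euler step is itself linear in x, superposition holds at every step, which is exactly what makes linear systems predictable by combining known pieces.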

Linearity is thus essential for engineering. If you are designing a bridge then you simply add as many struts as you need to support the predicted load. Electronic circuit design is also linear in the sense that you combine as many logic circuits as you need to achieve your end. Imagine if bridge mechanics were completely nonlinear so that you had no way to predict how a bunch of struts would behave when assembled together. You would then have to test each combination to see how they work. Now, real bridges are not entirely linear but the deviations from pure linearity are mild enough that you can make predictions or have rules of thumb of what will work and what will not.

Chemistry is an example of a system that is highly nonlinear. You can’t know how a compound will act just from the properties of its components. For example, you can’t simply mix glass and steel together to get a strong, hard, transparent material. You need to be clever to come up with something like the Gorilla Glass used in iPhones. This is why engineering new drugs is so hard. Although organic chemistry is quite sophisticated in its ability to synthesize various compounds, there is no systematic way to generate molecules of a given shape or potency. We really don’t know how molecules will behave until we create them. Hence, what is usually done in drug discovery is to screen a large number of molecules against specific targets and hope. I was at a computer-aided drug design Gordon Conference a few years ago, and you could cut the despair and angst with a knife.

That is not to say that engineering is completely hopeless for nonlinear systems. Most nonlinear systems act linearly if you perturb them gently enough. That is why linear regression is so useful and prevalent. Hence, even though the global climate is a highly nonlinear system, it probably acts close to linearly for small changes. Thus I feel confident that we can predict the increase in temperature for a 5% or 10% change in the concentration of greenhouse gases, but I am much less confident about what will happen if we double or treble them. How linearly a system acts depends on how close it is to a critical or bifurcation point. If the climate is very far from a bifurcation then it could act linearly over a large range, but if we’re near a bifurcation then who knows what will happen if we cross it.
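As a toy illustration of this local linearity (the function and operating point here are arbitrary stand-ins, not a climate model), a smooth nonlinear function is well approximated by its tangent line for small perturbations, with an error that shrinks quadratically as the perturbation shrinks.

```python
import math

def f(x):
    # an arbitrary smooth nonlinear function standing in for a real system
    return math.sin(x)

x0 = 1.0                # operating point (arbitrary)
slope = math.cos(x0)    # derivative of f at the operating point

for eps in (1e-1, 1e-2, 1e-3):
    actual = f(x0 + eps) - f(x0)
    predicted = slope * eps  # the local linear (tangent-line) prediction
    # the error shrinks like eps**2, so small perturbations act linearly
    assert abs(actual - predicted) < eps ** 2
```

Shrinking the perturbation by a factor of ten shrinks the linearization error by a factor of a hundred, which is why gentle perturbations of nonlinear systems are predictable while large ones may not be.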

I think biology is an example of a nonlinear system with a wide linear range. Recent research has found that many complex traits and diseases like height and type 2 diabetes depend on a large number of linearly acting genes (see here). Their genetic effects are additive. Any nonlinear interactions they have with other genes (i.e. epistasis) are tiny. That is not to say that there are no nonlinear interactions between genes. It only suggests that common variations are mostly linear. This makes sense from an engineering and evolutionary perspective. It is hard to do either in a highly nonlinear regime. You need some predictability if you make a small change. If changing an allele had completely different effects depending on what other genes were present then natural selection would be hard pressed to act on it.

However, you also can’t have a perfectly linear system, because then you can’t make complex things. An exclusive-OR logic circuit cannot be constructed without a threshold nonlinearity. Hence, biology and engineering must involve “the linear combination of nonlinear gadgets”. A bridge is the linear combination of highly nonlinear steel struts and cables. A computer is the linear combination of nonlinear logic gates. This occurs at all scales as well. In biology, you have nonlinear molecules forming a linear genetic code. Two nonlinear mitochondria may combine mostly linearly in a cell, and two liver cells may combine mostly linearly in a liver. This effective linearity is why organisms can span a wide range of scales. A mouse liver is thousands of times smaller than a human one, but their functions are mostly the same. You also don’t need very many nonlinear gadgets to produce extreme complexity. The genes between organisms can be mostly conserved while the phenotypes are widely divergent.

The morality of watching (American) football

The question in this week’s New York Times Ethicist column is whether it is wrong to watch football because of the inherent dangers to the players. The ethicist, Chuck Klosterman, says that it is ethical to watch football because the players freely made the decision to play with full knowledge of the risks. Although I think Klosterman has a valid point, and I do not judge anyone who enjoys football, I have personally decided to forgo watching it. I simply could no longer stomach seeing player after player go down with serious injuries each week. Klosterman goes on to say that even if football were the players’ only livelihood, we should still watch so that they could have one. This is where I disagree. Aside from the fact that we shouldn’t have a society where the only chance at a decent livelihood is through sports, football need not be that sport. If football did not exist, some other sport, including a modified, safer football, would take its place. Soccer is the most popular sport in the rest of the world. Football exists in its current form because the fans support it. If that support moved to another sport, the players would move too.

What is the difference between math, science and philosophy?

I’ve been listening to the Philosophy Bites podcast recently. One episode from a few years ago consisted of answers from philosophers to a question posed on the spot and without time for deep reflection: What is philosophy? Some managed to give precise answers, but many struggled. I think one source of conflict they faced as they answered was that they didn’t know how to separate the question of what philosophers actually do from what they should be doing. However, I think that a clear distinction between science, math and philosophy as methodologies can be specified precisely. I also think that this is important, because practitioners in each subject should be aware of what methodology they are actually using and what is appropriate for whatever problem they are working on.

Here are my definitions: Math explores the consequences of rules or assumptions, science is the empirical study of measurable things, and philosophy examines things that cannot be resolved by mathematics or empiricism. With these definitions, practitioners of any discipline may use math, science, or philosophy to help answer whatever question they are addressing. Scientists need mathematics to work out the consequences of their assumptions and philosophy to help delineate phenomena. Mathematicians need science and philosophy to provide assumptions or rules to analyze. Philosophers need mathematics to sort out arguments and science to test hypotheses experimentally.

Those skeptical of philosophy may suggest that anything that cannot be addressed by math or science has no practical value. However, with these definitions, even the most hardened mathematician or scientist may be practicing philosophy without even knowing it. Atheists like Richard Dawkins should realize that part of their position is based on philosophy and not science. The only truly logical position to take with respect to God is agnosticism. It may be probable that there is not a God that intervenes directly in our lives and that probability may be high but it is not a provable fact. To be an atheist is to put some cutoff on the posterior probability for the existence of God and that cutoff is based on philosophy not science.

While most scientists and mathematicians are cognizant that moral issues may be pertinent to their work (e.g. animal experimentation), they may be less cognizant of what I believe is an equally important philosophical issue, which is the ontological question. Ontology is a philosophical term for the study of what exists. To many pragmatically minded people, this may sound like an ethereal topic that has no place in the hard sciences. However, as I pointed out in an earlier post, we can put labels on at most a countably infinite number of things out of an uncountable number of possibilities, and for most purposes our ontological list of things is finite. We thus have to choose, and although some of these choices are guided by how we as human agents interact with the world, others will be arbitrary. Determining an ontology will involve aspects of philosophy, science and math.

Mathematicians face the ontological problem daily when they decide which areas to work in and which theorems to prove. The possibilities in mathematics are infinite, so it is almost certain that if we were to rerun history some, if not many, fields would never be reinvented. While scientists may have fewer degrees of freedom to choose from, they are also making choices, and those choices tend to be confined by history. The ontological problem shows up any time we try to define a phenomenon. The classification of cognitive disorders is a pure exercise in ontology. The authors of the DSM-IV have attempted to be as empirical and objective as possible, but there is still plenty of philosophy in their designations of psychiatric conditions. While most string theorists accept that their discipline is mostly mathematical, they should also realize that it is very philosophical. A theory of everything includes its ontology by definition.

Subjects traditionally within the realm of philosophy also have mathematical and scientific aspects. Our morals and values have certainly been shaped by evolution and biological constraints. We should completely rethink our legal philosophy based on what we now know about neuroscience (e.g. see here). The same goes for any discussion of consciousness, the mind-body problem, and free will. To me the real problem with free will isn’t whether or not it exists but rather who or what exactly is exercising that free will and this can be looked at empirically.

So next time when you sit down to solve a problem, think about whether it is one of mathematics, science or philosophy.

The blinking-dot paradox of consciousness

Suppose you could measure the activity of every neuron in the brain of an awake and behaving person, including all sensory and motor neurons. You could then represent the firing pattern of these neurons on a screen with a hundred billion pixels (or as many as needed). Each pixel would be identified with a neuron and the activity of the brain would be represented by blinking dots of light. The question then is whether or not the array of blinking dots is conscious (provided the original person was conscious). If you believe that everything about consciousness is represented by neuronal spikes, then you would be forced to answer yes. On the other hand, you must then acknowledge that a television screen simply outputting entries from a table is also conscious.

There are several layers to this possible paradox. The first is whether or not all the information required to fully decode the brain and emulate consciousness is contained in the spiking patterns of its neurons. It could be that you need the information contained in all the physical processes of the brain, such as the movement of ions and water molecules, conformational changes of ion channels, receptor trafficking, blood flow, glial cells, and so forth. The question is then what resolution is required. If there is some short-distance cut-off, so that you could discretize the events, then you could always construct a bigger screen with trillions of trillions of pixels and be faced with the same question. But suppose that there is no cut-off, so that you need an uncountable amount of information. Then consciousness would not be a computable phenomenon and there would be no hope of ever understanding it. Also, at a small enough scale (the Planck length) you would be forced to include quantum gravity effects as well, in which case Roger Penrose may have been on to something after all.

The second issue is whether or not there is a difference between a neural computation and reading from a table. Presumably, the spiking events in the brain are due to the extremely complex dynamics of synaptically coupled neurons in the presence of environmental inputs. Is there something intrinsically different between a numerical simulation of a brain model and reading the entries of a list? Would one exhibit consciousness while the other would not? To make matters even more confusing, suppose you have a computer running a simulation of a brain. The firing of the neurons is now encoded by the states of various electronic components like transistors. Does this mean that the circuits in the computer become conscious when the simulation is running? What if the computer were simultaneously running other programs, like a web browser, or even another brain simulation? In a computer, the execution of a program is not tied to specific electronic components. Transistors just change states as instructions arrive, so when a computer is running multiple programs, the transistors simulating the brain are not conserved. How then do they stay coherent enough to form a conscious perception? In a normal computer operation, the results are fed to an output, which is then interpreted by us. In a simulation of the brain, there is no output; there is just the simulation. Questions like these make me question my once unwavering faith in the monistic (i.e. not dualistic) theory of the brain.