# Forming a consistent political view

In view of the current US presidential election, I think it would be a useful exercise to see if I could form a rational political view that is consistent with what I actually know and believe from my training as a scientist. From my knowledge of dynamical systems and physics, I believe in the inherent unpredictability of complex nonlinear systems. Uncertainty is a fundamental property of the universe at all scales. From neuroscience, I know that people are susceptible to errors, do not always make optimal choices, and are inconsistent. People are motivated by whatever triggers dopamine to be released. From genetics, I know that many traits are highly heritable, including height, BMI, IQ, and the Big Five personality traits. There is a lot of human variance. People are motivated by different things, have various aptitudes, and have various levels of honesty and trustworthiness. However, from evolutionary theory, I know that genetic variance is also essential for any species to survive. Variety is not just the spice of life, it is also the meat. From ecology, I know that the world is a linked ecosystem. Everything is connected. From computer science, I know that there are classes of problems that are easy to solve, classes that are hard to solve, and classes that are impossible to solve, and no amount of computing power can change that. From physics and geology, I fully accept that greenhouse gases will affect the energy balance on earth and that the climate is changing. However, given the uncertainty of dynamical systems, while I do believe that current climate models are pretty good, there does exist the possibility that they are missing something. I believe that the physical laws that govern our lives are computable and this includes consciousness. I believe everything is fallible and that includes people, markets, and government.

So how would that translate into a political view? Well, it would be a mishmash of what might be considered socialist, liberal, conservative, and libertarian ideas. Since I think randomness and luck are a large part of life, including who your parents are, I do not subscribe to the theory of just deserts. I don’t think those with more “talents” deserve all the wealth they can acquire. However, I also realize that we are motivated by dopamine, and part of what triggers dopamine is reaping the rewards of our efforts, so we must leave incentives in place. We should not try to make society completely equal, but redistributive taxation is necessary and justified.

Since I think people are basically incompetent and don’t always make good choices, people sometimes need to be protected from themselves. We need some nanny-state regulations such as building codes, water and air quality standards, transportation safety, and toy safety. I don’t believe that all drugs should be legalized because some drugs can permanently damage brains, especially those of children. Amphetamines and opioids should definitely be illegal. Marijuana is probably okay but not for children. Pension plans should be defined benefit (rather than defined contribution) schemes. Privatizing social security would be a disaster. However, we should not over-regulate. I would deregulate a lot of land use, especially density requirements. We should eliminate all regulations that enforce monopolies, including some professional requirements that deliberately restrict supply. We should not try to pick winners in any industry.

I believe that people will try to game the system so we should design welfare and tax systems that minimize the possibility of cheating. The current disability benefits program needs to be fixed. I do not believe in means testing for social programs as it gives room to cheat. Cheating not only depletes the system but also engenders resentment in others who do not cheat. Part of the anger of the working class is that they see people around them gaming the system. The way out is to replace the entire welfare system with a single universal basic income. People have argued that it makes no sense for Bill Gates and Warren Buffett to get a basic income. In actuality, they would end up paying most of it back in taxes. In biology, this is called a futile cycle, but it has utility since it is easier to just give everyone the same benefits and tax according to one rule than to carve out exceptions for everything as we do now. We may not be able to afford a basic income now but we eventually will.
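To make the futile cycle concrete, here is a toy calculation (all numbers are invented for illustration, not a policy proposal) of how a flat basic income paired with a flat tax nets out at different incomes:

```python
# Toy illustration (all numbers invented): a flat universal basic
# income paired with a flat tax behaves like a means-tested benefit,
# but with no means test to game.
UBI = 12_000      # hypothetical annual basic income per person
TAX_RATE = 0.30   # hypothetical flat tax on all other income

def net_transfer(income):
    """Net amount received from (positive) or paid to (negative) the state."""
    return UBI - TAX_RATE * income

print(net_transfer(0))          # no earnings: keeps the full basic income
print(net_transfer(30_000))     # modest earner: small net benefit
print(net_transfer(10_000_000)) # high earner: pays the UBI back many times over
```

The high earner receives the same check as everyone else but returns far more in taxes, which is the futile cycle described above.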

Given our lack of certainty and incompetence, I would be extremely hesitant about any military interventions on foreign soil. We are as likely to make things worse as we are to make them better. I think free trade is on net a good thing because it does lead to higher efficiency and helps people in lower income countries. However, it will absolutely hurt some segment of the population in the higher income country. Since income is correlated with demand for your skills, in a globalized world those with skills below the global median will be losers. If a lot of people will do your job for less, then you will lose your job or get paid less. For the time being, there should be some wage support for low wage people, but eventually this should transition to the basic income.

Since I believe the brain is computable, this means that any job a human can do, a robot will eventually do as well or better. No job is safe. I do not know when the mass displacement of work will take place but I am sure it will come. As I wrote in my AlphaGo piece, not everyone can be a “knowledge” worker, media star, or CEO. People will find things to do but they won’t all be able to earn a living off of it in our current economic model. Hence, in the robot world, everyone would get a basic income and guaranteed health care and then be free to do whatever they want to supplement that income including doing nothing. I romantically picture a simulated 18th century world with people indulging in low productivity work but it could be anything. This will be financed by taxing the people who are still making money.

As for taxes, I think we need to move from a system that emphasizes income taxes, which can be gamed and which disincentivize work, to one that taxes the use of shared resources (i.e. economic rents). This includes land rights, mineral rights, water rights, air rights, solar rights, wind rights, monopoly rights, ecosystem rights, banking rights, genetic rights, etc. These are resources that belong to everyone. We could use a land value tax model. When people want to use a resource, like land to build a house, they would pay the intrinsic value of that resource. They would keep any value they added. This would incentivize efficient use of the resource while not telling anyone how to use it.

We could use an auction system to value these resources and rights. Hence, we need not regulate Wall Street firms per se, but we would tax them according to the land they use and whatever monopoly influence they exploit. We wouldn’t need to force them to obey capital requirements; we would tax them for the right to leverage debt. We wouldn’t need Glass-Steagall or Too Big to Fail laws for banks. We’ll just tax them for the right to do these things. We would also not need a separate carbon tax. We’ll tax the right to extract fossil fuels at a level equal to the resource value plus the full future cost to the environment. The climate change debate would then shift to being about the discount rate. Deniers would argue for a large rate and alarmists for a small one. Sports leagues and teams would be taxed for their monopolies, and the leagues’ current practice of preventing cities from owning teams would itself be taxed as a monopoly right.

The patent system needs serious reform. Software patents should be completely eliminated. Instead of giving someone arbitrary monopoly rights for a patent, patent holders should be taxed at some level that increases with time. This would force holders to commercialize, sell or relinquish the patent when they could no longer bear the tax burden and this would eliminate patent trolling.

We must accept that there is no free will per se so that crime and punishment must be reinterpreted. We should only evaluate whether offenders are dangerous to society and the seriousness of the crime. Motive should no longer be important. Only dangerous offenders would be institutionalized or incarcerated. Non-dangerous ones should repay the cost of the crime plus a penalty. We should also do a Manhattan project for nonlethal weapons so the police can carry them.

Finally, under the belief that nothing is certain, laws and regulations should be regularly reviewed including the US Constitution and the Bill of Rights. In fact, I propose that the 28th Amendment be that all laws and regulations must be reaffirmed or they will expire in some set amount of time.

# The simulation argument made quantitative

Elon Musk, of SpaceX, Tesla, and SolarCity fame, recently mentioned that he thought the odds of us not living in a simulation were a billion to one. His reasoning was based on extrapolating the rate of improvement in video games. He suggests that soon it will be impossible to distinguish simulations from reality and that in ten thousand years there could easily be billions of simulations running. Thus there would be a billion times more simulated universes than real ones.

This simulation argument was first quantitatively formulated by philosopher Nick Bostrom. He even has an entire website devoted to the topic (see here). In his original paper, he proposed a Drake-like equation for the fraction of all “humans” living in a simulation:

$f_{sim} = \frac{f_p f_I N_I}{f_p f_I N_I + 1}$

where $f_p$ is the fraction of human level civilizations that attain the capability to simulate a human populated civilization, $f_I$ is the fraction of these civilizations interested in running civilization simulations, and $N_I$ is the average number of simulations running in these interested civilizations. He then argues that if $N_I$ is large, then either $f_{sim}\approx 1$ or $f_p f_I \approx 0$. Musk believes that it is highly likely that $N_I$ is large and $f_p f_I$ is not small so, ergo, we must be in a simulation. Bostrom says his gut feeling is that $f_{sim}$ is around 20%. Steve Hsu mocks the idea (I think). Here, I will show that we have absolutely no way to estimate our probability of being in a simulation.
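Bostrom’s formula is easy to play with numerically. Here is a short sketch (the parameter values are arbitrary illustrations, not estimates):

```python
def f_sim(fp, fI, NI):
    """Bostrom's fraction of observers living in a simulation."""
    x = fp * fI * NI  # expected number of simulations per real civilization
    return x / (x + 1)

# If the expected number of simulations is large, f_sim approaches 1;
# if f_p * f_I is effectively zero, f_sim approaches 0.
print(f_sim(0.1, 0.1, 1e9))      # large N_I: close to 1
print(f_sim(1e-12, 1e-12, 1e9))  # vanishing f_p * f_I: close to 0
```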

The reason is that Bostrom’s equation obscures the possibility of two divergent quantities. This is more clearly seen by rewriting his equation as

$f_{sim} = \frac{y}{x+y} = \frac{y/x}{y/x+1}$

where $x$ is the number of non-sim civilizations and $y$ is the number of sim civilizations. (Re-labeling $x$ and $y$ as people or universes does not change the argument). Bostrom and Musk’s observation is that once a civilization attains simulation capability then the number of sims can grow exponentially (people in sims can run sims and so forth) and thus $y$ can overwhelm $x$ and ergo, you’re in a simulation. However, this is only true in a world where $x$ is not growing or growing slowly. If $x$ is also growing exponentially then we can’t say anything at all about the ratio of $y$ to $x$.

I can give a simple example.  Consider the following dynamics

$\frac{dx}{dt} = ax$

$\frac{dy}{dt} = bx + cy$

$y$ is being created by $x$ but both are growing exponentially. The interesting property of exponentials is that a solution to these equations for $a > c$ is

$x = \exp(at)$

$y = \frac{b}{a-c}\exp(at)$

where I have chosen convenient initial conditions that don’t affect the results. Even though $y$ is growing exponentially on top of an exponential process, the growth rates of $x$ and $y$ are the same. The probability of being in a simulation is then

$f_{sim} = \frac{b}{a+b-c}$

and we have no way of knowing what this is. The analogy is that you have a goose laying eggs and each daughter lays eggs, which also lay eggs. It would seem like there would be more eggs from the collective progeny than the original mother. However, if the rate of egg laying by the original mother goose is increasing exponentially then the number of mother eggs can grow as fast as the number of daughter, granddaughter, great…, eggs. This is just another example of how thinking quantitatively can give interesting (and sometimes counterintuitive) results. Until we have a better idea about the physics underlying our universe, we can say nothing about our odds of being in a simulation.
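The claim is easy to check numerically. Here is a minimal sketch that integrates the two equations with Euler steps (the parameter values are arbitrary; all that matters is $a > c$):

```python
# Numerical check: integrate dx/dt = a*x, dy/dt = b*x + c*y with
# simple Euler steps and compare y/(x+y) with the closed-form limit
# b/(a+b-c). Parameter values are arbitrary, chosen so that a > c.
a, b, c = 1.0, 0.5, 0.3
x, y = 1.0, 0.0   # convenient initial conditions
dt = 1e-4
for _ in range(int(30 / dt)):  # integrate out to t = 30
    x, y = x + a * x * dt, y + (b * x + c * y) * dt

predicted = b / (a + b - c)    # the closed-form limit derived above
print(y / (x + y), predicted)  # the two agree closely
```

Even though $y$ is seeded by $x$, the fraction $y/(x+y)$ settles at a constant set by the rates, not at 1.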

Addendum: One of the predictions of this simple model is that there should be lots of pre-sim universes. I have always found it interesting that the age of the universe is only about three times that of the earth. Given that the expansion rate of the universe is actually increasing, the lifetime of the universe is likely to be much longer than the current age. So, why is it that we are alive at such an early stage of our universe? Well, one reason may be that the rate of universe creation is very high and so the probability of being in a young universe is higher than being in an old one.

Addendum 2: I only gave a specific solution to the differential equation. The full solution has the form $Y_1\exp(at) + Y_2 \exp(ct)$. However, as long as $a > c$, the first term will dominate.

Addendum 3: I realized that I didn’t make it clear that the civilizations don’t need to be in the same universe. Multiverses with different parameters are predicted by string theory.  Thus, even if there is less than one civilization per universe, universes could be created at an exponentially increasing rate.

# The hard problem of consciousness

I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.

I think Strawson’s argument is mostly sound but he misses on the crucial open question of consciousness. It is true that we don’t have an understanding of the true essence of matter and we probably never will but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, computers and launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness and more importantly we do not know whether these rules are sufficient to understand consciousness or whether we need a different set of rules or reality or whatever. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Not even taking into account the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equation for macroscopic fluid flow and how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know and are not even close to knowing. This is the hard problem of consciousness.

The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.

Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se but they can be implemented on a von Neumann computer. We may not know what the brain is doing but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable although as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and thus would entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.

Secondly, the brain processes information by definition. Information in both the Shannon and Fisher senses is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information, where and when. Before you received that information your uncertainty was huge since there were so many possible places and times the meeting could take place. After receiving the information your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain’s job, at least for cognition, is about uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
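The coffee example can be made quantitative (the counts of possible places and days below are invented for illustration):

```python
import math

# Toy illustration of information as uncertainty reduction: suppose
# the meeting could be at any of 16 cafes on any of 8 days, all
# equally likely (numbers invented for illustration).
places, days = 16, 8
total_bits = math.log2(places * days)     # initial uncertainty: 7 bits
print(total_bits)

# Learning "it's on Thursday" removes the uncertainty about the day:
bits_after_day_known = math.log2(places)  # 4 bits of uncertainty remain
print(total_bits - bits_after_day_known)  # 3 bits of information gained
```

Each piece of information is measured by how many equally likely possibilities it rules out, regardless of what physical system does the ruling out.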

# Chomsky on The Philosopher’s Zone

Listen to MIT Linguistics Professor Noam Chomsky on ABC’s radio show The Philosopher’s Zone (link here).  Even at 87, he is still as razor sharp as ever. I’ve always been an admirer of Chomsky although I think I now mostly disagree with his ideas about language. I do remember being completely mesmerized by the few talks I attended when I was a graduate student.

Chomsky is the father of modern linguistics. He turned it into a subfield of computer science and mathematics. People still use Chomsky Normal Form and the Chomsky Hierarchy in computer science. Chomsky believes that the language ability is universal among all humans and is genetically encoded. He comes to this conclusion because in his mathematical analysis of language he found what he called “deep structures”, which are embedded rules that we are consciously unaware of when we use language. He was adamantly opposed to the idea that language could be acquired via a probabilistic machine learning algorithm. His most famous example is that we know that the sentence “Colorless green ideas sleep furiously” makes grammatical sense but is nonsensical while the sentence “Furiously sleep ideas green colorless” is ungrammatical. Since neither of these sentences had ever been spoken or written, he surmised that no statistical algorithm could ever learn the difference between the two. I think it is pretty clear now that Chomsky was incorrect and machine learning can learn to parse language and classify these sentences. There has also been field work that seems to indicate that there do exist languages in the Amazon that are qualitatively different from the universal set. It seems that the brain, rather than having an innate ability for grammar and language, may have an innate ability to detect and learn deep structure with a very small amount of data.
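As a small illustration of why Chomsky Normal Form remains useful, here is a CYK recognizer over a toy grammar of my own devising (not a serious model of English) that accepts the famous grammatical word order and rejects the scrambled one:

```python
# CYK recognizer for a toy grammar in Chomsky Normal Form
# (every rule is A -> B C or A -> terminal). The grammar is a
# hand-built illustration, not a claim about real English syntax.
lexical = {
    "colorless": {"Adj"}, "green": {"Adj"}, "ideas": {"NP"},
    "sleep": {"V"}, "furiously": {"Adv"},
}
binary = {
    ("Adj", "NP"): {"NP"},   # NP -> Adj NP
    ("V", "Adv"): {"VP"},    # VP -> V Adv
    ("NP", "VP"): {"S"},     # S  -> NP VP
}

def accepts(words):
    n = len(words)
    # table[i][j] holds the nonterminals that derive words[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] |= lexical.get(w, set())
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # try every split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary.get((B, C), set())
    return "S" in table[0][n - 1]

print(accepts("colorless green ideas sleep furiously".split()))  # True
print(accepts("furiously sleep ideas green colorless".split()))  # False
```

Of course, a hand-written grammar sidesteps Chomsky’s actual point about learning; the modern machine learning result is that such structure can be induced from data.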

The host Joe Gelonesi, who has filled in admirably for the sadly departed Alan Saunders, asks Chomsky about the hard problem of consciousness near the end of the program. Chomsky, in his typical fashion of invoking 17th and 18th century philosophy, dismisses it by claiming that science itself and physics in particular has long dispensed with the equivalent notion. He says that the moment that Newton wrote down the equation for gravitational force, which requires action at a distance, physics stopped being about making the universe intelligible and became about creating predictive theories. He thus believes that we will eventually be able to create a theory of consciousness although it may not be intelligible to humans. He also seems to subscribe to panpsychism, where consciousness is a property of matter like mass, an idea championed by Christof Koch and Giulio Tononi. However, as I pointed out before, panpsychism is dualism. If it does exist, then it exists apart from the way we currently describe the universe. Lately, I’ve come to believe and accept the fact that consciousness is an epiphenomenon and has no causal consequence in the universe. I must credit David Chalmers (e.g. see previous post) for making it clear that this is the only recourse to dualism. We are no more nor less than automata caroming through the universe, with the ability to spectate a few tens of milliseconds after the fact.

Addendum: As pointed out in the comments, there are monistic theories, as espoused by Bishop Berkeley, where only ideas are real. My point, that the only recourse to dualism is an epiphenomenal consciousness, holds only if one adheres to materialism.

# The nature of evil

In our current angst over terrorism and extremism, I think it is important to understand the motivation of the agents behind the evil acts if we are ever to remedy the situation. The observable element of evil (actus reus) is the harm done to innocent individuals. However, in order to prevent evil acts, we must understand the motivation behind the evil (mens rea). The Radiolab podcast “The Bad Show” gives an excellent survey of the possible varieties of evil. I will categorize evil into three types, each with increasing global impact. The first is the compulsion or desire within an individual to harm another. This is what motivates serial killers like the one described in the show. Generally, such evilness will be isolated and the impact will be limited albeit grisly. The second is related to what philosopher Hannah Arendt called “The Banality of Evil.” This is an evil where the goal of the agent is not to inflict harm per se as in the first case but in the process of pursuing some other goal, there is no attempt to avoid possible harm to others. This type of sociopathic evil is much more dangerous and widespread as is most recently seen in Volkswagen’s fraudulent attempt to pass emission standards. Although there are sociopathic individuals that really have no concern for others, I think many perpetrators in this category are swayed by cultural norms or pressures to conform. The third type of evil is when the perpetrator believes the act is not evil at all but a means to a just and noble end. This is the most pernicious form of evil because when it is done by “your side” it is not considered evil. For example, the dropping of atomic bombs on Japan was considered to be a necessary sacrifice of a few hundred thousand lives to end WWII and save many more lives.

I think it is important to understand that the current wave of terrorism and unrest in the Middle East is motivated by the third type. Young people are joining ISIS not because they particularly enjoy inflicting harm on others or they don’t care how their actions affect others, but because they are rallying to a cause they believe to be right and important. Many if not most suicide bombers come from middle class families and many are women. They are not merely motivated by a promise of a better afterlife or by a dire economic situation as I once believed. They are doing this because they believe in the cause and the feeling that they are part of something bigger than themselves. The same unwavering belief and hubris that led people to Australia fifty thousand years ago is probably what motivates ISIS today. They are not nihilists as many in the west believe. They have an entirely different value system and they view the west as being as evil as the west sees them. Until we fully acknowledge this we will not be able to end it.

# The Drake equation and the Cambrian explosion

This summer billionaire Yuri Milner announced that he would spend upwards of 100 million dollars to search for extraterrestrial intelligent life (here is the New York Times article). This quest to see if we have company started about fifty years ago when Frank Drake pointed a radio telescope at some stars. To help estimate the number of possible civilizations, $N$, Drake wrote down his celebrated equation,

$N = R_*f_p n_e f_l f_i f_c L$

where $R_*$ is the rate of star formation, $f_p$ is the fraction of stars with planets, $n_e$ is the average number of planets per star that could support life, $f_l$ is the fraction of planets that develop life, $f_i$ is the fraction of those planets that develop intelligent life, $f_c$ is the fraction of civilizations that emit signals, and $L$ is the length of time civilizations emit signals.
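Plugging in numbers shows how the factors multiply. A minimal sketch (every input below is a placeholder guess for illustration, not an estimate):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative inputs -- every factor here is a made-up guess:
N = drake(R_star=1.5, f_p=0.9, n_e=0.5, f_l=0.1, f_i=0.01, f_c=0.1, L=10_000)
print(N)
```

Even with these fairly generous guesses, N comes out below one, and each biological factor is the one we know least about.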

The past few years have demonstrated that planets in the galaxy are likely to be plentiful and although the technology to locate earth-like planets does not yet exist, my guess is that they will also be plentiful. So does that mean that it is just a matter of time before we find ET? I’m going to go on record here and say no. My guess is that life is rare and intelligent life may be so rare that there could only be one civilization at a time in any given galaxy.

While we are now filling in numbers for the first few factors of Drake’s equation, we have absolutely no idea about the remaining ones. However, I have good reason to believe that their product is astronomically small, and that reason is statistical independence. Although Drake factored the probability of intelligent life into the probability of life forming times the probability it goes on to develop extra-planetary communication capability, there are actually a lot of factors in between. One striking example is the probability of the formation of multicellular life. In earth’s history, for the better part of three and a half billion years we had mostly single cellular life and maybe a smattering of multicellular experiments. Then suddenly, about half a billion years ago, we had the Cambrian Explosion, where the multicellular animal life from which we are descended suddenly came onto the scene. This implies that forming multicellular life is extremely difficult, and it is easy to envision an earth where it never formed at all.

We can continue. If it weren’t for an asteroid impact, the dinosaurs may never have gone extinct and mammals may not have developed. Even more recently, there seem to have been many species of advanced primates yet only one invented radios. Agriculture only developed ten thousand years ago, which means that modern humans took about a hundred thousand years to discover it, and only in one place. I think it is equally plausible that humans could have gone extinct like all of our other Australopithecus and Homo cousins. Life in the sea has existed much longer than life on land and there is no technologically advanced sea creature, although I do think octopuses, dolphins and whales are intelligent.

We have around 100 billion stars in the galaxy, and let’s just say that each has a habitable planet. Well, if the probability of each stage of life is one in a billion and we need, say, three stages to attain technology, then the probability of finding ET is one in $10^{16}$. I would say that this is an optimistic estimate. Probabilities get small really quickly when you multiply them together. The probability of single cellular life will be much higher. It is possible that there could be a hundred planets in our galaxy that have life, but the chance that one of those is within a hundred light years will again be very low. However, I do think it is a worthwhile exercise to look for extraterrestrial life, especially for oxygen or other gases emitted by life in the atmospheres of exoplanets. It could tell us a lot about biology on earth.
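The arithmetic above is worth making explicit, since probabilities collapse quickly when multiplied:

```python
# The back-of-the-envelope estimate from the text, with the same
# assumed numbers: one habitable planet per star and a one-in-a-billion
# chance of passing each of three evolutionary stages.
stars = 1e11        # roughly 100 billion stars, one habitable planet each
p_stage = 1e-9      # assumed probability of passing each stage
stages = 3          # e.g. life, multicellularity, technology

expected_civilizations = stars * p_stage ** stages
print(expected_civilizations)  # about 1e-16: essentially none per galaxy
```

Three modest-looking exponents turn a hundred billion chances into effectively zero.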

2015-10-1: I corrected a factor of 10 error in some of the numbers.

# The philosophy of Thomas the Tank Engine

My toddler loves to watch the television show Thomas and Friends, based on The Railway Series books by the Rev. Wilbert Awdry. The show tells the story of sentient trains on a mythical island off the British coast called Sodor. Each episode is a morality play where one of the trains causes some problem because of a character flaw like arrogance or vanity that eventually comes to the attention of the avuncular head of the railroad, Sir Topham Hatt (called The Fat Controller in the UK). He mildly chastises the train, who becomes aware of his foolishness (it’s almost always a he) and remedies the situation.

While I think the show has some educational value for small children, it also brings up some interesting ethical and metaphysical questions that could be very relevant for our near future. For one, although the trains are sentient and seem to have full control over their actions, some of them also have human drivers. What are these drivers doing? Are they simply observers or are they complicit in the ill-judged actions of the trains? Should they be held responsible for the mistakes of the train? Who has true control, the driver or the train? Can one over-ride the other? These questions will be on everyone’s minds when the first self-driving cars hit the mass market in a few years.

An even more relevant ethical dilemma regards the place the trains have in society. Are they employees or indentured servants of the railroad company? Are they free to leave the railroad if they want? Do they own possessions? When the trains break down they are taken to the steam works, which is run by a train named Victor. However, humans effect the repairs. Do they take orders from Victor? Presumably, the humans get paid and are free to change jobs so is this a situation where free beings are supervised by slaves?

The highest praise a train can receive from Sir Topham Hatt is that he or she was “very useful.” This is not something one would say to a human employee in a modern corporation. You might say someone was very helpful or that an action was very useful, but it sounds dehumanizing to say “you are useful.” Thus, Sir Topham Hatt, at least, does not seem to consider the trains to be human. Perhaps he considers them to be more like domesticated animals. However, these are animals that clearly have aspirations, goals, and feelings of self-worth. It seems to me that they should be afforded the full rights of any other citizen of Sodor. As machines become more and more integrated into our lives, it may well be useful to probe the philosophical quandaries of Thomas and Friends.