How many different universes can there be?

If there are an infinite number of universes, or even if a single universe is infinite in extent, then each person (or thing) should have an infinite number of doppelgangers, each with a slight variation. The argument is that if the universe is infinite and, most importantly, does not repeat itself, then all possible configurations of matter and energy (or bits of stuff) can and will occur. There should be an infinite number of other universes, or other parts of our universe, that contain another solar system with a sun and an earth and a you, except that maybe one molecule in one of your cells is in a different position or moving at a different velocity. A simple way to think about this is to imagine an infinite black and white TV screen where each pixel can be either black or white. If the screen is nonperiodic then any configuration of pixels can be found somewhere on the screen. This is kind of like how any given sequence of digits is believed to appear somewhere in the digits of Pi, or how an infinite number of monkeys typing will eventually type out Hamlet. This generalizes to changing or time-dependent universes, where any sequence of flickering pixels will exist somewhere on the screen.

Not all universes are possible if you include any type of universal rule in your universe. Universes that violate the rule are excluded. If the pixels obeyed Newton’s law of motion then arbitrary sequences of pixels could no longer occur because the configuration of pixels in the next moment of time depends on the previous moment. However, we can have all possible worlds if we assume that rules are not universal and can change over different parts of the universe.

Some universes are also excluded if we introduce rational belief. For example, it is possible that there is another universe, like in almost every movie these days, that is like ours but slightly different. However, it is impossible for a purely rational person in a given universe to believe arbitrary things. Rational belief is as strong a constraint on the universe as any law of motion. One cannot believe in the conservation of energy and the Incredible Hulk (who can increase in mass by a factor of a thousand within seconds) at the same time. Energy is not universally conserved in the Marvel Cinematic Universe. (Why these supposedly super-smart scientists in those universes don’t invent a perpetual motion machine is a mystery.) Rationality does not even have to be universal. Just having a single rational person excludes certain universes. Science is impossible in a totally random universe in which nothing is predictable. However, if a person merely believed they were rational but were actually not then any possible universe is again possible.

Ultimately, this boils down to the question of what exactly exists. I for one believe that concepts such as rationality, beauty, and happiness exist as much as matter and energy exist. Thus for me, all possible universes cannot exist. There does not exist a universe where I am happy while there is so much suffering and pain in the world.

2023-12-28: Corrected a typo.

Autocracy and Star Trek

Like many youth of my generation, I watched the original Star Trek in reruns and Next Generation and Deep Space Nine in real time. I enjoyed the shows but can’t really claim to be a Trekkie. I was already in graduate school when Next Generation began so I could not help but scrutinize the shows for scientific accuracy. I was impressed that the way they discovered life in a baby universe created in one episode was by detecting localized entropy reduction, which is quite sophisticated scientifically. I bristled each time the starship was on the brink of total failure and about to explode but the artificial gravity system still didn’t fail. I celebrated the one episode that actually had an artificial gravity failure and people actually floated in space! I thought it was ridiculous that almost every single planet they visited was at room temperature with a breathable atmosphere. That doesn’t even describe many parts of earth. I mostly let these inaccuracies slide in the interest of story but I could never let go of one thing that always left me feeling somewhat despondent about the human condition: even in a supposedly super-advanced, egalitarian, democratic society where material shortages no longer existed, Starfleet was still an absolute autocracy. Many of the episodes dealt with strictly obeying the chain of command and never disobeying direct orders. A world with a democratic federation of planets, transporters, and faster-than-light travel still believed that autocracy was the most efficient way to run an organization.

For most people throughout history, including today, the difference between autocracy and democracy is mostly abstract. People go to jobs where a boss tells them what to do. Virtually no one questions that corporations should be run autocratically. Authoritarian CEOs are celebrated. Religion is generally autocratic. It only makes sense that the military backs autocrats given that autocracy is already the governing principle of their enterprise. Julius Caesar crossed the Rubicon and became Dictator of Rome (he was never actually made Emperor) because he had the biggest army and it was loyal to him, not to the Roman Republic. The only real question is how democracies persist at all. People may care about freedom but do they really care all that much about democracy?

The inherent conflict of liberalism

Liberalism, as a philosophy, arose during the European Enlightenment of the 17th century. Its basic premise is that people should be free to choose how they live, have a government that is accountable to them, and be treated equally under the law. It was the founding principle of the American and French revolutions and the basic premise of western liberal democracies. However, liberalism is inherently conflicted because when I exercise my freedom to do something (e.g. not wear a mask), I infringe on your freedom from the consequences of that thing (e.g. not be infected), and there is no rational resolution to this conflict. This conflict led to the split of liberalism into left and right branches. In the United States, the term liberal is exclusively applied to the left branch, which mostly focuses on the ‘freedom from’ part of liberalism. Those in the right branch, who mostly emphasize the ‘freedom to’ part, refer to themselves as libertarian, classical liberal, or (sometimes and confusingly to me) conservative. (I put neo-liberalism, which is a fundamentalist belief in free markets, into the right camp although it has adherents on both the left and right.) Both of these viewpoints are offspring of the same liberal tradition and here I will use the term liberal in the general sense.

Liberalism has never operated in a vacuum. The conflicts between “freedom to” and “freedom from” have always been settled by prevailing social norms, which in the Western world were traditionally dominated by Christian values. However, neither liberalism nor social norms have ever been sufficient to prevent bad outcomes. Slavery existed and was promoted by liberal Christian states. Genocides of all types and scales have been perpetrated by liberal Christian states. The battle to overcome slavery and to give equal rights to all peoples was long and hard fought, waged over slowly changing social norms rather than laws per se. Thus, while liberalism is the underlying principle behind Western governments, it is only part of the fabric that holds society together. Even though we have just emerged from the Dark Years, Western Liberalism is on its shakiest footing since the Second World War. The end of the Cold War did not bring on a permanent era of liberal democracy but may have spelled its eventual demise. What will supplant liberalism is up to us.

It is often perceived that the American Democratic party is a disorganized mess of competing interests under a big tent while the Republicans are much more cohesive, but in fact the opposite is true. While the Democrats are often in conflict, they are in fact a fairly unified center-left liberal party that strives to advocate for the marginalized. Their conflicts are mostly over which groups should be considered marginalized and prioritized. The Republicans, on the other hand, are a coalition of libertarians and non-liberal conservatives united only by their desire to minimize the influence of the federal government. The libertarians long for unfettered individualism and unregulated capitalism while the conservatives, who do not subscribe to all the tenets of liberalism, wish to halt encroaching secularism and a government that no longer serves their interests.

The unlikely Republican coalition that has held together for four decades is now falling apart. It came together because the more natural association between religious conservatism and a large federal bureaucracy fractured after the Civil Rights movement in the 1960s, when the Democrats no longer prioritized the concerns of the (white) Christian Right. (I will discuss the racial aspects in a future post.) The elite pro-business neo-liberal libertarians could coexist with the religious conservatives as long as their concerns did not directly conflict, but this is no longer true. The conservative wing of the Republican party has discovered its newfound power and that there is an untapped population of disaffected individuals who are inclined to be conservative and also want a larger and more intrusive government that favors them. Prominent conservatives like Adrian Vermeule of Harvard and Senator Josh Hawley are unabashedly anti-liberal.

This puts the neo-liberal elites in a real bind. The Democratic party since Bill Clinton had been moving right with a model of pro-market neo-liberalism but with a safety net. However, they were punished time and time again by the neo-liberal right. Instead of partnering with Obama, who was highly favorable towards neo-liberalism, the neo-liberal right pursued a scorched-earth policy against him. Hillary Clinton ran on a fairly moderate safety-net-neo-liberal platform and got vilified as an un-American socialist. Now, both the Republicans and Democrats are trending away from neo-liberalism. The neo-liberals made a strategic blunder. They could have hedged their bets but now have lost influence in both parties.

While the threat of authoritarianism looms large, this is also an opportunity to accept the limits of liberalism and begin to think about what will take its place – something that still respects the basic freedoms afforded by liberalism but acknowledges that it is not sufficient. Conservative intellectuals like Leo Strauss have valid points. There is indeed a danger of liberalism lapsing into total moral relativism or nihilism. Guardrails against such outcomes must be explicitly installed. There is value in preserving (some) traditions, especially ancient ones that are the result of generations of human engagement. There will be no simple solution. No single rule or algorithm. We will need to explicitly delineate what we will accept and what we will not on a case-by-case basis.

Math solved Zeno’s paradox

On some rare days when the sun is shining and I’m enjoying a well-made kouign-amann (my favourite comes from b.patisserie in San Francisco but Patisserie Poupon in Baltimore will do the trick), I find a brief respite from my usual depressed state and take delight, if only for a moment, in the fact that mathematics completely resolved Zeno’s paradox. To me, it is the quintessential example of how mathematics can fully solve a philosophical problem and it is a shame that most people still don’t seem to know or understand this monumental fact. Although there are probably thousands of articles on Zeno’s paradox on the internet (I haven’t bothered to actually check), I feel like visiting it again today even without a kouign-amann in hand.

I don’t know the original statement of the paradox but all versions involve motion from one location to another, like walking towards a wall or throwing a javelin at a target. When you walk towards a wall, you must first cross half the distance, then half the remaining distance, and so on forever. The paradox is thus: how can you ever reach the wall, or a javelin its target, if it must traverse an infinite number of intervals? This paradox is completely resolved by the concept of the mathematical limit, which Newton used to invent calculus in the seventeenth century. I think understanding the limit is the greatest leap a mathematics student must take. It took mathematicians two centuries to fully formalize it, although we don’t need most of that machinery to resolve Zeno’s paradox. In fact, you need no more than middle school math to solve one of history’s most famous problems.

The solution to Zeno’s paradox stems from the fact that if you move at constant velocity then it takes half the time to cross half the distance, and the sum of an infinite number of intervals, each half as long as the previous one, adds up to a finite number. That’s it! It doesn’t take forever to get anywhere because you are adding an infinite number of things that get vanishingly small. The sum of a bunch of terms is called a series and the sum of an infinite number of terms is called an infinite series. The beautiful thing is that we can compute this particular infinite series exactly, which is not true of all series.

Expressed mathematically, the total time t it takes for an object traveling at constant velocity to reach its target is

t = \frac{d}{v}\left( \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)

which can be rewritten as

t = \frac{d}{v}\sum_{n=1}^\infty \frac{1}{2^n}

where d is the distance and v is the velocity. This infinite series is technically called a geometric series because the ratio of successive terms is always the same. The terms are related geometrically, like the volumes of n-dimensional cubes when you halve the length of the sides (e.g. a 1-cube (a line, whose volume is its length), a 2-cube (a square, whose volume is its area), a 3-cube (the good old cube and its volume), a 4-cube (a hypercube and its hypervolume), etc.).

For simplicity we can take d/v = 1. So to compute the time it takes to travel the distance, we must compute:

t = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots

To solve this sum, the first thing to notice is that we can factor out 1/2 and obtain

t = \frac{1}{2}\left(1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)

The quantity inside the bracket is just the original series plus 1, i.e.

1 + t = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots

and thus we can substitute this back into the original expression for t and obtain

t = \frac{1}{2}(1 + t)

Now, we simply solve for t and I’ll actually go over all the algebraic steps. First multiply both sides by 2 and get

2t = 1 + t

Now, subtract t from both sides and you get the beautiful answer that t = 1. We then have the amazing fact that

t = \sum_{n=1}^\infty \frac{1}{2^n} = 1
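A quick numerical check makes the convergence tangible. This is just a sketch in plain Python with names of my own choosing:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... creep up on 1.
total = 0.0
for n in range(1, 31):
    total += 1 / 2**n
    if n in (1, 2, 5, 10, 20, 30):
        print(f"after {n:2d} terms: {total:.10f}")
# After 30 terms the partial sum is within a billionth of 1,
# exactly as t = 1 predicts.
```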

I never get tired of this. In fact this generalizes to any geometric series

\sum_{n=1}^\infty a^n = \frac{1}{1-a} - 1

for any a with magnitude less than 1. The more compact way to express this is

\sum_{n=0}^\infty a^n = \frac{1}{1-a}

Now, notice that in this formula if you set a = 1, you get 1/0, which is infinity. Since 1^n = 1 for any n, this tells you that if you try to add up an infinite number of ones, you’ll get infinity. Now if you set a > 1 you’ll get a negative number. Does this mean that the sum of an infinite number of positive numbers greater than 1 is a negative number? Well, no, because the series is only defined for |a| < 1, which is called the domain of convergence. If you go outside of the domain, you can still get an answer but it won’t be the answer to your question. You always need to be careful when you add and subtract infinite quantities. Depending on the circumstance it may or may not give you sensible answers. Getting that right is what math is all about.
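Here is a small sketch illustrating the domain of convergence (assumptions mine: plain Python and a few arbitrary values of a). Inside |a| < 1 the partial sums agree with the formula; outside, the partial sums blow up while the formula returns a meaningless number.

```python
def partial_sum(a, n_terms):
    """Partial sum of the geometric series a**n for n = 0 .. n_terms-1."""
    return sum(a**n for n in range(n_terms))

for a in (0.5, 0.9, -0.5, 1.5):
    print(f"a = {a:4}: 100-term sum = {partial_sum(a, 100):.6g}, "
          f"formula 1/(1-a) = {1 / (1 - a):.6g}")
# For a = 0.5, 0.9, and -0.5 the two columns agree.
# For a = 1.5 the partial sums diverge while the formula gives -2.
```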

The fear is real

When I was in graduate school, my friends and I would jokingly classify the utility of research in terms of the order in which the researcher would be killed after the revolution. So, for physics, if you were working on, say, galaxy formation in the early universe, you would be killed before someone working on the properties of hydrogen at low temperatures, who would be killed before someone working on building a fusion reactor. This was during the Cold War and thus the prospect of Stalin and Mao still loomed large. We did not joke this way with fear or disdain but rather with a somewhat bemused acknowledgment that we were afforded the luxury to work on esoteric topics while much of the world still did not have running water. In those days, the left-right divide was between the small-government neoliberals (conservatives in those days who advocated for freer and more deregulated markets) and the bigger-government New Deal liberals (those for more government action to address economic inequities). We certainly had fierce debates but they were always rather abstract. We never thought our lives would really change that much.

By the time I had finished and started my academic career, it was clear that the neoliberals had prevailed. The Soviet Union had collapsed, AT&T was broken up, and the Democratic president proclaimed the era of big government was over. Francis Fukuyama wrote “The End of History and the Last Man” arguing that western liberal democracy had triumphed over communism and would be the last form of government. I was skeptical then because I thought we could do better but I really didn’t consider that it could get worse.

But things got worse. We had the bursting of the dot-com bubble, 9/11, the endless wars, the Great Recession, and now perhaps the twilight of democracy, as Anne Applebaum laments in her most recent book. We can find blame everywhere – globalization, automation, the rise of China, out-of-touch elites, the greedy 1%, cynical politicians, the internet, social media, and so forth. Whatever the reason, this is an era where no one is happy and everyone is fearful.

The current divide in the United States is very real and there is fear on both sides. On one side, there is fear that an entire way of life is being taken away – a life with a good secure job, a nuclear family with well-defined roles, a nice house, neighbors who share your values and beliefs, a government that mostly stays out of the way but helps when you are in need, the liberty to own a firearm, and a sense of community and shared sacrifice. On the other side, there is the fear that progress is being halted, that a minority will forever suppress a majority, that social, racial, and economic justice will never be achieved, that democracy itself is in peril, and that a better future will always be just out of reach.

What is most frustrating to me is that these points of view are not necessarily mutually exclusive. I don’t know how we can reconcile these differences but my biases and priors incline me to believe that we could alleviate some of the animosity and fear if we addressed income insecurity. While I think income inequality is a real problem, a more pressing concern is that a large segment of the population on both sides of the divide lives continuously on a precipice of economic ruin, which has been made unavoidably apparent by our current predicament. I really think we need to consider a universal basic income. It also has to be universal because suspicion of fraud and resentment is a real issue. Everyone gets the check and those with sufficient incomes and wealth simply pay it back in taxes.

How science dies

Nietzsche famously wrote:

“God is dead. God remains dead. And we have killed him.”

This quote is often used as an example of Nietzsche’s nihilism but it is much more complicated. These words are actually spoken by a madman in Nietzsche’s book The Gay Science. According to philosopher Simon Critchley, the quote is meant to be a descriptive rather than a normative statement. What Nietzsche was getting at is that Christianity is a religion that values provable truth and, as a result of this truth seeking, science arose. Science in turn generated skepticism of revealed truth and the concept of God. Thus, the end of Christianity was built into Christianity.

Borrowing from this analysis, science may also have a built-in mechanism for its own doom. An excellent article in this month’s Technology Review describes the concept of epistemic dependence, where science and technology are now so complicated that no single person can understand all of it. In my own work, I could not reproduce a single experiment of my collaborators. Our collaborations work because we trust each other. I don’t really know how scientists identify new species of insects, or how paleontologists can tell what species a bone fragment belongs to, or all the details of the proof of the Poincaré conjecture. However, I do understand how science and math work and trust that the results are based on those methods.

But what about people who are not trained in science? If you tell them that the universe was formed 14 billion years ago in a Big Bang and that 99% of all the stuff in the universe is completely invisible, why would they believe you? Why is that more believable than the earth being formed six thousand years ago in seven days? In both cases, knowledge is transferred to them from an authority. Sure, you can say that because of science we live longer and have refrigerators, cell phones, and Netflix, so we should believe scientists. On the other hand, a charismatic conman could tell them that they have those things because they were gifted from super advanced aliens. Depending on the sales job and one’s priors, it is not clear to me which would be more convincing.

So perhaps we need more science education? Well, after half a century of focus on science education, science literacy is still not very high in the general public. I doubt many people could explain how a refrigerator works, much less the second law of thermodynamics, and forget about quantum mechanics. Arthur C. Clarke’s third law, that “Any sufficiently advanced technology is indistinguishable from magic,” is more applicable than ever. While it is true that science has delivered on producing better stuff, it does not necessarily make us more fulfilled or happier. I can easily see a future where a large fraction of the population simply turns away from science with full knowledge of what they are doing. That would be the good outcome. The bad one is that people start to turn against science and scientists because someone has convinced them that all of their problems (and none of the good stuff) are due to science and scientists. They would then go and destroy the world as we know it without really intending to. I can see this happening too.

The battle over academic freedom

In the wake of George Floyd’s death, almost all of institutional America put out official statements decrying racism and some universities initiated policies governing allowable speech and research. This was followed by the expected dissent from those who worry that academic freedom is being suppressed (see here, here, and here for some examples). Then there is the (in)famous Harper’s Magazine open letter decrying Cancel Culture, which triggered a flurry of counter responses (e.g. see here and here).

While some faculty members in the humanities and (non-life) sciences are up in arms over the thought of a committee of their peers judging what should be allowable research, I do wish to point out that their colleagues over on the Medical campus have had to get approval for human and animal research for decades. Research on human subjects must first pass through an Institutional Review Board (IRB) while animal experiments must clear the Institutional Animal Care and Use Committee (IACUC). These panels ensure that the proposed work is ethical, sound, and justified. Even research that is completely noninvasive, such as analysis of genetic data, must pass scrutiny to ensure the data is not misused and subject identities are strongly protected. Almost all faculty members would agree that this step is important and necessary. History is rife with questionable research ranging from careless to criminal. Is it so unreasonable to extend such a concept to the rest of campus?

Duality and computation in the MCU

I took my kindergartener to see Avengers: Endgame recently. My son was a little disappointed, complaining that the film had too much talking and not enough fighting. To me, the immense popularity of the Marvel Cinematic Universe series, and so-called science fiction/fantasy in general, is an indicator of how people think they like science but really want magic. Popular science-fictiony franchises like the MCU and Star Wars are couched in scientism but are often at odds with actual science as practiced today. Arthur C Clarke famously stated in his third law that “Any sufficiently advanced technology is indistinguishable from magic,” a sentiment captured in these films.

Science fiction should extrapolate from current scientific knowledge to the possible. Otherwise, it should just be called fiction. There have been a handful of films that try to do this, like 2001: A Space Odyssey or, more recently, Interstellar and The Martian. I think there is a market for these types of films but they are certainly not as popular as the fantasy films. To be fair, neither Marvel nor Star Wars (both now owned by Disney) market themselves as science fiction as I defined it. They are intended to be mythologies a la Joseph Campbell’s Hero’s Journey. However, they do have a scientific aesthetic with worlds dominated by advanced technology.

Although I do not find the MCU films overly compelling, they do bring up two interesting propositions. The first is dualism. The superhero character Ant-Man has a suit that allows him to change size and even shrink to sub-atomic scales, called the quantum realm in the films. (I won’t bother to discuss whether energy is conserved in these near-instantaneous size changes, an issue that affects the Hulk as well.) The film was advised by physicist Spiros Michalakis and is rife with physics terminology and concepts like quantum entanglement. One crucial concept it completely glosses over is how Ant-Man maintains his identity as a person, much less his shape, when he is smaller than an atom. Even if one were to argue that one’s consciousness could be transferred to some set of quantum states at the sub-atomic scale, it would be overwhelmed by quantum fluctuations. The only self-consistent premise of Ant-Man is that the essence, or soul if you wish, of a person is not material. The MCU takes a definite stand for dualism on the mind-body problem, a sentiment with which I presume the public mostly agrees.

The second is that magic has immense computational power. In the penultimate Avengers movie, the villain Thanos snaps his fingers while in possession of the complete set of infinity stones and eliminates half of all living things. (Setting aside the issue that Thanos clearly does not understand the concept of exponential growth. If you are concerned about overpopulation, it is pointless to shrink the population and do nothing else because it will just return to its original size in a short time.) What I’d like to know is who or what does the computation to carry out the command. There are at least two hard computational problems that must be solved. The first is to identify all lifeforms. This is clearly no easy task as we to this day have no precise definition of life. Do viruses get culled by the snap? Does the population of silicon-based lifeforms of Star Trek get halved or is it only biochemical life? What algorithm does the snap use to find all the life forms? Living things on earth range in size from single cells (or viruses if you count them) all the way to 35-metre behemoths comprising over 10^{23} atoms. How do the stones know what scales they span in the MCU? Do photosynthetic lifeforms get spared since they don’t use many resources? What about fungi? Is the MCU actually a simulated universe where there is a continually updated census of all life? How accurate is the algorithm? Was it perfect? Did it aim for high specificity (i.e. reduce false positives so you only kill lifeforms and not non-lifeforms) or high sensitivity (i.e. reduce false negatives and thus don’t miss any lifeforms)? I think it probably favours sensitivity over specificity – who cares if a bunch of ammonia molecules accidentally get killed? The find-all-life problem is made much easier by proposition 1 because if all life were material then the only way to detect it would be to look for multiscale correlations between atoms (or find organic molecules if you only care about biochemical life). If each lifeform has a soul then you can simply search for “soulfulness”. The lifeforms were not erased instantly but only after a brief delay. What was happening over this delay? Is magic propagation limited by the speed of light or some other constraint? Or did the computation take time? In Endgame, the Hulk restores all the Thanos-erased lifeforms and Tony Stark then snaps away Thanos and all of his allies. Where were the lifeforms after they were erased? In Heaven? In a soul repository somewhere? Is this one of the Nine Realms of the MCU? How do the stones know who is a Thanos ally? The second computation is to decide which half to extinguish. The movie seems to imply that the choice was random, so where did the randomness come from? Do the infinity stones generate random numbers? Do they rely on quantum fluctuations? Finally, in a world with magic, why is there also science? Why does the universe follow the laws of physics sometimes and magic other times? Is magic a finite resource as in Larry Niven’s The Magic Goes Away? So many questions, so few answers.

AI and authoritarianism

Much of the discourse on the future of AI, such as this one, has focused on people being displaced by machines. While this is certainly a worthy concern, these analyses sometimes fall into the trap of linear thinking because the displaced workers are also customers. The revenues of companies like Google and Facebook depend almost entirely on selling advertisements to a consumer base that has disposable income to spend. What happens when this base dwindles to a tiny fraction of the world’s population? The progression forward will also most likely not be monotonic because as people initially start to be replaced by machines, those left with jobs may actually get increased compensation and thus drive more consumerism. The only thing that is certain is that the end point of a world where no one has work is one where capitalism as we know it will no longer exist.

Historian and author Yuval Harari argues that in the pre-industrial world, to have power is to have land (I would add slaves, and I strongly recommend visiting the National Museum of African American History and Culture for a sobering look at how America became so powerful). In the industrial world, the power shifted to those who own the machines (although land won’t hurt), while in the post-industrial world, power falls to those with the data. Harari was extrapolating from our current world, where large corporations can track us continually and use machine learning to monopolize our attention and get us to do what they desire. However, data on people is only useful as long as they have resources you want. If people truly become irrelevant then their data is also irrelevant.

It’s anyone’s guess as to what will happen in the future. I proposed an optimistic scenario here but here is a darker one. Henry Ford supposedly wanted to pay his employees a decent wage because he realized that they were also the customers for his product. In the early twentieth century, the factory workers formed the core of the burgeoning middle class that would drive demand for consumer products made in the very factories where they toiled. It was in the interest of industrialists that the general populace be well educated and healthy because they were the source of their wealth. This link began to fray at the end of the twentieth century with the rise of the service economy, globalisation, and automation. After the Second World War, post-secondary education became available to a much larger fraction of the population. These college-educated people did not go to work on the factory floor but fed the expanding ranks of middle management and professionals. They became managers and accountants and dentists and lawyers and writers and consultants and doctors and educators and scientists and engineers and administrators. They started new businesses and new industries and helped drive the economy to greater prosperity. They formed an upper middle class that slowly separated from the working class and the rest of the middle class. They also started to become a self-sustaining entity that did not rely so much on the rest of the population. Globalisation and automation made labor plentiful and cheap so there was less of an incentive to have a healthy educated populace. The wealth of the elite no longer depended on the working class and thus their desire to invest in them declined. I agree with the thesis that the abandonment of the working class in Western liberal democracies is the main driver of the recent rise of authoritarianism and isolationism around the world.

However, authoritarian populist regimes, such as those in Venezuela and Hungary, stay in power because the disgruntled class that supports them is a larger fraction of the population than the opposing educated upper middle class that are the winners in a contemporary liberal democracy. In the US, the disgruntled class is still a minority, so thus far it seems like authoritarianism will be held at bay by the majority coalition of immigrants, minorities, and coastal liberals. However, this coalition could be short-lived. Up to now, AI and machine learning have not been taking jobs away from the managerial and professional classes. But as I wrote about before, the people most at risk for losing jobs to machines may not be those doing jobs that are simple for humans to master but those that are difficult. It may take a while before professionals start to be replaced but once it starts it could go swiftly. Once a machine learning algorithm is trained, it can be deployed everywhere instantly. As the ranks of the upper middle class dwindle, support for liberal democracy could weaken and a new authoritarian regime could rise.

Ironically, a transition to consumer authoritarianism would be smoothed and possibly quickened by a stronger welfare state. A jobless economy could be one where the state provides a universal basic income that is funded by taxation on existing corporations, which would then compete for those very same dollars. Basically, the future incarnations of Apple, Netflix, Facebook, Amazon, and Google would give money to an idle population and then try to win it back. Although this is not a world I would choose to live in, it would be preferable to a socialistic model where the state would decide what goods and services to provide. It would actually be in the interest of the corporations and their elite owners to lobby for high taxes, to not form monopolies, and to allow competition to provide better goods and services. The tax rate would not matter much because in a steady-state loop, any level of wealth inequality is stable regardless of the flux. It is definitely in their interest to keep the idle population happy.

Technology and inference

In my previous post, I gave an example of how fake news could lead to a scenario of no update of posterior probabilities. However, this situation could occur just from the knowledge of technology. When I was a child, fantasy and science fiction movies always had a campy feel because the special effects were unrealistic looking. When Godzilla came out of Tokyo Harbour, it looked like a little model in a bathtub. The Creature from the Black Lagoon looked like a man in a rubber suit. I think the first science fiction movie that looked astonishingly real was Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey, which adhered to physics like no others before and only a handful since. The simulation of weightlessness in space was marvelous and to me the ultimate attention to detail was the scene in the rotating space station where a mild curvature in the floor could be perceived. The next groundbreaking moment was the 1993 film Jurassic Park, which truly brought dinosaurs to life. The first scene of a giant sauropod eating from a treetop was breathtaking. The distinction between fantasy and reality was forever gone.

The effect of this essentially perfect rendering of anything into a realistic image is that we now have a plausible reason to reject any evidence. Photographic evidence can be completely discounted because the technology exists to create completely fabricated versions. This is equally true of audio tapes and anything you read on the Internet. In Bayesian terms, we now have an internal model, or likelihood function, in which any piece of data could be false with some probability. The more cynical you are, the closer this probability is to one. Once the likelihood becomes insensitive to data, we are in the same situation as before. Technology alone, in the absence of fake news, could lead to a world where no one ever changes their mind. The irony could be that this will force people to evaluate truth the way they did before such technology existed, which is that you believe people (or machines) that you trust through building relationships over long periods of time.
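To make this concrete, here is a minimal sketch (the toy numbers are my own, not from the original post). If each piece of evidence is fabricated with probability f, independently of the hypothesis, the effective likelihood is a mixture, and as f approaches 1 the posterior collapses back to the prior:

```python
def posterior(prior, p_e_H, p_e_notH, p_e_fake, f):
    """Posterior P(H | evidence) when the evidence is fabricated with
    probability f. Fabricated evidence has probability p_e_fake under
    either hypothesis, so as f -> 1 the likelihood becomes flat in H."""
    like_H = (1 - f) * p_e_H + f * p_e_fake
    like_notH = (1 - f) * p_e_notH + f * p_e_fake
    return like_H * prior / (like_H * prior + like_notH * (1 - prior))

# A photo that is 9x more likely if H is true than if it is false.
for f in (0.0, 0.5, 0.9, 0.99):
    print(f"P(fake) = {f:4}: P(H | photo) = {posterior(0.5, 0.9, 0.1, 0.5, f):.3f}")
# The posterior slides from 0.9 back toward the prior of 0.5:
# the data no longer changes anyone's mind.
```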

The US election and the future

Political scientists will be dissecting the results of the 2016 US presidential election for the next decade but certainly one fact that is likely to be germane to any analysis is that real wages have been stagnant or declining for the past 45 years. I predict that this trend will only worsen no matter who is in power. The stark reality is that most jobs are replaceable by machines. This is not because AI has progressed to the point that machines can act human but because most jobs, especially higher paying jobs, do not depend heavily on being human. While I have seen some consternation about the prospect of 1.5 million truck drivers being replaced by self-driving vehicles in the near future, I have seen much less discourse on the fact that this is also likely to be true for accountants, lawyers, middle managers, medical professionals, and other well compensated professionals. What people seem to miss is that the reason these jobs are well paid is that there are relatively few people who are capable of doing them and that is because they are difficult for humans to master. In other words, they are well paid because they require not acting particularly human. IBM’s Watson, which won the game show Jeopardy!, and AlphaGo, which beat the world’s best Go player, show that machines can quite easily better humans at specific tasks. The more specialized the task, the easier it will be for a machine to do it. The cold hard truth is that AI does not have to improve for you to be replaced by a machine. It does not matter whether strong AI (an artificial intelligence that truly thinks like a human) is ever possible. It only matters that machine learning algorithms can mimic what you do now. The only thing necessary for this to happen was for computers to be fast enough and now they are.

What this implies is that the jobs of the future will be limited to those that require being human or where interacting with a human is preferred. This will include 1) jobs that most people can do and thus will not be well paid, like store salespeople, restaurant servers, bartenders, cafe baristas, and low-skill health workers, 2) jobs that require social skills that might be better paid, such as social workers, personal assistants, and mental health professionals, 3) jobs that require special talents, like artisans, artists, and some STEM professionals, and 4) capitalists who own firms that employ mostly robots. I strongly believe that only a small fraction of the population will make it to categories 3) and 4). Most people will be in 1) or not have a job at all. I have argued before that one way out is for society to choose to make low productivity work viable. In any case, the anger we saw this year is only going to grow because existing political institutions are in denial about the future. The 20th century is over. We are not getting it back. The future is either the 17th or 18th century with running water, air conditioning, and health care or the 13th century with none of these.

Revolution vs incremental change

I think that the dysfunction and animosity we currently see in the US political system and election is partly due to the underlying belief that meaningful change cannot be effected through slow evolution but rather requires an abrupt revolution where the current system is torn down and rebuilt. There is some merit to this idea. Sometimes the structure of a building can be so damaged that it would be easier to demolish and rebuild than to repair and renovate. Mathematically, this can be expressed as a system being stuck in a local minimum (where getting to the global minimum is desired). In order to get to the true global optimum, you need to get worse before you can get better. When fitting nonlinear models to data, dealing with local minima is a major problem and the reason that a stochastic MCMC algorithm, which does occasionally go uphill, works so much better than gradient descent, which only goes downhill.
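A toy illustration of the difference (a sketch with a made-up one-dimensional double-well potential; nothing here is specific to politics): gradient descent started in the shallow well stays there forever, while a Metropolis-style walker that occasionally accepts uphill moves finds the deeper well.

```python
import math
import random

random.seed(0)

def f(x):   # double well: shallow minimum near x = -0.6, deep one near x = 0.8
    return x**4 - x**2 - 0.3 * x

def df(x):
    return 4 * x**3 - 2 * x - 0.3

# Gradient descent from the shallow basin only ever goes downhill.
x = -1.0
for _ in range(1000):
    x -= 0.01 * df(x)
print(f"gradient descent ends at x = {x:.2f}, f = {f(x):.3f}  (stuck in local minimum)")

# Metropolis accepts an uphill move with probability exp(-increase/T).
x, T = -1.0, 0.1
for _ in range(10000):
    x_new = x + random.gauss(0, 0.3)
    if f(x_new) < f(x) or random.random() < math.exp((f(x) - f(x_new)) / T):
        x = x_new
print(f"Metropolis walker ends at x = {x:.2f}, f = {f(x):.3f}  (typically the deeper well)")
```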

However, the recent success of deep learning may dispel this notion when the dimension is high enough. Deep learning, which uses multi-layer neural networks that can have millions of parameters, is the quintessence of a high-dimensional model. Yet it seems to work just fine using the backpropagation algorithm, which is a form of gradient descent. The reason could be that in high enough dimensions, local minima are rare and the majority of critical points (places where the slope is zero) are high-dimensional saddle points, where there is always a way out in some direction. In order to have a local minimum, the matrix of second derivatives in all directions (i.e. the Hessian matrix) must be positive definite (i.e. have all positive eigenvalues). As the dimension of the matrix gets larger, there are simply more ways for one eigenvalue to be negative, and that is all you need to provide an escape hatch. So in a high-dimensional system, gradient descent may work just fine and there could be an interesting tradeoff between a parsimonious model that has few parameters but is difficult to fit and a high-dimensional model that is easy to fit. Now, the usual danger of having too many parameters is that you overfit and thus fit the noise at the expense of the signal and have no ability to generalize. However, deep learning models seem to be able to overcome this limitation.
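The eigenvalue argument is easy to check numerically (a sketch, using random symmetric Gaussian matrices as stand-ins for Hessians at critical points): the fraction that are positive definite collapses as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_positive_definite(dim, trials=2000):
    """Fraction of random symmetric Gaussian matrices with all
    eigenvalues positive (i.e. that would be local minima)."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / 2              # symmetrize, like a Hessian
        if np.all(np.linalg.eigvalsh(h) > 0):
            count += 1
    return count / trials

for dim in (1, 2, 3, 5, 8):
    print(f"dim {dim}: fraction positive definite = {frac_positive_definite(dim):.4f}")
# Already by dimension 8 essentially every sampled matrix has at least
# one negative eigenvalue: almost all critical points are saddles.
```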

Hence, if the dimension is high enough, evolution can work, while if it is too low then you need a revolution. So the question is: what is the dimensionality of governance and politics? In my opinion, the historical record suggests that revolutions generally do not lead to good outcomes and even when they do, small incremental changes seem to get you to a similar place. For example, the US and France had bloody revolutions while Canada and England did not, and they have all arrived at similar liberal democratic systems. In fact, one could argue that a constitutional monarchy (like Canada and Denmark), where the head of state is a figurehead, is more stable and benign than a republic, like Venezuela or Russia (e.g. see here). This distinction could have pertinence for the current US election if a group of well-meaning people, who believe that the two major parties do not have any meaningful difference, do not vote or vote for a third party. They should keep in mind that incremental change is possible and small policy differences can and do make a difference in people’s lives.

Forming a consistent political view

In view of the current US presidential election, I think it would be a useful exercise to see if I could form a rational political view that is consistent with what I actually know and believe from my training as a scientist. From my knowledge of dynamical systems and physics, I believe in the inherent unpredictability of complex nonlinear systems. Uncertainty is a fundamental property of the universe at all scales. From neuroscience, I know that people are susceptible to errors, do not always make optimal choices, and are inconsistent. People are motivated by whatever triggers dopamine to be released. From genetics, I know that many traits are highly heritable, including height, BMI, IQ, and the Big Five personality traits. There is a lot of human variation. People are motivated by different things, have various aptitudes, and have various levels of honesty and trustworthiness. However, from evolutionary theory, I know that genetic variance is also essential for any species to survive. Variety is not just the spice of life, it is also the meat. From ecology, I know that the world is a linked ecosystem. Everything is connected. From computer science, I know that there are classes of problems that are easy to solve, classes that are hard to solve, and classes that are impossible to solve, and no amount of computing power can change that. From physics and geology, I fully accept that greenhouse gases affect the energy balance on earth and that the climate is changing. However, given the uncertainty of dynamical systems, while I do believe that current climate models are pretty good, there does exist the possibility that they are missing something. I believe that the physical laws that govern our lives are computable and this includes consciousness. I believe everything is fallible, and that includes people, markets, and government.

So how would that translate into a political view? Well, it would be a mishmash of what might be considered socialist, liberal, conservative, and libertarian ideas. Since I think randomness and luck are a large part of life, including who your parents are, I do not subscribe to the theory of just deserts. I don’t think those with more “talents” deserve all the wealth they can acquire. However, I also realize that we are motivated by dopamine and part of what triggers dopamine is reaping the rewards of our efforts, so we must leave incentives in place. We should not try to make society completely equal but redistributive taxation is necessary and justified.

Since I think people are basically incompetent and don’t always make good choices, people sometimes need to be protected from themselves. We need some nanny state regulations such as building codes, water and air quality standards, transportation safety, and toy safety. I don’t believe that all drugs should be legalized because some drugs can permanently damage brains, especially those of children. Amphetamines and opioids should definitely be illegal. Marijuana is probably okay but not for children. Pension plans should be defined benefit (rather than defined contribution) schemes. Privatizing social security would be a disaster. However, we should not over-regulate. I would deregulate a lot of land use, especially density requirements. We should eliminate all regulations that enforce monopolies, including some professional requirements that deliberately restrict supply. We should not try to pick winners in any industry.

I believe that people will try to game the system so we should design welfare and tax systems that minimize the possibility of cheating. The current disability benefits program needs to be fixed. I do not believe in means testing for social programs as it gives room to cheat. Cheating not only depletes the system but also engenders resentment in others who do not cheat. Part of the anger of the working class is that they see people around them gaming the system. The way out is to replace the entire welfare system with a single universal basic income. People have argued that it makes no sense for Bill Gates and Warren Buffett to get a basic income. In actuality, they would end up paying most of it back in taxes. In biology, this is called a futile cycle but it has utility since it is easier to give everyone the same benefits and tax according to one rule than to have exceptions for everything as we have now. We may not be able to afford a basic income now but we eventually will.

Given our lack of certainty and incompetence, I would be extremely hesitant about any military interventions on foreign soil. We are as likely to make things worse as we are to make them better. I think free trade is on net a good thing because it does lead to higher efficiency and helps people in lower income countries. However, it will absolutely hurt some segment of the population in the higher income country. Since income is correlated with demand for your skills, in a globalized world those with skills below the global median will be losers. If a lot of people will do your job for less then you will lose your job or get paid less. For the time being, there should be some wage support for low wage people but eventually this should transition to the basic income.

Since I believe the brain is computable, any job a human can do, a robot will eventually do as well or better. No job is safe. I do not know when the mass displacement of work will take place but I am sure it will come. As I wrote in my AlphaGo piece, not everyone can be a “knowledge” worker, media star, or CEO. People will find things to do but they won’t all be able to earn a living off of it in our current economic model. Hence, in the robot world, everyone would get a basic income and guaranteed health care and then be free to do whatever they want to supplement that income, including doing nothing. I romantically picture a simulated 18th-century world with people indulging in low productivity work but it could be anything. This would be financed by taxing the people who are still making money.

As for taxes, I think we need to go to a system that de-emphasizes income taxes, which can be gamed and which disincentivize work, and instead taxes the use of shared resources (i.e. economic rents). This includes land rights, mineral rights, water rights, air rights, solar rights, wind rights, monopoly rights, ecosystem rights, banking rights, genetic rights, etc. These are resources that belong to everyone. We could use a land value tax model. When people want to use a resource, like land to build a house, they would pay the intrinsic value of that resource. They would keep any value they added. This would incentivize efficient use of the resource while not telling anyone how to use it.

We could use an auction system to value these resources and rights. Hence, we need not regulate Wall Street firms per se but we would tax them according to the land they use and what sort of monopoly influence they exploit. We wouldn’t need to force them to obey capital requirements; we would tax them for the right to leverage debt. We wouldn’t need Glass-Steagall or Too Big to Fail laws for banks. We’ll just tax them for the right to do these things. We would also not need a separate carbon tax. We’ll tax the right to extract fossil fuels at a level equal to the resource value plus the full future cost to the environment. The climate change debate would then shift to being about the discount rate. Deniers would argue for a large rate and alarmists for a small one. Sports leagues and teams would be taxed for their monopolies. The current practice of preventing cities from owning teams would be taxed.

The patent system needs serious reform. Software patents should be completely eliminated. Instead of giving someone arbitrary monopoly rights for a patent, patent holders should be taxed at some level that increases with time. This would force holders to commercialize, sell or relinquish the patent when they could no longer bear the tax burden and this would eliminate patent trolling.

We must accept that there is no free will per se, so crime and punishment must be reinterpreted. We should only evaluate whether offenders are dangerous to society and the seriousness of the crime. Motive should no longer be important. Only dangerous offenders would be institutionalized or incarcerated. Non-dangerous ones should repay the cost of the crime plus a penalty. We should also mount a Manhattan Project for nonlethal weapons so the police can carry them.

Finally, under the belief that nothing is certain, laws and regulations should be regularly reviewed including the US Constitution and the Bill of Rights. In fact, I propose that the 28th Amendment be that all laws and regulations must be reaffirmed or they will expire in some set amount of time.

The simulation argument made quantitative

Elon Musk, of SpaceX, Tesla, and SolarCity fame, recently mentioned that he thought the odds of us not living in a simulation were a billion to one. His reasoning was based on extrapolating the rate of improvement in video games. He suggests that soon it will be impossible to distinguish simulations from reality and that in ten thousand years there could easily be billions of simulations running. Thus there would be a billion times more simulated universes than real ones.

This simulation argument was first quantitatively formulated by philosopher Nick Bostrom. He even has an entire website devoted to the topic (see here). In his original paper, he proposed a Drake-like equation for the fraction of all “humans” living in a simulation:

f_{sim} = \frac{f_p f_I N_I}{f_p f_I N_I + 1}

where f_p is the fraction of human level civilizations that attain the capability to simulate a human populated civilization, f_I is the fraction of these civilizations interested in running civilization simulations, and N_I is the average number of simulations running in these interested civilizations. He then argues that if N_I is large, then either f_{sim}\approx 1 or f_p f_I \approx 0. Musk believes that it is highly likely that N_I is large and f_p f_I is not small so, ergo, we must be in a simulation. Bostrom says his gut feeling is that f_{sim} is around 20%. Steve Hsu mocks the idea (I think). Here, I will show that we have absolutely no way to estimate our probability of being in a simulation.
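For concreteness, Bostrom’s formula is a one-liner; the parameter values below are arbitrary illustrations of the two limits, not estimates.

```python
def f_sim(f_p, f_I, N_I):
    """Bostrom's fraction of human-like observers living in a simulation."""
    x = f_p * f_I * N_I
    return x / (x + 1)

# If even a small fraction of civilizations run many sims, f_sim -> 1 ...
print(f_sim(0.01, 0.01, 10**9))    # ~0.99999
# ... but if f_p * f_I is effectively zero, f_sim -> 0.
print(f_sim(1e-12, 0.01, 10**9))   # ~0.00001
```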

The reason is that Bostrom’s equation obscures the possibility that two of its underlying quantities could both diverge. This is more clearly seen by rewriting his equation as

f_{sim} = \frac{y}{x+y} = \frac{y/x}{y/x+1}

where x is the number of non-sim civilizations and y is the number of sim civilizations. (Re-labeling x and y as people or universes does not change the argument). Bostrom and Musk’s observation is that once a civilization attains simulation capability then the number of sims can grow exponentially (people in sims can run sims and so forth) and thus y can overwhelm x and ergo, you’re in a simulation. However, this is only true in a world where x is not growing or growing slowly. If x is also growing exponentially then we can’t say anything at all about the ratio of y to x.

I can give a simple example. Consider the following dynamics

\frac{dx}{dt} = ax

\frac{dy}{dt} = bx + cy

y is being created by x but both are growing exponentially. The interesting property of exponentials is that a solution to these equations for a > c is

x = \exp(at)

y = \frac{b}{a-c}\exp(at)

where I have chosen convenient initial conditions that don’t affect the results. Even though y is growing exponentially on top of an exponential process, the growth rates of x and y are the same. The probability of being in a simulation is then

f_{sim} = \frac{b}{a+b-c}

and we have no way of knowing what this is. The analogy is that you have a goose laying eggs, and each daughter in turn lays eggs, and so on. It would seem like there would be more eggs from the collective progeny than from the original mother. However, if the rate of egg laying by the original mother goose is increasing exponentially then the number of mother eggs can grow as fast as the number of daughter, granddaughter, great…, eggs. This is just another example of how thinking quantitatively can give interesting (and sometimes counterintuitive) results. Until we have a better idea about the physics underlying our universe, we can say nothing about our odds of being in a simulation.
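The claim is easy to verify numerically (a minimal sketch; the rate constants are arbitrary choices with a > c, and I start from x = 1, y = 0 rather than the special initial conditions above, which only changes a transient):

```python
from scipy.integrate import solve_ivp

a, b, c = 1.0, 0.5, 0.3            # arbitrary rates with a > c

def rhs(t, z):
    x, y = z
    return [a * x, b * x + c * y]  # dx/dt = ax, dy/dt = bx + cy

sol = solve_ivp(rhs, (0, 30), [1.0, 0.0], rtol=1e-8)
x, y = sol.y[:, -1]
print(f"y/x at t = 30:   {y / x:.4f}  predicted b/(a-c)   = {b / (a - c):.4f}")
print(f"f_sim at t = 30: {y / (x + y):.4f}  predicted b/(a+b-c) = {b / (a + b - c):.4f}")
```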

Addendum: One of the predictions of this simple model is that there should be lots of pre-sim universes. I have always found it interesting that the age of the universe is only about three times that of the earth. Given that the expansion rate of the universe is actually increasing, the lifetime of the universe is likely to be much longer than the current age. So, why is it that we are alive at such an early stage of our universe? Well, one reason may be that the rate of universe creation is very high and so the probability of being in a young universe is higher than being in an old one.

Addendum 2: I only gave a specific solution to the differential equation. The full solution has the form Y_1\exp(at) + Y_2 \exp(ct). However, as long as a > c, the first term will dominate.

Addendum 3: I realized that I didn’t make it clear that the civilizations don’t need to be in the same universe. Multiverses with different parameters are predicted by string theory.  Thus, even if there is less than one civilization per universe, universes could be created at an exponentially increasing rate.


Confusion about consciousness

I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.

I think Strawson’s argument is mostly sound but he misses the crucial open question of consciousness. It is true that we don’t have an understanding of the true essence of matter and we probably never will, but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, and computers, and to launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness and, more importantly, we do not know whether these rules are sufficient to understand consciousness or whether we need a different set of rules or a different reality or whatever. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Even setting aside the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equation for macroscopic fluid flow, and how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know and are not even close to knowing. This is the hard problem of consciousness.

The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.

Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se but they can be implemented on a von Neumann computer. We may not know what the brain is doing but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable, although, as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and would thus entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.
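
As a toy illustration that computations are not tied to implementations (the network and weights below are my own contrivance, not a model of the brain), here is a non-symbolic, neural-style computation running happily on an ordinary von Neumann machine:

```python
def unit(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fire iff the weighted input sum reaches threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def xor(x1, x2):
    h1 = unit((x1, x2), (1, 1), 1)     # OR: fires if either input is on
    h2 = unit((x1, x2), (1, 1), 2)     # AND: fires only if both are on
    return unit((h1, h2), (1, -1), 1)  # OR but not AND = exclusive or

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, xor(*pair))            # prints 0, 1, 1, 0
```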

Secondly, the brain processes information by definition. Information, in both the Shannon and Fisher senses, is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information: where and when. Before you received that information your uncertainty was huge, since there were so many possible places and times the meeting could take place. After receiving the information your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain’s job, at least for cognition, is about uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
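
To make the coffee example concrete, here is a minimal sketch (the counts of candidate cafes, days, and time slots are invented for illustration) of information as uncertainty reduction:

```python
from math import log2

# Hypothetical meeting: 16 candidate cafes, 4 possible days, 2 slots per day.
places, days, slots_per_day = 16, 4, 2

before = log2(places * days * slots_per_day)  # 7 bits of uncertainty to start
after_day = log2(places * slots_per_day)      # 5 bits left once you hear "Thursday"
after_all = log2(1)                           # 0 bits once you know where and when

print(before - after_day)   # 2.0 bits of information in the day alone
print(before - after_all)   # 7.0 bits in the full where-and-when message
```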

Chomsky on The Philosopher’s Zone

Listen to MIT Linguistics Professor Noam Chomsky on ABC’s radio show The Philosopher’s Zone (link here).  Even at 87, he is still as razor sharp as ever. I’ve always been an admirer of Chomsky although I think I now mostly disagree with his ideas about language. I do remember being completely mesmerized by the few talks I attended when I was a graduate student.

Chomsky is the father of modern linguistics. He turned it into a subfield of computer science and mathematics. People still use Chomsky Normal Form and the Chomsky Hierarchy in computer science. Chomsky believes that the language ability is universal among all humans and is genetically encoded. He comes to this conclusion because in his mathematical analysis of language he found what he called “deep structures”, which are embedded rules that we are consciously unaware of when we use language. He was adamantly opposed to the idea that language could be acquired via a probabilistic machine learning algorithm. His most famous example is that we know the sentence “Colorless green ideas sleep furiously” is grammatical but nonsensical, while the sentence “Furiously sleep ideas green colorless” is ungrammatical. Since neither of these sentences had ever been spoken or written, he surmised that no statistical algorithm could ever learn the difference between the two. I think it is pretty clear now that Chomsky was incorrect and that machine learning can learn to parse language and classify these sentences. There has also been field work suggesting that there exist languages in the Amazon that are qualitatively different from the universal set. It seems that the brain, rather than having an innate ability for grammar and language, may have an innate ability to detect and learn deep structure with a very small amount of data.
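
As a toy illustration of that formal machinery (the grammar fragment is my own contrivance, not Chomsky’s analysis), here is the CYK algorithm with a grammar in Chomsky Normal Form that accepts the famous sentence and rejects its reversal:

```python
from itertools import product

lexicon = {  # terminal rules: word -> categories
    'colorless': {'ADJ'}, 'green': {'ADJ'}, 'ideas': {'NP'},
    'sleep': {'V'}, 'furiously': {'ADV'},
}
rules = {  # binary CNF rules: (B, C) -> A
    ('NP', 'VP'): 'S', ('ADJ', 'NP'): 'NP', ('V', 'ADV'): 'VP',
}

def grammatical(words):
    n = len(words)
    # table[i][span] = categories that can generate words[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):
                for B, C in product(table[i][k], table[i + k][span - k]):
                    if (B, C) in rules:
                        table[i][span].add(rules[(B, C)])
    return 'S' in table[0][n]

print(grammatical('colorless green ideas sleep furiously'.split()))  # True
print(grammatical('furiously sleep ideas green colorless'.split()))  # False
```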

The host Joe Gelonesi, who has filled in admirably for the sadly departed Alan Saunders, asks Chomsky about the hard problem of consciousness near the end of the program. Chomsky, in his typical fashion of invoking 17th and 18th century philosophy, dismisses it by claiming that science itself, and physics in particular, long ago dispensed with the equivalent notion. He says that the moment Newton wrote down the equation for gravitational force, which requires action at a distance, physics stopped being about making the universe intelligible and became about creating predictive theories. He thus believes that we will eventually be able to create a theory of consciousness, although it may not be intelligible to humans. He also seems to subscribe to panpsychism, where consciousness is a property of matter like mass, an idea championed by Christof Koch and Giulio Tononi. However, as I pointed out before, panpsychism is dualism. If consciousness does exist, then it exists apart from the way we currently describe the universe. Lately, I’ve come to believe and accept the fact that consciousness is an epiphenomenon with no causal consequence in the universe. I must credit David Chalmers (e.g. see previous post) for making it clear that this is the only alternative to dualism. We are no more and no less than automata caroming through the universe, with the ability to spectate a few tens of milliseconds after the fact.

Addendum: As pointed out in the comments, there are monistic theories, as espoused by Bishop Berkeley, in which only ideas are real. My point, that an epiphenomenal consciousness is the only alternative to dualism, holds only if one adheres to materialism.


The nature of evil

In our current angst over terrorism and extremism, I think it is important to understand the motivation of the agents behind the evil acts if we are ever to remedy the situation. The observable element of evil (actus reus) is the harm done to innocent individuals. However, in order to prevent evil acts, we must understand the motivation behind the evil (mens rea). The Radiolab podcast “The Bad Show” gives an excellent survey of the possible varieties of evil. I will categorize evil into three types, each with increasing global impact. The first is the compulsion or desire within an individual to harm another. This is what motivates serial killers like the one described in the show. Generally, such evilness will be isolated and the impact will be limited, albeit grisly. The second is related to what philosopher Hannah Arendt called “The Banality of Evil.” This is an evil where the goal of the agent is not to inflict harm per se, as in the first case; rather, in the pursuit of some other goal, the agent makes no attempt to avoid harming others. This type of sociopathic evil is much more dangerous and widespread, as was most recently seen in Volkswagen’s fraudulent scheme to pass emissions standards. Although there are sociopathic individuals who really have no concern for others, I think many perpetrators in this category are swayed by cultural norms or pressures to conform. The third type of evil is when the perpetrator believes the act is not evil at all but a means to a just and noble end. This is the most pernicious form of evil because when it is done by “your side” it is not considered evil. For example, the dropping of atomic bombs on Japan was considered to be a necessary sacrifice of a few hundred thousand lives to end WWII and save many more lives.

I think it is important to understand that the current wave of terrorism and unrest in the Middle East is motivated by the third type. Young people are joining ISIS not because they particularly enjoy inflicting harm on others or because they don’t care how their actions affect others, but because they are rallying to a cause they believe to be right and important. Many if not most suicide bombers come from middle class families and many are women. They are not merely motivated by the promise of a better afterlife or by a dire economic situation, as I once believed. They are doing this because they believe in the cause and because they feel part of something bigger than themselves. The same unwavering belief and hubris that led people to Australia fifty thousand years ago is probably what motivates ISIS today. They are not nihilists, as many in the west believe. They have an entirely different value system and they view the west as being as evil as the west sees them. Until we fully acknowledge this we will not be able to end it.

The Drake equation and the Cambrian explosion

This summer billionaire Yuri Milner announced that he would spend upwards of 100 million dollars to search for extraterrestrial intelligent life (here is the New York Times article). This quest to see if we have company started about fifty years ago when Frank Drake pointed a radio telescope at some stars. To help estimate the number of possible civilizations, N, Drake wrote down his celebrated equation,

N = R_* f_p n_e f_l f_i f_c L

where R_* is the rate of star formation, f_p is the fraction of stars with planets, n_e is the average number of planets per star that could support life, f_l is the fraction of planets that develop life, f_i is the fraction of those planets that develop intelligent life, f_c is the fraction of civilizations that emit signals, and L is the length of time civilizations emit signals.

The past few years have demonstrated that planets in the galaxy are likely to be plentiful, and although the technology to locate earth-like planets does not yet exist, my guess is that they will also turn out to be plentiful. So does that mean it is just a matter of time before we find ET? I’m going to go on record here and say no. My guess is that life is rare and intelligent life may be so rare that there could be only one civilization at a time in any given galaxy.

While we are now filling in numbers for the first few factors of Drake’s equation, we have absolutely no idea about the remaining ones. However, I have good reason to believe that their product is astronomically small, and that reason is statistical independence. Although Drake factored the probability of intelligent life into the probability of life forming times the probability that it goes on to develop extra-planetary communication capability, there are actually a lot of factors in between. One striking example is the probability of the formation of multicellular life. For the better part of three and a half billion years of earth’s history, we had mostly single-celled life and maybe a smattering of multicellular experiments. Then suddenly, about half a billion years ago, we had the Cambrian Explosion, when the multicellular animal life from which we are descended came onto the scene. This implies that forming multicellular life is extremely difficult, and it is easy to envision an earth where it never formed at all.

We can continue. If it weren’t for an asteroid impact, the dinosaurs may never have gone extinct and mammals may not have developed. Even more recently, there seem to have been many species of advanced primates, yet only one invented radios. Agriculture only developed ten thousand years ago, which means that modern humans took about a hundred thousand years to discover it, and only in one place. I think it is equally plausible that humans could have gone extinct like all of our other Australopithecus and Homo cousins. Life in the sea has existed much longer than life on land, and there is no technologically advanced sea creature, although I do think octopuses, dolphins and whales are intelligent.

We have around 100 billion stars in the galaxy, and let’s just say that each has a habitable planet. Well, if the probability of each stage of life is one in a billion and we need, say, three stages to attain technology, then the probability of finding ET is one in 10^{16}. I would say that this is an optimistic estimate. Probabilities get small really quickly when you multiply them together. The probability of single-celled life will be much higher. It is possible that there could be a hundred planets in our galaxy that have life, but the chance that one of those is within a hundred light years will again be very low. However, I do think it is a worthwhile exercise to look for extraterrestrial life, especially for oxygen or other gases emitted by life in the atmospheres of exoplanets. It could tell us a lot about biology on earth.
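
For the record, here is the arithmetic as a minimal sketch (all of the numbers are the rough guesses from the text above, not measurements):

```python
# Back-of-the-envelope Drake-style estimate; every number here is a guess.
habitable_planets = 1e11   # ~100 billion stars, one habitable planet each
p_stage = 1e-9             # guessed odds of passing any one evolutionary stage
n_stages = 3               # e.g. life, multicellularity, technology

expected_civilizations = habitable_planets * p_stage ** n_stages
print(expected_civilizations)  # 1e-16: multiplied probabilities shrink fast
```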

2015-10-1: I corrected a factor of 10 error in some of the numbers.

The philosophy of Thomas the Tank Engine

My toddler loves to watch the television show Thomas and Friends, based on The Railway Series books by the Rev. Wilbert Awdry. The show tells the story of sentient trains on Sodor, a mythical island off the British coast. Each episode is a morality play in which one of the trains causes some problem because of a character flaw like arrogance or vanity that eventually comes to the attention of the avuncular head of the railroad, Sir Topham Hatt (called The Fat Controller in the UK). He mildly chastises the train, who becomes aware of his foolishness (it’s almost always a he) and remedies the situation.

While I think the show has some educational value for small children, it also brings up some interesting ethical and metaphysical questions that could be very relevant for our near future. For one, although the trains are sentient and seem to have full control over their actions, some of them also have human drivers. What are these drivers doing? Are they simply observers or are they complicit in the ill-judged actions of the trains? Should they be held responsible for the mistakes of the train? Who has true control, the driver or the train? Can one over-ride the other? These questions will be on everyone’s minds when the first self-driving cars hit the mass market in a few years.

An even more relevant ethical dilemma regards the place the trains have in society. Are they employees or indentured servants of the railroad company? Are they free to leave the railroad if they want? Do they own possessions? When the trains break down they are taken to the steam works, which is run by a train named Victor. However, humans effect the repairs. Do they take orders from Victor? Presumably, the humans get paid and are free to change jobs so is this a situation where free beings are supervised by slaves?

The highest praise a train can receive from Sir Topham Hatt is that he or she was “very useful.” This is not something one would say to a human employee in a modern corporation. You might say you were very helpful or that your action was very useful, but it sounds dehumanizing to say “you are useful.” Thus, Sir Topham Hatt, at least, does not seem to consider the trains to be human. Perhaps he considers them to be more like domesticated animals. However, these are animals that clearly have aspirations, goals, and feelings of self-worth. It seems to me that they should be afforded the full rights of any other citizen of Sodor. As machines become more and more integrated into our lives, it may well be useful to probe the philosophical quandaries of Thomas and Friends.


Sebastian Seung and the Connectome

The New York Times Magazine has a nice profile of theoretical neuroscientist Sebastian Seung this week. I’ve known Sebastian since we were graduate students in Boston in the 1980s. We were both physicists then and both ended up in biology, though through completely different paths. The article focuses on his quest to map all the connections in the brain, which he terms the connectome. Near the end of the article, neuroscientist Eve Marder of Brandeis comments on the endeavor with the pithy remark that “If we want to understand the brain, the connectome is absolutely necessary and completely insufficient.” The article ends with

Seung agrees but has never seen that as an argument for abandoning the enterprise. Science progresses when its practitioners find answers — this is the way of glory — but also when they make something that future generations rely on, even if they take it for granted. That, for Seung, would be more than good enough. “Necessary,” he said, “is still a pretty strong word, right?”

Personally, I am not sure if the connectome is necessary or sufficient, although I do believe it is a worthy task. However, my hesitation is not because of what was proposed in the article, namely that we exist in a fluid world while the connectome is static. Like Sebastian, I do believe that memories are stored in the connectome and that “your” connectome captures much of the essence of “you”. Many years ago, the CPU on my computer died. Our IT person swapped out the CPU and when I turned my computer back on, it was like nothing had happened. This made me realize that everything about the computer that was important to me was stored on the hard drive. The CPU didn’t matter even though everything the computer did relied on the CPU. I think the connectome is like the hard drive, and trying to figure out how the brain works from it is like trying to reverse engineer the CPU from the hard drive. You can certainly get clues from it, such as the fact that information is stored in binary form, but recreating an entire hard drive may be neither necessary nor sufficient to figure out how a computer works. Likewise, someday we may use the connectome to recover lost memories or treat some diseases, but we may not need it to understand how a brain works.