AlphaGo and the Future of Work

In March of this year, Google DeepMind’s computer program AlphaGo defeated world Go champion Lee Sedol. This was hailed as a great triumph of artificial intelligence and signaled to many the beginning of a new age when machines take over. I believe this is true, but the real lesson of AlphaGo’s win is not how great machine learning algorithms are but how suboptimal human Go players are. Experts believed that machines would not be able to defeat humans at Go for a long time because the number of possible games is astronomically large, \sim 250^{150}, in contrast to chess with a paltry \sim 35^{80}. Additionally, unlike chess, it is not clear what constitutes a good position or who is winning during intermediate stages of a game. Thus, any direct enumeration and evaluation of possible next moves, as chess computers like IBM’s Deep Blue (which defeated Garry Kasparov) do, seemed to be impossible. It was thought that humans had some sort of inimitable intuition for playing Go that machines were decades away from emulating. It turns out that this was wrong. It took remarkably little training for AlphaGo to defeat a human. All the algorithms used were fairly standard – supervised and reinforcement learning with backpropagation in multi-layer neural networks [1]. DeepMind just put them together in a clever way and had the (in retrospect appropriate) audacity to try.
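To get a sense of that gap, a quick back-of-the-envelope logarithm (my own arithmetic, just to make the comparison concrete) gives

\log_{10} 250^{150} = 150 \log_{10} 250 \approx 360 \quad \mbox{versus} \quad \log_{10} 35^{80} = 80 \log_{10} 35 \approx 124

so Go has on the order of 10^{236} times more possible game sequences than chess, far beyond anything that can be searched exhaustively.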

The take-home message of AlphaGo’s success is that humans are very, very far away from being optimal at playing Go. Uncharitably, we simply stink at Go. However, this probably also means that we stink at almost everything we do. Machines are going to take over our jobs not because they are sublimely awesome but because we are stupendously inept. It is like the old joke about two hikers encountering a bear and one starts to put on running shoes. The other hiker says: “Why are you doing that? You can’t outrun a bear,” to which she replies, “I only need to outrun you!” In fact, the more difficult a job seems to be for humans to perform, the easier it will be for a machine to do better. This was noticed a long time ago in AI research and is called Moravec’s Paradox. Tasks that require a lot of high-level abstract thinking, like playing chess or predicting what movie you will like, are easy for computers to do, while seemingly trivial tasks that a child can do, like folding laundry or getting a cookie out of a jar on an unreachable shelf, are really hard. Thus high-paying professions in medicine, accounting, finance, and law could be replaced by machines sooner than lower-paying ones in lawn care and house cleaning.

There are those who are not worried about a future of mass unemployment because they believe people will just shift to other professions. They point out that a century ago a majority of Americans worked in agriculture and now the sector employs less than 2 percent of the population. The jobs that were lost to technology were replaced by ones that didn’t exist before. I think this might be true, but in the future not everyone will be a software engineer or a media star or the CEO of her own company of robot employees. The increase in productivity provided by machines ensures this. When the marginal cost of production (i.e. the cost to make one more item) goes to zero, as it has for software and recorded media now, the whole supply-demand curve is upended. There is infinite supply for any amount of demand, so the only way to make money is to increase demand.

The rate-limiting step for demand is the attention span of humans. In a single day, a person can attend to at most a few hundred independent tasks such as thinking, reading, writing, walking, cooking, eating, driving, exercising, or consuming entertainment. I can stream any movie I want now and I only watch at most twenty a year, almost all of them on long-haul flights. My 3-year-old can watch the same Wild Kratts episode (a great children’s show about animals) ten times in a row without getting bored. Even though everyone could be a video or music star on YouTube, superstars such as Beyoncé and Adele are viewed much more than anyone else. Even with infinite choice, we tend to do what our peers do. Thus, for a population of ten billion people, I doubt there can be more than a few million who can make a decent living as media stars under our current economic model. The same goes for writers. This will also generalize to manufactured goods. Toasters and coffee makers essentially cost nothing compared to three decades ago, and I will only buy one every few years, if that. Robots will only make things cheaper, and I doubt there will be a billion brands of TVs or toasters. Most likely, a few companies will dominate the market as they do now. Even if we could optimistically assume that a tenth of the population could be engaged in producing the goods and services necessary for keeping the world functioning, that still leaves the rest with little to do.

Even much of what scientists do could eventually be replaced by machines. Biology labs could consist of a principal investigator and robot technicians. Although it seems like science is endless, the amount of new science required to sustain the modern world could diminish. We could eventually have an understanding of biology sufficient to treat most diseases and injuries and develop truly sustainable energy technologies. In that case, machines could be tasked with keeping the modern world up and running with little need of input from us. Science would mostly be devoted to abstract and esoteric concerns.

Thus, I believe the future for humankind is in low-productivity occupations – basically a return to pre-industrial endeavors like small-plot farming, blacksmithing, carpentry, painting, dancing, and pottery making, with an economic system in place that lets people live adequately off this labor. Machines can provide us with the necessities of life while we engage in a simulated 18th century world but without the poverty, diseases, and mass famines that made life so harsh back then. We can make candles or bread and sell them to our neighbors for a living wage. We can walk or get in self-driving cars to see live performances of music, drama and dance by local artists. There will be philosophers and poets with their small followings, as they have now. However, even when machines can do everything humans can do, there will still be the capacity to sustain as many mathematicians as there are people because mathematics is infinite. As long as P is not NP, theorem proving can never be fully automated and there will always be unsolved math problems. That is not to say that machines won’t be able to do mathematics. They will. It’s just that they won’t ever be able to do all of it. Thus, the future of work could also be mathematics.

  1. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).

The simulation argument made quantitative

Elon Musk, of SpaceX, Tesla, and SolarCity fame, recently mentioned that he thought the odds of us not living in a simulation were a billion to one. His reasoning was based on extrapolating the rate of improvement in video games. He suggests that soon it will be impossible to distinguish simulations from reality and that in ten thousand years there could easily be billions of simulations running. Thus there would be a billion times more simulated universes than real ones.

This simulation argument was first quantitatively formulated by philosopher Nick Bostrom. He even has an entire website devoted to the topic (see here). In his original paper, he proposed a Drake-like equation for the fraction of all “humans” living in a simulation:

f_{sim} = \frac{f_p f_I N_I}{f_p f_I N_I + 1}

where f_p is the fraction of human level civilizations that attain the capability to simulate a human populated civilization, f_I is the fraction of these civilizations interested in running civilization simulations, and N_I is the average number of simulations running in these interested civilizations. He then argues that if N_I is large, then either f_{sim}\approx 1 or f_p f_I \approx 0. Musk believes that it is highly likely that N_I is large and f_p f_I is not small so, ergo, we must be in a simulation. Bostrom says his gut feeling is that f_{sim} is around 20%. Steve Hsu mocks the idea (I think). Here, I will show that we have absolutely no way to estimate our probability of being in a simulation.
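The dichotomy follows directly from the shape of the formula. Writing z = f_p f_I N_I for brevity (my shorthand, not Bostrom’s),

f_{sim} = \frac{z}{z+1} \approx \begin{cases} 1 & \mbox{if } z \gg 1 \\ z & \mbox{if } z \ll 1 \end{cases}

so either simulated observers vastly outnumber real ones, or the product f_p f_I is small enough to cancel even a very large N_I.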

The reason is that Bostrom’s equation obscures the possibility of two possible divergent quantities. This is more clearly seen by rewriting his equation as

f_{sim} = \frac{y}{x+y} = \frac{y/x}{y/x+1}

where x is the number of non-sim civilizations and y is the number of sim civilizations. (Re-labeling x and y as people or universes does not change the argument). Bostrom and Musk’s observation is that once a civilization attains simulation capability then the number of sims can grow exponentially (people in sims can run sims and so forth) and thus y can overwhelm x and ergo, you’re in a simulation. However, this is only true in a world where x is not growing or growing slowly. If x is also growing exponentially then we can’t say anything at all about the ratio of y to x.

I can give a simple example. Consider the following dynamics:

\frac{dx}{dt} = ax

\frac{dy}{dt} = bx + cy

y is being created by x but both are growing exponentially. The interesting property of exponentials is that a solution to these equations for a > c is

x = \exp(at)

y = \frac{b}{a-c}\exp(at)

where I have chosen convenient initial conditions (x(0) = 1 and y(0) = b/(a-c)) that don’t affect the results. Even though y is growing exponentially on top of an exponential process, the growth rates of x and y are the same. The probability of being in a simulation is then

f_{sim} = \frac{b}{a+b-c}

and we have no way of knowing what this is. The analogy is that you have a goose laying eggs and each daughter lays eggs, which also lay eggs. It would seem like there would be more eggs from the collective progeny than the original mother. However, if the rate of egg laying by the original mother goose is increasing exponentially then the number of mother eggs can grow as fast as the number of daughter, granddaughter, great…, eggs. This is just another example of how thinking quantitatively can give interesting (and sometimes counterintuitive) results. Until we have a better idea about the physics underlying our universe, we can say nothing about our odds of being in a simulation.
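A quick numerical check of this result (a minimal sketch; the rate constants a = 1, b = 0.5, c = 0.3 are arbitrary values I picked for illustration) integrates the two equations and compares the fraction y/(x+y) to b/(a+b-c):

# Integrate dx/dt = a*x, dy/dt = b*x + c*y and check that the fraction
# of simulated civilizations y/(x+y) settles at b/(a+b-c) when a > c.
# The rate constants are arbitrary illustrative values.
import numpy as np
from scipy.integrate import odeint

a, b, c = 1.0, 0.5, 0.3

def rhs(state, t):
    x, y = state
    return [a * x, b * x + c * y]

t = np.linspace(0, 30, 1000)
sol = odeint(rhs, [1.0, 0.0], t)  # one real civilization, zero sims to start
x, y = sol[:, 0], sol[:, 1]

print("numerical f_sim:    ", y[-1] / (x[-1] + y[-1]))
print("predicted b/(a+b-c):", b / (a + b - c))

Starting from zero sims rather than y(0) = b/(a-c) does not matter: the transient \exp(ct) term dies off relative to \exp(at) (see Addendum 2), and both numbers come out at about 0.42.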

Addendum: One of the predictions of this simple model is that there should be lots of pre-sim universes. I have always found it interesting that the age of the universe is only about three times that of the earth. Given that the expansion rate of the universe is actually increasing, the lifetime of the universe is likely to be much longer than the current age. So, why is it that we are alive at such an early stage of our universe? Well, one reason may be that the rate of universe creation is very high and so the probability of being in a young universe is higher than being in an old one.

Addendum 2: I only gave a specific solution to the differential equation. The full solution for y has the form Y_1\exp(at) + Y_2 \exp(ct). However, as long as a > c, the first term will dominate.

Addendum 3: I realized that I didn’t make it clear that the civilizations don’t need to be in the same universe. Multiverses with different parameters are predicted by string theory. Thus, even if there is less than one civilization per universe, universes could be created at an exponentially increasing rate.

What Uber doesn’t get

You may have heard that the ride-hailing services Uber and Lyft have pulled out of Austin, TX because they refuse to be regulated. You can read about the details here. The city wanted to fingerprint drivers, as it does for taxi drivers, but Uber and Lyft forced a referendum on the city to make them exempt or else they would leave. The city voted against them. I personally use Uber and really like it, but what I like about Uber has nothing to do with Uber per se or with regulation. What I like is that 1) no money needs to be exchanged, especially the tip, and 2) the price is essentially fixed, so it is in the driver’s interest to get me to my destination as fast as possible. I have been taken on joy rides far too many times by taxi drivers trying to maximize the fare, and I never know how much to tip. However, these are things that regulated taxis could and should implement. I do think it is extremely unfair that Uber can waltz into a city like New York and compete against highly regulated taxis, whose owners have paid as much as a million dollars for the right to operate. Uber and Lyft should collaborate with existing taxi companies rather than trying to put them out of business. There was a reason to regulate taxis (e.g. safety, traffic control, fraud protection), and that should apply whether I hail a cab on the street or use a smartphone app.

Phasers on stun

The recent controversy over police shootings of unarmed citizens has again stirred up the debate over gun control. However, Shashaank Vattikuti points out that there is another option, and that is for the police to carry nonlethal weapons like phasers with a stun option. Although an effective long-range nonlethal weapon does not currently exist (tasers just don’t cut it), a billionaire like Mark Zuckerberg, Peter Thiel, or Elon Musk could start a company to develop one. New York Times columnist Joe Nocera has suggested that Michael Bloomberg buy a gun company. There are so many guns already in existence that, barring an unlikely confiscation scheme, there is probably no way to get rid of them. The only way to reduce gun violence at this point is for a superior technology to make them obsolete. Hobbyists and collectors would still own guns, just as there are sword collectors, but those who own guns for protection would probably slowly switch over. However, the presence of a nonlethal option could lead to more people shooting each other, so strong laws regarding their use would need to accompany their introduction.

Are we in a fusion renaissance?

Fusion is a potentially unlimited source of non-carbon emitting energy. It requires the mashing together of small nuclei such as deuterium and tritium to make another nucleus and a lot of leftover energy. The problem is that nuclei do not want to be mashed together and thus to achieve fusion you need something to confine high energy nuclei for a long enough time. Currently, there are only two methods that have successfully demonstrated fusion: 1) gravitational confinement as in the center of a star, and 2) inertial confinement as in a nuclear bomb. In order to get nuclei at high enough energy to overcome the energy barrier for a fusion reaction, electrons can no longer be bound to nuclei to form atoms. A gas of quasi-neutral hot nuclei and electrons is called a plasma and has often been dubbed the fourth state of matter. Hence, the physics of fusion is mostly the physics of plasmas.
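For reference (the numbers here are from standard nuclear physics tables), the workhorse reaction that the big programs pursue is deuterium-tritium fusion, which releases most of its energy in a fast neutron:

\mathrm{D} + \mathrm{T} \rightarrow \, ^4\mathrm{He} \; (3.5 \; \mathrm{MeV}) + n \; (14.1 \; \mathrm{MeV})

That energetic neutron is also the source of the materials problem mentioned below.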

My PhD work was in plasma physics and although my thesis ultimately dealt with chaos in nonlinear partial differential equations, my early projects were tangentially related to fusion. At that time there were two approaches to attaining fusion: one was to achieve controlled inertial confinement by using massive lasers to implode a tiny pellet of fuel, and the second was to use magnetic confinement in a tokamak reactor. Government-sponsored research has focused almost exclusively on these two approaches for the past forty years. There is a huge laser fusion lab at Livermore and an even bigger global project for magnetic confinement fusion in Cadarache, France, called ITER. As of today, neither has proven that it will ever be a viable source of energy, although there is evidence of break-even, where the reactors produce more energy than is put in.

However, these approaches may not ultimately be viable, and there really has not been much research funding to pursue alternative strategies. This recent New York Times article reports on a set of privately funded efforts to achieve fusion backed by some big names in technology, including Paul Allen, Jeff Bezos and Peter Thiel. Although there is well-deserved skepticism about the prospects of these companies (I’m sure my thesis advisor Abe Bers would have had some insightful things to say about them), the time may be ripe for new approaches. In an impressive talk I heard many years ago, roboticist Rodney Brooks remarked that Moore’s Law has allowed robotics to finally become widely available because you can use software to compensate for hardware. Instead of requiring cost-prohibitive high-precision motors, you can use cheap ones and control them with software. The hybrid car is only possible because of the software that decides when to use the electric motor and when to use the gas engine. The same idea may also apply to fusion. Fusion is so difficult because plasmas are inherently unstable. Most of the past effort has been geared towards designing physical systems to contain them. However, I can now imagine using software instead.
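To make that idea concrete, here is a toy sketch of active feedback stabilization (entirely a cartoon of my own; the growth rate and gain are made-up numbers, not real plasma parameters). An unstable mode growing as dA/dt = \gamma A is sampled by a controller that pushes back with proportional feedback u = -kA, and the closed loop decays whenever the gain exceeds the growth rate:

# Toy model: an unstable mode dA/dt = gamma*A is stabilized by sampled
# proportional feedback u = -k*A, giving dA/dt = (gamma - k)*A.
# All numbers are illustrative only.
gamma = 50.0   # instability growth rate (1/s), made up
k = 200.0      # feedback gain (1/s), made up
dt = 1e-4      # controller sampling interval (s)
A = 1.0        # initial mode amplitude (arbitrary units)

for step in range(2000):
    u = -k * A                 # controller output from the measured amplitude
    A += dt * (gamma * A + u)  # Euler update of the mode amplitude

print("amplitude after 0.2 s of feedback:", A)  # decays toward zero since k > gamma

The point is Brooks’s: cheap measurement and computation, rather than an exquisitely engineered passive structure, do the work of containing the instability.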

Finally, government attempts have mostly focused on using a Deuterium-Tritium fusion reaction because it has the highest yield. The problem with this reaction is that it produces a neutron, which then destroys the reactor. However, there are reactions that do not produce neutrons (see here). Abe used to joke that we could mine the moon for Helium 3 to use in a Deuterium-Helium 3 reactor. So, although we may never have viable fusion on earth, it could be a source of energy on Elon Musk’s moon base, although solar would probably be a lot cheaper.
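For comparison (again, numbers from standard tables), the Deuterium-Helium 3 reaction Abe joked about puts its energy into charged particles rather than a neutron, although in practice D-D side reactions would still produce some neutrons:

\mathrm{D} + \, ^3\mathrm{He} \rightarrow \, ^4\mathrm{He} \; (3.6 \; \mathrm{MeV}) + p \; (14.7 \; \mathrm{MeV})

Charged products can in principle be steered with magnetic fields and even converted directly to electricity, which is part of the appeal.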

The Drake equation and the Cambrian explosion

This summer billionaire Yuri Milner announced that he would spend upwards of 100 million dollars to search for extraterrestrial intelligent life (here is the New York Times article). This quest to see if we have company started about fifty years ago when Frank Drake pointed a radio telescope at some stars. To help estimate the number of possible civilizations, N, Drake wrote down his celebrated equation,

N = R_*f_p n_e f_l f_i f_c L

where R_* is the rate of star formation, f_p is the fraction of stars with planets, n_e is the average number of planets per star that could support life, f_l is the fraction of such planets that develop life, f_i is the fraction of those that go on to develop intelligent life, f_c is the fraction of civilizations that emit detectable signals, and L is the length of time civilizations emit signals.

The past few years have demonstrated that planets in the galaxy are likely to be plentiful, and although the technology to locate earth-like planets does not yet exist, my guess is that they will also be plentiful. So does that mean it is just a matter of time before we find ET? I’m going to go on record here and say no. My guess is that life is rare and intelligent life may be so rare that there could only be one civilization at a time in any given galaxy.

While we are now filling in the astronomical factors at the front of Drake’s equation, we have absolutely no idea about the biological and social factors that follow. However, I have good reason to believe that their product is astronomically small, and that reason is statistical independence. Although Drake factored the probability of intelligent life into the probability of life forming times the probability that it goes on to develop extra-planetary communication capability, there are actually a lot of steps in between. One striking example is the probability of the formation of multicellular life. In earth’s history, for the better part of three and a half billion years we had mostly single-celled life and maybe a smattering of multicellular experiments. Then, about half a billion years ago, came the Cambrian Explosion, when the multicellular animal life from which we are descended suddenly arrived on the scene. This implies that forming multicellular life is extremely difficult, and it is easy to envision an earth where it never formed at all.

We can continue. If it weren’t for an asteroid impact, the dinosaurs may never have gone extinct and mammals may not have developed. Even more recently, there seem to have been many species of advanced primates, yet only one invented radios. Agriculture only developed ten thousand years ago, which means that modern humans took about a hundred thousand years to discover it, and only in one place. I think it is equally plausible that humans could have gone extinct like all of our other Australopithecus and Homo cousins. Life in the sea has existed much longer than life on land, and there is no technologically advanced sea creature, although I do think octopuses, dolphins and whales are intelligent.

We have around 100 billion stars in the galaxy, and let’s just say that each has a habitable planet. Well, if the probability of each stage of life is one in a billion, and if we need, say, three stages to attain technology, then the probability of finding ET is one in 10^{16}. I would say that this is an optimistic estimate. Probabilities get small really quickly when you multiply them together. The probability of single-celled life alone will be much higher. It is possible that there could be a hundred planets in our galaxy that have life, but the chance that one of those is within a hundred light years will again be very low. However, I do think it is a worthwhile exercise to look for extraterrestrial life, especially for oxygen or other gases emitted by life in the atmospheres of exoplanets. It could tell us a lot about biology on earth.
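Spelling out that back-of-the-envelope number with the same assumptions (10^{11} habitable planets and three independent stages, each with probability 10^{-9}), the expected number of technological civilizations in the galaxy is

N \approx 10^{11} \times \left(10^{-9}\right)^3 = 10^{11} \times 10^{-27} = 10^{-16}

so even this generous setup predicts essentially no company.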

2015-10-1: I corrected a factor of 10 error in some of the numbers.