The wealth threshold

The explanation for growing wealth inequality proposed by Thomas Piketty in his iconic book Capital in the Twenty-First Century is that the rate of return on capital exceeds the growth rate of the economy as a whole. Thus, the wealth of owners of capital (i.e. investors) will increase faster than everyone else's. However, even if the rates were equal, any difference in initial conditions or savings rate would still amplify exponentially, as can be seen in a simple model. Suppose w is the total amount of money you have, I is your annual income, E is your annual expense rate, and r is the annual rate of return on investments or interest rate. The rate of change of your wealth is given by the simple formula

\frac{dw}{dt} = I(t) - E(t)+ r w,

where we have assumed that the interest rate is constant, although the model is easily modified for a time-dependent rate. This is a first-order linear differential equation, which can be solved to yield

w = w_0 e^{r t} + \int_{0}^t (I(s)-E(s)) e^{r(t-s)} ds,

where w_0 is your initial wealth at time 0. If we further assume that income and expenses are constant, then w = w_0 e^{rt} + (I-E)(e^{rt}-1)/r. Over time, any difference in initial wealth will diverge exponentially, and there is a sharp threshold for wealth accumulation at I = E: positive cash flow compounds into wealth, while negative cash flow compounds into debt. Thus the difference between building and not building wealth could amount to a few hundred dollars in positive cash flow per month. This threshold is a nonlinear effect that shows how small changes in income or expenses that would be unnoticeable to a wealthy person could make an immense difference for someone near the bottom. Just saving a thousand dollars per year, less than a hundred per month, would give you almost a hundred and fifty thousand dollars after forty years at an interest rate of about five and a half percent.
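As a quick numerical sketch of the closed-form solution, here is a few lines of Python; the 5.5% rate and the cash flows are illustrative assumptions, not prescriptions:

```python
import math

def wealth(w0, cash_flow, r, t):
    """Closed-form solution w(t) = w0*e^(rt) + (I-E)*(e^(rt)-1)/r
    for constant income I, expenses E, and interest rate r > 0."""
    return w0 * math.exp(r * t) + cash_flow * (math.exp(r * t) - 1) / r

# Saving $1000 per year at an assumed 5.5% rate for 40 years:
print(round(wealth(0, 1000, 0.055, 40)))  # just under $150,000

# A $100/month difference in cash flow diverges over 40 years:
for dc in (-100, 0, 100):  # dollars per month
    print(dc, round(wealth(0, 12 * dc, 0.055, 40)))
```

Note how the sign of the monthly cash flow flips the trajectory between exponentially growing wealth and exponentially growing debt, which is the threshold effect described above.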

Equifax vs Cassini

The tired trope from free market exponents is that private enterprise is agile, efficient, and competent, while government is plodding, incompetent, and wasteful. The argument is that because companies must survive in a competitive environment, they are always striving to improve and gain an edge over their competitors. Yet history and recent events seem to indicate otherwise. The best strategy in capitalism seems to be to gain monopoly power and extract rent. While Equifax was busy covering up its malfeasance instead of trying to fix things for everyone it harmed, Cassini ended a brilliantly successful mission to explore Saturn. The contrast couldn't have been greater if it were staged. The so-called incompetent government has given us moon landings, the internet, and two Voyager spacecraft that have lasted 40 years, with Voyager 1 now beyond the solar system in interstellar space. There is no better run organization than JPL. Each day at NIH, a government facility, I get to interact with effective and competent people who are trying to do good in the world. I think it's time to update the "government is the problem" meme.

Science and the vampire/zombie apocalypse

It seems like every time I turn on the TV, which only occurs when I'm in the exercise room, there is a show that involves either zombies or vampires. From my small sampling, it seems that the more recent incarnations try to invoke scientific explanations for these conditions, involving a viral or parasitic etiology. Popular entertainment reflects societal anxieties; disease and pandemics are to the twenty-first century what nuclear war was to the late twentieth. Unfortunately, the addition of science to the zombie or vampire mythology makes for a much less compelling story.

A necessary requirement of good fiction is that it be self-consistent. The rules that govern the world the characters inhabit need to apply uniformly. Bram Stoker's Dracula was a great story because there were simple rules that governed vampires: they die from exposure to sunlight, stakes to the heart, and silver bullets, and they are repelled by garlic and Christian symbols. Most importantly, their thirst for blood was a lifestyle choice, like consuming fine wine, rather than a nutritional requirement. Vampires lived in a world of magic, and so their world did not need to obey the laws of physics.

Once you try to make vampirism or zombism a disease and scientifically plausible in our world, you run into a host of troubles. Vampires and zombies need to obey the laws of thermodynamics, which means they need energy to function. This implies that the easiest way to kill one of these creatures is to starve it to death. Given how energetically active vampires are, and how little caloric content blood has by volume since it is mostly water, vampires would need to drink a lot of blood to sustain themselves. All you would need to do is quarantine all humans in secure locations for a few days, and all vampires should either starve to death or fall into a dormant state. Vampirism is self-limiting because there would not be enough human hosts to sustain a large population. This is why only small animals can subsist entirely on blood (e.g. vampire bats weigh about 40 grams and can drink half their weight in blood). Once you make vampires biological, it makes no sense why they can only drink blood. What exactly is in blood that they can't get from eating flesh? Even if they don't have a digestive system that can handle solid food, they could always put meat into a Vitamix and make a smoothie. Zombies eat all parts of humans, so they would need to feed less often than vampires and thus be harder to starve. However, zombies are usually non-intelligent and thus easier to avoid and sequester. It seems like any zombie epidemic could be controlled at a very early stage. Additionally, why is it that zombies don't eat each other? Why do they only like to eat humans? Why aren't they hanging around farms and eating livestock and poultry?

Vampires, and sometimes zombies, also have super-strength without having to bulk up. This means that their muscles are much more efficient. How is this possible? Muscles are pretty similar at the cellular level. Chimpanzees are stronger than humans by weight because they have more fast-twitch than slow-twitch muscle fibers. There is thus always a trade-off between strength and endurance. In a physically plausible world, humans should always find an edge in combating zombies or vampires. The only way to make a vampire or zombie story viable is to endow them with nonphysical properties. My guess is that we have hit peak vampire/zombie; the next wave of horror shows will feature a more plausible threat: evil AI.

The end of (video) reality

I highly recommend listening to this Radiolab podcast. It tells of new software that can create completely fabricated audio and video clips. This will take fake news to an entirely different level. It also means that citizen journalists with smartphones, police body cams, security cameras, etc. will all become obsolete. No recording can be trusted. On the other hand, we had no recording technology of any kind for almost all of human history so we will have to go back to simply trusting (or not trusting) what people say.

The robot human equilibrium

There has been some push back in the media against the notion that we will "soon" be replaced by robots, e.g. see here. But absence of evidence is not evidence of absence. Just because there seem to be very few machine-induced job losses today doesn't mean they won't happen tomorrow or in ten years. In fact, when it does happen, it will probably happen suddenly, as many recent technological changes have. The obvious examples are the internet and smartphones, but there are many others. We forget that the transition from vinyl records to CDs was extremely fast; then iPods and YouTube killed CDs. Video rentals went from nothing to ubiquitous in just a few years and died just as fast when Netflix came along, whose DVD-by-mail model was in turn supplanted a few years later by streaming video. It took Amazon a little longer to become dominant, but the retail model that had existed for centuries has been completely upended in a decade. The same could happen with AI and robots. Unless you believe that human thought is not computable, there is in principle nothing a human can do that a machine can't. It could take time to set up the necessary social institutions and infrastructure for an AI takeover, but once they are established the transition could be abrupt.

Even so, that doesn't mean all or even most humans will be replaced. The irony of AI, known as Moravec's Paradox (e.g. here), is that things that are hard for humans to do, like playing chess or reading X-rays, are easy for machines, and vice versa. Although drivers and warehouse workers are destined to be the first to be replaced, the next set of jobs to go will likely be highly paid professionals like stockbrokers, accountants, doctors, and lawyers. But as the ranks of the employed start to shrink, the economy will also shrink and wages will go down (even if the displaced do eventually move on to other jobs, it will take time). At some point, particularly for jobs that are easy for humans but hard for machines, humans could be cheaper than machines. So while we could train a machine to be a house cleaner, it may be more cost-effective to simply hire a person to change sheets and dust shelves. The premium on a university education will drop. The ability to sit still for long periods and acquire arcane specialized knowledge will simply not be that useful anymore. Centers for higher learning will become retreats for the small set of scholarly minded people who simply enjoy it.

As the economy shrinks, land prices in some areas should drop too and thus people could still eke out a living. Some or perhaps many people will opt or be pushed out of the mainstream economy altogether and retire to quasi-pre-industrial lives. I wrote about this in quasi-utopian terms in my AlphaGo post but a dystopian version is equally probable. In the dystopia, the gap between the rich and poor could make today look like an egalitarian paradise. However, unlike the usual dystopian nightmare like the Hunger Games where the rich exploit the poor, the rich will simply ignore the poor. But it is not clear what the elite will do with all that wealth. Will they wall themselves off from the rest of society and then what, engage in endless genetic enhancements or immerse themselves in a virtual reality world? I think I’d rather raise pigs and make candles out of lard.


Audio of SIAM talk

Here is an audio recording synchronized to slides of my talk a week and a half ago in Pittsburgh. I noticed some places where I said the wrong thing, such as conflating neuron with synapse. I also did not explain the learning part very well. I should point out that we are not applying a control to the network. We train a set of weights so that, given some initial condition, the neuron firing rates follow a specified target pattern. I also made a joke that implied that the Recursive Least Squares algorithm dates to 1972. That is not correct; it goes back much further than that. I also take a potshot at physicists. It was meant as a joke, of course, and describes many of my own papers.

Talk at SIAM Annual Meeting 2017

I gave an invited plenary talk at the 2017 Society for Industrial and Applied Mathematics (SIAM) Annual Meeting in Pittsburgh yesterday. My slides are here. I talked about some very new work on chaos and learning in spiking neural networks. My fellow Chris Kim and I were producing graphs up to a half hour before my talk! I'm quite excited about this work and I hope to get it published soon.

During my talk, I made an offhand threat that my current Mac would be the last one I buy. I made the joke because it was the first time I could not connect to a projector with my laptop since I started using Macs almost 20 years ago. I switched to Mac from Linux back then because it was a Unix environment where I didn't need to be a systems administrator to print and project. However, Linux has made major headway in the past two decades while Mac is backsliding. So I'm seriously thinking of following through. I've been slowly getting disenchanted with Apple products over the past three years, but I am especially disappointed with my new MacBook Pro. I have the one with the silly touch bar. The first thing that peeves me is that the key that activates Siri is right next to the delete key, so I accidentally summon and then have to dismiss Siri every five minutes. What mostly ties me to the Mac right now is the Keynote presentation software, which I like(d) because it is easy to embed formulas and PDF files into. It is much harder to do the same in PowerPoint, and I haven't found an open source alternative that is as easy to use. However, Keynote keeps hanging on my new machine. I also get this situation where my embedded equations randomly disappear and then reappear. Luckily, I did a quick run-through just before my talk and noticed that the vanished equations had reappeared, so I could delete them. Thus, Keynote's appeal has definitely diminished. Now, if someone would like to start an open source Keynote project with me… Finally, the new Mac does not seem any faster than my old one (it still takes forever to boot up), and Bard Ermentrout told me that his dynamical systems software tool XPP runs five times slower on it. So, any suggestions for a new machine?

From creativity to anxiety

When I was a child, the toy Lego was the ultimate creative experience. There were basically four or so different kinds of blocks that you could connect together into whatever you could think of. Now, most Lego sets consist of a fixed number of pieces that are to be assembled into a specific object, like a fire truck. Instead of just making whatever you can think of, you now must precisely follow an instruction book, with the major anxiety that a crucial piece will be missing. Creativity has been replaced by the ability to follow detailed instructions. Maybe this makes kids look for creative outlets elsewhere, like pretend play. Maybe they become more compliant employees. Or maybe the world is just becoming a less imaginative and duller place.

Trade and income inequality

The conventional wisdom in economics is that trade is mutually beneficial to all parties and the freer the trade the better. However, as David Autor and collaborators have empirically shown, the benefits of trade can be unevenly distributed. A simple way to think about this is to consider a simple model of a nation's income (I) as a function of socio-economic status (S): I = \alpha +\beta S. Here, S can be distributed in any way but has zero mean. The mean income of the nation is \alpha, while \beta is a measure of inequality (i.e. proportional to the standard deviation). Generally, it was presumed that trade increases \alpha. However, as Autor finds, trade can also increase \beta, and then it becomes a quantitative game as to whether you personally will do better or worse with trade. Your change in income will be \Delta I = \Delta \alpha +\Delta\beta S. Thus, if your S is above the mean, trade is always beneficial, and increasing \beta helps you even more. However, where the mean sits with respect to the median depends strongly on the tails of the distribution of S: if people with high S are very far from the median, then the mean can be well above the median. If your S is below the mean, then gains from \Delta \alpha are offset by the decrease \Delta \beta S, and if your S is more negative than -\Delta \alpha/\Delta\beta then you will do worse in absolute terms. This could explain what has been happening in the US. The nation benefits from trade by having cheaper goods, but some sectors like manufacturing and textiles are greatly hurt, and the cheaper goods cannot make up for the decrease in income. Those above the mean are benefiting from a mean shift in income due to trade as well as any increase in inequality. Those below the mean are getting smaller gains and in some cases doing worse as a result of trade. Thus, it may not be surprising that there are divergent views on the benefits of trade.
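The point that a mean gain can coexist with a majority of losers can be sketched with a simulation. The distribution and the values of \Delta\alpha and \Delta\beta below are illustrative assumptions, chosen only to give S a heavy right tail so the mean sits above the median:

```python
import random

random.seed(0)
N = 100_000

# Hypothetical socio-economic status: lognormal (heavy right tail),
# shifted to zero mean as the model requires.
S = [random.lognormvariate(0.0, 1.0) for _ in range(N)]
mean_S = sum(S) / N
S = [s - mean_S for s in S]

d_alpha, d_beta = 0.5, 1.0  # assumed changes in mean income and inequality

# Change in income for each individual: dI = d_alpha + d_beta * S
dI = [d_alpha + d_beta * s for s in S]

losers = sum(1 for x in dI if x < 0) / N
print("mean income change:", sum(dI) / N)          # positive: the nation gains
print("fraction who lose from trade:", losers)     # yet a majority loses
print("loss threshold: S <", -d_alpha / d_beta)
```

With a sufficiently heavy right tail, the median S is negative, so more than half the population falls below -\Delta\alpha/\Delta\beta and loses in absolute terms even though average income rises.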

The demise of Barnes and Noble

Near the end of the twentieth century, there was a battle between small bookstores and the big chains like Barnes and Noble and Borders, typified in the film You’ve Got Mail.  The chains won because they had lower prices, larger stocks, and served as mini-community centers where people liked to hang out. It was sad to see the independent bookstores die but the replacement was actually a nice addition to the neighborhood. The Barnes and Noble business model was to create attractive places to spend time, with play areas for children, a cafe with ample seating, and racks and racks of magazines. The idea was that the more time you spent there the more money you would spend and it worked for at least ten years. Yet, at the height of their dominance, the seeds of their destruction could be plainly seen. Amazon was growing even faster and a new shopping model was invented. People would spend time and browse in B and N and then go home to order the books on Amazon. The advent of the smartphone only quickened the demise because people could order directly from the store. The large and welcoming B and N store was a free sample service for Amazon. Borders is already gone and Barnes and Noble is on its last legs. The one I frequent will be closing this summer.

The loss of B and N will be a blow to many communities. It's a particular favorite locale for retirees to congregate. I think this is a perfect example of a market failure. There is a clear demand for the product but no viable way to monetize it. However, there already is a model for providing the same service as B and N that has worked for a century: the library. Libraries are still extremely popular and provide essential services, particularly to low-income people. The Enoch Pratt Free Library in Baltimore has a line every morning before it opens of people scrambling to use the computers and access the internet. While libraries have been rapidly modernizing, with a relaxation of behavior rules and the addition of cafes, they still have short hours and do not provide the comforting atmosphere of B and N.

I see multiple paths forward. The first is that B and N goes under and maybe someone invents a new private model to replace it. Amazon may create bookstores in its place that act more like showrooms for its products than profit-making entities. The second is that a philanthropist buys it and endows it as a nonprofit entity for the community, much as Carnegie and other robber barons of the nineteenth century did with libraries. The third is that communities start to take over the spaces and create a new type of library that is subsidized by taxpayers and has the same hours and ambience as B and N.

Productivity, marginal cost, and monopoly


In any introductory economics class, one is introduced to the concept of supply and demand. Supply and demand curves relate the price of a product to the quantity that suppliers would produce and that buyers would purchase at that price, respectively. Supply curves have positive slope, meaning that the higher the price, the more suppliers will produce, and vice versa for demand curves. If a market is perfectly competitive, then the supply curve is determined by the marginal cost of production, which is the incremental cost of making one additional unit. Firms will keep producing more goods until the price falls to the marginal cost.
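As an illustrative sketch with made-up linear curves (the coefficients are my own, not from any real market), the competitive equilibrium where price equals marginal cost can be computed directly:

```python
# Demand: p = a - b*q.  Competitive supply = marginal cost: p = c + d*q.
a, b = 100.0, 2.0  # demand intercept and slope (illustrative)
c, d = 10.0, 1.0   # marginal cost intercept and slope (illustrative)

# Firms produce until price falls to marginal cost: a - b*q = c + d*q
q_star = (a - c) / (b + d)
p_star = a - b * q_star
print(q_star, p_star)  # 30.0 40.0

# As marginal cost falls toward zero (c -> 0, d -> 0), the competitive
# price p_star falls toward zero as well, which is the situation the
# next paragraph describes for software and recorded music.
```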

Increases in productivity lead to decreases in marginal cost, and since the advent of the industrial revolution, technology has been steadily increasing productivity. In some cases, like software or recorded music, the marginal cost is already essentially zero. The cost for Microsoft to make one more copy of Office is minuscule. However, if the marginal cost is zero, then according to classical microeconomic theory firms would produce goods and give them away for free. Public intellectual Jeremy Rifkin has been writing about a zero marginal cost society for several years now (e.g. see here and here), and has proposed that ubiquitous zero marginal cost will lead to a communitarian revolution in which capitalism is overturned and people collaborate and share goods along the lines of the open source software model, which has produced the likes of Wikipedia, Linux, Python, and Julia.

I’m not so sanguine. There are two rational strategies for firms to pursue to increase profit. The first is to lower costs and the second is to create monopolies. In completely unregulated markets, like drug trafficking, it seems like suppliers spend much of their time and efforts pursuing monopolies by literally killing their competition. In the absence of the violence option, firms can gain monopolies by buying or merging with competitors and through regulatory capture to create barriers to entry. There are also industries where size and success create virtual monopolies. This is what happens for tech companies where a single behemoth like Microsoft, Google, Facebook, or Amazon, completely dominates a domain. Being large has a huge advantage in finance and banking. Entertainment seems to breed random monopoly status where a single artist will garner most of the attention even though objectively there may not be much difference between the top and the 100th best selling artist. As costs continue to decrease, there will be even more incentive to create monopolies. Instead of a sharing collaborative egalitarian world, a more likely scenario is a world with a small number of entrenched monopolists controlling most of the wealth.


Talk at Maryland

I gave a talk at the Center for Scientific Computing and Mathematical Modeling at the University of Maryland today.  My slides are here.  I apologize for the excessive number of pages but I had to render each build in my slides, otherwise many would be unreadable.  A summary of the work and links to other talks and papers can be found here.

Technology and inference

In my previous post, I gave an example of how fake news could lead to a scenario of no update of posterior probabilities. However, this situation could arise just from knowledge of the technology. When I was a child, fantasy and science fiction movies always had a campy feel because the special effects looked unrealistic. When Godzilla came out of Tokyo Harbour, it looked like a little model in a bathtub. The Creature from the Black Lagoon looked like a man in a rubber suit. I think the first science fiction movie that looked astonishingly real was Stanley Kubrick's 1968 masterpiece 2001: A Space Odyssey, which adhered to physics like no film before it and only a handful since. The simulation of weightlessness in space was marvelous, and to me the ultimate attention to detail was the scene in the rotating space station where a mild curvature in the floor could be perceived. The next groundbreaking moment was the 1993 film Jurassic Park, which truly brought dinosaurs to life. The first scene of a giant sauropod eating from a treetop was astonishing. The distinction between fantasy and reality was forever gone.

The effect of this essentially perfect rendering of anything into a realistic image is that we now have a plausible reason to reject any evidence. Photographic evidence can be completely discounted because the technology exists to create completely fabricated versions. This is equally true of audio recordings and anything you read on the internet. In Bayesian terms, we now have an internal model, or likelihood function, in which any data could be false. The more cynical you are, the closer this probability is to one. Once the likelihood becomes insensitive to data, we are in the same situation as before. Technology alone, in the absence of fake news, could lead to a world where no one ever changes their mind. The irony could be that this forces people to evaluate truth the way they did before such technology existed: you believe people (or machines) that you have come to trust through building relationships over long periods of time.

Fake news and beliefs

Much has been written of the role of fake news in the US presidential election. While we will never know how much it actually contributed to the outcome, as I will show below, it could certainly affect people’s beliefs. Psychology experiments have found that humans often follow Bayesian inference – the probability we assign to an event or action is updated according to Bayes rule. For example, suppose P(T) is the probability we assign to whether climate change is real; P(F) = 1-P(T) is our probability that climate change is false. In the Bayesian interpretation of probability, this would represent our level of belief in climate change. Given new data D (e.g. news), we will update our beliefs according to

P(T|D) = \frac{P(D|T) P(T)}{P(D)}

What this means is that our posterior probability or belief that climate change is true given the new data, P(T|D), is equal to the probability that the new data came from our internal model of a world with climate change (i.e. our likelihood), P(D|T), multiplied by our prior probability that climate change is real, P(T), divided by the probability of obtaining such data in all possible worlds, P(D). According to the rules of probability, the latter is given by P(D) = P(D|T)P(T) + P(D|F)P(F), which is the sum of the probability the data came from a world with climate change and that from one without.

This update rule can reveal what will happen in the presence of new data including fake news. The first thing to notice is that if P(T) is zero, then there is no update. In this binary case, this means that if we believe that climate change is absolutely false or true then no data will change our mind. In the case of multiple outcomes, any outcome with zero prior (has no support) will never change. So if we have very specific priors, fake news is not having an impact because no news is having an impact. If we have nonzero priors for both true and false then if the data is more likely from our true model then our posterior for true will increase and vice versa. Our posteriors will tend towards the direction of the data and thus fake news could have a real impact.

For example, suppose we have an internal model where we expect the mean annual temperature to be 10 degrees Celsius with a standard deviation of 3 degrees if there is no climate change and a mean of 13 degrees with climate change. Thus if the reported data is mostly centered around 13 degrees then our belief of climate change will increase and if it is mostly centered around 10 degrees then it will decrease. However, if we get data that is spread uniformly over a wide range then both models could be equally likely and we would get no update. Mathematically, this is expressed as – if P(D|T)=P(D|F) then P(D) = P(D|T)(P(T)+P(F))= P(D|T). From the Bayesian update rule, the posterior will be identical to the prior. In a world of lots of misleading data, there is no update. Thus, obfuscation and sowing confusion is a very good strategy for preventing updates of priors. You don’t need to refute data, just provide fake examples and bury the data in a sea of noise.
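The temperature example can be sketched in a few lines of code. The function below is my own illustrative implementation of the update rule, using the two Gaussian models from the text (means of 13 and 10 degrees, standard deviation 3):

```python
import math

def gaussian(x, mu, sigma=3.0):
    """Gaussian likelihood of observing x under a model with mean mu."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def update(prior_T, temp):
    """One Bayesian update of P(T) after observing a mean temperature.
    T: climate change model (mean 13 C); F: no-change model (mean 10 C)."""
    like_T = gaussian(temp, 13.0)
    like_F = gaussian(temp, 10.0)
    evidence = like_T * prior_T + like_F * (1 - prior_T)  # P(D)
    return like_T * prior_T / evidence                    # P(T|D)

p = 0.5
print(update(p, 13.0))   # data near 13 pushes belief up
print(update(p, 10.0))   # data near 10 pushes belief down
print(update(0.0, 13.0)) # zero prior: no data can move it
print(update(p, 11.5))   # equidistant data: equal likelihoods, no update
```

The last case is the obfuscation strategy in miniature: data engineered to be equally likely under both models leaves the prior untouched.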


Fun with zero gravity

Here is the video of the band OK Go filmed on a plane flying parabolic arcs. OK Go is famous for having the most creative videos, which combine Rube Goldberg contraptions with extreme synchronized choreography. The video for Upside Down and Inside Out is a single shot. Each zero gravity arc is about 30 seconds long. The intervening hypergravity arcs are compressed in the video, although this is very hard to detect on first viewing.