Science and the vampire/zombie apocalypse

It seems like every time I turn on the TV, which happens only when I’m in the exercise room, there is a show that involves either zombies or vampires. From my small sampling, the more recent incarnations try to invoke scientific explanations for these conditions, usually a viral or parasitic etiology. Popular entertainment reflects societal anxieties; disease and pandemics are to the twenty-first century what nuclear war was to the late twentieth. Unfortunately, adding science to the zombie or vampire mythology makes for a much less compelling story.

A basic requirement of good fiction is that it be self-consistent. The rules that govern the world the characters inhabit need to apply uniformly. Classic vampire stories, from Bram Stoker’s Dracula onward, worked because simple rules governed vampires – they die from exposure to sunlight, stakes to the heart, and silver bullets, and they are repelled by garlic and Christian symbols. Most importantly, their thirst for blood was a lifestyle choice, like consuming fine wine, rather than a nutritional requirement. Vampires lived in a world of magic, so their world did not need to obey the laws of physics.

Once you try to make vampirism or zombism a disease that is scientifically plausible in our world, you run into a host of troubles. Vampires and zombies need to obey the laws of thermodynamics, which means they need energy to function. This implies that the easiest way to kill one of these creatures is to starve it to death. Given how energetically active vampires are and how little caloric content blood has by volume, since it is mostly water, vampires would need to drink a lot of blood to sustain themselves. All you would need to do is quarantine all humans in secure locations for a few days and the vampires should either starve to death or fall into a dormant state. Vampirism is self-limiting because there would not be enough human hosts to sustain a large population. This is why only small animals can subsist entirely on blood (e.g. vampire bats weigh about 40 grams and can drink half their weight in blood). Once you make vampires biological, it makes no sense that they can only drink blood. What exactly is in blood that they can’t get from eating flesh? Even if they don’t have a digestive system that can handle solid food, they could always put meat into a Vitamix and make a smoothie. Zombies eat all parts of humans, so they would need to feed less often than vampires and would thus be harder to starve. However, zombies are usually non-intelligent and thus easier to avoid and sequester; any zombie epidemic could probably be contained at a very early stage. And why is it that zombies don’t eat each other? Why do they only like to eat humans? Why aren’t they hanging around farms eating livestock and poultry?
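To put a rough number on it (these are my own back-of-the-envelope figures, not anything from the shows): whole blood supplies something like $0.9$ kcal per mL, mostly from its protein content, so a vampire burning a human-like $2000$ kcal per day would need about

$\frac{2000\ \mathrm{kcal/day}}{0.9\ \mathrm{kcal/mL}} \approx 2.2\ \mathrm{L}$

of blood per day – nearly half the blood volume of an adult human, every single day, and that is before accounting for any superhuman exertion.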

Vampires and sometimes zombies also have super-strength without having to bulk up. This means that their muscles would have to be much more efficient. How is this possible? Muscle tissue is pretty similar at the cellular level across species. Chimpanzees are stronger than humans pound for pound because they have a higher proportion of fast-twitch to slow-twitch muscle fibers, but there is always a trade-off between strength and endurance. In a physically plausible world, humans should always find an edge in combating zombies or vampires. The only way to make a vampire or zombie story viable is to endow them with nonphysical properties. My guess is that we have hit peak vampire/zombie; the next wave of horror shows will feature a more plausible threat – evil AI.


The end of (video) reality

I highly recommend listening to this Radiolab podcast. It describes new software that can create completely fabricated audio and video clips. This will take fake news to an entirely different level. It also means that citizen journalists with smartphones, police body cams, security cameras, etc. will all become obsolete. No recording can be trusted. On the other hand, we had no recording technology of any kind for almost all of human history, so we will have to go back to simply trusting (or not trusting) what people say.

The robot human equilibrium

There has been some pushback in the media against the notion that we will “soon” be replaced by robots, e.g. see here. But absence of evidence is not evidence of absence. Just because there seem to be very few machine-induced job losses today doesn’t mean there won’t be tomorrow or in ten years. In fact, when it does happen it will probably happen suddenly, as many recent technological changes have. The obvious examples are the internet and smartphones, but there are many others. We forget that the transition from vinyl records to CDs was extremely fast; then iPods and YouTube killed CDs. Video rentals went from nothing to ubiquitous in just a few years and died just as fast when Netflix came along, which was itself displaced a few years later by streaming video. It took Amazon a little longer to become dominant, but the retail model that had existed for centuries has been completely upended in a decade. The same could happen with AI and robots. Unless you believe that human thought is not computable, there is in principle nothing a human can do that a machine can’t. It could take time to set up the necessary social institutions and infrastructure for an AI takeover, but once they are in place the transition could be abrupt.

Even so, that doesn’t mean all or even most humans will be replaced. The irony of AI, known as Moravec’s Paradox (e.g. here), is that things that are hard for humans, like playing chess or reading X-rays, are easy for machines, and vice versa. Although drivers and warehouse workers are destined to be the first to be replaced, the next set of jobs to go will likely belong to highly paid professionals like stockbrokers, accountants, doctors, and lawyers. But as the ranks of the employed start to shrink, the economy will also shrink and wages will go down (even if the displaced do eventually move on to other jobs, it will take time). At some point, particularly for jobs that are easy for humans but hard for machines, humans could be cheaper than machines. So while we could train a machine to be a house cleaner, it may be more cost effective to simply hire a person to change sheets and dust shelves. The premium on a university education will drop. The ability to sit still for long periods and acquire arcane specialized knowledge will simply not be that useful anymore. Centers of higher learning will become retreats for the small set of scholarly minded people who simply enjoy it.

As the economy shrinks, land prices in some areas should drop too, and thus people could still eke out a living. Some, or perhaps many, people will opt out of the mainstream economy altogether, or be pushed out, and retire to quasi-pre-industrial lives. I wrote about this in quasi-utopian terms in my AlphaGo post, but a dystopian version is equally probable. In the dystopia, the gap between rich and poor could make today look like an egalitarian paradise. However, unlike the usual dystopian nightmare like the Hunger Games, where the rich exploit the poor, here the rich will simply ignore the poor. But it is not clear what the elite will do with all that wealth. Will they wall themselves off from the rest of society, and then what – engage in endless genetic enhancements or immerse themselves in a virtual reality world? I think I’d rather raise pigs and make candles out of lard.

Audio of SIAM talk

Here is an audio recording synchronized to the slides of my talk in Pittsburgh a week and a half ago. I noticed some places where I said the wrong thing, such as conflating a neuron with a synapse. I also did not explain the learning part very well. I should point out that we are not applying a control to the network. We train a set of weights so that, given some initial condition, the neuron firing rates follow a specified target pattern. I also made a joke that implied that the Recursive Least Squares algorithm dates to 1972. That is not correct; it goes back much further than that. Finally, I take a pot shot at physicists. It was meant as a joke, of course, and it describes many of my own papers.
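For anyone curious what that kind of training looks like in practice, here is a minimal sketch of a Recursive Least Squares weight update nudging a simple rate network toward a target pattern. To be clear, this is a generic illustration, not the code or model from the talk; the network size, dynamics, and target function are all invented for the example.

```python
import numpy as np

# Generic Recursive Least Squares (RLS) training of a rate network so that
# its activity tracks a target pattern. Illustrative only -- all parameters
# (N, dt, alpha, the target) are invented for this sketch.

rng = np.random.default_rng(0)
N = 200                       # number of rate units
dt = 0.1                      # integration step
steps = 2000                  # number of training steps
alpha = 1.0                   # RLS regularization parameter

J = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))  # recurrent weights to train
P = np.eye(N) / alpha                          # running inverse correlation
x = rng.normal(0.0, 0.5, N)                    # network state

def target(t):
    # Arbitrary target firing-rate pattern, one value per unit.
    return np.sin(2 * np.pi * t / 500 + np.arange(N))

for t in range(steps):
    r = np.tanh(x)                 # firing rates
    x += dt * (-x + J @ r)         # leaky rate dynamics
    err = np.tanh(x) - target(t)   # deviation from the target pattern
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)        # RLS gain vector
    P -= np.outer(k, Pr)           # update inverse correlation estimate
    J -= np.outer(err, k)          # adjust weights to shrink the error
```

The running estimate P of the inverse correlation matrix of the rates is what distinguishes RLS from plain gradient descent: it lets each update take a large, well-conditioned step, which is why this style of training can converge so quickly.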

Talk at SIAM Annual Meeting 2017

I gave an invited plenary talk yesterday at the 2017 Society for Industrial and Applied Mathematics (SIAM) Annual Meeting in Pittsburgh. My slides are here. I talked about some very new work on chaos and learning in spiking neural networks. My fellow Chris Kim and I were producing graphs up to half an hour before my talk! I’m quite excited about this work and I hope to get it published soon.

During my talk, I made an offhand threat that my current Mac would be the last one I buy. I made the joke because it was the first time I could not connect my laptop to a projector since I started using Macs almost 20 years ago. I switched to Mac from Linux back then because it was a Unix environment where I didn’t need to be a systems administrator to print and project. However, Linux has made major headway in the past two decades while the Mac is backsliding, so I’m seriously thinking of following through. I’ve been slowly getting disenchanted with Apple products over the past three years, but I am especially disappointed with my new MacBook Pro, the one with the silly touch screen bar. The first thing that peeves me is that the key that activates Siri sits right next to the delete key, so I accidentally summon and then have to dismiss Siri every five minutes. What mostly ties me to the Mac right now is the Keynote presentation software, which I like(d) because it is easy to embed formulas and PDF files into slides. It is much harder to do the same in PowerPoint, and I haven’t found an open source alternative that is as easy to use. However, Keynote keeps hanging on my new machine. I also get a situation where my embedded equations randomly disappear and then reappear. Luckily, I did a quick run-through just before my talk, noticed that the vanished equations had reappeared, and could delete them. Thus, the appeal of Keynote has definitely diminished. Now, if someone would like to start an open source Keynote project with me… Finally, the new Mac does not seem any faster than my old one (it still takes forever to boot), and Bard Ermentrout told me that his dynamical systems software XPP runs five times slower on it. So, any suggestions for a new machine?

From creativity to anxiety

When I was a child, the toy Lego was the ultimate creative experience. There were basically four or so different kinds of blocks that you could connect into whatever you could think of. Now, most Lego sets consist of a fixed number of pieces that are to be assembled into one specific object, like a fire truck. Instead of just making whatever you can imagine, you must now precisely follow an instruction book, with the major anxiety being that a crucial piece will be missing. Creativity has been replaced by the ability to follow detailed instructions. Maybe this makes kids look for creative outlets elsewhere, like pretend play. Maybe they become more compliant employees. Or maybe the world is just becoming a less imaginative and duller place.