CO2 and the return of the dinosaurs

The dinosaurs lived during the Mesozoic Era, which was divided into the Triassic, Jurassic, and Cretaceous Periods. Many of the iconic dinosaurs that we know and love, such as Tyrannosaurus rex and Triceratops, lived at the end of the Cretaceous, while others, such as Stegosaurus and Apatosaurus (formerly known as Brontosaurus), lived 80 or so million years earlier in the Jurassic. I used to picture all the dinosaurs co-existing simultaneously, but the time span separating Stegosaurus from T. rex is larger than that between T. rex and the present! Dinosaurs also weren't the only creatures alive at that time, just the most dominant ones. Technically, the term dinosaur only applies to land-based reptiles with certain features. The flying reptiles, such as Pteranodon, and the marine ones, such as the plesiosaurs, resembled dinosaurs but were not classified as such. Aside from those dinosaur-like animals, there were also invertebrates, fish, sharks, and a class of animals called Synapsids, defined by an opening in the skull behind each eye, from which all mammals are descended.

Synapsids were small, marginal creatures during the Mesozoic but came to dominate the land after the dinosaurs went extinct at the end of the Cretaceous (the KT extinction event). The consensus theory is that a large asteroid or comet strike in the Yucatan set off firestorms, seismic events, and a dust cloud that blocked sunlight for up to a year. This caused plants to die globally, which collapsed the food chain. The only survivors were creatures that could go deep underwater or burrow underground and survive long periods with little or no food. Survivors of the KT extinction include some fish, small sharks, small crocodiles and other cold-blooded reptiles, small bipedal theropod dinosaurs (the group to which T. rex belongs), and small rodent-like Synapsids.

If the KT extinction event was a transient perturbation, then it is reasonable to expect that whatever it was that allowed dinosaurs to become dominant would remain, and the surviving theropods would come to dominate again. But that is not what happened. Theropods did survive as modern birds, but aside from a few exceptions most of them are small and almost all of them fly. Instead, the Synapsids came to dominate, and the largest creature ever to live, namely the blue whale, is a Synapsid. Now this could be purely due to random chance, and if we played out the KT event over and over there would be some distribution over whether Synapsids or dinosaurs become dominant. However, it could also be that global conditions after the Cretaceous changed to favour Synapsids over dinosaurs.

One possible change is the atmospheric level of carbon dioxide. CO2 levels were higher than they are today for much of the past 500 million years, even accounting for the recent rapid increase. The levels were particularly high in the Triassic and Jurassic but began to decline during the Cretaceous (e.g. see here) and continued to decrease until the industrial revolution, when they turned upwards again. Average global temperatures were also higher in the Mesozoic. The only other time that CO2 levels and global temperatures have been as low as they are now was in the Permian, before the Great Dying. During the Permian, the ancestor of the dinosaurs was a small insectivore that had the ability to run on two legs, while the dominant creatures were none other than the Synapsids! So, mammal-like creatures were dominant before and after the dinosaurs, when CO2 levels and temperatures were low.

Perhaps this is just a coincidence, but there is one more interesting fact to this story: the amount of stored carbon (i.e. fossil fuels) has been very high twice over the past 500 million years – in the Permian and now. It had been believed that the rise in CO2 at the end of the Permian was due to increased volcanism, but a paper from 2014 (see here) speculated that a horizontal gene transfer event allowed an archaeal microbe to become efficient at exploiting the buried carbon and that this led to an exponential increase in methane and CO2 production. The active volcanoes provided the necessary nickel to catalyze the reactions. Maybe it was simply a matter of time before some creature would find a way to exploit all the stored energy conveniently buried underground and release the carbon back into the atmosphere. The accompanying rise in temperatures and increased acidification of the oceans may also spell the end of this current reign of Synapsids and start a new era. While the smart (rich?) money seems to be on some sort of trans-human cyborg being the future, I am betting that some insignificant bird out there will be the progenitor of the next dominant age of dinosaurs.

Are humans successful because they are cruel?

According to Wikipedia, the class Mammalia has 29 orders (e.g. Carnivora), 156 families (e.g. Ursidae), 1258 genera (e.g. Ursus), and nearly 6000 species (e.g. the polar bear). Some orders, like Chiroptera (bats) and Rodentia, are very large, with many families, genera, and species. Some are really small, like Orycteropodidae, which has only one species – the aardvark. Humans are in the order Primates, of which there are quite a few families and genera. Almost all of them live in tropical or subtropical areas, and almost all of them have small populations, many of them endangered. The exception, of course, is humans, the only remaining species of the genus Homo. The other genera in the great ape family Hominidae – gorillas, orangutans, chimpanzees, and bonobos – are all in big trouble.

I think most people would attribute the incomparable success of humans to their resilience, intelligence, and ingenuity. However, another important factor could be their bottomless capacity for intentional cruelty. Although there seems to have been a decline in violence throughout history, as documented in Steven Pinker's recent book, there is still no shortage of examples. Take a listen to this recent EconTalk podcast with Mike Munger on how the South rationalized slavery. It could very well be that what made modern humans dominate the earth and wipe out all the other Homo species along the way was not that they were more intelligent but that they were more cruel and rapacious. Neanderthals and Denisovans may have been happy sitting around the campfire after a hunt, while humans needed to raid every nearby tribe and kill them.

Insider trading

I think one of the main things that has fueled the backlash against the global elites is the (correct) perception that they play by different rules. When they make financial mistakes, they get bailed out with taxpayer dollars and face no consequences. Gains are privatized and losses are socialized. Another example is insider trading, where people profit from securities transactions using nonpublic information. While there have been several high-profile cases in recent years (e.g. here is a Baltimore example), my guess is that insider trading is rampant since it is so easy to do and so hard to detect. The conventional wisdom for combating insider trading is stronger enforcement and penalties. However, my take is that this will just lead to a situation where small-time insider traders get squeezed out while the sophisticated ones with more resources continue. This is an example where a regulation creates a monopoly or an economic rent opportunity.

Aside from the collapse of morality that may come with extreme wealth and power (e.g. listen here), I also think that insider traders rationalize their activities because they don't think it hurts anyone, even though there is an obvious victim. For example, if someone gets inside information that a highly touted drug has failed to win approval from the FDA, then they can short the stock or buy put options, which give the right to sell the stock at a set price in the future. When the stock drops after the announcement, they buy shares at the lower price, sell them at the higher price, and reap the profit. The victim is the counterparty to the trade, who could be a rich trader but could also be someone's pension fund or the employees of the company.
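
To make the arithmetic concrete, here is a toy calculation with entirely made-up numbers (a sketch, not a description of any actual case) showing how a put option bought on inside information pays off and who ends up on the other side of it:

```python
# Toy illustration with made-up numbers: profiting from a put option bought
# on nonpublic information that an FDA approval will fail.
shares = 1_000
strike = 50.00             # hypothetical strike: the right to sell at $50
premium = 2.00             # hypothetical cost per share of the put option
price_after_news = 30.00   # hypothetical price after the announcement

# Exercise the put: buy at the post-announcement price, sell at the strike.
payout = (strike - price_after_news) * shares
profit = payout - premium * shares

writer_loss = payout - premium * shares   # premium received offsets part of the payout
print(f"insider's net profit:     ${profit:,.0f}")
print(f"option writer's net loss: ${writer_loss:,.0f}")   # mirrors the insider's gain
```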

Now the losing party or a regulatory agency could suspect a case of insider trading, but proving it would require someone confessing or finding an email or phone recording of the information being passed. They could also set up sting operations to catch serial violators. All of these things are difficult and costly. The alternative may seem ridiculous, but I think the best solution may be to make insider trading legal. If it were legal, then several things would happen. More people would do it, which would compete away the profits from such trades; the information would be more likely to leak to the public, since people would not be afraid of sharing it; and people would be more careful in making trades prior to big decisions, because the other party may have more information than they do. Companies would be responsible for policing people in their firms who leak information. By making insider trading legal, the rent created by regulation would be reduced.

The US election and the future

Political scientists will be dissecting the results of the 2016 US presidential election for the next decade, but certainly one fact that is likely to be germane to any analysis is that real wages have been stagnant or declining for the past 45 years. I predict that this trend will only worsen no matter who is in power. The stark reality is that most jobs are replaceable by machines. This is not because AI has progressed to the point that machines can act human but because most jobs, especially higher paying jobs, do not depend heavily on being human. While I have seen some consternation about the prospect of 1.5 million truck drivers being replaced by self-driving vehicles in the near future, I have seen much less discourse on the fact that this is also likely to be true for accountants, lawyers, middle managers, medical professionals, and other well compensated professionals. What people seem to miss is that the reason these jobs are well paid is that there are relatively few people who are capable of doing them, and that is because they are difficult for humans to master. In other words, they are well paid because they require not acting particularly human. IBM's Watson, which won the game show Jeopardy!, and AlphaGo, which beat the world's best Go player, show that machines can quite easily better humans at specific tasks. The more specialized the task, the easier it will be for a machine to do it. The cold hard truth is that AI does not have to improve for you to be replaced by a machine. It does not matter whether strong AI (an artificial intelligence that truly thinks like a human) is ever possible. It only matters that machine learning algorithms can mimic what you do now. The only thing necessary for this to happen was for computers to be fast enough, and now they are.

What this implies is that the jobs of the future will be limited to those that require being human or where interacting with a human is preferred. This will include 1) jobs that most people can do and thus will not be well paid, like store salespeople, restaurant servers, bartenders, cafe baristas, and low-skill health workers, 2) jobs that require social skills and might be better paid, such as social workers, personal assistants, and mental health professionals, 3) jobs that require special talents, like artisans, artists, and some STEM professionals, and 4) capitalists who own firms that employ mostly robots. I strongly believe that only a small fraction of the population will make it into categories 3) and 4). Most people will be in 1) or not have a job at all. I have argued before that one way out is for society to choose to make low productivity work viable. In any case, the anger we saw this year is only going to grow because existing political institutions are in denial about the future. The 20th century is over. We are not getting it back. The future is either the 17th or 18th century with running water, air conditioning, and health care, or the 13th century with none of these.

Revolution vs incremental change

I think that the dysfunction and animosity we currently see in the US political system and election are partly due to the underlying belief that meaningful change cannot be effected through slow evolution but rather requires an abrupt revolution in which the current system is torn down and rebuilt. There is some merit to this idea. Sometimes the structure of a building is so damaged that it is easier to demolish and rebuild rather than repair and renovate. Mathematically, this can be expressed as a system being stuck in a local minimum (when getting to the global minimum is desired). In order to reach the true global optimum, you need to get worse before you can get better. When fitting nonlinear models to data, dealing with local minima is a major problem, and it is the reason that a stochastic MCMC algorithm, which does occasionally go uphill, can work so much better than gradient descent, which only goes downhill.
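
Here is a toy illustration of that point (a sketch using a made-up one-dimensional double-well function, not any particular model-fitting problem): gradient descent started in the shallow well stays there, while a Metropolis-style random walk that occasionally accepts uphill moves finds the deeper well.

```python
# Sketch: an "energy" with a shallow well near x = -1 and a deeper well
# near x = +1 (an illustrative function, not a real model-fitting problem).
import math
import random

def f(x):
    return (x**2 - 1)**2 - 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) - 0.3

# Gradient descent started in the shallow well converges to the local minimum.
x = -1.0
for _ in range(1000):
    x -= 0.01 * grad(x)
print(f"gradient descent ends near x = {x:.2f}")        # stuck in the shallow well

# A Metropolis-style random walk that sometimes accepts uphill moves escapes.
random.seed(0)
x, best, T = -1.0, -1.0, 0.3
for _ in range(20000):
    x_new = x + random.gauss(0, 0.2)
    if f(x_new) < f(x) or random.random() < math.exp(-(f(x_new) - f(x)) / T):
        x = x_new
    if f(x) < f(best):
        best = x
print(f"best point found by the stochastic search: x = {best:.2f}")  # near +1
```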

However, the recent success of deep learning may dispel this notion when the dimension is high enough. A deep learning model is a multi-layer neural network that can have millions of parameters, the quintessence of a high-dimensional model. Yet it seems to work just fine using the backpropagation algorithm, which is a form of gradient descent. The reason could be that in high enough dimensions, local minima are rare and the majority of critical points (places where the slope is zero) are high-dimensional saddle points, from which there is always a way out in some direction. In order to have a local minimum, the matrix of second derivatives in all directions (i.e. the Hessian matrix) must be positive definite (i.e. have all positive eigenvalues). As the dimension of the matrix gets larger and larger, there are simply more ways for at least one eigenvalue to be negative, and that is all you need to provide an escape hatch. So in a high-dimensional system, gradient descent may work just fine, and there could be an interesting tradeoff between a parsimonious model with few parameters that is difficult to fit and a high-dimensional model that is easy to fit. Now, the usual danger of having too many parameters is that you overfit, and thus you fit the noise at the expense of the signal and have no ability to generalize. However, deep learning models seem to be able to overcome this limitation.
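
A quick numerical illustration of the eigenvalue argument, using random Gaussian symmetric matrices as a heuristic stand-in for Hessians at critical points (this says nothing about any particular loss surface): the fraction with all-positive eigenvalues collapses as the dimension grows.

```python
# Fraction of random symmetric matrices that are positive definite, as a
# rough proxy for how rare true local minima become in high dimensions.
import numpy as np

rng = np.random.default_rng(0)

def frac_positive_definite(dim, trials=2000):
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / 2                       # symmetrize
        if np.all(np.linalg.eigvalsh(h) > 0):   # all eigenvalues positive?
            count += 1
    return count / trials

for dim in [1, 2, 3, 5, 10]:
    print(f"dim {dim:>2}: fraction positive definite ~ {frac_positive_definite(dim):.3f}")
```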

Hence, if the dimension is high enough, evolution can work, while if it is too low, you need a revolution. So the question is: what is the dimensionality of governance and politics? In my opinion, the historical record suggests that revolutions generally do not lead to good outcomes, and even when they do, small incremental changes seem to get you to a similar place. For example, the US and France had bloody revolutions while Canada and England did not, and they have all arrived at similar liberal democratic systems. In fact, one could argue that a constitutional monarchy (like Canada or Denmark), where the head of state is a figurehead, is more stable and benign than a republic like Venezuela or Russia (e.g. see here). This distinction could have pertinence for the current US election if a group of well-meaning people, who believe that the two major parties do not have any meaningful difference, do not vote or vote for a third party. They should keep in mind that incremental change is possible and that small policy differences can and do make a difference in people's lives.

What liberal boomers don’t get

Writer Lionel Shriver recently penned an opinion piece in the New York Times lamenting that the millennial penchant for political correctness is stifling free speech and imposing cultural conformity the way the conservatives did in the 60’s and 70’s. The opinion piece was her response to the uproar over her speech at the 2016 Brisbane Writer’s Festival instigated by a young woman named Yassmin Abdel-Magied, who walked out in the middle and then wrote a commentary about why she did so in the Guardian. You can read Shriver’s piece here, Abdel-Magied’s here, and a blog post about the talk here. The question of cultural appropriation, identity politics, and political correctness is a major theme in the current US presidential election. While there has always been conservative resentment towards political correctness there has been a recent strong liberal backlash.

The liberal resentment has been spurred mainly by two recent incidents at two elite US colleges. The first was when Yale's Intercultural Affairs Council recommended that students not wear Hallowe'en costumes that might offend other students. Erika Christakis, a lecturer and associate master of one of Yale's residential colleges, wrote an email questioning the need to regulate students' clothing choices and suggesting that students should be allowed to be a little offensive. This triggered a massive reaction from the student body strongly criticizing Christakis. The second incident occurred at Bowdoin College, where there was a tequila-themed party at a college residence at which students wore sombreros and acted out Mexican stereotypes. Two members of the student government attended the party, and this led to a movement by students to have the two impeached. Both of these incidents led to fairly uniform condemnation of the students by the mainstream media. For example, see this article in the Atlantic.

The liberal backlash is based on the premise that the millennial generation (those born between 1980 and 2000) has been so coddled (by their baby boomer parents, born between 1945 and 1965, I should add) that they refuse to be exposed to any offensive speech or image. (Personal disclosure: I am technically a boomer, born in 1962, although by the time I came of age the culture wars of the 60's had passed. I'm a year younger than Douglas Coupland, who wrote the book Generation X, which was partially an anthem for neglected tail-end boomers who missed out on all the fun and excitement of the cohort a decade older. The cruel irony is that the term Generation X was later appropriated to mostly mean those born in the 70's, making us, once again, an afterthought.)

My initial reaction to those incidents was to agree with the backlash, but the contrast between Ms. Abdel-Magied's thoughtful, heartfelt comment and Ms. Shriver's exasperated, impatient one made me realize that I had underestimated the millennials and that they do have a point. Many liberal boomers believe that while full racial equality may not yet exist, much of the heavy lifting towards that end was done by the Civil Rights Movement of the 60's, which they supported. What these boomers miss is that the main reason full racial equality has not been reached is the cultural biases and attitudes that many of them may themselves possess. The millennial approach may be a little heavy handed, but they at least recognize the true problem and are trying to do something about it.

The plain truth is that just being black does carry an extra risk of being killed in an encounter with law enforcement. Whites and blacks still live in segregated neighborhoods. Even in the so-called liberal enclave of academia, minorities are underrepresented in high level administrative positions. There are just a handful of East Asian women full professors in Ophthalmology in all US medical schools. Hollywood executives do believe that movies cannot be successful with Asian lead actors and thus they still cast white actors for Asian roles. Asians are disadvantaged in the admissions process at elite American schools. Racial stereotypes do exist and pervade even the most self-professed liberal minds and this is a problem. This is not just a battle over free speech as liberal boomers have cast it. This is about what we need to do to make society more just and fair. Shriver thought it was ridiculous that people would be upset over wearing sombreros but it does indicate that there are those that automatically associate a Mexican drink with a Mexican stereotype. Some of these students will be future leaders and I don’t think it is too much to ask that they be aware of the inherent racial biases they may harbour.

Arsenic and Selenium

You should listen to this podcast from Quirks and Quarks about how University of Calgary scientist Judit Smits is trying to use selenium-rich lentils from Saskatchewan, Canada to treat arsenic poisoning in Bangladesh. Well water in parts of rural Bangladesh has high levels of natural arsenic, and this is a major health problem. Professor Smits, who is actually in the department of veterinary medicine, has done work using arsenic to treat selenium poisoning in animals. It turns out that arsenic and selenium, both of which can be toxic in high doses, effectively neutralize each other. Each seems to increase excretion of the other into the bile. So she hypothesized that selenium might counter arsenic poisoning, but the interaction is nontrivial, so it is not a certainty that it would work. Dr. Smits organized a study to transport ten tons of lentils from Canada to Bangladesh this past summer to test the hypothesis, and you can hear about the trials and tribulations of getting the study done. The results are not yet in, but I think this is a perfect example of how cleverness combined with determination can make a real difference. This study is funded entirely from Canadian sources, but it sounds like something the Gates and Clinton foundations could be interested in.

2016-9-26. Corrected a typo, changed Saskatchewan to Bangladesh

New Papers

Li, Y., Chow, C. C., Courville, A. B., Sumner, A. E. & Periwal, V. Modeling glucose and free fatty acid kinetics in glucose and meal tolerance test. Theoretical Biology and Medical Modelling 1–20 (2016). doi:10.1186/s12976-016-0036-3

Katan, M. B. et al. Impact of Masked Replacement of Sugar-Sweetened with Sugar-Free Beverages on Body Weight Increases with Initial BMI: Secondary Analysis of Data from an 18 Month Double–Blind Trial in Children. PLoS ONE 11, e0159771 (2016).

These two papers took a painfully long time to be published, which was completely perplexing and frustrating given that they both seemed rather straightforward and noncontroversial. The first is a generalization of our previously developed minimal model of fatty acid and glucose dynamics as a function of insulin to the response to an ingested meal, where the rate of appearance of fat and glucose in the blood is modeled by an empirically determined time-dependent function. The second is a reanalysis of the effects of replacing sugar-sweetened beverages with sugar-free ones. We applied our childhood growth model to predict what the children ate to account for their growth. Interestingly, we found that the model predicted that children with higher BMI are less able to compensate for a reduction in calories than children with lower BMI. This could imply that children with higher BMI have a less sensitive caloric sensing system and are thus prone to overeating, but, on the flip side, they can also be "tricked" into eating less.
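
For readers unfamiliar with this style of model, here is a schematic sketch of a minimal-model-type system driven by a meal appearance term. The equations and every parameter value below are illustrative placeholders, not the model or the fits from the paper.

```python
# Schematic sketch of a minimal-model-style system with a meal input.
# The equations and parameter values are illustrative placeholders only.
import numpy as np
from scipy.integrate import odeint

def ra(t, amplitude=5.0, tau=60.0):
    """Hypothetical rate of appearance of glucose from a meal (mg/dl/min)."""
    return amplitude * (t / tau) * np.exp(1 - t / tau)

def minimal_model(state, t, p1, p2, p3, i_func):
    """Bergman-style minimal model driven by a meal appearance term."""
    g, x = state  # glucose concentration, remote insulin action
    dg = -(p1 + x) * g + p1 * 90.0 + ra(t)   # 90 mg/dl basal glucose (illustrative)
    dx = -p2 * x + p3 * (i_func(t) - 10.0)   # 10 uU/ml basal insulin (illustrative)
    return [dg, dx]

insulin = lambda t: 10.0 + 40.0 * np.exp(-((t - 30.0) / 20.0) ** 2)  # toy insulin curve
t = np.linspace(0, 300, 301)
sol = odeint(minimal_model, [90.0, 0.0], t, args=(0.03, 0.02, 1e-5, insulin))
print("peak glucose (mg/dl):", sol[:, 0].max())
```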

Selection of the week

Sorry for the long radio silence. However, I was listening to the radio yesterday when this version of the Double Violin Concerto in D minor, BWV 1043, by JS Bach came on, and I sat in my car in a hot parking lot listening to it. It's from a forty-year-old EMI recording with violinists Itzhak Perlman and Pinchas Zukerman and Daniel Barenboim conducting the English Chamber Orchestra. I've been limiting my posts to videos of live performances, but sometimes classic recordings should be given their due, and this is certainly a classic. Even though I posted a version with Oistrakh and Menuhin before, I just had to share this.

A conservative legal argument for gun control

I am an advocate for gun control because of my belief, as I expounded in my previous post, in the inherent incompetence of all humans. A major impediment to gun control in the US is the 2nd Amendment of the Constitution, which states "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed." The two key phrases of the 2nd Amendment are "well regulated Militia" and "right … to bear arms", and there has always been tension over what exactly they mean. The current Supreme Court, prior to Antonin Scalia's death, held that the 2nd Amendment means that people can bear arms under any circumstance, and this has led to the overturning of many gun control measures in cities like Washington DC and Chicago. However, this has not always been the case. Previous courts have put more weight on the "well regulated" part and allowed for some gun restrictions.

Although the right to bear arms is considered to be the conservative position, I actually think there is an equally compelling conservative argument for gun control. One of the things that conservatives argue is that government should be less centralized and that individual states should be able to set their own laws, as long as they don't violate the Constitution. Hence, gun control advocates should use a "States' Rights" argument that communities should be able to establish their own interpretation of how "well regulated" and the "right to bear arms" should be balanced. Instead of trying to fight for uniform federal gun control laws, they should argue that local laws should be allowed to stand, provided that they do not completely outlaw guns. So if Washington DC wants an assault weapons ban, that should be fine. If Chicago wants to limit magazine sizes in handguns, that should also be okay. People in gun-ravaged cities like Baltimore should not have gun laws that might be popular in states like Idaho forced upon them. Depending on who fills the vacant seat on the Supreme Court, this line of argument could be moot, but I think it is one that gun control advocates should perhaps pursue.

Forming a consistent political view

In view of the current US presidential election, I think it would be a useful exercise to see if I could form a rational political view that is consistent with what I actually know and believe from my training as a scientist. From my knowledge of dynamical systems and physics, I believe in the inherent unpredictability of complex nonlinear systems. Uncertainty is a fundamental property of the universe at all scales. From neuroscience, I know that people are susceptible to errors, do not always make optimal choices, and are inconsistent. People are motivated by whatever triggers dopamine to be released. From genetics, I know that many traits are highly heritable, including height, BMI, IQ, and the Big Five personality traits. There is a lot of human variance. People are motivated by different things, have various aptitudes, and have various levels of honesty and trustworthiness. However, from evolutionary theory, I know that genetic variance is also essential for any species to survive. Variety is not just the spice of life, it is also the meat. From ecology, I know that the world is a linked ecosystem. Everything is connected. From computer science, I know that there are classes of problems that are easy to solve, classes that are hard to solve, and classes that are impossible to solve, and no amount of computing power can change that. From physics and geology, I fully accept that greenhouse gases will affect the energy balance on earth and that the climate is changing. However, given the uncertainty of dynamical systems, while I do believe that current climate models are pretty good, there does exist the possibility that they are missing something. I believe that the physical laws that govern our lives are computable, and this includes consciousness. I believe everything is fallible, and that includes people, markets, and government.

So how would that translate into a political view? Well, it would be a mishmash of what might be considered socialist, liberal, conservative, and libertarian ideas. Since I think randomness and luck are a large part of life, including who your parents are, I do not subscribe to the theory of just deserts. I don't think those with more "talents" deserve all the wealth they can acquire. However, I also realize that we are motivated by dopamine, and part of what triggers dopamine is reaping the rewards of our efforts, so we must leave incentives in place. We should not try to make society completely equal, but redistributive taxation is necessary and justified.

Since I think people are basically incompetent and don't always make good choices, people sometimes need to be protected from themselves. We need some nanny state regulations such as building codes, water and air quality standards, transportation safety, and toy safety. I don't believe that all drugs should be legalized because some drugs can permanently damage brains, especially those of children. Amphetamines and opioids should definitely be illegal. Marijuana is probably okay, but not for children. Pension plans should be defined benefit (rather than defined contribution) schemes. Privatizing social security would be a disaster. However, we should not over-regulate. I would deregulate a lot of land use, especially density requirements. We should eliminate all regulations that enforce monopolies, including some professional requirements that deliberately restrict supply. We should not try to pick winners in any industry.

I believe that people will try to game the system, so we should design welfare and tax systems that minimize the possibility of cheating. The current disability benefits program needs to be fixed. I do not believe in means testing for social programs, as it gives room to cheat. Cheating not only depletes the system but also engenders resentment in those who do not cheat. Part of the anger of the working class is that they see people around them gaming the system. The way out is to replace the entire welfare system with a single universal basic income. People have argued that it makes no sense for Bill Gates and Warren Buffett to get a basic income. In actuality, they would end up paying most of it back in taxes. In biology, this is called a futile cycle, but it has utility, since it is easier to give everyone the same benefit and tax according to one rule than to have exceptions for everything as we have now. We may not be able to afford a basic income now, but we eventually will.
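
A toy calculation with made-up numbers shows the futile cycle at work: under a flat basic income funded by a flat tax, high earners effectively pay the benefit back.

```python
# Illustrative arithmetic (made-up numbers): a flat universal basic income
# funded by a flat tax. High earners end up as net contributors.
UBI = 12_000          # hypothetical annual basic income per person
TAX_RATE = 0.30       # hypothetical flat tax rate

def net_transfer(income):
    """Net amount received from the system (negative = net contributor)."""
    return UBI - TAX_RATE * income

for income in [0, 20_000, 40_000, 100_000, 1_000_000]:
    print(f"income {income:>9,}: net transfer {net_transfer(income):>12,.0f}")
```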

Given our lack of certainty and our incompetence, I would be extremely hesitant about any military interventions on foreign soil. We are as likely to make things worse as we are to make them better. I think free trade is, on net, a good thing because it does lead to higher efficiency and helps people in lower income countries. However, it will absolutely hurt some segment of the population in the higher income country. Since income is correlated with demand for your skills, in a globalized world those with skills below the global median will be losers. If a lot of people will do your job for less, then you will lose your job or get paid less. For the time being, there should be some wage support for low wage people, but eventually this should transition to the basic income.

Since I believe the brain is computable, this means that any job a human can do, a robot will eventually do as well or better. No job is safe. I do not know when the mass displacement of work will take place but I am sure it will come. As I wrote in my AlphaGo piece, not everyone can be a “knowledge” worker, media star, or CEO. People will find things to do but they won’t all be able to earn a living off of it in our current economic model. Hence, in the robot world, everyone would get a basic income and guaranteed health care and then be free to do whatever they want to supplement that income including doing nothing. I romantically picture a simulated 18th century world with people indulging in low productivity work but it could be anything. This will be financed by taxing the people who are still making money.

As for taxes, I think we need to move from a system that emphasizes income taxes, which can be gamed and which disincentivize work, to one that taxes the use of shared resources (i.e. economic rents). This includes land rights, mineral rights, water rights, air rights, solar rights, wind rights, monopoly rights, ecosystem rights, banking rights, genetic rights, etc. These are resources that belong to everyone. We could use a land value tax model. When people want to use a resource, like land to build a house, they would pay the intrinsic value of that resource. They would keep any value they added. This would incentivize efficient use of the resource while not telling anyone how to use it.

We could use an auction system to value these resources and rights. Hence, we need not regulate Wall Street firms per se, but we would tax them according to the land they use and whatever monopoly influence they exploit. We wouldn't need to force them to obey capital requirements; we would tax them for the right to leverage debt. We wouldn't need Glass-Steagall or Too Big to Fail laws for banks. We'll just tax them for the right to do these things. We would also not need a separate carbon tax. We'll tax the right to extract fossil fuels at a level equal to the resource value plus the full future cost to the environment. The climate change debate would then shift to being about the discount rate. Deniers would argue for a large rate and alarmists for a small one. Sports leagues and teams would be taxed for their monopolies. The current practice of preventing cities from owning teams would be taxed.

The patent system needs serious reform. Software patents should be completely eliminated. Instead of giving someone arbitrary monopoly rights for a patent, patent holders should be taxed at some level that increases with time. This would force holders to commercialize, sell or relinquish the patent when they could no longer bear the tax burden and this would eliminate patent trolling.

We must accept that there is no free will per se, so crime and punishment must be reinterpreted. We should only evaluate whether offenders are dangerous to society and the seriousness of the crime. Motive should no longer be important. Only dangerous offenders would be institutionalized or incarcerated. Non-dangerous ones should repay the cost of the crime plus a penalty. We should also mount a Manhattan Project-style effort for nonlethal weapons so that the police can carry them.

Finally, under the belief that nothing is certain, laws and regulations should be regularly reviewed including the US Constitution and the Bill of Rights. In fact, I propose that the 28th Amendment be that all laws and regulations must be reaffirmed or they will expire in some set amount of time.


The hazards of being obese

One of my favourite contrarian positions is that being overweight is not so bad. I don't truly believe this, but I like to use it to point out that although almost everyone holds that being obese is unhealthy, there has actually been very little evidence to support this assertion. However, this recent, rather impressive paper in the Lancet finally shows that being overweight or obese really is bad. The paper is a meta-analysis of hundreds of studies with a combined study size of over 10 million! The take home message is that the hazard ratio for dying is significantly greater than one but not too bad for overweight and mildly obese people (BMI < 30), and it increases sharply after that. It is over two and rising rapidly for BMI greater than 35. The hazard ratio gives the relative probability of mortality (or any outcome) per unit time (i.e. the mortality rate) in a survival analysis, which in this case was a Cox proportional hazards model. The hazard ratio as a function of BMI is well fit by a quadratic function with a minimum around 22 kg/m^2. The chances of dying increase if you are thinner or fatter than this. The study was careful not to include smokers or anyone with a chronic disease, and it also did not start the analysis until 5 years after the measurement, to avoid capturing people who are thin because they are already ill. They also broke the model down into various regions. Surprisingly, the chances of dying when you are obese are worse if you are in Europe or North America compared to Asia. Particularly surprising is the fact that the hazard ratio rises most slowly with increasing BMI in South Asia. South Asians have been found to be more susceptible to insulin resistance and Type II diabetes with increased body fat, but it seems that they die from it at lower rates. However, the error bars were also very large because the sample size was smaller, so this may not hold up with more data. In any case, I can no longer use the lack of evidence on the health consequences of obesity to rib my colleagues, so I'll have to find a new axe to grind.
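
For concreteness, here is a sketch of what a quadratic hazard ratio with a minimum near 22 kg/m^2 looks like. The curvature constant is a made-up value chosen so that the ratio roughly doubles by the mid-30s; it is not the fit reported in the paper.

```python
# Illustrative quadratic hazard ratio with a minimum near BMI 22 kg/m^2.
# The curvature K is a made-up value, not the Lancet meta-analysis fit.
BMI_MIN = 22.0      # BMI at the lowest mortality rate (approximate)
K = 0.006           # hypothetical curvature, per (kg/m^2)^2

def hazard_ratio(bmi):
    """Mortality rate relative to the rate at the minimum-risk BMI."""
    return 1.0 + K * (bmi - BMI_MIN) ** 2

for bmi in [18, 22, 27, 32, 37]:
    print(f"BMI {bmi:>2}: hazard ratio ~ {hazard_ratio(bmi):.2f}")
```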

Low carb diet study paper finally out

Kevin Hall's long-awaited paper on what I dubbed "the land sub" experiment, where subjects were sequestered for two months, is finally in print (see here). This was the study funded by Gary Taubes's organization NuSI. The idea was to do a fully controlled study comparing a low carb diet to a standard high carb diet to test the hypothesis that high carbs lead to weight gain through increased insulin. See here for a summary of the hypothesis. The experiment showed very little effect and refutes the carbohydrate-insulin model of weight gain. Kevin was so frustrated with dealing with NuSI that he opted out of any follow-up study. Taubes did not support the conclusions of the paper and claimed that the diet used (which NuSI approved) wasn't high enough in carbs. This essentially posits that the carb effect is purely nonlinear – it only shows up if you are eating white bread and rice all day. Even if this were true, it would still mean that carbs cannot explain the increase in average body weight over the past three decades, since there is a wide range of carb consumption across the general population. It is not as if only the super carb lovers were getting obese. There were some weird effects that warrant further study. One is that study participants seemed to burn 500 more Calories outside of a metabolic chamber than inside it. This was why the participants lost weight on the lead-in stabilizing diet. These missing Calories far swamped any effect of macronutrient composition.

AlphaGo and the Future of Work

In March of this year, Google DeepMind's computer program AlphaGo defeated world Go champion Lee Sedol. This was hailed as a great triumph of artificial intelligence and signaled to many the beginning of the new age when machines take over. I believe this is true, but the real lesson of AlphaGo's win is not how great machine learning algorithms are but how suboptimal human Go players are. Experts believed that machines would not be able to defeat humans at Go for a long time because the number of possible games is astronomically large, \sim 250^{150} moves, in contrast to chess with a paltry \sim 35^{80} moves. Additionally, unlike chess, it is not clear what constitutes a good position or who is winning during intermediate stages of a game. Thus, any direct enumeration and evaluation of possible next moves, as chess computers like IBM's Deep Blue (which defeated Garry Kasparov) do, seemed to be impossible. It was thought that humans had some sort of inimitable intuition for playing Go that machines were decades away from emulating. It turns out that this was wrong. It took remarkably little training for AlphaGo to defeat a human. All the algorithms used were fairly standard – supervised and reinforcement backpropagation learning in multi-layer neural networks [1]. DeepMind just put them together in a clever way and had the (in retrospect appropriate) audacity to try.
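
As a back-of-the-envelope check of how lopsided that comparison is, here are the game-count estimates quoted above expressed in powers of ten:

```python
# The game-size estimates quoted above, expressed in powers of ten.
import math

go_log10 = 150 * math.log10(250)    # from the ~250^150 estimate for Go
chess_log10 = 80 * math.log10(35)   # from the ~35^80 estimate for chess

print(f"Go:    ~10^{go_log10:.0f} possible games")
print(f"Chess: ~10^{chess_log10:.0f} possible games")
print(f"Go has ~10^{go_log10 - chess_log10:.0f} times more")
```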

The take home message of AlphaGo's success is that humans are very, very far from being optimal at playing Go. Uncharitably, we simply stink at Go. However, this probably also means that we stink at almost everything we do. Machines are going to take over our jobs not because they are sublimely awesome but because we are stupendously inept. It is like the old joke about two hikers encountering a bear, where one starts to put on running shoes. The other hiker says, "Why are you doing that? You can't outrun a bear," to which she replies, "I only need to outrun you!" In fact, the more difficult a job seems to be for humans to perform, the easier it will be for a machine to do better. This was noticed a long time ago in AI research and is called Moravec's Paradox. Tasks that require a lot of high-level abstract thinking, like playing chess or predicting what movie you will like, are easy for computers, while seemingly trivial tasks that a child can do, like folding laundry or getting a cookie out of a jar on an unreachable shelf, are really hard. Thus, high paying professions in medicine, accounting, finance, and law could be replaced by machines sooner than lower paying ones in lawn care and house cleaning.

There are those who are not worried about a future of mass unemployment because they believe people will just shift to other professions. They point out that a century ago a majority of Americans worked in agriculture and that the sector now employs less than 2 percent of the population. The jobs that were lost to technology were replaced by ones that didn't exist before. I think this might be true, but in the future not everyone will be a software engineer or a media star or a CEO of her own company of robot employees. The increase in productivity provided by machines ensures this. When the marginal cost of production (i.e. the cost to make one more item) goes to zero, as it has for software and recorded media now, the whole supply-demand curve is upended. There is infinite supply for any amount of demand, so the only way to make money is to increase demand.

The rate-limiting step for demand is the attention span of humans. In a single day, a person can at most attend to a few hundred independent tasks such as thinking, reading, writing, walking, cooking, eating, driving, exercising, or consuming entertainment. I can stream any movie I want now, and I only watch at most twenty a year, almost all of them on long haul flights. My 3 year old can watch the same Wild Kratts episode (a great children's show about animals) ten times in a row without getting bored. Even though everyone could be a video or music star on YouTube, superstars such as Beyoncé and Adele are viewed much more than anyone else. Even with infinite choice, we tend to do what our peers do. Thus, for a population of ten billion people, I doubt there can be more than a few million who can make a decent living as a media star under our current economic model. The same goes for writers. This will also generalize to manufactured goods. Toasters and coffee makers essentially cost nothing compared to three decades ago, and I will only buy one every few years, if that. Robots will only make things cheaper, and I doubt there will be a billion brands of TVs or toasters. Most likely, a few companies will dominate the market as they do now. Even if we could optimistically assume that a tenth of the population could be engaged in producing goods and services necessary for keeping the world functioning, that still leaves the rest with little to do.

Even much of what scientists do could eventually be replaced by machines. Biology labs could consist of a principal investigator and robot technicians. Although it seems like science is endless, the amount of new science required to sustain the modern world could diminish. We could eventually have an understanding of biology sufficient to treat most diseases and injuries and to develop truly sustainable energy technologies. In this case, machines could be tasked with keeping the modern world up and running with little need of input from us. Science would mostly be devoted to abstract and esoteric concerns.

Thus, I believe the future for humankind is in low productivity occupations – basically a return to pre-industrial endeavors like small plot farming, blacksmithing, carpentry, painting, dancing, and pottery making, with an economic system in place to adequately live off of this labor. Machines can provide us with the necessities of life while we engage in a simulated 18th century world but without the poverty, diseases, and mass famines that made life so harsh back then. We can make candles or bread and sell them to our neighbors for a living wage. We can walk or get in self-driving cars to see live performances of music, drama and dance by local artists. There will be philosophers and poets with their small followings as they have now. However, even when machines can do everything humans can do, there will still be a capacity to sustain as many mathematicians as there are people because mathematics is infinite. As long as P is not NP, theorem proving can never be automated and there will always be unsolved math problems.  That is not to say that machines won’t be able to do mathematics. They will. It’s just that they won’t ever be able to do all of it. Thus, the future of work could also be mathematics.

[1] Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).

The simulation argument made quantitative

Elon Musk, of SpaceX, Tesla, and SolarCity fame, recently mentioned that he thought the odds of us not living in a simulation were a billion to one. His reasoning was based on extrapolating the rate of improvement in video games. He suggests that soon it will be impossible to distinguish simulations from reality and that in ten thousand years there could easily be billions of simulations running. Thus there would be a billion times more simulated universes than real ones.

This simulation argument was first quantitatively formulated by philosopher Nick Bostrom. He even has an entire website devoted to the topic (see here). In his original paper, he proposed a Drake-like equation for the fraction of all “humans” living in a simulation:

f_{sim} = \frac{f_p f_I N_I}{f_p f_I N_I + 1}

where f_p is the fraction of human-level civilizations that attain the capability to simulate a human-populated civilization, f_I is the fraction of these civilizations interested in running civilization simulations, and N_I is the average number of simulations running in these interested civilizations. He then argues that if N_I is large, then either f_{sim}\approx 1 or f_p f_I \approx 0. Musk believes that it is highly likely that N_I is large and f_p f_I is not small, so, ergo, we must be in a simulation. Bostrom says his gut feeling is that f_{sim} is around 20%. Steve Hsu mocks the idea (I think). Here, I will show that we have absolutely no way to estimate our probability of being in a simulation.

The reason is that Bostrom's equation obscures the possibility that two of the underlying quantities can both diverge. This is seen more clearly by rewriting his equation as

f_{sim} = \frac{y}{x+y} = \frac{y/x}{y/x+1}

where x is the number of non-sim civilizations and y is the number of sim civilizations. (Relabeling x and y as people or universes does not change the argument.) Bostrom and Musk's observation is that once a civilization attains simulation capability, the number of sims can grow exponentially (people in sims can run sims, and so forth), and thus y can overwhelm x; ergo, you're in a simulation. However, this is only true in a world where x is not growing, or is growing slowly. If x is also growing exponentially, then we can't say anything at all about the ratio of y to x.

I can give a simple example.  Consider the following dynamics

\frac{dx}{dt} = ax

\frac{dy}{dt} = bx + cy

y is being created by x, but both are growing exponentially. The interesting property of exponentials is that a solution to these equations for a > c is

x = \exp(at)

y = \frac{b}{a-c}\exp(at)

where I have chosen convenient initial conditions that don’t affect the results. Even though y is growing exponentially on top of an exponential process, the growth rates of x and y are the same. The probability of being in a simulation is then

f_{sim} = \frac{b}{a+b-c}

and we have no way of knowing what this is. The analogy is that you have a goose laying eggs and each daughter lays eggs, which also lay eggs. It would seem like there would be more eggs from the collective progeny than the original mother. However, if the rate of egg laying by the original mother goose is increasing exponentially then the number of mother eggs can grow as fast as the number of daughter, granddaughter, great…, eggs. This is just another example of how thinking quantitatively can give interesting (and sometimes counterintuitive) results. Until we have a better idea about the physics underlying our universe, we can say nothing about our odds of being in a simulation.
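
Here is a quick numerical check of that claim with made-up rate constants: integrating the system shows y/x settling at b/(a-c) rather than diverging, which pins f_{sim} at b/(a+b-c).

```python
# Numerical check with illustrative rate constants: non-sim civilizations x
# grow at rate a and spawn sim civilizations y at rate b, while sims grow
# at rate c < a. The ratio y/x settles at b/(a - c) instead of diverging.
import numpy as np
from scipy.integrate import odeint

a, b, c = 1.0, 0.5, 0.2   # made-up growth and creation rates

def rhs(state, t):
    x, y = state
    return [a * x, b * x + c * y]

t = np.linspace(0, 30, 3001)
x, y = odeint(rhs, [1.0, 0.0], t).T

print("y/x at late times:", y[-1] / x[-1])            # -> b/(a-c) = 0.625
print("f_sim = y/(x+y):  ", y[-1] / (x[-1] + y[-1]))  # -> b/(a+b-c) ~ 0.385
```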

Addendum: One of the predictions of this simple model is that there should be lots of pre-sim universes. I have always found it interesting that the age of the universe is only about three times that of the earth. Given that the expansion rate of the universe is actually increasing, the lifetime of the universe is likely to be much longer than the current age. So, why is it that we are alive at such an early stage of our universe? Well, one reason may be that the rate of universe creation is very high and so the probability of being in a young universe is higher than being in an old one.

Addendum 2: I only gave a specific solution to the differential equation. The full solution has the form Y_1\exp(at) + Y_2 \exp(ct). However, as long as a > c, the first term will dominate.

Addendum 3: I realized that I didn’t make it clear that the civilizations don’t need to be in the same universe. Multiverses with different parameters are predicted by string theory.  Thus, even if there is less than one civilization per universe, universes could be created at an exponentially increasing rate.


Selection of the week

The third movement of Felix Mendelssohn’s violin concerto played by Swedish prodigy Daniel Lozakovitj at age 10 with the Tchaikovsky Symphony Orchestra at the Tchaikovsky Concert Hall in 2011.

Here is the version by international superstar and former violin prodigy Sarah Chang with Kurt Masur and the New York Philharmonic in 1995 when she was about 15.


Confusion about consciousness

I have read two essays in the past month on the brain and consciousness and I think both point to examples of why consciousness per se and the “problem of consciousness” are both so confusing and hard to understand. The first article is by philosopher Galen Strawson in The Stone series of the New York Times. Strawson takes issue with the supposed conventional wisdom that consciousness is extremely mysterious and cannot be easily reconciled with materialism. He argues that the problem isn’t about consciousness, which is certainly real, but rather matter, for which we have no “true” understanding. We know what consciousness is since that is all we experience but physics can only explain how matter behaves. We have no grasp whatsoever of the essence of matter. Hence, it is not clear that consciousness is at odds with matter since we don’t understand matter.

I think Strawson's argument is mostly sound, but he misses the crucial open question of consciousness. It is true that we don't have an understanding of the true essence of matter and we probably never will, but that is not why consciousness is mysterious. The problem is that we do not know whether the rules that govern matter, be they classical mechanics, quantum mechanics, statistical mechanics, or general relativity, could give rise to a subjective conscious experience. Our understanding of the world is good enough for us to build bridges, cars, and computers, and to launch a spacecraft 4 billion kilometers to Pluto, take photos, and send them back. We can predict the weather with great accuracy for up to a week. We can treat infectious diseases and repair the heart. We can breed super chickens and grow copious amounts of corn. However, we have no idea how these rules can explain consciousness, and more importantly we do not know whether these rules are sufficient to understand consciousness or whether we need a different set of rules, or a different reality, or whatever. One of the biggest lessons of the twentieth century is that knowing the rules does not mean you can predict the outcome of the rules. Even without invoking the computability and decidability results of Turing and Gödel, it is still not clear how to go from the microscopic dynamics of molecules to the Navier-Stokes equation for macroscopic fluid flow, or how to get from Navier-Stokes to the turbulent flow of a river. Likewise, it is hard to understand how the liver works, much less the brain, starting from molecules or even cells. Thus, it is possible that consciousness is an emergent phenomenon of the rules that we already know, like wetness or a hurricane. We simply do not know, and we are not even close to knowing. This is the hard problem of consciousness.

The second article is by psychologist Robert Epstein in the online magazine Aeon. In this article, Epstein rails against the use of computers and information processing as a metaphor for how the brain works. He argues that this type of restricted thinking is why we can’t seem to make any progress understanding the brain or consciousness. Unfortunately, Epstein seems to completely misunderstand what computers are and what information processing means.

Firstly, a computation does not necessarily imply a symbolic processing machine like a von Neumann computer with a central processor, memory, inputs, and outputs. A computation in the Turing sense is simply about finding or constructing a desired function from one countable set to another. Now, the brain certainly performs computations; any time we identify an object in an image or have a conversation, the brain is performing a computation. You can couch it in whatever language you like, but it is a computation. Additionally, the whole point of a universal computer is that it can perform any computation. Computations are not tied to implementations. I can always simulate whatever (computable) system you want on a computer. Neural networks and deep learning are not symbolic computations per se, but they can be implemented on a von Neumann computer. We may not know what the brain is doing, but it certainly involves computation of some sort. Anything that can sense the environment and react is making a computation. Bacteria can compute. Molecules compute. However, that is not to say that everything a brain does can be encapsulated by Turing universal computation. For example, Penrose believes that the brain is not computable, although, as I argued in a previous post, his argument is not very convincing. It is possible that consciousness is beyond the realm of computation and thus would entail very different physics. However, we have yet to find an example of a real physical phenomenon that is not computable.

Secondly, the brain processes information by definition. Information in both the Shannon and Fisher senses is a measure of uncertainty reduction. For example, in order to meet someone for coffee you need at least two pieces of information: where and when. Before you received that information, your uncertainty was huge, since there were so many possible places and times the meeting could take place. After receiving the information, your uncertainty was eliminated. Just knowing it will be on Thursday is already a big decrease in uncertainty and an increase in information. Much of the brain's job, at least for cognition, is uncertainty reduction. When you are searching for your friend in the crowded cafe, you are eliminating possibilities and reducing uncertainty. The big mistake that Epstein makes is conflating an example with the phenomenon. Your brain does not need to function like your smartphone to perform computations or information processing. Computation and information theory are two of the most important mathematical tools we have for analyzing cognition.
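
As a small worked example of information as uncertainty reduction, take the coffee meeting above with made-up numbers: say there are 7 possible days and 12 possible meeting times, all equally likely at the start.

```python
# Worked example of information as uncertainty reduction, using the
# coffee-meeting scenario with made-up numbers: 7 possible days and
# 12 possible meeting times, all initially equally likely.
import math

days, times = 7, 12

before = math.log2(days * times)   # initial uncertainty in bits
after_day = math.log2(times)       # uncertainty left after learning "Thursday"

print(f"initial uncertainty: {before:.2f} bits")
print(f"learning the day provides {before - after_day:.2f} bits")
print(f"learning the time provides the remaining {after_day:.2f} bits")
```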