The tired trope from free market proponents is that private enterprise is agile, efficient, and competent, while government is plodding, incompetent, and wasteful. The argument is that because companies must survive in a competitive environment they are always striving to improve and gain an edge on their competitors. Yet history and recent events seem to indicate otherwise. The best strategy in capitalism seems to be to gain monopoly power and extract rent. While Equifax was busy covering up its malfeasance instead of trying to fix things for everyone it harmed, Cassini ended a brilliantly successful mission to explore Saturn. The contrast couldn't have been greater if it had been staged. The so-called incompetent government has given us moon landings, the internet, and two Voyager spacecraft that have lasted 40 years and have now exited the Solar System into interstellar space. There is no better-run organization than JPL. Each day at NIH, a government facility, I get to interact with effective and competent people who are trying to do good in the world. I think it's time to update the "government is the problem" meme.
Political scientists will be dissecting the results of the 2016 US presidential election for the next decade, but certainly one fact likely to be germane to any analysis is that real wages have been stagnant or declining for the past 45 years. I predict that this trend will only worsen no matter who is in power. The stark reality is that most jobs are replaceable by machines. This is not because AI has progressed to the point that machines can act human but because most jobs, especially higher paying jobs, do not depend heavily on being human. While I have seen some consternation about the prospect of 1.5 million truck drivers being replaced by self-driving vehicles in the near future, I have seen much less discourse on the fact that this is also likely to be true for accountants, lawyers, middle managers, medical professionals, and other well compensated professionals. What people seem to miss is that the reason these jobs are well paid is that there are relatively few people who are capable of doing them, and that is because they are difficult for humans to master. In other words, they are well paid because they require not acting particularly human. IBM's Watson, which won the game show Jeopardy!, and AlphaGo, which beat the world's best Go player, show that machines can quite easily better humans at specific tasks. The more specialized the task, the easier it will be for a machine to do it. The cold hard truth is that AI does not have to improve for you to be replaced by a machine. It does not matter whether strong AI (an artificial intelligence that truly thinks like a human) is ever possible. It only matters that machine learning algorithms can mimic what you do now. The only thing necessary for this to happen was for computers to become fast enough, and now they are.
What this implies is that the jobs of the future will be limited to those that require being human or where interacting with a human is preferred. This will include 1) jobs that most people can do and thus will not be well paid, like store salespeople, restaurant servers, bartenders, cafe baristas, and low-skill health workers, 2) jobs that require social skills and might be better paid, such as social workers, personal assistants, and mental health professionals, 3) jobs that require special talents, like artisans, artists, and some STEM professionals, and 4) capitalists who own firms that employ mostly robots. I strongly believe that only a small fraction of the population will make it to categories 3) and 4). Most people will be in 1) or not have a job at all. I have argued before that one way out is for society to choose to make low productivity work viable. In any case, the anger we saw this year is only going to grow because existing political institutions are in denial about the future. The 20th century is over. We are not getting it back. The future is either the 17th or 18th century with running water, air conditioning, and health care, or the 13th century with none of these.
I think that the dysfunction and animosity we currently see in the US political system and election is partly due to the underlying belief that meaningful change cannot be effected through slow evolution but rather requires an abrupt revolution in which the current system is torn down and rebuilt. There is some merit to this idea. Sometimes the structure of a building is so damaged that it is easier to demolish and rebuild than to repair and renovate. Mathematically, this can be expressed as a system being stuck in a local minimum (when getting to the global minimum is desired). In order to get to the true global optimum, you need to get worse before you can get better. When fitting nonlinear models to data, dealing with local minima is a major problem, and it is the reason that a stochastic MCMC algorithm, which occasionally goes uphill, works so much better than gradient descent, which only goes downhill.
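The difference between a method that only goes downhill and one that occasionally goes uphill can be sketched on a toy double-well function. This is purely illustrative; the function and all parameter values below are made up for the example:

```python
import math
import random

def f(x):
    # Double well: deep global minimum near x = -1.47,
    # shallower local minimum near x = 1.35
    return x**4 - 4 * x**2 + x

def grad(x):
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x, lr=0.01, steps=5000):
    # Only moves downhill, so it settles into whichever well it starts in
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def metropolis(x, temp=2.0, step=0.5, steps=30000, seed=1):
    # Metropolis MCMC: accepts an uphill move with probability exp(-dE/T),
    # which lets the chain climb out of a local minimum
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        y = x + rng.uniform(-step, step)
        if f(y) < f(x) or rng.random() < math.exp((f(x) - f(y)) / temp):
            x = y
        if f(x) < f(best):
            best = x
    return best

# Both start at x = 2, inside the shallow right-hand well
print(gradient_descent(2.0))  # stays near 1.35, trapped
print(metropolis(2.0))        # escapes and finds the deep well near -1.47
```

Starting from the same point, gradient descent converges to the nearby shallow minimum, while the stochastic chain, by occasionally getting worse, ends up finding the deeper one.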
However, the recent success of deep learning may dispel this notion when the dimension is high enough. Deep learning, which uses multi-layer neural networks that can have millions of parameters, is the quintessence of a high dimensional model. Yet it seems to work just fine using the backpropagation algorithm, which is a form of gradient descent. The reason could be that in high enough dimensions, local minima are rare and the majority of critical points (places where the slope is zero) are saddle points, from which there is always a way out in some direction. In order to have a local minimum, the matrix of second derivatives in all directions (i.e. the Hessian matrix) must be positive definite (i.e. have all positive eigenvalues). As the dimension of the matrix gets larger and larger, there are simply more ways for one eigenvalue to be negative, and that is all you need to provide an escape hatch. So in a high dimensional system, gradient descent may work just fine, and there could be an interesting tradeoff between a parsimonious model with few parameters that is difficult to fit and a high dimensional model that is easy to fit. Now, the usual danger of having too many parameters is that you overfit, fitting the noise at the expense of the signal and losing the ability to generalize. However, deep learning models seem to be able to overcome this limitation.
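The eigenvalue argument can be checked numerically: draw random symmetric matrices as stand-in Hessians and count how often all eigenvalues come out positive. This is a rough sketch using generic Gaussian random matrices, not the Hessian of any particular model:

```python
import numpy as np

def frac_positive_definite(n, trials=2000, seed=0):
    """Fraction of random symmetric n x n matrices (stand-in Hessians at a
    critical point) whose eigenvalues are all positive, i.e. the fraction
    of critical points that are minima rather than saddles."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2  # symmetrize to get a random "Hessian"
        if np.all(np.linalg.eigvalsh(h) > 0):
            hits += 1
    return hits / trials

for n in (1, 2, 4, 8):
    print(n, frac_positive_definite(n))
```

The fraction collapses toward zero as the dimension grows: in one dimension about half of critical points are minima, but by dimension 8 essentially every critical point has at least one negative eigenvalue, i.e. an escape direction.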
Hence, if the dimension is high enough, evolution can work, while if it is too low, you need a revolution. So the question is: what is the dimensionality of governance and politics? In my opinion, the historical record suggests that revolutions generally do not lead to good outcomes, and even when they do, small incremental changes seem to get you to a similar place. For example, the US and France had bloody revolutions while Canada and England did not, and they have all arrived at similar liberal democratic systems. In fact, one could argue that a constitutional monarchy (like Canada or Denmark), where the head of state is a figurehead, is more stable and benign than a republic, like Venezuela or Russia (e.g. see here). This distinction could have pertinence for the current US election if a group of well-meaning people, who believe that the two major parties have no meaningful differences, do not vote or vote for a third party. They should keep in mind that incremental change is possible and that small policy differences can and do make a difference in people's lives.
In view of the current US presidential election, I think it would be a useful exercise to see if I could form a rational political view that is consistent with what I actually know and believe from my training as a scientist. From my knowledge of dynamical systems and physics, I believe in the inherent unpredictability of complex nonlinear systems. Uncertainty is a fundamental property of the universe at all scales. From neuroscience, I know that people are susceptible to errors, do not always make optimal choices, and are inconsistent. People are motivated by whatever triggers dopamine to be released. From genetics, I know that many traits are highly heritable and that includes height, BMI, IQ and the Big Five personality traits. There is lots of human variance. People are motivated by different things, have various aptitudes, and have various levels of honesty and trustworthiness. However, from evolution theory, I know that genetic variance is also essential for any species to survive. Variety is not just the spice of life, it is also the meat. From ecology, I know that the world is a linked ecosystem. Everything is connected. From computer science, I know that there are classes of problems that are easy to solve, classes that are hard to solve, and classes that are impossible to solve and no amount of computing power can change that. From physics and geology, I fully accept that greenhouse gases will affect the energy balance on earth and that the climate is changing. However, given the uncertainty of dynamical systems, while I do believe that current climate models are pretty good, there does exist the possibility that they are missing something. I believe that the physical laws that govern our lives are computable and this includes consciousness. I believe everything is fallible and that includes people, markets and government.
So how would that translate into a political view? Well, it would be a mishmash of what might be considered socialist, liberal, conservative, and libertarian ideas. Since I think randomness and luck are a large part of life, including who your parents are, I do not subscribe to the theory of just deserts. I don't think those with more "talents" deserve all the wealth they can acquire. However, I also realize that we are motivated by dopamine, and part of what triggers dopamine is reaping the rewards of our efforts, so we must leave incentives in place. We should not try to make society completely equal, but redistributive taxation is necessary and justified.
Since I think people are basically incompetent and don’t always make good choices, people sometimes need to be protected from themselves. We need some nanny state regulations such as building codes, water and air quality standards, transportation safety, and toy safety. I don’t believe that all drugs should be legalized because some drugs can permanently damage brains, especially those of children. Amphetamines and opioids should definitely be illegal. Marijuana is probably okay but not for children. Pension plans should be defined benefit (rather than defined contribution) schemes. Privatizing social security would be a disaster. However, we should not over regulate. I would deregulate a lot of land use especially density requirements. We should eliminate all regulations that enforce monopolies including some professional requirements that deliberately restrict supply. We should not try to pick winners in any industry.
I believe that people will try to game the system, so we should design welfare and tax systems that minimize the possibility of cheating. The current disability benefits program needs to be fixed. I do not believe in means testing for social programs, as it gives room to cheat. Cheating not only depletes the system but also engenders resentment in those who do not cheat. Part of the anger of the working class is that they see people around them gaming the system. The way out is to replace the entire welfare system with a single universal basic income. People have argued that it makes no sense for Bill Gates and Warren Buffett to get a basic income. In actuality, they would end up paying most of it back in taxes. In biology, this is called a futile cycle, but it has utility since it is easier to give everyone the same benefits and tax according to one rule than to have exceptions for everything, as we do now. We may not be able to afford a basic income now, but we eventually will.
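The futile cycle is easy to see with toy numbers. The UBI amount and flat tax rate below are hypothetical, chosen only to make the arithmetic concrete:

```python
UBI = 12_000     # hypothetical universal basic income per year
TAX_RATE = 0.30  # hypothetical flat tax on all other income

def net_transfer(income):
    """Net amount received from (positive) or paid into (negative) the system."""
    return UBI - TAX_RATE * income

print(net_transfer(0))           # 12000.0: no other income, keeps the full benefit
print(net_transfer(40_000))      # 0.0: the break-even income
print(net_transfer(10_000_000))  # a very high earner pays the benefit back many times over
```

Everyone nominally receives the same check, but a single tax rule means the wealthy return it with interest; no means test, no exceptions, and nothing to game.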
Given our lack of certainty and competence, I would be extremely hesitant about any military interventions on foreign soil. We are as likely to make things worse as we are to make them better. I think free trade is a net good because it leads to higher efficiency and helps people in lower income countries. However, it will absolutely hurt some segment of the population in the higher income country. Since income is correlated with demand for your skills, in a globalized world those with skills below the global median will be losers. If a lot of people will do your job for less, then you will lose your job or get paid less. For the time being, there should be some wage support for low wage people, but eventually this should transition to the basic income.
Since I believe the brain is computable, this means that any job a human can do, a robot will eventually do as well or better. No job is safe. I do not know when the mass displacement of work will take place but I am sure it will come. As I wrote in my AlphaGo piece, not everyone can be a “knowledge” worker, media star, or CEO. People will find things to do but they won’t all be able to earn a living off of it in our current economic model. Hence, in the robot world, everyone would get a basic income and guaranteed health care and then be free to do whatever they want to supplement that income including doing nothing. I romantically picture a simulated 18th century world with people indulging in low productivity work but it could be anything. This will be financed by taxing the people who are still making money.
As for taxes, I think we need to move from a system that emphasizes income taxes, which can be gamed and which disincentivize work, to one that taxes the use of shared resources (i.e. economic rents). This includes land rights, mineral rights, water rights, air rights, solar rights, wind rights, monopoly rights, ecosystem rights, banking rights, genetic rights, etc. These are resources that belong to everyone. We could use a land value tax model. When people want to use a resource, like land to build a house, they would pay the intrinsic value of that resource. They would keep any value they added. This would incentivize efficient use of the resource while not telling anyone how to use it.
We could use an auction system to value these resources and rights. Hence, we need not regulate Wall Street firms per se, but we would tax them according to the land they use and whatever monopoly influence they exploit. We wouldn't need to force them to obey capital requirements; we would tax them for the right to leverage debt. We wouldn't need Glass-Steagall or Too Big to Fail laws for banks. We would just tax them for the right to do these things. We would also not need a separate carbon tax. We would tax the right to extract fossil fuels at a level equal to the resource value plus the full future cost to the environment. The climate change debate would then shift to being about the discount rate. Deniers would argue for a large rate and alarmists for a small one. Sports leagues and teams would be taxed for their monopolies. The current practice of preventing cities from owning teams would be taxed.
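One standard mechanism for valuing such rights is a second-price sealed-bid (Vickrey) auction, where bidding your true valuation is a dominant strategy. A minimal sketch with made-up bidders and bids:

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins the right
    but pays only the second-highest bid, which makes truthful bidding a
    dominant strategy. `bids` maps bidder name -> bid amount."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Hypothetical bids for, say, a mineral extraction right
winner, price = vickrey_auction({"A": 120, "B": 90, "C": 250})
print(winner, price)  # C 120
```

The winner's payment reveals what the next-keenest user would have paid, which is one way to estimate the intrinsic value of the resource without a regulator setting it.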
The patent system needs serious reform. Software patents should be eliminated entirely. Instead of being granted an arbitrary monopoly right, patent holders should be taxed at a level that increases with time. This would force holders to commercialize, sell, or relinquish the patent when they could no longer bear the tax burden, which would eliminate patent trolling.
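A sketch of how an escalating patent tax would bind, with entirely hypothetical numbers: a rational holder keeps the patent only while the tax is below what the patent earns, so idle patents get relinquished immediately while productive ones are held for years.

```python
def patent_tax(year, base=1_000.0, growth=1.5):
    """Hypothetical annual patent tax that grows geometrically with the patent's age."""
    return base * growth ** year

def years_held(annual_revenue, base=1_000.0, growth=1.5):
    """Years until the tax exceeds what the patent earns, at which point a
    rational holder commercializes harder, sells, or relinquishes it."""
    year = 0
    while patent_tax(year, base, growth) <= annual_revenue:
        year += 1
    return year

print(years_held(0))        # 0: a troll sitting on an idle patent drops it at once
print(years_held(100_000))  # 12: a productive patent is worth holding for years
```

Tuning the base and growth rate trades off how long inventors keep exclusivity against how quickly unused patents return to the commons.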
We must accept that there is no free will per se so that crime and punishment must be reinterpreted. We should only evaluate whether offenders are dangerous to society and the seriousness of the crime. Motive should no longer be important. Only dangerous offenders would be institutionalized or incarcerated. Non-dangerous ones should repay the cost of the crime plus a penalty. We should also do a Manhattan project for nonlethal weapons so the police can carry them.
Finally, under the belief that nothing is certain, laws and regulations should be regularly reviewed including the US Constitution and the Bill of Rights. In fact, I propose that the 28th Amendment be that all laws and regulations must be reaffirmed or they will expire in some set amount of time.
The tragedy in Oregon has reignited the gun debate. Gun control advocates argue that fewer guns mean fewer deaths, while gun supporters argue that if citizens were armed then shooters could be stopped through vigilante action. These arguments can be quantified in a simple model of the probability of gun death, D:

D = cp(1 - vp) + ap

where p is the probability of having a gun, c is the probability of being a criminal or mentally unstable enough to become a shooter, v is the probability of effective vigilante action, and a is the probability of accidental death or suicide. The probability of being killed by a gun is given by the probability of someone having a gun times the probability that they are unstable enough to use it. This is reduced by the probability of a potential victim having a gun times the probability of acting effectively to stop the shooter. Finally, there is also a probability of dying through an accident.

The first derivative of D with respect to p is c + a - 2cvp and the second derivative is -2cv, which is negative. Thus, the minimum of D cannot be in the interior and must be at a boundary. Given that D = 0 when p = 0 and D = c(1 - v) + a when p = 1, the absolute minimum is found when no one has a gun. Even if vigilante action were 100% effective, there would still be gun deaths due to accidents. Now, some would argue that zero guns is not possible, so we can examine whether it is better to have fewer guns or more guns. D is maximal at p = (c + a)/(2cv). Thus, unless v is greater than one half, even in the absence of accidents there is no situation where increasing the number of guns makes us safer. The bottom line is that if we want to reduce gun deaths we should either reduce the number of guns or make sure everyone is armed and has military training.
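The model, D(p) = cp(1 - vp) + ap, is simple enough to check numerically. All parameter values below are made up purely for illustration:

```python
def gun_death_rate(p, c, v, a):
    """D(p) = c*p*(1 - v*p) + a*p: gun-death probability as a function of gun
    prevalence p, shooter probability c, effective-vigilante probability v,
    and accident/suicide probability a. Illustrative values only."""
    return c * p * (1 - v * p) + a * p

# Even perfectly effective vigilantes (v = 1) leave the accident term at p = 1:
print(gun_death_rate(1.0, 0.01, 1.0, 0.001))  # 0.001

# With v <= 1/2 and no accidents, more guns never means fewer deaths on [0, 1]:
rates = [gun_death_rate(p / 10, 0.01, 0.5, 0.0) for p in range(11)]
assert rates == sorted(rates)
```

Sweeping p confirms the calculus: D is zero at p = 0, monotonically increasing whenever v is at most one half, and bounded below by the accident term even when vigilante action is perfect.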
The US National Institutes of Health is divided into an Extramural Program (EP), in which scientists at universities and research labs apply for grants, and an Intramural Program (IP), in which investigators such as myself are provided with a budget to do research without having to write grants. Intramural investigators are reviewed fairly rigorously every four years, which affects their budgets for the next four years, but this is less stressful than trying to run a lab on NIH grants. This difference in funding models is particularly salient in the face of budget cuts because for the IP a 10% cut is a 10% cut, whereas for the EP it means that 10% fewer grants are funded. When a lab cannot renew a grant, people lose their jobs. This problem is further exacerbated by medical schools loading up on "soft money" positions, in which researchers must pay their own salaries from grants. The institutions also extract fairly large indirect costs from these grants, so in essence the investigators write grants both to pay their salaries and to fill university coffers. I often nervously joke that since the IP is about 10% of the NIH budget, an easy way to implement a 10% budget cut would be to eliminate the IP.
However, I think there is value in having something like the IP, where people have the financial security to take some risks. It is the closest thing we have these days to the old Bell Labs, where the transistor, information theory, C, and Unix were invented. The IP has produced 18 Nobel Prizes and can be credited with breaking the genetic code (Marshall Nirenberg), the discovery of fluoride to prevent tooth decay, lithium for bipolar disorder, and vaccines against multiple diseases (see here for a list of past accomplishments). What the IP needs to ensure its survival is a more rigorous and transparent procedure for entry in which the EP participates. An IP position should be treated like a lifetime grant to which anyone at any stage of their career can apply. Not everyone may want to be here. Research groups are generally smaller, and there are lots of rules and regulations to deal with, particularly for travel. But if someone just wants to close their door and do high-risk, high-reward research, this is a pretty good place to be, and they should get a shot at it.
The Stadtman Tenure-track Investigator program is a partial implementation of this idea. For the past five years, the IP has conducted institute-wide searches to identify young talent in a broad set of fields. I am co-chair of the Computational Biology search this year. We have invited five candidates to come to a "Stadtman Symposium", which will be held tomorrow at NIH. Details are here along with all the symposia. Candidates who strike the interest of individual scientific directors of the various institutes will be invited back for a more traditional interview. Most of the hires at NIH over the past five years have been through the Stadtman process. I think this has been a good idea and has brought some truly exceptional people to the IP. What I would do to make it even more transparent is to open up the search to people at all stages of their careers and to have EP people participate in the searches and the eventual selection of the investigators.
Here is a letter (reposted with permission) from Michael Gottesman, Deputy Director for Intramural Research of the NIH, telling the story of how the NIH intramural research program was instrumental in helping Eric Betzig win this year's Nobel Prize in Chemistry. I think it once again shows how great breakthroughs rarely occur in isolation.
The NIH intramural program has placed its mark on another Nobel Prize. You likely heard last week that Eric Betzig of HHMI’s Janelia Farm Research Campus will share the 2014 Nobel Prize in Chemistry “for the development of super-resolved fluorescence microscopy.” Eric’s key experiment came to life right here at the NIH, in the lab of Jennifer Lippincott-Schwartz.
In fact, Eric’s story is quite remarkable and highlights the key strengths of our intramural program: freedom to pursue high-risk research, opportunities to collaborate, and availability of funds to kick-start such a project.
Eric was “homeless” from a scientist’s viewpoint. He was unemployed and working out of a cottage in rural Michigan with no way of turning his theory into reality. He had a brilliant idea to isolate individual fluorescent molecules by a unique optical feature to overcome the diffraction limit of light microscopes, which is about 0.2 microns. He thought that if green fluorescent proteins (GFPs) could be switched on and off a few molecules at a time, it might be possible using Gaussian fitting to synthesize a series of images based on point localization that, when stacked, provide extraordinary resolution.
Eric chanced to meet Jennifer, who heads the NICHD's Section on Organelle Biology. She and George Patterson, then a postdoc in Jennifer's lab and now a PI in NIBIB, had developed a photoactivatable version of GFP with these capabilities, which they were already applying to the study of organelles. Jennifer latched on to Eric's idea immediately; she was among the first to understand its significance and saw that her laboratory had just the tool that Eric needed.
So, in mid-2005, Jennifer offered to host Eric and his friend and colleague, Harald Hess, to collaborate on building a super-resolution microscope based on the use of photoactivatable GFP. The two had constructed key elements of this microscope in Harald’s living room out of their personal funds.
Jennifer located a small space in her lab in Building 32. She and Juan Bonifacino, also in NICHD, then secured some centralized IATAP funds for microscope parts to supplement the resources that Eric and Harald brought to the lab. Owen Rennert, then the NICHD scientific director, provided matching funds. By October 2005, Eric and Harald became affiliated with HHMI, which also contributed funds to the project.
Eric and Harald quickly got to work with their new NICHD colleagues in their adopted NIH home. The end result was a fully operational microscope married to GFP technology capable of producing super-resolution images of intact cells for the first time. Called photoactivated localization microscopy (PALM), the new technique provided 10 times the resolution of conventional light microscopy.
Another postdoc in Jennifer’s lab, Rachid Sougrat, now at King Abdullah University of Science and Technology in Saudi Arabia, correlated the PALM images of cell organelles to electron micrographs to validate the new technique, yet another important contribution.
Upon hearing of Eric’s Nobel Prize, Jennifer told me: “We didn’t imagine at the time how quickly the point localization imaging would become such an amazing enabling technology; but it caught on like wildfire, expanding throughout many fields of biology.”
That it did! PALM and all its manifestations are at the heart of extraordinary discoveries. We think this is a quintessential intramural story. We see the elements of high-risk/high-reward research and the importance of collaboration and the freedom to pursue ideas, as well as NIH scientists with the vision to encourage and support this research.
Read the landmark 2006 Science article by Eric, Harald, and the NICHD team, “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution,” at http://www.sciencemag.org/content/313/5793/1642.long.
The story of the origins of Eric Betzig’s Nobel Prize in Jennifer Lippincott-Schwartz’s lab is one that needs to be told. I feel proud to work for an organization that can attract such talent and enable such remarkable science to happen.
Kudos to Eric and to Jennifer and her crew.
Michael M. Gottesman
Deputy Director for Intramural Research