The failure of supply-side social policy

The US is in the midst of two social crises. The first is an opioid epidemic that is decimating parts of rural, and now urban, America; the second is a surge in the number of migrants crossing the southern US border, primarily from Central America. In any system that involves flow, whether physical (e.g. electricity) or social (e.g. money), the amount of flow (i.e. the flux) depends on both the supply (e.g. the power station or the Federal Reserve) and the demand (e.g. the air conditioner or disposable income). So if you want to reduce opioid consumption or illegal immigration, you can either shut down the supply or reduce the demand.

During the twentieth century there was a debate over the causes of booms and busts in the economy. I am greatly simplifying the debate, but on one side were the demand-side Keynesians, who believed that the business cycle is mostly a result of fluctuating demand. If people suddenly decide to stop spending, then businesses lose customers, which leads them to lay off workers, who then have less money to spend in other businesses, reducing demand further, and so forth, leading to a recession. On the other side were the supply-siders, who believed that the problem in economic downturns was inadequate supply, which would be solved by cutting taxes and reducing business regulations. The Great Recession of 2008 provided a partial test of both theories: the US applied a demand-side fix in the form of a stimulus, while Europe went for “expansionary austerity” and cut government spending, which slashed demand. The US has since experienced over a decade of steady growth, while Europe went into a double-dip recession before climbing out after the policy changed. That is not to say that demand-side policies always work. The 1970s were plagued by stagflation, with high unemployment and high inflation, for which the Keynesians had no fix. Former Fed Chairman Paul Volcker famously raised interest rates in 1979 to reduce the money supply. It triggered a sharp recession, which was followed by nearly three decades of low-inflation economic growth.

In terms of social policy, the US has really only tried supply-side solutions. The drug war put a lot of petty dealers and drug users in jail but did little to halt the use of drugs. It seems to me that if we really want to solve, or at least alleviate, the opioid and drug crisis, we need to slash demand. Opioids are painkillers and are physically addictive. Addicted users who try to stop will experience withdrawal, which is extremely painful. Those who succeed are no longer physically addicted, but they can always relapse if they use again. The current US opioid epidemic started with a change in the medical establishment's philosophy of pain management, together with the concurrent development of new, supposedly less addictive, opioid pills. Doctors, encouraged by the pharmaceutical industry, began prescribing opioids for all manner of ailments. Most doctors were well intentioned, but a handful participated in outright criminal activity and became de facto drug dealers. In any case, this led to the initial phase of the opioid epidemic. When awareness of over-prescription entered public consciousness, there was pressure to reduce the supply. Addicts then turned to illicit opioids like heroin, which started phase two of the epidemic. However, as this supply was targeted by drug enforcement, a new, cheaper, and highly potent synthetic opioid, fentanyl, emerged. Fentanyl is easy to produce in makeshift labs anywhere and provides a safer business model for drug dealers, but it is so potent that it has led to a surge in overdose deaths.

Instead of targeting supply, we need to reduce demand, and to do that we first need to understand why people take opioids in the first place. While some drugs are taken for the experience or for entertainment, opioids are mostly used to alleviate pain and suffering. It is probably no coincidence that the places most ravaged by opioids are also those struggling the most economically. If we want to get a handle on the opioid crisis, we need to improve these areas economically. People probably also take drugs for some form of escape, and this is where I think video games and virtual reality may be helpful. We can debate the merits of playing Fortnite 16 hours a day, but it is surely better than taking cocaine. I think we should take video games seriously as a treatment for drug addiction, and we could and should develop games for this purpose.

Extra border security has not stemmed illegal immigration. What does slow immigration is a downturn in the US economy, which quenches demand for low-skilled labour, or an improvement in conditions in the originating countries, which reduces the desire to leave in the first place. The current US migrant crisis is mostly due to the abhorrent and dangerous conditions in Guatemala and Honduras. For Europe, it is problems in Africa and the Middle East. In both cases, putting up more barriers or treating the migrants inhospitably is not doing much. It just makes the journey more perilous, which is bad for the migrants and a moral and public relations nightmare for host countries. Perhaps we could try to stem demand by at least making the originating countries safer. The US could provide more aid to Latin America, including stationing American troops if necessary to curb gang activity and restore civil order. This would at least help reduce the number of people seeking asylum. Reducing economic migration is much harder, since we really don't know how to do economic development very well, but more investment in source countries could help. While globalization and free trade may have hurt the US worker and contributed to the opioid epidemic by decimating manufacturing in the US, they have also brought a lot of people out of abject poverty. The growth miracles in China and the rest of Asia would not have been possible without international trade and investment. Thus the two crises are not independent. More free trade could help reduce illegal immigration, but it could also worsen economic conditions in some regions, spurring more opioid use. There are no magic bullets, but we at least need to change the strategy.

Duality and computation in the MCU

I took my kindergartener to see Avengers: Endgame recently. My son was a little disappointed, complaining that the film had too much talking and not enough fighting. To me, the immense popularity of the Marvel Cinematic Universe series, and of so-called science fiction/fantasy in general, is an indicator of how people think they like science but really want magic. Popular science-fictiony franchises like the MCU and Star Wars are couched in scientism but are often at odds with actual science as practiced today. Arthur C. Clarke famously stated in his third law that “Any sufficiently advanced technology is indistinguishable from magic,” a sentiment these films capture perfectly.

Science fiction should extrapolate from current scientific knowledge to the possible. Otherwise, it should just be called fiction. There have been a handful of films that try to do this, like 2001: A Space Odyssey or, more recently, Interstellar and The Martian. I think there is a market for these types of films, but they are certainly not as popular as the fantasy films. To be fair, neither Marvel nor Star Wars (both now owned by Disney) market themselves as science fiction as I have defined it. They are intended to be mythologies à la Joseph Campbell's Hero's Journey. However, they do have a scientific aesthetic, with worlds dominated by advanced technology.

Although I do not find the MCU films overly compelling, they do bring up two interesting propositions. The first is dualism. The superhero character Ant-Man has a suit that allows him to change size and even shrink to sub-atomic scales, called the quantum realm in the films. (I won't bother to discuss whether energy is conserved in these near-instantaneous size changes, an issue that affects the Hulk as well.) The film was advised by physicist Spiros Michalakis and is rife with physics terminology and concepts like quantum entanglement. One crucial concept it completely glosses over is how Ant-Man maintains his identity as a person, much less his shape, when he is smaller than an atom. Even if one were to argue that one's consciousness could be transferred to some set of quantum states at the sub-atomic scale, it would be overwhelmed by quantum fluctuations. The only self-consistent premise of Ant-Man is that the essence, or soul if you wish, of a person is not material. The MCU takes a definite stand for dualism on the mind-body problem, a sentiment with which I presume the public mostly agrees.

The second is that magic has immense computational power. In the penultimate Avengers movie, the villain Thanos, in possession of the complete set of Infinity Stones, snaps his fingers and eliminates half of all living things. (Setting aside the issue that Thanos clearly does not understand the concept of exponential growth: if you are concerned about overpopulation, it is pointless to shrink the population and do nothing else, because it will simply return to its original size in short order.) What I'd like to know is who or what does the computation to carry out the command. There are at least two hard computational problems that must be solved.

The first is to identify all lifeforms. This is clearly no easy task, as we still have no precise definition of life. Do viruses get culled by the snap? Does the population of silicon-based lifeforms of Star Trek get halved, or is it only biochemical life? What algorithm does the snap use to find all the lifeforms? Living things on earth range in size from single cells (or viruses, if you count them) all the way to 35-metre behemoths composed of over 10^{23} atoms. How do the stones know what scales life spans in the MCU? Do photosynthetic lifeforms get spared since they don't use many resources? What about fungi? Is the MCU actually a simulated universe in which a continually updated census of all life is kept? How accurate is the algorithm? Was it perfect? Did it aim for high specificity (i.e. reduce false positives, so that only lifeforms and not non-lifeforms are killed) or high sensitivity (i.e. reduce false negatives, so that no lifeforms are missed)? I think it probably favours sensitivity over specificity: who cares if a bunch of ammonia molecules accidentally get killed? The find-all-life problem is made much easier by proposition 1, because if all life were material then the only way to detect it would be to look for multiscale correlations between atoms (or to find organic molecules, if you only care about biochemical life). If each lifeform has a soul, then you can simply search for "soulfulness". The lifeforms were also not erased instantly but only after a brief delay. What was happening during this delay? Is magic propagation limited by the speed of light or some other constraint? Or did the computation take time? In Endgame, the Hulk restores all the lifeforms Thanos erased, and Tony Stark then snaps away Thanos and all of his allies. Where were the lifeforms while they were erased? In Heaven? In a soul repository somewhere? Is this one of the Nine Realms of the MCU? And how do the stones know who is a Thanos ally?

The second computation is to decide which half to extinguish. The movie seems to imply that the choice was random, so where did the randomness come from? Do the Infinity Stones generate random numbers? Do they rely on quantum fluctuations? Finally, in a world with magic, why is there also science? Why does the universe follow the laws of physics sometimes and magic at other times? Is magic a finite resource, as in Larry Niven's The Magic Goes Away? So many questions, so few answers.

The probability of life

Estimates from the Kepler satellite suggest that there could be at least 40 billion exoplanets capable of supporting life in our galaxy alone. Given that there are perhaps 2 trillion observable galaxies, that amounts to a lot of places where life could exist. And this is only counting biochemical life as we know it. There could also be non-biochemical lifeforms that we can't even imagine. I chatted with mathematician Morris Hirsch a long time ago in Berkeley and he whimsically suggested that there could be creatures living in astrophysical plasmas, which are highly nonlinear. So let's be generous and say that there are 10^{12} planets on which biochemical life could exist in the Milky Way and 10^{24} in the observable universe.
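In round numbers, that is 10^{12} habitable planets per galaxy times roughly 2 \times 10^{12} galaxies:

10^{12} \times (2 \times 10^{12}) = 2 \times 10^{24} \approx 10^{24}

potential sites for life.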

Now, does this mean that extraterrestrial life is very likely? If you listen to most astronomers and astrobiologists these days, it would seem that life is guaranteed to be out there and we just need to build a big enough telescope to find it. There are several missions in the works to detect signatures of life, like methane or oxygen, in the atmosphere of an exoplanet. However, the likelihood of life outside of the earth is predicated on the probability of life forming anywhere at all, and we have no idea what that number is. Although it only took about a billion years for life to form on earth, that does not really give us any information about how likely it is to form elsewhere.

Here is a simple example to illustrate how life could grow exponentially fast after it forms but take an arbitrarily long time to form. Suppose the biomass of life on a planet, x, obeys the simple equation

\frac{dx}{dt} = -x(a-x) + \eta(t)

where \eta is a zero-mean stochastic forcing with variance D. The deterministic equation has two fixed points: a stable one at x = 0 and an unstable one at x = a (linearizing the drift -x(a-x) gives a growth rate of -a + 2x, which is negative at x = 0 and positive at x = a). Thus, as long as x is smaller than a, life will never form, but as soon as x exceeds a it will grow (super-exponentially), until it is damped by nonlinear processes that I don't consider. We can rewrite this problem as

\frac{dx}{dt} = -\partial_x U(x) + \eta(t)

where U = a x^2/2 - x^3/3.
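For later reference, the depth of the well, i.e. the barrier that the noise has to push x over, is

E = U(a) - U(0) = \frac{a^3}{2} - \frac{a^3}{3} = \frac{a^3}{6}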

[Figure: plot of the potential U(x), with a well at x = 0 and a barrier at x = a.]

The probability of life is then given by the probability of escape from the well in U(x) under the noisy forcing (thermal bath) provided by \eta. By Kramers' escape rate formula (which you can look up, or I'll derive in a future post), the rate of escape is approximately e^{-E/D}, where E is the well depth, which here is a^3/6. Thus the probability of life is exponentially damped by a factor of e^{-a^3/6D}. Given that we know nothing about a or D, the probability of life could be anything. For example, if we arbitrarily assign a = 10^{10} and D = 10^{-10}, we get a rate (or probability, if we normalize correctly) for life to form on the order of e^{-10^{40}/6}, which is very small indeed and makes life very unlikely in the universe.
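To get a feel for the numbers, here is a minimal simulation sketch (my own illustration, with arbitrary parameter values, not anything estimated from biology): an Euler-Maruyama discretization of the equation above, assuming the usual convention dx = -U'(x) dt + \sqrt{2D} dW so that the escape rate goes as e^{-E/D}, with a reflecting barrier at zero (see the addendum below) and with a and D chosen small enough that escapes actually happen in a reasonable time.

import numpy as np

# Euler-Maruyama simulation of dx = -x(a - x) dt + sqrt(2 D) dW,
# recording the first time x climbs over the barrier at x = a.
# All parameter values are arbitrary illustrative choices.
def escape_time(a=1.0, D=0.05, dt=1e-3, t_max=1e4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    noise_scale = np.sqrt(2 * D * dt)
    while t < t_max:
        x += -x * (a - x) * dt + noise_scale * rng.standard_normal()
        x = max(x, 0.0)   # reflecting barrier at zero: no negative biomass
        t += dt
        if x > a:         # over the barrier: "life" now grows on its own
            return t
    return np.inf         # no escape within t_max

rng = np.random.default_rng(0)
a, D = 1.0, 0.05
times = [escape_time(a, D, rng=rng) for _ in range(20)]
print("mean simulated escape time:", np.mean(times))
print("exponential factor exp(a^3/6D):", np.exp(a**3 / (6 * D)))

The mean escape time grows like the printed exponential factor, up to a prefactor of order 1/a, so raising a or lowering D makes the waiting time for life explode. That is the whole point: the answer is exponentially sensitive to quantities we know nothing about.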

Now, how could there be any life in the universe at all if it had such a low probability of forming? Well, there is no reason that there could not have been lots of universes, which is what string theory and cosmology now predict. Maybe it took 10^{100} universes before life formed in one of them. I'm not saying that there is no one out there; I'm only saying that an N of one does not give us much information about how likely life is.

Addendum, 2019-04-07: As was pointed out in the comments, the model as written allows for negative biomass. This can be corrected by adding an infinite barrier at zero (i.e. restricting x to always be positive), and this won't affect the result. Depending on the barrier height and noise amplitude, it can take an arbitrarily long time to escape.

New paper in Cell

Cell. 2018 Dec 10. pii: S0092-8674(18)31518-6. doi: 10.1016/j.cell.2018.11.026. [Epub ahead of print]

Intrinsic Dynamics of a Human Gene Reveal the Basis of Expression Heterogeneity.

Abstract

Transcriptional regulation in metazoans occurs through long-range genomic contacts between enhancers and promoters, and most genes are transcribed in episodic “bursts” of RNA synthesis. To understand the relationship between these two phenomena and the dynamic regulation of genes in response to upstream signals, we describe the use of live-cell RNA imaging coupled with Hi-C measurements and dissect the endogenous regulation of the estrogen-responsive TFF1 gene. Although TFF1 is highly induced, we observe short active periods and variable inactive periods ranging from minutes to days. The heterogeneity in inactive times gives rise to the widely observed “noise” in human gene expression and explains the distribution of protein levels in human tissue. We derive a mathematical model of regulation that relates transcription, chromosome structure, and the cell’s ability to sense changes in estrogen and predicts that hypervariability is largely dynamic and does not reflect a stable biological state.

KEYWORDS: RNA; chromosome; estrogen; fluorescence; heterogeneity; imaging; live-cell; single-molecule; steroid; transcription

PMID: 30554876. DOI: 10.1016/j.cell.2018.11.026
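Since the abstract attributes the observed expression noise to bursts and to highly variable inactive periods, here is a toy two-state ("telegraph") promoter simulation that illustrates the idea. This is my own minimal sketch with made-up parameters, not the model derived in the paper: the gene alternates between inactive and active periods, mRNA is made only while the gene is active and degrades continuously, and making the inactive periods more variable while keeping their mean fixed increases the cell-to-cell variability in mRNA.

import numpy as np

rng = np.random.default_rng(0)

# Toy telegraph model: the promoter alternates OFF and ON periods;
# mRNA is synthesized at rate k while ON and degrades at rate gamma.
def mrna_at_end(draw_off, tau_on=0.5, k=20.0, gamma=0.2, t_end=100.0, dt=0.01):
    m, t = 0.0, 0.0
    active, t_switch = False, draw_off()   # start OFF, schedule the first switch
    while t < t_end:
        if t >= t_switch:                  # toggle the promoter state
            active = not active
            t_switch = t + (rng.exponential(tau_on) if active else draw_off())
        m = max(m + ((k if active else 0.0) - gamma * m) * dt, 0.0)
        t += dt
    return m

# Two OFF-time distributions with the same mean (5.0) but different variability
exp_off = lambda: rng.exponential(5.0)
heavy_off = lambda: rng.lognormal(np.log(5.0) - 2.0, 2.0)   # heavy-tailed, mean ~5

for label, draw_off in [("exponential OFF times", exp_off), ("heavy-tailed OFF times", heavy_off)]:
    m = np.array([mrna_at_end(draw_off) for _ in range(200)])
    print(f"{label}: mean mRNA {m.mean():.1f}, CV {m.std() / m.mean():.2f}")

The heavy-tailed case leaves some cells stuck in very long inactive periods, which is the kind of dynamic heterogeneity the abstract invokes to explain the noise seen across cells and tissues.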

New paper on GWAS

Genet Epidemiol. 2018 Dec;42(8):783-795. doi: 10.1002/gepi.22161. Epub 2018 Sep 24.

The accuracy of LD Score regression as an estimator of confounding and genetic correlations in genome-wide association studies.

Author information:
1. Department of Psychology, University of Minnesota Twin Cities, Minneapolis, Minnesota.
2. Mathematical Biology Section, Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Bethesda, Maryland.

Abstract

To infer that a single-nucleotide polymorphism (SNP) either affects a phenotype or is in linkage disequilibrium with a causal site, we must have some assurance that any SNP-phenotype correlation is not the result of confounding with environmental variables that also affect the trait. In this study, we study the properties of linkage disequilibrium (LD) Score regression, a recently developed method for using summary statistics from genome-wide association studies to ensure that confounding does not inflate the number of false positives. We do not treat the effects of genetic variation as a random variable and thus are able to obtain results about the unbiasedness of this method. We demonstrate that LD Score regression can produce estimates of confounding at null SNPs that are unbiased or conservative under fairly general conditions. This robustness holds in the case of the parent genotype affecting the offspring phenotype through some environmental mechanism, despite the resulting correlation over SNPs between LD Scores and the degree of confounding. Additionally, we demonstrate that LD Score regression can produce reasonably robust estimates of the genetic correlation, even when its estimates of the genetic covariance and the two univariate heritabilities are substantially biased.

KEYWORDS: causal inference; genetic correlation; heritability; population stratification; quantitative genetics

PMID: 30251275. DOI: 10.1002/gepi.22161
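For readers who have not encountered the method being analyzed, here is a toy illustration of the LD Score regression idea (my own sketch, not the analysis in the paper). Under the standard LD Score regression model, the expected GWAS chi-square statistic of SNP j is 1 + Na + (N h^2 / M) \ell_j, where \ell_j is the SNP's LD Score, N the sample size, M the number of SNPs, h^2 the heritability, and a the contribution of confounding, so regressing chi-square statistics on LD Scores separates confounding (the intercept) from polygenic signal (the slope). All numbers below are made up.

import numpy as np

rng = np.random.default_rng(1)
M, N = 50_000, 100_000     # number of SNPs and GWAS sample size (made up)
h2, a = 0.5, 1e-6          # true heritability and per-individual confounding

ld_scores = rng.gamma(shape=2.0, scale=50.0, size=M)    # fake LD Scores
mean_chi2 = 1.0 + N * a + (N * h2 / M) * ld_scores
# Draw chi-square statistics with 1 degree of freedom and the prescribed means
chi2 = rng.noncentral_chisquare(df=1.0, nonc=mean_chi2 - 1.0)

slope, intercept = np.polyfit(ld_scores, chi2, 1)
print("intercept:", round(intercept, 3), "; expected 1 + N*a =", 1 + N * a)
print("estimated h2:", round(slope * M / N, 3), "; true value =", h2)

The question the paper addresses is when the intercept and slope from this kind of regression are actually unbiased estimates of confounding, heritability, and (in the bivariate version) genetic correlation, which this cartoon of course does not probe.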