# The inherent conflict of liberalism

Liberalism, as a philosophy, arose during the European Enlightenment of the 17th century. Its basic premise is that people should be free to choose how they live, have a government that is accountable to them, and be treated equally under the law. It was the founding principle of the American and French revolutions and the basic premise of Western liberal democracies. However, liberalism is inherently conflicted: when I exercise my freedom to do something (e.g. not wear a mask), I infringe on your freedom from the consequences of that thing (e.g. not be infected), and there is no rational resolution to this conflict. This conflict led to the split of liberalism into left and right branches. In the United States, the term liberal is applied exclusively to the left branch, which mostly focuses on the ‘freedom from’ part of liberalism. Those in the right branch, who mostly emphasize the ‘freedom to’ part, refer to themselves as libertarian, classical liberal, or (sometimes, and confusingly to me) conservative. (I put neo-liberalism, which is a fundamentalist belief in free markets, into the right camp although it has adherents on both the left and right.) Both of these viewpoints are offspring of the same liberal tradition, and here I will use the term liberal in the general sense.

Liberalism has never operated in a vacuum. The conflicts between “freedom to” and “freedom from” have always been settled by prevailing social norms, which in the Western world were traditionally dominated by Christian values. However, neither liberalism nor social norms have ever been sufficient to prevent bad outcomes. Slavery existed and was promoted by liberal Christian states. Genocides of all types and scales have been perpetrated by liberal Christian states. The battle to overcome slavery and to give equal rights to all peoples was long and hard fought, waged over slowly changing social norms rather than laws per se. Thus, while liberalism is the underlying principle behind Western governments, it is only part of the fabric that holds society together. Even though we have just emerged from the Dark Years, Western Liberalism is on its shakiest footing since the Second World War. The end of the Cold War did not bring on a permanent era of liberal democracy but may have spelled its eventual demise. What will supplant liberalism is up to us.

It is often perceived that the American Democratic party is a disorganized mess of competing interests under a big tent while the Republicans are much more cohesive, but in fact the opposite is true. While the Democrats are often in conflict, they are a fairly unified center-left liberal party that strives to advocate for the marginalized. Their conflicts mostly concern which groups should be considered marginalized and prioritized. The Republicans, on the other hand, are a coalition of libertarians and non-liberal conservatives united only by their desire to minimize the influence of the federal government. The libertarians long for unfettered individualism and unregulated capitalism, while the conservatives, who do not subscribe to all the tenets of liberalism, wish to halt encroaching secularism and a government that no longer serves their interests.

The unlikely Republican coalition that has held together for four decades is now falling apart. It came together because the more natural association between religious conservatism and a large federal bureaucracy fractured after the Civil Rights movements of the 1960s, when the Democrats no longer prioritized the concerns of the (white) Christian Right. (I will discuss the racial aspects in a future post.) The elite pro-business neo-liberal libertarians could coexist with the religious conservatives as long as their concerns did not directly conflict, but this is no longer true. The conservative wing of the Republican party has discovered its newfound power, and that there is an untapped population of disaffected individuals who are inclined to be conservative but also want a larger and more intrusive government that favors them. Prominent conservatives like Adrian Vermeule of Harvard and Senator Josh Hawley are unabashedly anti-liberal.

This puts the neo-liberal elites in a real bind. The Democratic party since Bill Clinton has been moving right with a model of pro-market neo-liberalism, but with a safety net. However, they were punished time and time again by the neo-liberal right. Instead of partnering with Obama, who was highly favorable towards neoliberalism, the right pursued a scorched-earth policy against him. Hillary Clinton ran on a fairly moderate safety-net-neo-liberal platform and was vilified as an un-American socialist. Now, both the Republicans and Democrats are trending away from neo-liberalism. The neo-liberals made a strategic blunder: they could have hedged their bets, but have now lost influence in both parties.

While the threat of authoritarianism looms large, this is also an opportunity to accept the limits of liberalism and begin to think about what will take its place – something that still respects the basic freedoms afforded by liberalism but acknowledges that it is not sufficient. Conservative intellectuals like Leo Strauss have valid points. There is indeed a danger of liberalism lapsing into total moral relativism or nihilism. Guardrails against such outcomes must be explicitly installed. There is value in preserving (some) traditions, especially ancient ones that are the result of generations of human engagement. There will be no simple solution, no single rule or algorithm. We will need to explicitly delineate what we will accept and what we will not on a case-by-case basis.

# The machine learning president

For the past four years, I have been unable to post with any regularity. I have dozens of unfinished posts sitting in my drafts folder. I would start with a thought but then get stuck, which had previously been unusual for me. Now, on this first hopeful day I have had in the past four trying years, I am hoping I will be able to post regularly again.

Prior to what I will now call the Dark Years, I viewed all of history through an economic lens. I bought into the standard twentieth-century leftist academic notion that wars, conflicts, social movements, and cultural changes all have economic underpinnings. But I now realize that this is incorrect, or at least incomplete. Economics surely plays a role in history, but what really motivates people are stories, and stories are what led us to the Dark Years and perhaps what will get us out.

Trump became president because he had a story. The insurrectionists who stormed the Capitol had a story. It was a batshit crazy lunatic story, but it was still a story. However, the tragic thing about the Trump story (or rather my story of the Trump story) is that it is an unintentional, algorithmically generated story. Trump is the first (and probably not the last) purely machine learning president (although he may not consciously know that). Everything he did was based on the feedback he got from his Tweets and from Fox News. His objective function was attention, and he would do anything to get more of it. Of the many lessons we will take from the Dark Years, one should be how machine learning and artificial intelligence can go very wrong. Trump’s candidacy and presidency were based on a simple stochastic greedy algorithm for attention. He would Tweet randomly and follow up on the Tweets that got the most attention. However, the problem with a greedy algorithm (and yes, that is a technical term that just happens to be coincidentally apropos) is that once you follow a path it is hard to make a correction. I actually believe that if some of Trump’s earliest Tweets from, say, 2009-2014 had gone another way, he could have been a different president. Unfortunately, one of his early Tweet themes that garnered a lot of attention was the Obama birther conspiracy. This lit up both racist Twitter and a counter-reaction from liberal Twitter, which led him further to the right and ultimately to the presidency. His innate prejudices biased him towards a darker path, and he went completely unhinged after he lost the election, but he is unprincipled and immature enough that he could have changed course had he been given enough incentive to do so.

Unlike standard machine learning for categorizing images or translating languages, the Trump machine learning algorithm changes the data. Every Tweet alters the audience, and the reinforcing feedback between Trump’s Tweets and the reactions to them can manufacture discontent out of nothing. A person could happen to follow Trump because they liked The Apprentice, the reality show Trump starred in, and be having a bad day because they missed the bus or didn’t get a promotion. Then they see a Trump Tweet, follow the link in it, and suddenly they find a conspiracy theory that “explains” why they feel disenchanted. They retweet and the cycle repeats. Trump sees what goes viral and Tweets more on the same topic. This positive feedback loop has just generated something out of random noise. The conspiracy theorizing then starts its own reinforcing feedback loop, and before you know it we have a crazed mob bashing down the Capitol doors with impunity.
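
The greedy dynamic described above can be made concrete with a toy simulation. This is purely illustrative: the topic names and engagement rates below are invented, and a real social-media feedback loop is far messier, but it shows how a purely greedy poster locks onto whatever topic happens to pay off, and never corrects course.

```python
import random

def greedy_poster(engagement, n_posts=500, noise=0.05, seed=0):
    """Toy greedy attention maximizer. After trying each topic once, every
    subsequent post goes to the topic with the highest average attention
    observed so far -- so an early payoff locks in the whole trajectory."""
    rng = random.Random(seed)
    topics = list(engagement)
    totals = {t: 0.0 for t in topics}
    counts = {t: 0 for t in topics}
    history = []
    for i in range(n_posts):
        if i < len(topics):
            topic = topics[i]  # try each topic once to start
        else:
            # purely greedy: exploit the best-performing topic so far
            topic = max(topics, key=lambda t: totals[t] / counts[t])
        attention = rng.gauss(engagement[topic], noise)  # noisy feedback
        totals[topic] += attention
        counts[topic] += 1
        history.append(topic)
    return history

# Invented topic pool; the engagement numbers are purely illustrative.
rates = {"policy": 0.1, "sports": 0.2, "conspiracy": 1.0}
posts = greedy_poster(rates)
print(posts[-1])  # the poster ends up stuck on the highest-payoff topic
```

A greedy strategy with no exploration never revisits the topics it abandoned early, which is exactly the lock-in described above.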

Ironically, Trump, who craved and idolized power, failed to understand the power he actually had; with a better algorithm (or just any strategy at all), he would have been reelected in a landslide. Even before he was elected, Trump had already won over the far right and could have started moving in any direction he wished. He could have moderated on many issues. Even maintaining his absolute ignorance of how governing actually works, he could have had his wall by making it part of actual infrastructure and immigration bills. He could have directly addressed the COVID-19 pandemic. He would not have lost much of his base and would have easily gained an extra 10 million votes. Maybe, just maybe, if liberal Twitter had simply ignored the early incendiary Tweets and responded only to the more productive ones, they could have moved him a bit too. Positive reinforcement is how they train animals, after all.

Now that Trump has shown how machine learning can win a presidency, it is only a matter of time before someone harnesses it again and more effectively. I just hope that person is not another narcissistic sociopath.

On some rare days when the sun is shining and I’m enjoying a well made kouign-amann (my favourite comes from b.patisserie in San Francisco but Patisserie Poupon in Baltimore will do the trick), I find a brief respite from my usual depressed state and take delight, if only for a brief moment, in the fact that mathematics completely resolved Zeno’s paradox. To me, it is the quintessential example of how mathematics can fully solve a philosophical problem and it is a shame that most people still don’t seem to know or understand this monumental fact. Although there are probably thousands of articles on Zeno’s paradox on the internet (I haven’t bothered to actually check), I feel like visiting it again today even without a kouign-amann in hand.

I don’t know what the original statement of the paradox was, but all versions involve motion from one location to another, like walking towards a wall or throwing a javelin at a target. When you walk towards a wall, you must first cross half the distance, then half the remaining distance, and so on forever. The paradox is thus: how can you ever reach the wall, or a javelin its target, if it must traverse an infinite number of intervals? This paradox is completely resolved by the concept of the mathematical limit, which Newton used to invent calculus in the seventeenth century. I think understanding the limit is the greatest conceptual leap a mathematics student must take. It took mathematicians two centuries to fully formalize it, although we don’t need most of that machinery to resolve Zeno’s paradox. In fact, you need no more than middle school math to solve one of history’s most famous problems.

The solution to Zeno’s paradox stems from the fact that if you move at constant velocity then it takes half the time to cross half the distance and the sum of an infinite number of intervals that are half as long as the previous interval adds up to a finite number. That’s it! It doesn’t take forever to get anywhere because you are adding an infinite number of things that get infinitesimally smaller. The sum of a bunch of terms is called a series and the sum of an infinite number of terms is called an infinite series. The beautiful thing is that we can compute this particular infinite series exactly, which is not true of all series.

Expressed mathematically, the total time $t$ it takes for an object traveling at constant velocity to reach its target is

$t = \frac{d}{v}\left( \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)$

which can be rewritten as

$t = \frac{d}{v}\sum_{n=1}^\infty \frac{1}{2^n}$

where $d$ is the distance and $v$ is the velocity. This infinite series is technically called a geometric series because the ratio of consecutive terms is always the same. The terms are related geometrically, like the volumes of n-dimensional cubes when you halve the length of the sides (e.g. a 1-cube (a line, whose volume is its length), a 2-cube (a square, whose volume is its area), a 3-cube (the good old cube and its volume), a 4-cube (a hypercube and its hypervolume), etc.).

For simplicity we can take $d/v = 1$. So to compute the time it takes to travel the distance, we must compute:

$t = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$

To solve this sum, the first thing to notice is that we can factor out $1/2$ and obtain

$t = \frac{1}{2}\left(1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)$

The quantity inside the bracket is just the original series plus 1, i.e.

$1 + t = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$

and thus we can substitute this back into the original expression for $t$ and obtain

$t = \frac{1}{2}(1 + t)$

Now, we simply solve for $t$ and I’ll actually go over all the algebraic steps. First multiply both sides by 2 and get

$2 t = 1 +t$

Now, subtract $t$ from both sides and you get the beautiful answer that $t = 1$. We then have the amazing fact that

$t = \sum_{n=1}^\infty \frac{1}{2^n} = 1$
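
You can check this numerically in a few lines of Python (a sketch, not part of the proof): the partial sums climb toward 1, and the shortfall after $n$ terms is exactly $1/2^n$.

```python
def partial_sum(n):
    """Sum of the first n terms 1/2^k for k = 1..n."""
    return sum(1 / 2**k for k in range(1, n + 1))

# The partial sums approach 1 but never exceed it.
for n in (1, 2, 5, 10, 20):
    print(n, partial_sum(n))

# Each partial sum falls short of 1 by exactly 1/2^n.
assert abs(partial_sum(50) - 1) < 1e-12
```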

I never get tired of this. In fact this generalizes to any geometric series

$\sum_{n=1}^\infty a^n = \frac{1}{1-a} - 1$

for any $a$ whose absolute value is less than 1. The more compact way to express this is

$\sum_{n=0}^\infty a^n = \frac{1}{1-a}$

Now, notice that in this formula if you set $a = 1$, you get $1/0$, which is infinite. Since $1^n = 1$ for any $n$, this tells you that if you try to add up an infinite number of ones, you get infinity. If you set $a > 1$, you get a negative number. Does this mean that the sum of an infinite number of numbers greater than 1 is negative? No, because the formula is only valid for $|a| < 1$, which is called the domain of convergence. If you go outside the domain, you can still get an answer, but it won’t be the answer to your question. You always need to be careful when you add and subtract infinite quantities. Depending on the circumstance it may or may not give you sensible answers. Getting that right is what math is all about.
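
A quick numerical check makes the domain of convergence vivid: for $|a| < 1$ the partial sums of $a + a^2 + a^3 + \cdots$ approach $1/(1-a) - 1$, while outside that domain the partial sums explode and the closed form no longer answers the question.

```python
def geometric_sum(a, n_terms=200):
    """Partial sum a + a^2 + ... + a^n_terms of the geometric series."""
    return sum(a**n for n in range(1, n_terms + 1))

# Inside the domain of convergence the closed form matches the series.
for a in (0.5, 0.9, -0.5):
    assert abs(geometric_sum(a) - (1 / (1 - a) - 1)) < 1e-6

# Outside it (a = 2) the partial sums blow up, even though plugging
# a = 2 into 1/(1-a) - 1 would give the meaningless answer -2.
print(geometric_sum(2, n_terms=20))
```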

# The fear is real

When I was in graduate school, my friends and I would jokingly classify the utility of research in terms of the order in which the researcher would be killed after the revolution. So, in physics, if you were working on, say, galaxy formation in the early universe, you would be killed before someone working on the properties of hydrogen at low temperatures, who would be killed before someone working on building a fusion reactor. This was during the Cold War, and thus the specters of Stalin and Mao still loomed large. We did not joke this way with fear or disdain but rather with a somewhat bemused acknowledgment that we were afforded the luxury of working on esoteric topics while much of the world still did not have running water. In those days, the left-right divide was between the small-government neoliberals (the conservatives of the day, who advocated for freer and more deregulated markets) and the bigger-government New Deal liberals (who wanted more government action to address economic inequities). We certainly had fierce debates, but they were always rather abstract. We never thought our lives would really change that much.

By the time I had finished and started my academic career, it was clear that the neoliberals had prevailed. The Soviet Union had collapsed, AT&T was broken up, and the Democratic president proclaimed the era of big government was over. Francis Fukuyama wrote “The End of History and the Last Man” arguing that western liberal democracy had triumphed over communism and would be the last form of government. I was skeptical then because I thought we could do better but I really didn’t consider that it could get worse.

But things got worse. We had the bursting of the dot com bubble, 9/11, the endless wars, the great recession, and now perhaps the twilight of democracy as Anne Applebaum laments in her most recent book. We can find blame everywhere – globalization, automation, the rise of China, out of touch elites, the greedy 1%, cynical politicians, the internet, social media, and so forth. Whatever the reason, this is an era where no one is happy and everyone is fearful.

The current divide in the United States is very real and there is fear on both sides. On one side, there is fear that an entire way of life is being taken away – a life of a good secure job, a nuclear family with well defined roles, a nice house, neighbors who share your values and beliefs, a government that mostly stays out of the way but helps when you are in need, the liberty to own a firearm, and a sense of community and shared sacrifice. On the other side, there is the fear that progress is being halted, that a minority will forever suppress a majority, that social, racial, and economic justice will never be achieved, that democracy itself is in peril, and that a better future will always be just out of reach.

What is most frustrating to me is that these points of view are not necessarily mutually exclusive. I don’t know how we can reconcile these differences, but my biases and priors incline me to believe that we could alleviate some of the animosity and fear if we addressed income insecurity. While I think income inequality is a real problem, a more pressing concern is that a large segment of the population on both sides of the divide lives continuously on a precipice of economic ruin, which has been made unavoidably apparent by our current predicament. I really think we need to consider a universal basic income. I also think it has to be universal because suspicion of fraud, and the resentment it breeds, is a real issue. Everyone gets the check, and those with sufficient incomes and wealth simply pay it back in taxes.

# How science dies

Nietzsche famously wrote:

> God is dead. God remains dead. And we have killed him.

This quote is often used as an example of Nietzsche’s nihilism, but it is much more complicated than that. The words are actually spoken by a madman in Nietzsche’s book The Gay Science. According to philosopher Simon Critchley, the quote is meant to be descriptive rather than normative. What Nietzsche was getting at is that Christianity is a religion that values provable truth, and as a result of this truth seeking, science arose. Science in turn generated skepticism of revealed truth and of the concept of God. Thus, the end of Christianity was built into Christianity.

Borrowing from this analysis, science may also have a built-in mechanism for its own doom. An excellent article in this month’s Technology Review describes the concept of epistemic dependence: science and technology are now so complicated that no single person can understand all of them. In my own work, I could not reproduce a single experiment of my collaborators. Our collaborations work because we trust each other. I don’t really know how scientists identify new species of insects, or how paleontologists can tell what species a bone fragment belongs to, or all the details of the proof of the Poincare conjecture. However, I do understand how science and math work and trust that the results are based on those methods.

But what about people who are not trained in science? If you tell them that the universe was formed 14 billion years ago in a Big Bang and that 99% of all the stuff in the universe is completely invisible, why would they believe you? Why is that more believable than the earth being formed six thousand years ago in seven days? In both cases, knowledge is transferred to them from an authority. Sure, you can say that because of science we live longer and have refrigerators, cell phones, and Netflix, so we should believe scientists. On the other hand, a charismatic conman could tell them that they have those things because they were gifts from super-advanced aliens. Depending on the sales job and one’s priors, it is not clear to me which would be more convincing.

So perhaps we need more science education? Well, after half a century of focus on science education, science literacy is still not very high in the general public. I doubt many people could explain how a refrigerator works, much less the second law of thermodynamics, and forget about quantum mechanics. Arthur C. Clarke’s third law, that “any sufficiently advanced technology is indistinguishable from magic,” is more applicable than ever. While it is true that science has delivered on producing better stuff, it does not necessarily make us more fulfilled or happier. I can easily see a future where a large fraction of the population simply turns away from science with full knowledge of what they are doing. That would be the good outcome. The bad one is that people start to turn against science and scientists because someone has convinced them that all of their problems (and none of the good stuff) are due to science and scientists. They would then go and destroy the world as we know it without really intending to. I can see this happening too.

# Nobel Prize has outlived its usefulness

The Nobel Prize in Physiology or Medicine was awarded today for the discovery of Hepatitis C. The work is clearly deserving of recognition, but this is another case where definitely more than three people played an essential role in the work. I really think that the Nobel Prize should change its rules to allow for more winners. Below is my post from when one of the winners of this year’s prize, Michael Houghton, turned down the Gairdner Award in 2013:

Hepatitis C and the folly of prizes

The scientific world was set slightly aflutter when Michael Houghton turned down the prestigious Gairdner Award for the discovery of Hepatitis C. Harvey Alter and Daniel Bradley were the two other recipients. Houghton, who had previously received the Lasker Award with Alter, felt he could not accept one more award because two colleagues, Qui-Lim Choo and George Kuo, did not receive either of these awards, even though their contributions were equally important.

Hepatitis, which literally means inflammation of the liver, was characterized by Hippocrates and known to be infectious since the 8th century. The disease was postulated to be viral at the beginning of the 20th century, and by the 1960s two viruses, termed Hepatitis A and Hepatitis B, had been established. However, there still seemed to be another unidentified infectious agent, which was termed Non-A Non-B Hepatitis (NANBH).

Michael Houghton, George Kuo and Qui-Lim Choo were all working at the Chiron Corporation in the early 1980s. Houghton started a project to discover the cause of NANBH in 1982, with Choo joining a short time later. They made significant progress in generating mouse monoclonal antibodies with some specificity to NANBH-infected materials from chimpanzee samples received from Daniel Bradley at the CDC. They used the antibodies to screen cDNA libraries from infected materials but had not isolated an agent. George Kuo had his own lab at Chiron working on other projects but would interact with Houghton and Choo. Kuo suggested that they try blind cDNA immunoscreening on serum derived from actual NANBH patients. This approach was felt to be too risky, but Kuo made a quantitative assessment that showed it was viable. After two years of intensive and heroic screening by the three of them, they identified one clone that was clearly derived from the NANBH genome and not from human or chimp DNA. This was definitive proof that NANBH was a virus, now called Hepatitis C. Kuo then developed a prototype of a clinical Hepatitis C antibody detection kit and used it to screen a panel of NANBH blood provided by Harvey Alter of the NIH. Kuo’s test was a resounding success, and the blood test that came out of that work has probably saved 300 million or more people from Hepatitis C infection.

The question then is who deserves the prizes. Is it Bradley and Alter, who did careful and diligent work obtaining samples, or is it Houghton, Choo, and Kuo, who did the heroic experiments that isolated the virus? For completely unknown reasons, the Lasker was awarded to just Houghton and Alter, which primed the pump for more prizes for those two. Now that the Lasker and Gairdner prizes have been awarded, that leaves just the Nobel Prize. The scientific community could get it right this time and award it to Kuo, Choo, and Houghton.

Addendum added 2013-5-2: I should add that many labs from around the world were also trying to isolate the infective agent of NANBH, and all failed to identify the correct samples from Alter’s panel. It is not clear how much longer it would have taken, and how many more people would have been infected, had Kuo, Choo, and Houghton not succeeded when they did.

# ICCAI talk

I gave a talk at the International Conference on Complex Acute Illness (ICCAI) with the title Forecasting COVID-19. I talked about some recent work with FDA collaborators on scoring a large number of publicly available COVID-19 epidemic projection models, which showed that they are unable to reliably forecast COVID-19 beyond a few weeks. The slides are here.

# Why it is so hard to forecast COVID-19

I’ve been actively engaged in trying to model the COVID-19 pandemic since April, and after 5 months I am pretty confident that models can estimate what is happening at this moment, such as the number of people who are currently infected but not counted as cases. Back at the end of April, our model predicted that the case ascertainment ratio (total cases/total infected) was on the order of 1 in 10, varying drastically between regions. That number has gone up with the advent of more testing, so it may now be on the order of 1 in 4, or possibly higher in some regions. These numbers more or less match the antibody test data.

However, I do not really trust my model to forecast what will happen a month from now much less six months. There are several reasons. One is that while the pandemic is global the dynamics are local and it is difficult if not impossible to get enough data for a detailed fine grained model that captures all the interactions between people. Another is that the data we do have is not completely reliable. Different regions define cases and deaths differently. There is no universally accepted definition for what constitutes a case or a death and the definition can change over time even for the same region. Thus, differences in death rates between regions or months could be due to differences in the biology of the virus, medical care, or how deaths are defined and when they are recorded. Depending on the region or time, a person with a SARS-CoV-2 infection who dies of a cardiac arrest may or may not be counted as a COVID-19 death. Deaths are sometimes not officially recorded for a week or two, particularly if the physician is overwhelmed with cases.

However, the most important reason models have difficulty forecasting the future is that modeling COVID-19 is as much about modeling the behavior of people and government policy as it is about modeling the biology of disease transmission, if not more so, and we are just not very good at predicting what people will do. This was pointed out by economist John Cochrane months ago, which I blogged about (see here). You can see why getting behavior correct is crucial to modeling a pandemic from the classic SIR model

$\frac{dS}{dt} = -\beta SI$

$\frac{dI}{dt} = \beta SI - \sigma I$

where $I$ and $S$ are the infected and susceptible fractions of the initial population, respectively. Behavior greatly affects the rate of infection $\beta$, and small errors in $\beta$ amplify exponentially. Suppression and mitigation measures such as social distancing, mask wearing, and vaccines reduce $\beta$, while super-spreading events increase it. The amplification of error is readily apparent near the onset of the pandemic, where $I$ grows like $e^{(\beta - \sigma) t}$. If you change $\beta$ by $\delta\beta$, then $I$ will grow like $e^{(\beta + \delta\beta - \sigma) t}$, and thus the ratio between the two trajectories grows (or decays) exponentially like $e^{\delta\beta t}$. The infection rate also sets the initial reproduction number $R_0 = \beta/\sigma$. In a previous post, I derived approximate expressions for how long a pandemic will last and showed that the duration scales as $1/(R_0-1)$; thus errors in $\beta$ produce errors in $R_0$, which could produce very large errors in the predicted duration of the pandemic if $R_0$ is near one.
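
To see the sensitivity concretely, here is a minimal forward-Euler integration of the SIR equations above (all parameter values are illustrative, not fitted to any data): a 10% error in $\beta$ changes the early-phase infected fraction by roughly $e^{\delta\beta\, t}$.

```python
import math

def sir_infected(beta, sigma=0.1, i0=1e-6, t_end=30.0, dt=0.001):
    """Forward-Euler integration of dS/dt = -beta*S*I,
    dI/dt = beta*S*I - sigma*I; returns I at time t_end."""
    s, i = 1.0 - i0, i0
    for _ in range(round(t_end / dt)):
        ds = -beta * s * i
        di = beta * s * i - sigma * i
        s, i = s + ds * dt, i + di * dt
    return i

i_a = sir_infected(beta=0.30)
i_b = sir_infected(beta=0.33)  # a 10% error in beta
ratio = i_b / i_a
# In the early exponential phase the ratio is roughly e^{delta_beta * t}.
print(ratio, math.exp(0.03 * 30.0))
```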

The infection rate is different everywhere and constantly changing and while it may be possible to get an estimate of it from the existing data there is no guarantee that previous trends can be extrapolated into the future. So while some of the COVID-19 models do a pretty good job at forecasting out a month or even 6 weeks (e.g. see here), I doubt any will be able to give us a good sense of what things will be like in January.

# There is no herd immunity

In order for an infectious disease (e.g. COVID-19) to spread, the infectious agent (e.g. SARS-CoV-2) must jump from one person to another. The rate of this happening depends on the rate that an infectious person will come into contact with a susceptible person multiplied by the rate of the virus making the jump when the two people are nearby. The reproduction number R is obtained from the rate of infection spread times the length of time a person is infectious. If R is above one then a single person will infect more than one person on average and thus the pandemic will grow. If it is below one, then the pandemic will diminish. Herd immunity happens when enough people have been infected that the rate of finding a susceptible person becomes low enough that R drops below one. You can find the math behind this here.
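
The threshold itself follows from requiring the effective reproduction number $R = R_0 S$, where $S$ is the susceptible fraction, to fall below one, giving an immune fraction of $1 - 1/R_0$. A minimal sketch under the classic assumption of permanent immunity (the very assumption questioned below; the $R_0$ values are illustrative):

```python
def herd_immunity_threshold(r0):
    """Immune fraction needed so that the effective reproduction number
    R = r0 * S (S = susceptible fraction) drops below one."""
    if r0 <= 1:
        return 0.0  # with R0 <= 1 the epidemic declines on its own
    return 1.0 - 1.0 / r0

# Illustrative R0 values only; estimates for real diseases vary widely.
for r0 in (1.5, 2.5, 4.0):
    print(r0, herd_immunity_threshold(r0))
```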

However, a major assumption behind herd immunity is that once a person is infected they can never be infected again, and this is not true for many infectious diseases, including other coronaviruses and the flu. There are reports that people can be reinfected by SARS-CoV-2. This is not fully validated, but my money is on there being no lasting immunity to SARS-CoV-2, and that means there is never any herd immunity. COVID-19 will just wax and wane forever.

This doesn’t necessarily mean it will be deadly forever. In all likelihood, each time you are infected your immune response will be more measured, and perhaps SARS-CoV-2 will eventually be no worse than the common cold or the seasonal flu. But the fatality rate for first-time infection will still be high, especially for the elderly and vulnerable. Those people will need to remain vigilant until there is a vaccine, and there is still no guarantee that a vaccine will work in the field. If we’re lucky and we get a working vaccine, it is likely that the vaccine will not have a lasting effect, and just like the flu we will need to be vaccinated annually or even semi-annually.

# Another Covid-19 plateau

The world seems to be in another Covid-19 plateau for new cases. The nations leading the last surge, namely the US, Russia, India, and Brazil, are now stabilizing or declining, while some regions in Europe, and in particular Spain, are trending back up. If the pattern repeats, we will be in this new plateau for a month or two and then trend back up again, just in time for flu season to begin.

# Why we need a national response

It seems quite clear now that we do not do a very good job of projecting COVID-19 progression. There are many reasons. One is that it is hard to predict how people and governments will behave. A fraction of the population will practice social distancing and withdraw from usual activity in the absence of any governmental mandates, another fraction will not change anything regardless of official policy, and the rest are in between. I for one get more scared of this thing the more I learn about it. Who knows what the long-term consequences will be, particularly for autoimmune disease. The virus triggers a massive immune response everywhere in the body, and the immune system could easily develop a memory response to your own cells in addition to the virus.

The virus also spreads in local clusters that may reach local saturation before infecting new clusters, but the cross-cluster transmission events are low probability and hard to detect. The virus reached American shores in early January, and maybe even earlier, but most of those early introductions died out. This is because the transmission rate is highly variable. A mean reproduction number of 3 could mean everyone has R = 3, or that most people transmit with R less than 1 while a small number of people (or events) transmit with very high R. (Nassim Nicholas Taleb has written copiously on the hazards of highly variable (fat-tailed) distributions. For those with mathematical backgrounds, I highly recommend reading his technical volumes: The Technical Incerto. Even if you don’t believe most of what he says, you can still learn a lot.) Thus it is hard to predict when an event will start a local epidemic, although large gatherings of people (e.g. weddings, conventions, etc.) are a good place to start. Once an epidemic starts, it grows exponentially and then starts to saturate, either by running out of people in the locality to infect, or by people changing their behavior, or, more likely, both. Parts of New York may be above the herd immunity threshold now.
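The effect of a fat-tailed transmission distribution on die-out can be illustrated with a toy branching-process calculation (my own sketch; the 80/20 split and Poisson offspring distributions are assumptions for illustration, not estimates). The probability that a single introduction goes extinct is the smallest fixed point of the offspring probability generating function:

```python
import math

def extinction_prob(G, iters=200):
    """Extinction probability of a branching process: the smallest fixed point
    of the offspring probability generating function G, found by iterating from 0."""
    q = 0.0
    for _ in range(iters):
        q = G(q)
    return q

# Everyone transmits alike: secondary infections ~ Poisson(3)
q_uniform = extinction_prob(lambda q: math.exp(3 * (q - 1)))

# Same mean R = 3, fat-tailed: 80% infect no one, 20% infect Poisson(15) others
q_fat = extinction_prob(lambda q: 0.8 + 0.2 * math.exp(15 * (q - 1)))

print(f"P(introduction dies out), uniform R = 3:    {q_uniform:.3f}")
print(f"P(introduction dies out), fat-tailed R = 3: {q_fat:.3f}")
```

With uniform transmission only about 6% of introductions fizzle, but with the fat-tailed version over 80% die out despite the identical mean, consistent with most early introductions going nowhere until a super-spreading event takes off.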

Thus at this point, I think we need to take a page out of Taleb’s book (like literally, as my daughter would say) and not worry too much about forecasting. We can use it as a guide, but we have enough information to know that most people are susceptible, about a third will be asymptomatic if infected (which doesn’t mean they won’t have long-term consequences), about a fifth to a tenth will be counted as a case, and a few percent of those will die, with strong dependence on age and pre-existing conditions. We can wait around for a vaccine or herd immunity and in the process let many more people die (I don’t know how many, but I do know that the total number of deaths is a nondecreasing quantity), or we can act now everywhere to shut this down and impose a strict quarantine on anyone entering the country until they have tested negative 3 times with a high-specificity PCR test (and maybe 8 out of 17 times with a low-specificity and low-sensitivity antigen test).

Acting now everywhere means either 1) shutting everything down for at least two weeks: no Amazon or Grubhub or Doordash deliveries, no going to Costco and Walmart, not even going to the supermarket. It means paying everyone in the country without an income some substantial fraction of their salary. It means distributing a two-week supply of food to everyone. It means truly essential workers, like people keeping electricity going and hospital workers, living in a quarantine bubble hotel, like the NBA and NHL. Or 2) testing everyone who wants to leave their house every day and paying them to quarantine at home or in a hotel if they test positive. Both plans require national coordination and a lot of effort. The CARES Act package has run out and we are heading for economic disaster while the pandemic rages on. As a recent president once said, “What have you got to lose?”

# The battle over academic freedom

In the wake of George Floyd’s death, almost all of institutional America put out official statements decrying racism and some universities initiated policies governing allowable speech and research. This was followed by the expected dissent from those who worry that academic freedom is being suppressed (see here, here, and here for some examples). Then there is the (in)famous Harper’s Magazine open letter decrying Cancel Culture, which triggered a flurry of counter responses (e.g. see here and here).

While some faculty members in the humanities and (non-life) sciences are up in arms over the thought of a committee of their peers judging what should be allowable research, I do wish to point out that their colleagues over on the Medical campus have had to get approval for human and animal research for decades. Research on human subjects must first pass through an Institutional Review Board (IRB), while animal experiments must clear the Institutional Animal Care and Use Committee (IACUC). These panels ensure that the proposed work is ethical, sound, and justified. Even research that is completely noninvasive, such as analysis of genetic data, must pass scrutiny to ensure the data is not misused and subject identities are strongly protected. Almost all faculty members would agree that this step is important and necessary. History is rife with questionable research, ranging from the careless to the criminal. Is it so unreasonable to extend such a concept to the rest of campus?

# New paper in eLife

I never thought this would ever be finished but it’s out. We hedge in the paper but my bet is that MYC is a facilitator of an accelerator essential for gene transcription.

# Dissecting transcriptional amplification by MYC

eLife 2020;9:e52483

Zuqin Nie, Chunhua Guo, Subhendu K Das, Carson C Chow, Eric Batchelor, S Stoney Simons Jr, David Levens

## Abstract

Supraphysiological MYC levels are oncogenic. Originally considered a typical transcription factor recruited to E-boxes (CACGTG), another theory posits MYC a global amplifier increasing output at all active promoters. Both models rest on large-scale genome-wide ‘-omics’. Because the assumptions, statistical parameter and model choice dictates the ‘-omic’ results, whether MYC is a general or specific transcription factor remains controversial. Therefore, an orthogonal series of experiments interrogated MYC’s effect on the expression of synthetic reporters. Dose-dependently, MYC increased output at minimal promoters with or without an E-box. Driving minimal promoters with exogenous (glucocorticoid receptor) or synthetic transcription factors made expression more MYC-responsive, effectively increasing MYC-amplifier gain. Mutations of conserved MYC-Box regions I and II impaired amplification, whereas MYC-box III mutations delivered higher reporter output indicating that MBIII limits over-amplification. Kinetic theory and experiments indicate that MYC activates at least two steps in the transcription-cycle to explain the non-linear amplification of transcription that is essential for global, supraphysiological transcription in cancer.

# How to make a fast but bad COVID-19 test good

Among the myriad of problems we are having with the COVID-19 pandemic, faster testing is one we could actually improve. The standard test for the presence of SARS-CoV-2 virus uses PCR (polymerase chain reaction), which amplifies targeted viral RNA. It is accurate (high specificity) but requires relatively expensive equipment and reagents that are currently in short supply. There are reports of wait times of over a week, which renders a test useless for contact tracing.

An alternative to PCR is an antigen test, which detects the presence of protein fragments from the virus. These tests can in principle be very cheap and fast, and could even be administered on paper strips. They are generally much less reliable than PCR and thus have not been widely adopted. However, as I show below, by applying the test multiple times the noise can be suppressed and a poor test can be made arbitrarily good.

The performance of binary tests is usually gauged by two quantities – sensitivity and specificity. Sensitivity is the probability that you test positive given that you actually are infected (the true positive rate). Specificity is the probability that you test negative given that you actually are not infected (the true negative rate). For a pandemic, sensitivity is more important than specificity because missing someone who is infected means you could put lots of people at risk, while a false positive just means the person falsely testing positive is inconvenienced (provided they cooperatively self-isolate). Current PCR tests have very high specificity but relatively low sensitivity (as low as 0.7), and since we don’t have enough capacity to retest, a lot of infected people could be escaping detection.

The way to make any test have arbitrarily high sensitivity and specificity is to apply it multiple times and take some sort of average, using the fewest number of applications possible. Suppose we administer $n$ tests on the same subject. The probability of getting more than $k$ positive tests if the person is positive is $Q(k,n,q) = 1 - CDF(k|n,q)$, where $CDF$ is the cumulative distribution function of the Binomial distribution (i.e. the probability that the number of Binomially distributed events is less than or equal to $k$). If the person is negative, then the probability of $k$ or fewer positives is $R(k,n,r) = CDF(k|n,1-r)$. We thus want to find the minimal $n$ given a desired sensitivity and specificity, $q'$ and $r'$. This means that we need to solve the constrained optimization problem: find the minimal $n$ under the constraints that $k < n$, $Q(k,n,q) \ge q'$, and $R(k,n,r)\ge r'$. $Q$ decreases and $R$ increases with increasing $k$, and vice versa for $n$. We can easily solve this problem by sequentially increasing $n$ and scanning through $k$ until the two constraints are met. I’ve included the Julia code to do this below. For example, starting with a test with sensitivity .7 and specificity 1 (like a PCR test), you can create a new test with greater than .95 sensitivity and specificity by administering the test 3 times and looking for at least one positive test. However, if the specificity drops to .7 then you would need to find more than 8 positives out of 17 applications to be 95% sure you have COVID-19.

using Distributions

# Probability of more than k positives out of n tests, each with sensitivity q
function Q(k, n, q)
    d = Binomial(n, q)
    return 1 - cdf(d, k)
end

# Probability of k or fewer positives out of n tests, each with specificity r
function R(k, n, r)
    d = Binomial(n, 1 - r)
    return cdf(d, k)
end

# Find the minimal number of applications n (and threshold k) such that
# declaring a subject positive when more than k of n tests are positive
# achieves sensitivity qp and specificity rp
function optimizetest(q, r, qp=0.95, rp=0.95)
    nout = 0
    kout = 0
    for n in 1:100
        for k in 0:n-1
            if R(k, n, r) >= rp && Q(k, n, q) >= qp
                kout = k
                nout = n
                break
            end
        end
        if nout > 0
            break
        end
    end
    return nout, kout
end
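As a cross-check of the numbers quoted above, here is the same search sketched in Python using only the standard library (my translation, not part of the original post):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def optimize_test(q, r, qp=0.95, rp=0.95):
    """Smallest n (and threshold k) such that declaring a subject positive
    when more than k of n tests are positive has sensitivity >= qp and
    specificity >= rp, given single-test sensitivity q and specificity r."""
    for n in range(1, 101):
        for k in range(n):
            sens = 1 - binom_cdf(k, n, q)   # P(more than k positives | infected)
            spec = binom_cdf(k, n, 1 - r)   # P(k or fewer positives | not infected)
            if sens >= qp and spec >= rp:
                return n, k
    return None

print(optimize_test(0.7, 1.0))  # PCR-like test: (3, 0), i.e. any positive out of 3
print(optimize_test(0.7, 0.7))  # noisy test: (17, 8), more than 8 positives out of 17
```

This reproduces both claims in the post: 3 applications suffice when specificity is perfect, and 17 are needed when both sensitivity and specificity are 0.7.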

# Slides for Covid-19 talk

Here are my slides for my recent COVID-19 talk at the Centre for Applied Mathematics in BioScience and Medicine (CAMBAM). It’s an updated version of the one I gave to the FDA.

# Remember the ventilator

According to our model, the global death rate due to Covid-19 is around 1 percent for all infected (including unreported). However, if it were not for modern medicine and in particular the ventilator, the death rate would be much higher. Additionally, the pandemic first raged in the developed world and is only recently engulfing parts of the world where medical care is not as ubiquitous although this may be mitigated by a younger populace in those places. The delay between the appearance of a Covid-19 case and deaths is also fairly long; our model predicts a mean of over 50 days. So the lower US death rate compared to April could change in a month or two when the effects of the recent surges in the US south and west are finally felt.

# New paper on Wilson-Cowan Model

I forgot to post that my fellow Yahya and I have recently published a review paper on the history and possible future of the Wilson-Cowan Model in the Journal of Neurophysiology tribute issue to Jack Cowan. Thanks to the patient editors for organizing this.

# Before and beyond the Wilson–Cowan equations

## Abstract

The Wilson–Cowan equations represent a landmark in the history of computational neuroscience. Along with the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that lead to the Wilson–Cowan equations and discuss how to extend beyond them.

PDF here.

# How long and how high for Covid-19

Cases of Covid-19 are trending back up globally and in the US. The world has nearly reached 10 million cases with over 2.3 million in the US. There is still a lot we don’t understand about SARS-CoV-2 transmission but I am confident we are nowhere near herd immunity. Our model is consistently showing that the case ascertainment ratio, that is the ratio of total SARS-CoV-2 infections to official Covid-19 cases, is between 5 and 10. That means that the US has fewer than 25 million infections while the world has fewer than 100 million.

Herd immunity means that for any fixed reproduction number, R0, the number of active infections will trend downward if the fraction of the initially susceptible population falls below 1/R0, or the total number infected is higher than 1 - 1/R0. Thus, for an R0 of 4, three quarters of the population needs to be infected to reach herd immunity. However, the total number that will eventually be infected, as I will show below, will be

$1 -\frac{e^{-R_0}}{1- R_0e^{-R_0}}$

which is considerably higher. Thus, mitigation efforts to reduce R0 will reduce the total number infected. (2020-06-27: This expression is not accurate when R0 is near 1. For a formula in that regime, see Addendum.)
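To put numbers on the gap between the threshold and the final size, here is a quick check in Python (my own sketch; it solves the implicit final-size relation $s = e^{-R_0(1-s)}$ for the fraction never infected, ignoring $l_0$):

```python
import math

def final_size(R0, iters=1000):
    """Fraction never infected: the nontrivial root of s = exp(-R0*(1 - s)),
    found by fixed-point iteration starting from s = 0."""
    s = 0.0
    for _ in range(iters):
        s = math.exp(-R0 * (1 - s))
    return s

def approx_final_size(R0):
    """The approximation derived below: s ~ e^{-R0} / (1 - R0 e^{-R0})."""
    return math.exp(-R0) / (1 - R0 * math.exp(-R0))

R0 = 4
print(f"herd-immunity threshold: {1 - 1/R0:.1%} infected")
print(f"eventual total infected: {1 - final_size(R0):.1%}")
print(f"approximation:           {1 - approx_final_size(R0):.1%}")
```

For R0 = 4, the epidemic turns over once 75% are infected, but the overshoot carries the eventual total to about 98%.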

Some regions in Western Europe, East Asia, and even the US have managed to suppress R0 below 1 and cases are trending downward. In the absence of reintroduction of SARS-CoV-2 carriers, Covid-19 can be eliminated in these regions. However, as the recent spikes in China, South Korea, and Australia have shown, this requires continual vigilance. As long as any person remains infected in the world, there is always a chance of re-emergence. As long as new cases are increasing or plateauing, R0 remains above 1. As I mentioned before, plateauing is not a natural feature of epidemic prediction models, which generally either go up or go down. Plateauing requires either continuous adjustment of R0 through feedback or propagation through the population as a wave front, like a lawn mower cutting grass. The latter is what actually seems to be going on. Regions rise and fall in succession. As one region reaches a peak and comes down, either through mitigation or rapid spread of SARS-CoV-2, Covid-19 takes hold in another. We saw China and East Asia rise and fall, then Italy, then the rest of Western Europe, then New York and New Jersey, and so forth in series, not in parallel. Now it is spreading throughout the rest of the USA, South America, and Eastern Europe. Africa has been spared so far but it is probably next, as Covid-19 is beginning to explode in South Africa.

A reduction in R0 also delays the time to reach the peak. As a simple example, consider the standard SIR model

$\frac{ds}{dt} = -\beta sl$

$\frac{dl}{dt} = \beta sl -\sigma l$

where $s$ is the fraction of the population susceptible to SARS-CoV-2 infection and $l$ is the fraction of the population actively infectious. Below are simulations of the pandemic progression for R0 = 4 and 2.
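The plots don’t reproduce here, so here is a minimal forward-Euler simulation sketch you can run instead (my own; the recovery rate $\sigma = 0.1$ per day and $l_0 = 10^{-4}$ are assumed for illustration):

```python
def simulate_sir(R0, sigma=0.1, l0=1e-4, dt=0.01, t_max=400):
    """Forward-Euler integration of ds/dt = -beta*s*l, dl/dt = beta*s*l - sigma*l.
    Returns (time of peak infection, fraction never infected)."""
    beta = R0 * sigma
    s, l, t = 1.0 - l0, l0, 0.0
    t_peak, l_peak = 0.0, l0
    while t < t_max:
        ds = -beta * s * l
        dl = beta * s * l - sigma * l
        s += ds * dt
        l += dl * dt
        t += dt
        if l > l_peak:
            l_peak, t_peak = l, t
    return t_peak, s

t4, s4 = simulate_sir(4)
t2, s2 = simulate_sir(2)
print(f"R0 = 4: peak at day {t4:.0f}, {s4:.1%} never infected")
print(f"R0 = 2: peak at day {t2:.0f}, {s2:.1%} never infected")
```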

We see that halving R0 basically doubles the time to reach the peak but much more than doubles the number of people who never get infected. We can see why this is true by analyzing the equations. Dividing the two SIR equations gives

$\frac{dl}{ds} = \frac{\sigma l -\beta sl}{\beta sl}$,

which integrates to $l = \frac{\sigma}{\beta} \ln s - s + C$. If we suppose that initially $s=1$ and $l = l_0<<1$ then we get

$l = \frac{1}{R_0} \ln s + 1 - s + l_0$ (*)

where $R_0 = \beta/\sigma$ is the reproduction number. The total number infected will be $1-s$ for $l=0$. Rearranging gives

$s = e^{-R_0(1+l_0-s)}$

If we assume that $R_0 s <<1$ and ignore $l_0$ we can expand the exponential and solve for $s$ to get

$s \approx \frac{e^{-R_0}}{1- R_0e^{-R_0}}$

This is the fraction of the population that never gets infected, which is also the probability that you won’t be infected. It gets smaller as $R_0$ increases. So reducing $R_0$ can exponentially reduce your chances of being infected.

To figure out how long it takes to reach the peak, we substitute equation (*) into the SIR equation for $s$ to get

$\frac{ds}{dt} = -\beta(\frac{1}{R_0} \ln s + 1 - s + l_0) s$

We compute the time to peak, $T$, by separating variables and integrating both sides. The peak is reached when $s = 1/R_0$.  We must thus compute

$T= \int_0^T dt =\int_{1/R_0}^1 \frac{ds}{ \beta(\frac{1}{R_0} \ln s + 1 - s +l_0) s}$

We can’t do this integral in closed form, but if we set $s = 1- z$ and assume $z<< 1$, then we can expand $\ln s \approx -z$ and obtain

$T= \int_0^T dt =\int_0^{l_p} \frac{dz}{ \beta(-\frac{1}{R_0}z + z +l_0) (1-z)}$

where $l_p = 1-1/R_0$. This can be re-expressed as

$T=\frac{1}{ \beta (l_0+l_p)}\int_0^{l_p} (\frac{1}{1-z} + \frac{l_p}{l_p z + l_0}) dz$

which is integrated to

$T= \frac{1}{ \beta (l_0+l_p)} (-\ln(1-l_p) + \ln (l_p^2 + l_0)-\ln l_0)$

If we assume that $l_0<< l_p$, then we get an expression

$T \approx \frac{\ln (R_0l_p^2/l_0)}{\sigma( R_0 -1)}$

So, $T$ is proportional to the recovery time $1/\sigma$ and inversely related to $R_0$ as expected, but if $l_0$ is very small (say 0.00001) compared to $R_0$ (say 3), then $\ln (R_0/l_0)$ can be big (around 10), which may explain why it takes so long for the pandemic to get started in a region. If the infection rate is very low in a region, the time it takes for a super-spreader event to make an impact could be much longer than expected (10 times the infection clearance time, which could be two weeks or more).
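As a sanity check on this estimate (my own, with assumed illustrative parameters), we can compare the approximation against a direct simulation of the SIR equations:

```python
import math

def peak_time_sim(R0, sigma=0.1, l0=1e-5, dt=0.01, t_max=200):
    """Time of peak infection from forward-Euler integration of the SIR model."""
    beta = R0 * sigma
    s, l, t = 1.0 - l0, l0, 0.0
    t_peak, l_peak = 0.0, l0
    while t < t_max:
        ds = -beta * s * l
        dl = beta * s * l - sigma * l
        s += ds * dt
        l += dl * dt
        t += dt
        if l > l_peak:
            l_peak, t_peak = l, t
    return t_peak

def peak_time_approx(R0, sigma=0.1, l0=1e-5):
    """The estimate T ~ ln(R0 * lp^2 / l0) / (sigma * (R0 - 1)) derived above."""
    lp = 1 - 1 / R0
    return math.log(R0 * lp**2 / l0) / (sigma * (R0 - 1))

R0 = 3
print(f"simulated peak: day {peak_time_sim(R0):.0f}")
print(f"approximation:  day {peak_time_approx(R0):.0f}")
```

For these parameters the approximation lands within a few percent of the simulated peak time, despite the crude expansion.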

Addendum 2020-06-26: Fixed typos in equations and added clarifying text to last paragraph

Addendum 2020-06-27: The approximation for total infected is not very good when $R_0$ is near 1. A better one can be obtained by expanding the exponential to quadratic order, which gives

$s = \frac{1}{{R_{0}}^2} ( e^{R_0} - R_0 - \sqrt{(e^{R_0}-R_0)^2 - 2{R_0}^2})$

However, for $R_0$ near 1, a better expansion is to substitute $z = 1-s$ into equation (*) and obtain

$l = \frac{1}{R_0} \ln (1-z) + z + l_0$

Setting $l=0$, rearranging, and exponentiating, we obtain

$1 - z = e^{-R_0(l_0+z)}$, which can be expanded to yield

$1- z = e^{-R_0 l_0}(1 - R_0z + R_0^2 z^2/2)$

Solving for $z$ gives the total fraction infected to be

$z = (R_0 -e^{R_0l_0} + \sqrt{(R_0-e^{R_0l_0})^2 - 2 R_0^2(1-e^{R_0l_0})})/R_0^2$

This took me much longer than it should have.
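A quick numerical check (mine, not part of the original post) confirms that near $R_0 = 1$ the quadratic formula is the better approximation:

```python
import math

def z_exact(R0, l0=0.001, iters=500):
    """Total fraction infected: solve 1 - z = exp(-R0*(l0 + z)) by fixed-point iteration."""
    z = 0.5
    for _ in range(iters):
        z = 1 - math.exp(-R0 * (l0 + z))
    return z

def z_quadratic(R0, l0=0.001):
    """The quadratic-order formula derived above."""
    a = math.exp(R0 * l0)
    return (R0 - a + math.sqrt((R0 - a)**2 - 2 * R0**2 * (1 - a))) / R0**2

def z_simple(R0):
    """The earlier approximation: z = 1 - e^{-R0} / (1 - R0 e^{-R0})."""
    return 1 - math.exp(-R0) / (1 - R0 * math.exp(-R0))

R0 = 1.2
print(f"exact:     {z_exact(R0):.3f}")
print(f"quadratic: {z_quadratic(R0):.3f}")
print(f"simple:    {z_simple(R0):.3f}")
```

At $R_0 = 1.2$ the quadratic formula is off by roughly 10% while the earlier approximation is off by more than 50%.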

# The formal logic of legal decisions

The US Supreme Court ruled today that the ban on sex-based discrimination in Title VII of the 1964 Civil Rights Act protects employees from discrimination based on sexual orientation or gender identity. Justice Gorsuch, who is a textualist (i.e. believes that laws should be interpreted according to the written words alone, without taking into consideration the intent of the writers), writes “An employer who fires an individual for being homosexual or transgender fires that person for traits or actions it would not have questioned in members of a different sex. Sex plays a necessary and undisguisable role in the decision, exactly what Title VII forbids.” I find this to be an interesting exercise in logic. For example, consider the case of a “man married to a man”. According to Gorsuch’s logic, this cannot be an excuse to fire someone because if you replace “man” with “woman” in the first instance then you end up with the phrase “woman married to a man”, and since this is not sufficient for firing, the reason is not sex invariant. Dissenting legal scholars disagree. They argue that the correct logic is to replace all instances of “man” with “woman”, in which case you would end up with “woman married to a woman” and thus the reason for firing would be sex invariant. I think in this case Gorsuch is correct because in order to have complete sex invariance, the rule must apply for any form of exchange. The law should be applied equally in all the possible ways that one gender is replaced by the other.
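The two competing substitution tests can be made precise with a toy program (my own illustrative sketch; the firing rule is hypothetical): Gorsuch’s reading swaps the sex of the employee alone, while the dissent’s reading swaps every occurrence of sex at once.

```python
def fires(sex, spouse_sex):
    """Hypothetical firing rule: fire anyone in a same-sex marriage."""
    return sex == spouse_sex

def swap(sex):
    return "woman" if sex == "man" else "man"

# Gorsuch's test: change only the employee's sex, hold everything else fixed.
gorsuch_invariant = fires("man", "man") == fires(swap("man"), "man")

# Dissent's test: swap every occurrence of sex, including the spouse's.
dissent_invariant = fires("man", "man") == fires(swap("man"), swap("man"))

print(f"invariant under single swap: {gorsuch_invariant}")
print(f"invariant under full swap:   {dissent_invariant}")
```

Under the single swap the rule’s output flips, exposing the dependence on sex; under the simultaneous swap the dependence is hidden, which is exactly the disagreement between the opinion and the dissent.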