The global plateau turned out to be just a pause, and the growth in new cases continues. The rise seems to be driven mostly by increases in Brazil, India, and, until very recently, Russia, with plateauing in the US and European countries as they relax their mitigation policies. The pandemic is not over by a long shot. There will almost certainly be further growth in the near future.

# How much Covid-19 testing do we need?

There is a simple way to estimate how much SARS-CoV-2 PCR testing we need to start diminishing the COVID-19 pandemic. Suppose we test everyone at a rate $f$, using a PCR test with 100% sensitivity, which means we do not miss anyone who is positive, although we could still have false positives. The number of positives we will find is $f p$, where $p$ is the prevalence of infectious individuals in a given population. If positive individuals are isolated from the rest of the population until they are no longer infectious with probability $q$, then the rate of reduction in prevalence is $fqp$. To reduce the pandemic, this number needs to be higher than the rate of pandemic growth, which is given by $\beta s p$, where $s$ is the fraction of the population susceptible to SARS-CoV-2 infection and $\beta$ is the rate of transmission from an infected individual to a susceptible one upon contact. Thus, to reduce the pandemic, we need to test at a rate higher than $\beta s/q$.

In the initial stages of the pandemic $s$ is one and $\beta = R_0 \sigma$, where $R_0$ is the mean reproduction number, which is probably around 3.7, and $\sigma$ is the mean rate of becoming noninfectious, which is probably around 1/10 to 1/20 per day (i.e. an infectious period of 10 to 20 days). This gives an estimate of $\beta$ of somewhere around 0.3 per day. Thus, in the early stages of the pandemic, we would need to test everyone at least two or three times per week, provided positives are isolated. However, if people wear masks and avoid crowds then $\beta$ could be reduced. If we can get it smaller then we can test less frequently. Currently, the global average effective reproduction number is around one, so that would mean we need to test every two or three weeks. If positives don't isolate with high probability, we need to test at a higher rate to compensate. This threshold rate will also go down as $s$ goes down.

In fact, you can just test randomly at rate $f$ and monitor the positive rate. If the positive rate trends downward then you are testing enough. If it is going up then test more. In any case, we may need less testing capability than we originally thought, but we do need to test the entire population and not just suspected cases.

# The Covid-19 plateau

For the past five weeks, the rate of appearance of new Covid-19 cases has plateaued at about a hundred thousand per day. Just click on the Daily Cases tab on the JHU site to see for yourself. This is quite puzzling because while individual nations and regions are rising, falling, and plateauing independently, the global total is flat as a pancake. A simple resolution to this seeming paradox was proposed by economist John Cochrane (see his post here). The solution is rather simple, but the implications, as I will discuss in more detail below, are far reaching. The short answer is that if the world (either through behavior or policy) reacts to the severity of Covid-19 incrementally, then a plateau will arise. When cases go up, people socially distance and the number goes down; when cases go down, they relax a little and it goes back up again.

This can be made more precise with the now-famous SIR model. For the uninitiated, SIR stands for the Susceptible Infected Recovered model. It is a simple dynamical model of disease propagation that has been in use for almost a century. The basic premise of an SIR model is that at any given time, each member of the population is either infected with the virus (I), susceptible to infection (S), or recovered from infection and no longer susceptible (R). Each time an S comes across an I, it has a chance of being infected and becoming another I. An I will recover (or die) at some rate and become an R. The simplest way to implement an SIR model is to assume that people interact completely randomly and uniformly across the population and that the rates of transmission and recovery are uniform as well. This is of course a gross simplification that ignores the complexity and diversity of social interactions, the mechanisms of actual viral transmission, and the progression of disease within individuals. However, even though it misses all of these nuances, it captures many of the important dynamics of epidemics. In differential equation form, the SIR model is written as

$\frac{dS}{dt} = -\frac{\beta}{N} S I$

$\frac{dI}{dt} = \frac{\beta}{N} S I - \sigma I$   (SIR model)

where $N$ is the total number of people in the population of interest. Here, $S$ and $I$ are in units of number of people. The left hand sides of these equations are derivatives with respect to time, or rates. They have dimensions of people per unit time, say per day. From this we can infer that $\beta$ and $\sigma$ must have units of inverse day (per day) since $S$, $I$, and $N$ all have units of numbers of people. Thus $\beta$ is the infection rate per day and $\sigma$ is the recovery/death rate per day. The equation assumes that the probability of an $S$ meeting an $I$ is $I/N$. If there were one infected person in a population of a hundred, then if you were to interact completely randomly with everyone, the chance you would run into an infected person is 1/100. Actually, it would be 1/99, but in a large population the difference is insignificant and you can ignore it. Right away, we can see a problem with this assumption. I interact regularly with perhaps a few hundred people per week or month, but the chance of me meeting a person who had just come from Australia in a typical week is extremely low. Thus, it is not at all clear what we should use for $N$ in the model. The local population, the regional population, the national population?

The model assumes that once an $S$ has run into an $I$, the rate of transmission of the virus is $\beta$. The total rate of decrease of $S$ is the product of $\beta$ and $SI/N$. The rate of change of $I$ is given by the increase due to interactions with $S$ and the decrease due to recovery/death $\sigma I$. These terms all have units of person per day. Once you understand the basic principles of constructing differential equations, you can model anything, which is what I like to do. For example, I modeled the temperature dynamics of my house this winter and used it to optimize my thermostat settings. In a post from a long time ago, I used it to model how best to boil water.

Given the SIR model, you can solve it to find how $I$ and $S$ change in time. The SIR model is a system of nonlinear differential equations that does not have what is called a closed-form solution, meaning you can't write down that $I(t)$ is some nice function like $e^{t}$ or $\sin(t)$. However, you can solve it numerically on a computer or infer properties of the dynamics directly without actually solving it. For example, if $\beta SI/N$ is initially greater than $\sigma I$, then $dI/dt$ is positive and thus $I$ will increase with time. On the other hand, since $dS/dt$ is always negative, $S$ will decrease in time. As $I$ increases and $S$ decreases, with $\sigma I$ slowing the growth of $I$, at some point $\beta SI/N$ will equal $\sigma I$ and $dI/dt=0$. This is a stationary point of $I$. However, it is only a momentary stationary point because $S$ keeps decreasing, which will make $I$ start to decrease too; thus this stationary point is a maximum. In the SIR model, the stationary point is given by the condition
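For instance, here is a minimal numerical solution using simple Euler steps. The parameter values ($\beta = 0.3$ and $\sigma = 0.1$ per day, so $\beta/\sigma = 3$) are illustrative assumptions, not fits to any data; the simulation shows $I$ rising to a maximum and then falling as $S$ is depleted.

```python
# Minimal Euler integration of the SIR model; illustrative parameters only.

def simulate_sir(beta=0.3, sigma=0.1, N=1e6, I0=100, days=300, dt=0.1):
    S, I = N - I0, float(I0)
    trajectory = []  # rows of (time in days, S, I)
    for n in range(int(days / dt)):
        dS = -beta * S * I / N
        dI = beta * S * I / N - sigma * I
        S += dS * dt
        I += dI * dt
        trajectory.append((n * dt, S, I))
    return trajectory

traj = simulate_sir()
t_peak, S_peak, I_peak = max(traj, key=lambda row: row[2])
# At the peak, beta * S / N equals sigma, so S/N should be sigma/beta there.
print(f"I peaks at day {t_peak:.0f}; S/N at the peak is {S_peak / 1e6:.3f}")
```

With these numbers $\sigma/\beta = 1/3$, so $S/N$ at the simulated peak should come out near 0.333, which is the stationary condition derived next.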

$\frac{dI}{dt} = 0 = \frac{\beta}{N}SI -\sigma I$ (Stationary condition)

which you can solve to get either $I = 0$ or $\beta S/N = \sigma$. The $I=0$ point is not a peak but just reflects the fact that there is no epidemic if there are no infections. The other condition gives the peak:

$\frac{S}{N} = \frac{\sigma}{\beta} \equiv \frac{1}{R_0}$

where $R_0$ is the now-famous R naught or initial reproduction number. It is the average number of people infected by a single infected person; since $\beta$ is the infection rate and $\sigma$ is the rate at which infections disappear, their ratio is a dimensionless number. The stationary condition gives the herd immunity threshold. When the fraction of $S$ reaches $S^*/N = 1/R_0$, the pandemic will begin to decline. This is usually expressed as the fraction of those infected and no longer susceptible, $1-1/R_0$. The 70% number you have heard is because $1-1/R_0$ is approximately 70% for $R_0 = 3.3$, the presumed value for Covid-19.
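As a quick check of that arithmetic:

```python
# Herd immunity threshold 1 - 1/R0 for the presumed Covid-19 value of R0.
R0 = 3.3
threshold = 1 - 1 / R0
print(f"herd immunity threshold for R0 = {R0}: {threshold:.0%}")  # ~70%

# With the R0 = 3.7 estimate used earlier, the threshold would be higher:
print(f"for R0 = 3.7 the threshold would be {1 - 1 / 3.7:.0%}")
```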

A plateau in the number of new cases per day is an indication that we are at a stationary point in $I$. This is because only a fraction of the total infected are counted as cases, and if we assume that the case detection rate is uniform across all $I$, then the number of new cases per day is proportional to $I$. Thus, a plateau in cases means we are at a stationary point in $I$, which we saw above occurs only at a single instant in time. One resolution to this paradox would be if the peak is broad enough that it looks like a plateau. We can compute how broad the peak is from the second derivative, which gives the rate of change of the rate of change. This is the curvature of the peak. Taking the second derivative of the $I$ equation in the SIR model gives

$\frac{d^2I}{dt^2} = \frac{\beta}{N} (\frac{dS}{dt} I + S\frac{dI}{dt}) - \sigma \frac{dI}{dt}$

Using $dI/dt=0$ and the formula for $S^*$ at the peak, the curvature is

$\frac{d^2I}{dt^2} = \frac{\beta}{N} \frac{dS}{dt} I =-\left( \frac{\beta}{N}\right)^2 S^* I^2 =- \frac{I^2\beta^2}{NR_0}$

It is negative because at a peak the slope is decreasing. (A hill is easier to climb as you round the top.) There could be an apparent plateau if the curvature is very small, which is true if $I^2 \beta^2$ is small compared to $N R_0$. However, this would also mean we are already at the herd immunity threshold, which our paper and recent antibody surveys predict to be unlikely given what we know about $R_0$.

If a broad peak at the herd immunity threshold does not explain the plateau in new global daily cases, then what does? Cochrane's theory is that $\beta$ depends on $I$. He postulated that $\beta = \beta_0 e^{-\alpha I/N}$, where $\beta_0$ is the initial infectivity rate, but any decreasing function will do. When $I$ goes up, $\beta$ goes down. Cochrane attributes this to human behavior, but it could also be a result of policy and government mandate. If you plug this into the stationary condition you get

$\frac{\beta_0}{N} S^* e^{-\alpha I^*/N} -\sigma = 0$

or

$I^* =-\frac{N}{\alpha} \log(\frac{N\sigma}{S^*\beta_0})$

and the effective reproduction number $\beta(I^*) S^*/(N\sigma)$ is one.
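This quasi-stationary plateau is easy to see in a simulation. Below is a sketch of the SIR model with Cochrane's prevalence-dependent infectivity $\beta(I) = \beta_0 e^{-\alpha I/N}$. The value of $\alpha$ and the other parameters are illustrative assumptions, chosen only to make the behavior visible, not estimates for Covid-19.

```python
import math

# SIR with behavioral feedback: beta falls as prevalence rises.
# All parameter values, including alpha, are illustrative assumptions.

def simulate_behavioral_sir(beta0=0.3, sigma=0.1, alpha=500.0,
                            N=1e6, I0=100, days=600, dt=0.1):
    S, I = N - I0, float(I0)
    trajectory = []  # rows of (time in days, I)
    for n in range(int(days / dt)):
        beta = beta0 * math.exp(-alpha * I / N)
        dS = -beta * S * I / N
        dI = beta * S * I / N - sigma * I
        S += dS * dt
        I += dI * dt
        trajectory.append((n * dt, I))
    return trajectory

traj = simulate_behavioral_sir()
# The quasi-stationary prevalence predicted above, with S ~ N:
I_star = (1e6 / 500.0) * math.log(0.3 / 0.1)
I_day200 = next(I for t, I in traj if t >= 200)
I_day500 = next(I for t, I in traj if t >= 500)
print(f"predicted I* = {I_star:.0f}, simulated I at day 200 = {I_day200:.0f}")
print(f"I at day 500 = {I_day500:.0f} (slowly declining, not truly flat)")
```

After an initial transient, $I$ hovers near the predicted $I^* = (N/\alpha)\log R_0$ and then drifts slowly downward as $S$ is depleted, which is exactly the slow decline derived below.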

However, this is still only a quasi-stationary state because if $I$ is a constant $I^*$, then $S$ will decrease as $dS/dt = -\frac{\beta}{N}SI^*$, which has solution

$S(t) = N e^{-(\beta I^*/N) t}$     (S)

Plugging this into the equation for  $I^*$ gives

$I^* =-\frac{N}{\alpha} \log(\frac{\sigma}{\beta_0}e^{(\beta I^*/N) t}) = \frac{N}{\alpha}\log R_0 - \frac{\beta I^*}{\alpha} t$

which means that $I$ is not really plateaued but is decreasing slowly as

$I^* = \frac{N}{\alpha+\beta t}\log R_0$

We can establish perfect conditions for a plateau if we work backwards. Suppose again that $I$ has plateaued at $I^*$.  Then, $S(t)$ is given by equation (S).  Substituting this into the (Stationary Condition) above then gives $0 = \beta(t) e^{-(\beta I^*/N) t} -\sigma$ or

$\beta(t) = \sigma e^{(\beta I^*/N) t}$

which means that the global plateau is due to us first reducing $\beta$ to near $\sigma$, which halted the spread locally, and then gradually relaxing pandemic mitigation measures so that $\beta(t)$ is creeping upwards back to its original value.

The Covid-19 plateau is both good news and bad news. It is good news because we are not seeing exponential growth of the pandemic. Globally, it is more or less contained. The bad news is that at a hundred thousand new cases per day, it will take a long time before we reach herd immunity. If we make the naive assumption that we won't reach herd immunity until 5 billion people are infected, then this pandemic could blunder along for $5\times 10^9/10^5 = 5\times 10^4 = 50000$ days, well over a century! In other words, the pandemic will keep circling the world indefinitely, since over that time span babies will be born and grow up. Most likely, it will become less virulent and will just join the panoply of diseases we currently live with, like the various varieties of the common cold (some of which are also coronaviruses) and the flu.

# A Covid-19 Manhattan Project

Right now there are hundreds if not thousands of Covid-19 models floating around. Some are better than others, some have much more influence than others, and the ones that have the most influence are not necessarily the best. There must be a better way of doing this. The world's greatest minds convened in Los Alamos in WWII and built two atomic bombs. Metaphors get thrown around with reckless abandon, but if there ever was a time for a Manhattan Project, we need one now. Currently, the world's scientific community has mobilized to come up with models to predict the spread and effectiveness of mitigation efforts, to produce new therapeutics, and to develop new vaccines. But this is mostly going on independently.

Would it be better if we were to coordinate all of this activity? Right now at the NIH, there is an intense effort to compile all the research that is being done in the NIH Intramural Program and to develop a system where people can share reagents and materials. There are other coordination efforts going on worldwide as well. This website contains a list of open source computational resources. This link gives a list of data scientists who have banded together. But I think we need a worldwide plan if we are ever to return to normal activity. Even if some nation has eliminated the virus completely within its borders, there is always a chance of reinfection from outside.

In terms of models, they seem to have very weak predictive ability. This is probably because they are all overfit. We don't really understand all the mechanisms of SARS-CoV-2 propagation. The case and death curves are pretty simple, and as Von Neumann or Ulam or someone once said, "give me 4 parameters and I can fit an elephant, give me 5 and I can make its tail wiggle." Almost any model can fit the curve, but to make a projection into the future, you need to get the dynamics correct, and this, I claim, we have not done. What I am thinking of proposing is to have the equivalent of FDA approval for predictive models. However, instead of a binary decision of approval or non-approval, people could submit their models for a predictive score based on some cross-validation scheme or prediction on a held-out set. You could also submit as many times as you wish to have your score updated. We could then pool all the models and produce a global Bayesian model averaged prediction and see if that does better. Let me know if you wish to participate or have ideas on how to do this better.
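To make the scoring idea concrete, here is a toy sketch. The "models" and the held-out data are made up, and weighting forecasts by a softmax of held-out mean squared error is only a crude stand-in for a proper Bayesian model average, but it shows the submit-score-pool loop.

```python
import math

# Toy version of the proposal: score hypothetical models on held-out data,
# weight them by score, and pool their forecasts. Everything here is made up.

held_out = [100, 120, 150, 190, 240]  # hypothetical daily case counts

models = {
    "linear":      lambda t: 100 + 35 * t,
    "exponential": lambda t: 100 * math.exp(0.22 * t),
    "flat":        lambda t: 150.0,
}

def score(predict):
    """Mean squared error on the held-out data; lower is better."""
    return sum((predict(t) - y) ** 2 for t, y in enumerate(held_out)) / len(held_out)

# Pseudo-posterior weights: exp(-MSE / scale), normalized. The scale is an
# arbitrary choice that controls how sharply the best model dominates.
scale = 1000.0
raw = {name: math.exp(-score(m) / scale) for name, m in models.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

def pooled_forecast(t):
    """Weighted average of all submitted models' forecasts for day t."""
    return sum(weights[name] * models[name](t) for name in models)

for name in sorted(weights, key=weights.get, reverse=True):
    print(f"{name}: weight {weights[name]:.2f}")
print(f"pooled forecast for day 7: {pooled_forecast(7):.0f}")
```

In a real system the held-out set would be future case data unseen by the modelers, and resubmission would simply rerun the scoring.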

# Covid-19 talk

Here are the slides for my webinar at the FDA today.

Here is the medRxiv link to our Covid-19 paper. We actually have more updated versions with nicer graphs, which we will upload shortly.