Nobel Prize has outlived its usefulness

The Nobel Prize in Physiology or Medicine was awarded today for the discovery of Hepatitis C. The work is clearly deserving of recognition, but this is another case where more than three people played an essential role. I really think the Nobel Prize should change its rules to allow for more winners. Below is my post from 2013, when one of this year's winners, Michael Houghton, turned down the Gairdner Award:

Hepatitis C and the folly of prizes

The scientific world was set slightly aflutter when Michael Houghton turned down the prestigious Gairdner Award for the discovery of Hepatitis C. Harvey Alter and Daniel Bradley were the two other recipients. Houghton, who had previously received the Lasker Award with Alter, felt he could not accept one more award because two colleagues, Qui-Lim Choo and George Kuo, did not receive either of these awards, even though their contributions were equally important.

Hepatitis, which literally means inflammation of the liver, was characterized by Hippocrates and known to be infectious since the 8th century. The disease had been postulated to be viral at the beginning of the 20th century, and by the 1960s two viruses, termed Hepatitis A and Hepatitis B, had been established. However, there still seemed to be another unidentified infectious agent, which was termed Non-A Non-B Hepatitis (NANBH).

Michael Houghton, George Kuo and Qui-Lim Choo were all working at the Chiron corporation in the early 1980s. Houghton started a project to discover the cause of NANBH in 1982, with Choo joining a short time later. They made significant progress in generating mouse monoclonal antibodies with some specificity to NANBH-infected materials from chimpanzee samples received from Daniel Bradley at the CDC. They used the antibodies to screen cDNA libraries from infected materials but they had not isolated an agent. George Kuo had his own lab at Chiron working on other projects but would interact with Houghton and Choo. Kuo suggested that they try blind cDNA immunoscreening on serum derived from actual NANBH patients. This approach was felt to be too risky, but Kuo made a quantitative assessment that showed it was viable. After two years of intensive and heroic screening by the three of them, they identified one clone that was clearly derived from the NANBH genome and not from human or chimp DNA. This was definitive proof that NANBH was a virus, which is now called Hepatitis C. Kuo then developed a prototype of a clinical Hepatitis C antibody detection kit and used it to screen a panel of NANBH blood provided by Harvey Alter of the NIH. Kuo’s test was a resounding success, and the blood test that came out of that work has probably saved 300 million or more people from Hepatitis C infection.

The question then is who deserves the prizes. Is it Bradley and Alter, who did careful and diligent work obtaining samples, or is it Houghton, Choo, and Kuo, who did the heroic experiments that isolated the virus? For completely unknown reasons, the Lasker was awarded to just Houghton and Alter, which primed the pump for more prizes to these two. Now that the Lasker and Gairdner prizes have been awarded, that leaves just the Nobel Prize. The scientific community could get it right this time and award it to Kuo, Choo, and Houghton.

Addendum added 2013-5-2: I should add that many labs from around the world were also trying to isolate the infective agent of NANBH, and all failed to identify the correct samples from Alter’s panel. It is not clear how much longer it would have taken and how many more people would have been infected if Kuo, Choo, and Houghton had not succeeded when they did.

ICCAI talk

I gave a talk at the International Conference on Complex Acute Illness (ICCAI) with the title Forecasting COVID-19. I talked about some recent work with FDA collaborators on scoring a large number of publicly available epidemic COVID-19 projection models and showed that they are unable to reliably forecast COVID-19 beyond a few weeks. The slides are here.

Why it is so hard to forecast COVID-19

I’ve been actively engaged in trying to model the COVID-19 pandemic since April, and after 5 months I am pretty confident that models can estimate what is happening at this moment, such as the number of people who are currently infected but not counted as cases. Back at the end of April our model predicted that the case ascertainment ratio (total cases/total infected) was on the order of 1 in 10, although it varied drastically between regions. That number has gone up with the advent of more testing, so it may now be on the order of 1 in 4 or possibly higher in some regions. These numbers more or less match the antibody test data.

However, I do not really trust my model to forecast what will happen a month from now much less six months. There are several reasons. One is that while the pandemic is global the dynamics are local and it is difficult if not impossible to get enough data for a detailed fine grained model that captures all the interactions between people. Another is that the data we do have is not completely reliable. Different regions define cases and deaths differently. There is no universally accepted definition for what constitutes a case or a death and the definition can change over time even for the same region. Thus, differences in death rates between regions or months could be due to differences in the biology of the virus, medical care, or how deaths are defined and when they are recorded. Depending on the region or time, a person with a SARS-CoV-2 infection who dies of a cardiac arrest may or may not be counted as a COVID-19 death. Deaths are sometimes not officially recorded for a week or two, particularly if the physician is overwhelmed with cases.

However, the most important reason models have difficulty forecasting the future is that modeling COVID-19 is as much, if not more, about modeling the behavior of people and government policy as it is about modeling the biology of disease transmission, and we are just not very good at predicting what people will do. This was pointed out by economist John Cochrane months ago, which I blogged about (see here). You can see why getting behavior correct is crucial to modeling a pandemic from the classic SIR model

\frac{dS}{dt} = -\beta SI

\frac{dI}{dt} = \beta SI - \sigma I

where I and S are the infected and susceptible fractions of the initial population, respectively. Behavior greatly affects the rate of infection \beta, and small errors in \beta amplify exponentially. Suppression and mitigation measures such as social distancing, mask wearing, and vaccines reduce \beta, while super-spreading events increase \beta. The amplification of error is readily apparent near the onset of the pandemic, where I grows like e^{(\beta-\sigma) t}. If you change \beta by \delta \beta, then I will grow like e^{(\beta + \delta\beta - \sigma) t}, and thus the ratio of the two trajectories grows (or decays) exponentially like e^{\delta \beta t}. The infection rate also appears in the initial reproduction number R_0 = \beta/\sigma. In a previous post, I derived approximate expressions for how long a pandemic would last and showed that it scales as 1/(R_0-1); thus errors in \beta produce errors in R_0, which translate into errors in how long the pandemic will last, and these can be very large if R_0 is near one.
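
To put a number on that amplification, here is a minimal Julia sketch (the growth rate, the 10% error, and the forecast horizons are assumed illustrative values, not fits to data):

# how a small error in the early exponential growth rate compounds over a forecast
growth_rate = 0.25               # per day, roughly beta - sigma in the SIR model above (assumed)
error = 0.1 * growth_rate        # an assumed 10% misestimate of the growth rate
for t in (14, 30, 60)            # forecast horizons in days
    println(t, " days out: projections off by a factor of ", round(exp(error * t), digits=2))
end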

The infection rate is different everywhere and constantly changing and while it may be possible to get an estimate of it from the existing data there is no guarantee that previous trends can be extrapolated into the future. So while some of the COVID-19 models do a pretty good job at forecasting out a month or even 6 weeks (e.g. see here), I doubt any will be able to give us a good sense of what things will be like in January.

There is no herd immunity

In order for an infectious disease (e.g. COVID-19) to spread, the infectious agent (e.g. SARS-CoV-2) must jump from one person to another. The rate of this happening depends on the rate that an infectious person will come into contact with a susceptible person multiplied by the rate of the virus making the jump when the two people are nearby. The reproduction number R is obtained from the rate of infection spread times the length of time a person is infectious. If R is above one then a single person will infect more than one person on average and thus the pandemic will grow. If it is below one, then the pandemic will diminish. Herd immunity happens when enough people have been infected that the rate of finding a susceptible person becomes low enough that R drops below one. You can find the math behind this here.
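
To make the threshold explicit (a sketch of the standard argument; the linked post has the details): if R_0 is the value of R when essentially everyone is susceptible, then with a susceptible fraction s the reproduction number is R = R_0 s. R drops below one once s < 1/R_0, that is, once more than 1 - 1/R_0 of the population has been infected; for R_0 = 3 that is about two thirds of the population.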

However, a major assumption behind herd immunity is that once a person is infected they can never be infected again, and this is not true for many infectious diseases, such as other coronaviruses and the flu. There are reports that people can be reinfected by SARS-CoV-2. This is not fully validated, but my money is on there being no lasting immunity to SARS-CoV-2, which means there will never be any herd immunity. COVID-19 will just wax and wane forever.

This doesn’t necessarily mean it will be deadly forever. In all likelihood, each time you are infected your immune response will be more measured, and perhaps SARS-CoV-2 will eventually be no worse than the common cold or the seasonal flu. But the fatality rate for first-time infection will still be high, especially for the elderly and vulnerable. Those people will need to remain vigilant until there is a vaccine, and there is still no guarantee that a vaccine will work in the field. If we’re lucky and we get a working vaccine, it is likely that the vaccine will not have a lasting effect, and just like the flu we will need to be vaccinated annually or even semi-annually.

Another Covid-19 plateau

The world seems to be in another Covid-19 plateau for new cases. The nations leading the last surge, namely the US, Russia, India, and Brazil, are now stabilizing or declining, while some regions in Europe, and in particular Spain, are trending back up. If the pattern repeats, we will be in this new plateau for a month or two and then trend back up again, just in time for flu season to begin.

Why we need a national response

It seems quite clear now that we do not do a very good job of projecting COVID-19 progression. There are many reasons. One is that it is hard to predict how people and governments will behave. A fraction of the population will practice social distancing and withdraw from usual activity in the absence of any governmental mandates, another fraction will not do anything different regardless of official policy, and the rest are in between. I for one get more scared of this thing the more I learn about it. Who knows what the long-term consequences will be, particularly for autoimmune diseases. The virus triggers a massive immune response everywhere in the body, and the immune system could easily develop a memory response to your own cells in addition to the virus.

The virus also spreads in local clusters that may reach local saturation before infecting new clusters, but the cross-cluster transmission events are low probability and hard to detect. The virus reached American shores in early January and maybe even earlier, but most of those early introductions died out. This is because the transmission rate is highly varied. A mean reproduction number of 3 could mean that everyone has R=3, or that most people transmit with R less than 1 while a small number of people (or events) transmit with very high R, as sketched below. (Nassim Nicholas Taleb has written copiously on the hazards of highly variable (fat tailed) distributions. For those with mathematical backgrounds, I highly recommend reading his technical volumes: The Technical Incerto. Even if you don’t believe most of what he says, you can still learn a lot.) Thus it is hard to predict when an event will start a local epidemic, although large gatherings of people (e.g. weddings, conventions, etc.) are a good place to start. Once the epidemic starts, it grows exponentially and then starts to saturate, either by running out of people in the locality to infect or by people changing their behavior, or more likely both. Parts of New York may be above the herd immunity threshold now.
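
To see how much the spread of the transmission distribution matters, here is a minimal Julia branching-process sketch (the mean R of 3, the dispersion parameter, and the simulation settings are all assumptions for illustration, not fits to any data):

using Distributions

# Both offspring distributions below have mean R = 3, but the overdispersed (fat-tailed) one
# makes most introductions die out even though the average transmission is the same.
function extinct_fraction(offspring; trials=10_000, gens=20, cap=10_000)
    extinct = 0
    for _ in 1:trials
        n = 1                              # one introduced case
        for _ in 1:gens
            n == 0 && break
            n = sum(rand(offspring, n))    # next generation of infections
            n >= cap && break              # clearly taken off; stop early
        end
        extinct += (n == 0)
    end
    return extinct / trials
end

R, k = 3.0, 0.1
uniform_spreaders = Poisson(R)                    # everyone transmits about the same on average
fat_tailed = NegativeBinomial(k, k / (k + R))     # most transmit little, a few transmit a lot
println(extinct_fraction(uniform_spreaders))      # only a small minority of introductions die out
println(extinct_fraction(fat_tailed))             # most introductions die out despite the same mean R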

Thus at this point, I think we need to take a page out of Taleb’s book (like literally, as my daughter would say) and not worry too much about forecasting. We can use it as a guide, but we have enough information to know that most people are susceptible, about a third will be asymptomatic if infected (which doesn’t mean they won’t have long-term consequences), about a fifth to a tenth will be counted as a case, and a few percent of those will die, which strongly depends on age and pre-existing conditions. We can wait around for a vaccine or herd immunity and in the process let many more people die (I don’t know how many, but I do know that the total number of deaths is a nondecreasing quantity), or we can act now everywhere to shut this down and impose a strict quarantine on anyone entering the country until they have been tested negative 3 times with a high specificity PCR test (and maybe 8 out of 17 times with a low specificity and sensitivity antigen test).

Acting now everywhere means either 1) shutting everything down for at least two weeks. No Amazon or Grubhub or Doordash deliveries, no going to Costco and Walmart, not even going to the supermarket. It means paying everyone in the country without an income some substantial fraction of their salary. It means distributing a two-week supply of food to everyone. It means truly essential workers, like people keeping the electricity going and hospital workers, living in a quarantine bubble hotel, like the NBA and NHL. Or 2) testing everyone every day who wants to leave their house and paying them to quarantine at home or in a hotel if they test positive. Both plans require national coordination and a lot of effort. The CARES Act package has run out and we are heading for economic disaster while the pandemic rages on. As a recent president once said, “What have you got to lose?”

The battle over academic freedom

In the wake of George Floyd’s death, almost all of institutional America put out official statements decrying racism and some universities initiated policies governing allowable speech and research. This was followed by the expected dissent from those who worry that academic freedom is being suppressed (see here, here, and here for some examples). Then there is the (in)famous Harper’s Magazine open letter decrying Cancel Culture, which triggered a flurry of counter responses (e.g. see here and here).

While some faculty members in the humanities and (non-life) sciences are up in arms over the thought of a committee of their peers judging what should be allowable research, I do wish to point out that their colleagues over on the Medical campus have had to get approval for human and animal research for decades. Research on human subjects must first pass through an Institutional Review Board (IRB) while animal experiments must clear the Institutional Animal Care and Use Committee (IACUC). These panels ensure that the proposed work is ethical, sound, and justified. Even research that is completely noninvasive, such as analysis of genetic data, must pass scrutiny to ensure the data is not misused and subject identities are strongly protected. Almost all faculty members would agree that this step is important and necessary. History is rife with questionable research, ranging from the careless to the criminal. Is it so unreasonable to extend such a concept to the rest of campus?

New paper in eLife

I never thought this would ever be finished but it’s out. We hedge in the paper but my bet is that MYC is a facilitator of an accelerator essential for gene transcription.

Dissecting transcriptional amplification by MYC

eLife 2020;9:e52483 doi: 10.7554/eLife.52483

Zuqin Nie, Chunhua Guo, Subhendu K Das, Carson C Chow, Eric Batchelor, S Stoney Simons Jr, David Levens

Abstract

Supraphysiological MYC levels are oncogenic. Originally considered a typical transcription factor recruited to E-boxes (CACGTG), another theory posits MYC a global amplifier increasing output at all active promoters. Both models rest on large-scale genome-wide ‘-omics’. Because the assumptions, statistical parameter and model choice dictates the ‘-omic’ results, whether MYC is a general or specific transcription factor remains controversial. Therefore, an orthogonal series of experiments interrogated MYC’s effect on the expression of synthetic reporters. Dose-dependently, MYC increased output at minimal promoters with or without an E-box. Driving minimal promoters with exogenous (glucocorticoid receptor) or synthetic transcription factors made expression more MYC-responsive, effectively increasing MYC-amplifier gain. Mutations of conserved MYC-Box regions I and II impaired amplification, whereas MYC-box III mutations delivered higher reporter output indicating that MBIII limits over-amplification. Kinetic theory and experiments indicate that MYC activates at least two steps in the transcription-cycle to explain the non-linear amplification of transcription that is essential for global, supraphysiological transcription in cancer.

How to make a fast but bad COVID-19 test good

Among the myriad of problems we are having with the COVID-19 pandemic, faster testing is one we could actually improve. The standard test for the presence of SARS-CoV-2 virus uses PCR (polymerase chain reaction), which amplifies targeted viral RNA. It is accurate (high specificity) but requires relatively expensive equipment and reagents that are currently in short supply. There are reports of wait times of over a week, which renders a test useless for contact tracing.

An alternative to PCR is an antigen test that tests for the presence of protein fragments associated with COVID-19. These tests can in principle be very cheap and fast, and could even be administered on paper strips. They are generally much more unreliable than PCR and thus have not been widely adopted. However, as I show below by applying the test multiple times, the noise can be suppressed and a poor test can be made arbitrarily good.

The performance of binary tests is usually gauged by two quantities: sensitivity and specificity. Sensitivity is the probability that you test positive given that you actually are positive (the true positive rate). Specificity is the probability that you test negative given that you actually are negative (the true negative rate). For a pandemic, sensitivity is more important than specificity because missing someone who is infected means you could put lots of people at risk, while a false positive just means the person falsely testing positive is inconvenienced (provided they cooperatively self-isolate). Current PCR tests have very high specificity but relatively low sensitivity (as low as 0.7), and since we don’t have enough capability to retest, a lot of infected people who are tested could be escaping detection.

The way to make any test have arbitrarily high sensitivity and specificity is to apply it multiple times and take some sort of average. However, you want to do this with the fewest number of applications. Suppose we administer n tests on the same subject. The probability of getting more than k positive tests if the person is positive is Q(k,n,q) = 1 - CDF(k|n,q), where CDF is the cumulative distribution function of the Binomial distribution (i.e. the probability that the number of Binomial distributed events is less than or equal to k). If the person is negative then the probability of k or fewer positives is R(k,n,r) = CDF(k|n,1-r). We thus want to find the minimal n given a desired sensitivity and specificity, q' and r'. This means that we need to solve the constrained optimization problem: find the minimal n under the constraints that k < n, Q(k,n,q) \ge q' and R(k,n,r)\ge r'. Q decreases and R increases with increasing k, and vice versa for n. We can easily solve this problem by sequentially increasing n and scanning through k until the two constraints are met. I’ve included the Julia code to do this below. For example, starting with a test with sensitivity .7 and specificity 1 (like a PCR test), you can create a new test with greater than .95 sensitivity and specificity by administering the test 3 times and looking for a single positive test. However, if the specificity drops to .7 then you would need to find more than 8 positives out of 17 applications to be 95% sure you have COVID-19.

using Distributions

# Probability of more than k positives out of n tests for an infected person
# (single-test sensitivity q), i.e. the sensitivity of the combined rule
function Q(k,n,q)
    d = Binomial(n,q)
    return 1 - cdf(d,k)
end

# Probability of k or fewer positives out of n tests for an uninfected person
# (single-test specificity r, so false positive probability 1-r), i.e. the specificity of the combined rule
function R(k,n,r)
    d = Binomial(n,1-r)
    return cdf(d,k)
end

# Find the smallest number of tests n and threshold k such that declaring a person
# positive when more than k of the n tests are positive achieves sensitivity of at
# least qp and specificity of at least rp
function optimizetest(q,r,qp=.95,rp=.95)
    nout = 0
    kout = 0
    for n in 1:100
        for k in 0:n-1
            # println(R(k,n,r), " ", Q(k,n,q))  # uncomment to trace the search
            if R(k,n,r) >= rp && Q(k,n,q) >= qp
                kout = k
                nout = n
                break
            end
        end
        if nout > 0
            break
        end
    end
    return nout, kout
end
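
As a quick sanity check, calling the function with the numbers quoted above should reproduce them (returned as (n, k)):

# sensitivity 0.7 and specificity 1 (PCR-like): 3 tests, positive if more than 0 are positive
optimizetest(0.7, 1.0)   # expect (3, 0)
# sensitivity and specificity both 0.7 (a poor antigen test): 17 tests, more than 8 positives
optimizetest(0.7, 0.7)   # expect (17, 8)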

Remember the ventilator

According to our model, the global death rate due to Covid-19 is around 1 percent for all infected (including unreported). However, if it were not for modern medicine and in particular the ventilator, the death rate would be much higher. Additionally, the pandemic first raged in the developed world and is only recently engulfing parts of the world where medical care is not as ubiquitous although this may be mitigated by a younger populace in those places. The delay between the appearance of a Covid-19 case and deaths is also fairly long; our model predicts a mean of over 50 days. So the lower US death rate compared to April could change in a month or two when the effects of the recent surges in the US south and west are finally felt.

An overview of mechanical ventilation in the intensive care unit

New paper on Wilson-Cowan Model

I forgot to post that my fellow Yahya and I have recently published a review paper on the history and possible future of the Wilson-Cowan Model in the Journal of Neurophysiology tribute issue to Jack Cowan. Thanks to the patient editors for organizing this.

Before and beyond the Wilson–Cowan equations

Abstract

The Wilson–Cowan equations represent a landmark in the history of computational neuroscience. Along with the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that led to the Wilson–Cowan equations and discuss how to extend beyond them.

PDF here.

How long and how high for Covid-19

Cases of Covid-19 are trending back up globally and in the US. The world has nearly reached 10 million cases, with over 2.3 million in the US. There is still a lot we don’t understand about SARS-CoV-2 transmission, but I am confident we are nowhere near herd immunity. Our model is consistently showing that the case ascertainment ratio, that is the ratio of official Covid-19 cases to total SARS-CoV-2 infections, is between 1 in 5 and 1 in 10. That means that the US has fewer than 25 million infections while the world has fewer than 100 million.

Herd immunity means that for any fixed reproduction number, R0, the number of active infections will trend downward once the remaining susceptible fraction of the population falls below 1/R0, or equivalently once the total fraction infected is higher than 1 - 1/R0. Thus, for an R0 of 4, three quarters of the population needs to be infected to reach herd immunity. However, the total number that will eventually be infected, as I will show below, will be

1 -\frac{e^{-R_0}}{1- R_0e^{-R_0}}

which is considerably higher. Thus, mitigation efforts to reduce R0 will reduce the total number infected. (2020-06-27: This expression is not accurate when R0 is near 1. For a formula in that regime, see Addendum.)

Some regions in Western Europe, East Asia, and even the US have managed to suppress R0 below 1 and cases are trending downward. In the absence of reintroduction of SARS-CoV-2 carriers, Covid-19 can be eliminated in these regions. However, as the recent spikes in China, South Korea, and Australia have shown, this requires continual vigilance. As long as any person remains infected in the world, there is always a chance of re-emergence. As long as new cases are increasing or plateauing, R0 remains above 1. As I mentioned before, plateauing is not a natural feature of the epidemic prediction models, which generally either go up or go down. Plateauing requires either continuous adjustment of R0 through feedback or propagation through the population as a wave front, like a lawn mower cutting grass. The latter is what is actually going on from what we are seeing. Regions seem to rise and fall in succession. As one region reaches a peak and goes down either through mitigation or rapid spread of SARS-CoV-2, Covid-19 takes hold in another. We saw China and East Asia rise and fall, then Italy, then the rest of Western Europe, then New York and New Jersey, and so forth in series, not in parallel. Now it is spreading throughout the rest of the USA, South America, and Eastern Europe. Africa has been spared so far but it is probably next as it is beginning to explode in South Africa.

A reduction in R0 also delays the time to reach the peak. As a simple example, consider the standard SIR model

\frac{ds}{dt} = -\beta sl

\frac{dl}{dt} = \beta sl -\sigma l

where s is the fraction of the population susceptible to SARS-CoV-2 infection and l is the fraction of the population actively infectious. Below are simulations of the pandemic progression for R0 = 4 and 2.

[Figure: simulated pandemic progression from the SIR model for R0 = 4 and R0 = 2]
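
For anyone who wants to reproduce the comparison, here is a minimal Julia sketch using simple Euler stepping (the recovery rate \sigma, initial infected fraction l_0, step size, and time horizon are assumed values, not necessarily the ones behind the figure):

# Integrate ds/dt = -beta*s*l, dl/dt = beta*s*l - sigma*l with forward Euler
function sir(R0; sigma=0.1, l0=1e-5, dt=0.01, tmax=500.0)
    beta = R0 * sigma
    s, l = 1.0 - l0, l0
    ts = Float64[]; ss = Float64[]; ls = Float64[]
    for t in 0:dt:tmax
        push!(ts, t); push!(ss, s); push!(ls, l)
        ds = -beta * s * l * dt
        dl = (beta * s * l - sigma * l) * dt
        s += ds
        l += dl
    end
    return ts, ss, ls
end

for R0 in (4.0, 2.0)
    ts, ss, ls = sir(R0)
    println("R0 = ", R0, ": peak infection at day ", round(ts[argmax(ls)]),
            ", fraction never infected ", round(ss[end], digits=3))
end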

We see that halving R0 basically doubles the time to reach the peak but much more than doubles the number of people who never get infected. We can see why this is true by analyzing the equations. Dividing the two SIR equations gives

\frac{dl}{ds} = \frac{\sigma l -\beta sl}{\beta sl},

which integrates to l =  \frac{\sigma}{\beta} \ln s - s + C. If we suppose that initially s=1 and l = l_0<<1 then we get

l =  \frac{1}{R_0} \ln s + 1 - s + l_0 (*)

where R_0 = \beta/\sigma is the reproduction number. The total number infected will be 1-s for l=0. Rearranging gives

s = e^{-R_0(1+l_0-s)}

If we assume that R_0 s <<1 and ignore l_0 we can expand the exponential and solve for s to get

s \approx \frac{e^{-R_0}}{1- R_0e^{-R_0}}

This is the fraction of the population that never gets infected, which is also the probability that you won’t be infected. It gets smaller as R_0 increases. So reducing R_0 can exponentially reduce your chances of being infected.
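
To put numbers on this (using the approximate formula above, so these are rough figures): for R_0 = 2, s \approx e^{-2}/(1 - 2e^{-2}) \approx 0.19, while for R_0 = 4, s \approx e^{-4}/(1 - 4e^{-4}) \approx 0.02. Halving R_0 from 4 to 2 thus increases the never-infected fraction nearly tenfold, consistent with the simulations above.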

To figure out how long it takes to reach the peak, we substitute equation (*) into the SIR equation for s to get

\frac{ds}{dt} = -\beta(\frac{1}{R_0} \ln s + 1 - s + l_0) s

We compute the time to peak, T, by separating variables and integrating both sides. The peak is reached when s = 1/R_0.  We must thus compute

T= \int_0^T dt =\int_{1/R_0}^1 \frac{ds}{ \beta(\frac{1}{R_0} \ln s + 1 - s +l_0) s}

We can’t do this integral in closed form, but if we set s = 1 - z and z << 1, then we can expand \ln s \approx -z and obtain

T= \int_0^T dt =\int_0^{l_p} \frac{dz}{ \beta(-\frac{1}{R_0}z  + z +l_0) (1-z)}

where l_p = 1-1/R_0. This can be re-expressed as

T=\frac{1}{ \beta (l_0+l_p)}\int_0^{l_p} (\frac{1}{1-z} + \frac{l_p}{l_p z + l_0}) dz

which is integrated to

T= \frac{1}{ \beta (l_0+l_p)} (-\ln(1-l_p) + \ln (l_p^2 + l_0)-\ln l_0)

If we assume that l_0<< l_p, then we get an expression

T \approx \frac{1}{\sigma} \frac{\ln (R_0l_p^2/l_0)}{ R_0 -1}

So, T is proportional to the recovery time 1/\sigma and inversely related to R_0 as expected, but if l_0 is very small (say 0.00001) compared to R_0 (say 3) then \ln (R_0/l_0) can be big (around 10), which may explain why it takes so long for the pandemic to get started in a region. If the initial prevalence of infection is very low in a region, the time it takes for a super-spreader event to make an impact could be much longer than expected (10 times the infection clearance time, which could be two weeks or more).
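
To get a feel for the numbers (with assumed illustrative values, not fits): for a recovery time 1/\sigma of 10 days, R_0 = 2 (so l_p = 1/2), and l_0 = 10^{-5}, the formula gives T \approx 10 \times \ln(2 \times 0.25/10^{-5})/(2-1) \approx 10 \times 10.8 \approx 110 days, so even with a moderate reproduction number it takes several months to go from a tiny seed to the peak.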

Addendum 2020-06-26: Fixed typos in equations and added clarifying text to last paragraph

Addendum 2020-06-27: The approximation for the total infected is not very good when R_0 is near 1. A better one can be obtained by expanding the exponential to quadratic order, in which case you get

s = \frac{1}{{R_{0}}^2} ( e^{R_0} - R_0 - \sqrt{(e^{R_0}-R_0)^2 - 2{R_0}^2})

However, for R_0 near 1, a better expansion is to substitute z = 1-s into equation (*) and obtain

l =  \frac{1}{R_0} \ln (1-z) + z + l_0

Setting l=0, rearranging, and exponentiating, we obtain

1 - z = e^{-R_0(l_0+z)}, which can be expanded to yield

1- z = e^{-R_0 l_0}(1 - R_0 z + R_0^2 z^2/2)

Solving for z gives the total fraction infected to be

z = (R_0 -e^{R_0l_0} + \sqrt{(R_0-e^{R_0l_0})^2 - 2 R_0^2(1-e^{R_0l_0})})/R_0^2

This took me much longer than it should have.
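
As a sanity check on the addendum, here is a minimal Julia sketch comparing the two approximations against a numerical solution of equation (*) with l = 0 (R_0 = 1.1 and l_0 = 10^{-5} are assumed values):

# Solve s = exp(-R0*(1 + l0 - s)) by fixed-point iteration (the never-infected fraction),
# then compare the two closed-form approximations for the total fraction infected z = 1 - s.
function never_infected(R0, l0; iters=100_000)
    s = 0.5
    for _ in 1:iters
        s = exp(-R0 * (1 + l0 - s))
    end
    return s
end

R0, l0 = 1.1, 1e-5
z_numerical = 1 - never_infected(R0, l0)
z_original  = 1 - exp(-R0) / (1 - R0 * exp(-R0))                        # first approximation above
a = exp(R0 * l0)
z_quadratic = (R0 - a + sqrt((R0 - a)^2 - 2 * R0^2 * (1 - a))) / R0^2   # the addendum formula
println((z_numerical, z_quadratic, z_original))
# near R0 = 1 the quadratic formula is much closer to the numerical value than the original approximation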

The formal logic of legal decisions

The US Supreme Court ruled today that the ban on sex-based discrimination in Title VII of the 1964 Civil Rights Act protects employees from discrimination based on sexual orientation or gender identity. Justice Gorsuch, who is a textualist (i.e. believes that laws should only be interpreted according to the written words alone without taking into consideration the intent of the writers), writes “An employer who fires an individual for being homosexual or transgender fires that person for traits or actions it would not have questioned in members of a different sex. Sex plays a necessary and undisguisable role in the decision, exactly what Title VII forbids.” I find this to be an interesting exercise in logic. For example, consider the case of a “man married to a man”. According to Gorsuch’s logic, this cannot be an excuse to fire someone because if you replace “man” with “woman” in the first instance then you end up with the phrase “woman married to a man”, and since this is not sufficient for firing, the reason is not sex invariant. Dissenting legal scholars disagree. They argue that the correct logic is to replace all instances of “man” with “woman”, in which case you would end up with “woman married to a woman” and thus the reason for firing would be sex invariant. I think in this case Gorsuch is correct because in order to have complete sex invariance, the rule must apply for any form of exchange. The law should be applied equally in all the possible ways that one gender is replaced by the other.

The depressing lack of American imagination

Democratic presidential candidate Andrew Yang made universal basic income a respectable topic for debate. I think this is a good thing because I’m a major proponent of UBI but my reasons are different. Yang is a technology dystopian who sees a future where robots take all of our jobs and the UBI as a way to alleviate the resulting pain and suffering. I think a UBI (and universal health care) would lead to less resentment of the welfare system and let people take more entrepreneurial risks. I believe human level AI is possible but I do not accept that this necessarily implies an economic apocalypse. To believe such a thing is to believe that the only way society can be structured is that a small number of tech companies owns all the robots and everyone else is at their mercy. That to me is a depressing lack of imagination. The society we live in is a human construct. There is no law of nature that says we must live by any specific set of rules or economic system. There is no law that says tech companies must have monopolies. There is no reason we could not live in a society where each person has her own robot who works for her. There is no law that says we could not live in a society where robots do all the mundane work while we garden and bake bread.

I think we lack imagination in every sector of our life. We do not need to settle for the narrow set of choices we are presented. I for one do not accept that elite colleges must wield so much influence in determining the path of one’s life. There is no reason that the US meritocracy needs to be a zero sum game, where one student being accepted to Harvard means another is not, or that going to Harvard should even make so much difference in one’s life. There is no reason that higher education needs to cost so much. There is no reason students need to take loans out to pay exorbitant tuition. The fact that this occurs is because we as a society have chosen it.

I do not accept that irresponsible banks and financial institutions need to be bailed out whenever they fail, which seems to be quite often. We could just let them fail and restart. There is no reason that the access to capital needs to be controlled by a small number of financial firms. It used to be that banks would take in deposits and lend out to homeowners and businesses directly. They would evaluate the risk of each loan. Now they purchase complex financial products that evaluate the risk according to some mathematical model. There is no reason we need to subsidize such activity.

I do not accept that professional sports teams cannot be community owned. There is no law that says sports leagues need to be organized as monopolies with majority owners. There is no reason that communities cannot simply start their own teams and play each other. There is no law that says we need to build stadiums for privately owned teams. We only choose to do so.

The society we live in is the way it is because we have chosen to live this way. Even an autocrat needs a large fraction of the population to enforce his rule. The number of different ways we could organize (or not organize) is infinite. We do not have to be limited to the narrow set of choices we are presented. What we need is more imagination.

The fatal flaw of the American Covid-19 response

The United States has surpassed 2 million official Covid-19 cases and 115 thousand deaths. After three months of lockdown, the country has had enough and is reopening. Although it has achieved its initial goal of slowing the growth of the pandemic so that hospitals would not be overwhelmed, the battle has not been won. We’re not at the beginning of the end; we may not even be at the end of the beginning. If everyone in the world could go into complete isolation, the pandemic would be over in two weeks. Instead, it is passed from one person to the next in a tragic relay race. As long as a single person is shedding the SARS-CoV-2 virus and comes in contact with another person, the pandemic will continue. The pandemic in the US is not heading for extinction. We are not near herd immunity and R0 is not below one. By the most optimistic yet plausible scenario, 30 million people have already been infected and 200 million will never get it either by having some innate immunity or by avoiding it through sheltering or luck. However, that still leaves over 100 million who are susceptible, of which about a million will die if they all catch it.

However, the lack of effectiveness of the response is not the fatal flaw. No, the fatal flaw is that the US Covid-19 response asks one set of citizens to sacrifice for the benefit of another set. The Covid-19 pandemic is a story of three groups of people. The fortunate third can work from home, and the lockdown is mostly just an inconvenience. They still get paychecks while supplies and food can be delivered to their homes. Sure it has been stressful and many have forgone essential medical care, but they can basically ride this out for as long as it takes. The second group, who own or work in shuttered businesses, have lost their income. The federal rescue package is keeping some of them afloat but that runs out in August. The choice they have is to reopen and risk getting infected or be hungry and homeless. Finally, the third group is working to allow the first group to remain in their homes. They are working on farms, in food processing plants, and in grocery stores. They are cutting lawns, fixing leaking pipes, and delivering goods. They are working in hospitals and nursing homes and taking care of the sick and the children of those who must work. They are also the ones who are most likely to get infected and spread it to their families or the people they are trying to take care of. They are dying so others may live.

A lockdown can only work in a society if the essential workers are adequately protected and those without incomes are supported. Each worker should have an N100 mask, be trained how to wear it and be tested weekly. People in nursing homes should be wearing hazmat suits. Everyone who loses income should be fully compensated. In a fair society, everyone should share the risks and the pain equally.

Covid-19 continues to spread

The global plateau turned out to just be a pause and the growth in new cases continues. The rise seems to be mostly driven by increases in Brazil, India, and until very recently Russia with plateauing in the US and European countries as they relax their mitigation policies. The pandemic is not over by a long shot. There will most certainly be further growth in the near future.

Why middle school science should not exist

My 8th grade daughter had her final (distance learning) science quiz this week on work, or as it is called in her class, the scientific definition of work. I usually have no idea what she does in her science class since she rarely talks to me about school, but she just so happened to mention this one tidbit because she was proud that she didn’t get fooled by what she thought was a trick question. I’ve always believed that work, as in force times displacement (not the one where you produce economic value), is one of the most useless concepts in physics and should not be taught to anyone until they reach graduate school, if then. It is a concept that has long outlived its usefulness and all it does now is convince students that science is just a bunch of concepts invented to confuse you. The problem with science education in general is that it is taught as a set of facts and definitions when the only thing that kids need to learn is that science is about trying to show something is true using empirical evidence. My daughter’s experience is evidence that science education in the US has room for improvement.

Work, as defined in science class, is just another form of energy, and the only physics that should be taught to middle school kids is that there are these quantities in the universe called energy and momentum and they are conserved. Work is just the change in energy of a system due to a force moving something. For example, the work required to lift a mass against gravity is the distance the mass was lifted multiplied by the force used to move it. This is where it starts to get a little confusing because there are actually two reasons you need force to move something. The first is because of Newton’s First Law of inertia – things at rest like to stay at rest and things in motion like to stay in motion. In order to move something from rest you need to accelerate it, which requires a force, and from Newton’s second law, force equals mass times acceleration, or F = ma. However, if you move something upwards against the force of gravity then even to move at a constant velocity you need to use a force that is equal to the gravitational force pulling the thing downwards, which from Newton’s law of gravitation is given by F = G M m/r^2, where G is the universal gravitational constant, M is the mass of the earth, m is the mass of the object and r is the distance between the objects. By a very deep property of the universe, the mass in Newton’s law of gravitation is the exact same mass as that in Newton’s second law, called inertial mass. So that means if we let GM/r^2 = g, then we get F = mg, and g = 9.8 m/s^2 is the gravitational acceleration constant if we set r to be the radius of the earth, which is much bigger than the height of things we usually deal with in our daily lives. All things dropped near the earth will accelerate to the ground at 9.8 m/s^2. If gravitational mass and inertial mass were not the same, then objects of different masses would not fall with the same acceleration. Many people know that Galileo showed this fact in his famous experiment where he dropped a big and a small object from the Leaning Tower of Pisa. However, many probably also cannot explain why, including my grade 7 (or was it 8?) science teacher, who thought it was because the earth’s mass was so much bigger than the two objects that the difference was not noticeable. The equivalence of gravitational and inertial mass was what led Einstein to his General Theory of Relativity.

In the first part of my daughter’s quiz, she was asked to calculate the energy consumed by several appliances in her house for one week. She had to look up how much power was consumed by the refrigerator, computer, television and so forth on the internet. Power is energy per unit time so she computed the amount of energy used by multiplying the power used by the total time the device is on per week. In the second part of the quiz she was asked to calculate how far she must move to power those devices. This is actually a question about conservation of energy and to answer the question she had to equate the energy used with the work definition of force times distance traveled. The question told her to use gravitational force, which implies she had to be moving upwards against the force of gravity, or accelerating at g if moving horizontally, although this was not specifically mentioned. So, my daughter took the energy used to power all her appliances and divided it by the force, i.e. her mass times g, and got a distance. The next question was, and I don’t recall exactly how it was phrased but something to the effect of: “Did you do scientifically defined work when you moved?”
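
For what it’s worth, here is a sketch of that arithmetic in Julia (the wattages, weekly hours, and mass are made-up numbers, not the ones from her quiz):

# energy = power x time; "distance" = energy / (m * g), per the quiz's use of work = force x distance
power_watts = Dict("refrigerator" => 150, "computer" => 60, "television" => 100)    # assumed averages
hours_per_week = Dict("refrigerator" => 168, "computer" => 30, "television" => 20)  # assumed usage
energy_joules = sum(power_watts[k] * hours_per_week[k] * 3600 for k in keys(power_watts))
mass, g = 50.0, 9.8            # assumed mass in kg, gravitational acceleration in m/s^2
distance_km = energy_joules / (mass * g) / 1000
println(round(energy_joules / 3.6e6, digits=1), " kWh of energy -> ", round(distance_km), " km of lifting against gravity")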

Now, in her class, she probably spent a lot of time examining situations to distinguish work from non-work. Lifting a weight is work, a cat riding a Roomba is not work. She learned that you did no work when you walked because the force was perpendicular to your direction of motion. I find these types of gotcha exercises to be useless at best and in my daughter’s case completely detrimental. If you were to walk by gliding along completely horizontally with absolutely no vertical motion at a constant speed then yes you are technically not doing mechanical work. But your muscles are contracting and expanding and you are consuming energy. It’s not your weight times the distance you moved but some very complicated combination of metabolic rate, muscle biochemistry, energy losses in your shoes, etc. Instead of looking at examples and identifying which are work and which are not, it would be so much more informative if they were asked to deduce how much energy would be consumed in doing these things. The cat on the Roomba is not doing work but the Roomba is using energy to turn an electric motor that has to turn the wheel to move the cat. It has to accelerate from standing still and also gets warm, which means some of the energy is wasted to heat. A microwave oven uses energy because it must generate radio waves. Boiling water takes energy because you need to impart random kinetic energy to the water molecules. A computer uses energy because it needs to send electrons through transistors. Refrigerators work by using work energy to pump the heat energy from the inside to the outside. You can’t cool a room by leaving the refrigerator door open because you will just pump heat around in a circle and some of the energy will be wasted as extra heat.

My daughter’s answer to the question of whether work was done was that no work was done, because she interpreted movement to be walking horizontally and she knew from all the gotcha examples that walking was not work. She read to me her very legalistically parsed paragraph explaining her reasoning, which made me think that while science may not be in her future, law might be. I tried to convince her that in order for the appliances to run, energy had to come from somewhere, so she must have done some work at some point in her travels, but she would have no part of it. She said it must be a trick question so the answer has to not make sense. She proudly submitted the quiz convinced more than ever that her so-called scientist Dad is a complete and utter idiot.

How much Covid-19 testing do we need?

There is a simple way to estimate how much SARS-CoV-2 PCR testing we need to start diminishing the COVID-19 pandemic. Suppose we test everyone at a rate f, with a PCR test with 100% sensitivity, which means we do not miss anyone who is positive but we could have false positives. The rate at which we will find positives is f p, where p is the prevalence of infectious individuals in a given population. If positive individuals are isolated from the rest of the population until they are no longer infectious with probability q, then the rate of reduction in prevalence is fqp. To reduce the pandemic, this rate needs to be higher than the rate of pandemic growth, which is given by \beta s p, where s is the fraction of the population susceptible to SARS-CoV-2 infection and \beta is the rate of transmission from an infected individual to a susceptible upon contact. Thus, to reduce the pandemic, we need to test at a rate higher than \beta s/q.

In the initial stages of the pandemic s is one and \beta = R_0/\sigma, where R_0 is the mean reproduction number, which is probably around 3.7, and \sigma is the mean duration of infectiousness, which is probably around 10 to 20 days. This gives an estimate of \beta of somewhere around 0.3 per day. Thus, in the early stages of the pandemic, we would need to test everyone at least two or three times per week, provided positives are isolated. However, if people wear masks and avoid crowds then \beta could be reduced. If we can get it smaller then we can test less frequently. Currently, the global average of R_0 is around one, so that would mean we need to test every two or three weeks. If positives don’t isolate with high probability, we need to test at a higher rate to compensate. This threshold rate will also go down as s goes down.
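
As a rough calculator for these thresholds, here is a minimal Julia sketch (the isolation probability and the reduced \beta are assumptions for illustration):

# threshold testing rate from the argument above: f > beta * s / q
testing_rate(beta, s, q) = beta * s / q              # fraction of the population to test per day
days_between_tests(beta, s, q) = 1 / testing_rate(beta, s, q)

# early pandemic: beta ~ 0.3 per day, everyone susceptible, 90% of positives isolate
println(days_between_tests(0.3, 1.0, 0.9))           # ~3 days, i.e. two or three tests per week
# suppressed transmission: R0 near 1 with a ~14 day infectious period, so beta ~ 0.07 per day
println(days_between_tests(0.07, 1.0, 0.9))          # ~13 days, i.e. every couple of weeks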

In fact, you can just test randomly at rate f and monitor the positive rate. If the positive rate trends downward then you are testing enough. If it is going up then test more. In any case, we may need less testing capability than we originally thought, but we do need to test the entire population and not just suspected cases.