Why middle school science should not exist

My 8th grade daughter had her final (distance learning) science quiz this week on work, or as it is called in her class, the scientific definition of work. I usually have no idea what she does in her science class since she rarely talks to me about school, but she happened to mention this one tidbit because she was proud that she didn’t get fooled by what she thought was a trick question. I’ve always believed that work, as in force times displacement (not the one where you produce economic value), is one of the most useless concepts in physics and should not be taught to anyone until they reach graduate school, if then. It is a concept that has long outlived its usefulness and all it does now is convince students that science is just a bunch of concepts invented to confuse you. The problem with science education in general is that it is taught as a set of facts and definitions when the only thing that kids need to learn is that science is about trying to show something is true using empirical evidence. My daughter’s experience is evidence that science education in the US has room for improvement.

Work, as defined in science class, is just another form of energy, and the only physics that should be taught to middle school kids is that there are these quantities in the universe called energy and momentum and they are conserved. Work is just the change in energy of a system due to a force moving something. For example, the work required to lift a mass against gravity is the distance the mass was lifted multiplied by the force used to move it. This is where it starts to get a little confusing because there are actually two reasons you need force to move something. The first is because of Newton’s First Law of inertia: things at rest like to stay at rest and things in motion like to stay in motion. In order to move something from rest you need to accelerate it, which requires a force, and from Newton’s second law, force equals mass times acceleration, or F = ma. However, if you move something upwards against the force of gravity, then even to move at a constant velocity you need a force that is equal to the gravitational force pulling the thing downwards, which from Newton’s law of gravitation is given by F = G M m/r^2, where G is the universal gravitational constant, M is the mass of the earth, m is the mass of the object and r is the distance between the objects. By a very deep property of the universe, the mass in Newton’s law of gravitation is the exact same mass as the one in Newton’s second law, called inertial mass. So that means if we let GM/r^2 = g, then we get F = mg, and g = 9.8 m/s^2 is the gravitational acceleration constant if we set r to be the radius of the earth, which is much bigger than the height of things we usually deal with in our daily lives. All things dropped near the earth will accelerate to the ground at 9.8 m/s^2. If gravitational mass and inertial mass were not the same, then objects of different masses would not fall with the same acceleration. Many people know that Galileo showed this fact in his famous experiment where he dropped a big and a small object from the Leaning Tower of Pisa. However, many probably cannot explain why, including my grade 7 (or was it 8?) science teacher, who thought it was because the earth’s mass is so much bigger than that of the two objects that the difference is not noticeable. The equivalence of gravitational and inertial mass was what led Einstein to his General Theory of Relativity.
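To see where the 9.8 comes from, here is a quick numerical check (a minimal Python sketch using the standard textbook values for the constants):

# Check that g = G*M/r^2 gives the familiar 9.8 m/s^2 near the earth's surface
G = 6.674e-11        # universal gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the earth, kg
r_earth = 6.371e6    # radius of the earth, m

g = G * M_earth / r_earth**2
print(f"g = {g:.2f} m/s^2")          # about 9.82 m/s^2

# The work to lift a mass m through a height h is then just force times distance, m*g*h
m, h = 10.0, 2.0                     # e.g. a 10 kg mass lifted 2 m
print(f"work = {m * g * h:.0f} J")   # about 196 J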

In the first part of my daughter’s quiz, she was asked to calculate the energy consumed by several appliances in her house for one week. She had to look up on the internet how much power is consumed by the refrigerator, computer, television and so forth. Power is energy per unit time, so she computed the amount of energy used by multiplying the power used by the total time the device is on per week. In the second part of the quiz she was asked to calculate how far she must move to power those devices. This is actually a question about conservation of energy, and to answer it she had to equate the energy used with the work definition of force times distance traveled. The question told her to use gravitational force, which implies she had to be moving upwards against the force of gravity, or accelerating at g if moving horizontally, although this was not specifically mentioned. So, my daughter took the energy used to power all her appliances and divided it by the force, i.e. her mass times g, and got a distance. The next question was, and I don’t recall exactly how it was phrased, something to the effect of: “Did you do scientifically defined work when you moved?”
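For the curious, here is roughly what that calculation looks like in Python. The wattages, hours, and mass below are made-up illustrative numbers, not the ones my daughter actually used, but the logic of energy = power × time and distance = energy/(m g) is the same:

# Weekly energy use for a few appliances: (power in watts, hours on per week), values assumed
appliances = {
    "refrigerator": (150, 168),   # assume it runs all week
    "computer": (200, 30),
    "television": (100, 20),
}

energy_joules = sum(watts * hours * 3600 for watts, hours in appliances.values())
print(f"energy used per week: {energy_joules:.2e} J")   # about 1.2e8 J

# Distance she would have to move against gravity to do that much work
mass = 50.0   # assumed mass in kg
g = 9.8       # m/s^2
distance = energy_joules / (mass * g)
print(f"equivalent distance: {distance / 1000:.0f} km")   # a couple hundred kilometers

With numbers like these, the answer comes out to hundreds of kilometers of climbing, which is itself a nice illustration of how much energy household appliances use.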

Now, in her class, she probably spent a lot of time examining situations to distinguish work from non-work. Lifting a weight is work; a cat riding a Roomba is not work. She learned that you did no work when you walked because the force was perpendicular to your direction of motion. I find these types of gotcha exercises to be useless at best and in my daughter’s case completely detrimental. If you were to walk by gliding along completely horizontally, with absolutely no vertical motion, at a constant speed, then yes, you are technically not doing mechanical work. But your muscles are contracting and expanding and you are consuming energy. It’s not your weight times the distance you moved but some very complicated combination of metabolic rate, muscle biochemistry, energy losses in your shoes, etc. Instead of looking at examples and identifying which are work and which are not, it would be so much more informative if the students were asked to deduce how much energy would be consumed in doing these things. The cat on the Roomba is not doing work, but the Roomba is using energy to turn an electric motor that has to turn the wheels to move the cat. It has to accelerate from standing still and it also gets warm, which means some of the energy is wasted as heat. A microwave oven uses energy because it must generate radio waves. Boiling water takes energy because you need to impart random kinetic energy to the water molecules. A computer uses energy because it needs to send electrons through transistors. Refrigerators work by using energy to pump heat from the inside to the outside. You can’t cool a room by leaving the refrigerator door open because you will just pump heat around in a circle and some of the energy will be wasted as extra heat.
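To give one example of the kind of estimate I have in mind, here is a rough calculation for the boiling water case (a sketch; the amount of water and the starting temperature are assumptions):

# Rough energy needed to bring a liter of water from room temperature to boiling
mass_water = 1.0        # kg, about one liter
specific_heat = 4186.0  # J per kg per degree C for liquid water
delta_T = 100.0 - 20.0  # assume we start at 20 C

energy = mass_water * specific_heat * delta_T
print(f"energy to heat the water: {energy / 1000:.0f} kJ")   # about 335 kJ

# For comparison, the mechanical work of lifting a 50 kg person one meter is about 490 J,
# so heating the water is worth roughly 700 such lifts
print(f"equivalent 1 m lifts of 50 kg: {energy / (50 * 9.8):.0f}")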

My daughter’s answer to the question of whether work was done was that no work was done, because she interpreted the movement to be walking horizontally and she knew from all the gotcha examples that walking was not work. She read to me her very legalistically parsed paragraph explaining her reasoning, which made me think that while science may not be in her future, law might be. I tried to convince her that in order for the appliances to run, energy had to come from somewhere, so she must have done some work at some point in her travels, but she would have no part of it. She said it must be a trick question, so the answer has to be the one that doesn’t make sense. She proudly submitted the quiz, convinced more than ever that her so-called scientist Dad is a complete and utter idiot.

How much Covid-19 testing do we need?

There is a simple way to estimate how much SARS-CoV-2 PCR testing we need to start diminishing the COVID-19 pandemic. Suppose we test everyone at a rate f using a PCR test with 100% sensitivity, which means we do not miss anyone who is positive, although we could have false positives. The rate at which we find positives is f p, where p is the prevalence of infectious individuals in a given population. If positive individuals are isolated from the rest of the population until they are no longer infectious with probability q, then the rate of reduction in prevalence is fqp. To reduce the pandemic, this number needs to be higher than the rate of pandemic growth, which is given by \beta s p, where s is the fraction of the population susceptible to SARS-CoV-2 infection and \beta is the rate of transmission from an infected individual to a susceptible upon contact. Thus, to reduce the pandemic, we need to test at a rate higher than \beta s/q.
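The argument can be checked with a toy simulation of the prevalence equation dp/dt = (\beta s - f q) p (my own minimal sketch, not a full epidemic model), which shows prevalence falling only when f exceeds \beta s/q:

# Toy model: prevalence grows at rate beta*s and shrinks at the testing-and-isolation rate f*q
def prevalence_after(p0, beta, s, q, f, days):
    p = p0
    for _ in range(days):
        p += (beta * s - f * q) * p   # Euler step with dt = 1 day
    return p

beta, s, q, p0 = 0.3, 1.0, 1.0, 0.01
print(f"threshold testing rate: {beta * s / q:.2f} per person per day")
print("f = 0.2:", round(prevalence_after(p0, beta, s, q, 0.2, 30), 4))   # below threshold: prevalence grows
print("f = 0.4:", round(prevalence_after(p0, beta, s, q, 0.4, 30), 4))   # above threshold: prevalence shrinks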

In the initial stages of the pandemic s is one and \beta = R_0 \sigma, where R_0 is the mean reproduction number, which is probably around 3.7, and \sigma is the mean rate of becoming noninfectious, which is probably around 1/10 to 1/20 per day (i.e. a mean infectious period of 10 to 20 days). This gives an estimate of \beta_0 of somewhere around 0.3 per day. Thus, in the early stages of the pandemic, we would need to test everyone at least two or three times per week, provided positives are isolated. However, if people wear masks and avoid crowds then \beta could be reduced. If we can get it smaller then we can test less frequently. Currently, the global average of the effective reproduction number is around one, so that would mean we need to test every two or three weeks. If positives don’t isolate with high probability, we need to test at a higher rate to compensate. This threshold rate will also go down as s goes down.
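Putting in numbers (here I take a 14-day infectious period as a representative midpoint, which is my own choice):

R0 = 3.7
sigma = 1 / 14        # rate of becoming noninfectious, per day, assuming a 14-day infectious period
s, q = 1.0, 1.0       # everyone susceptible, positives isolate with certainty

beta0 = R0 * sigma    # early-pandemic transmission rate, about 0.26 per day
f_early = beta0 * s / q
print(f"early pandemic: test everyone about every {1 / f_early:.0f} days")   # roughly every 4 days

# If masks and distancing bring the effective reproduction number down to about 1, beta is about sigma
f_now = 1.0 * sigma * s / q
print(f"with R effective near 1: test everyone about every {1 / f_now:.0f} days")   # roughly every two weeks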

In fact, you can just test randomly at rate f and monitor the positive rate. If the positive rate trends downward then you are testing enough. If it is going up then test more. In any case, we may need less testing capability than we originally thought, but we do need to test the entire population and not just suspected cases.

The Covid-19 plateau

For the past five weeks, the appearance rate of Covid-19 cases has plateaued at about a hundred thousand new cases per day. Just click on the Daily Cases Tab on the JHU site to see for yourself. This is quite puzzling because while individual nations and regions are rising, falling, and plateauing independently, the global total is flat as a pancake. A simple resolution to this seeming paradox was proposed by economist John Cochrane (see his post here). The solution is rather simple but the implications, as I will go into in more detail below, are far reaching. The short answer is that if the world (either through behavior or policy) reacts to the severity of Covid-19 incrementally, then a plateau will arise. When cases go up, people socially distance and the number goes down; when cases go down, they relax a little and it goes back up again.

This can be made more precise with the now-famous SIR model. For the uninitiated, SIR stands for the Susceptible Infected Recovered model. It is a simple dynamical model of disease propagation that has been in use for almost a century. The basic premise of an SIR model is that at any given time, the population is divided into those infected with the virus, I, those susceptible to infection, S, and those recovered from infection and no longer susceptible, R. Each time an S comes across an I, it has a chance of being infected and becoming another I. An I will recover (or die) with some rate and become an R. The simplest way to implement an SIR model is to assume that people interact completely randomly and uniformly across the population and that the rates of transmission and recovery are uniform as well. This is of course a gross simplification and ignores the complexity and diversity of social interactions, the mechanisms of actual viral transmission, and the progression of disease within individuals. However, even though it misses all of these nuances, it captures many of the important dynamics of epidemics. In differential equation form, the SIR model is written as

\frac{dS}{dt} = -\frac{\beta}{N} S I

\frac{dI}{dt} = \frac{\beta}{N} S I - \sigma I   (SIR model)

where N is the total number of people in the population of interest. Here, S and I are in units of numbers of people. The left hand sides of these equations are derivatives with respect to time, or rates. They have dimensions or units of people per unit time, say per day. From this we can infer that \beta and \sigma must have units of inverse day (per day) since S, I, and N all have units of numbers of people. Thus \beta is the infection rate per day and \sigma is the recovery/death rate per day. The equation assumes that the probability of an S meeting an I is I/N. If there were one infected person in a population of a hundred, then if you were to interact completely randomly with everyone, the chance you would run into an infected person would be 1/100. Actually, it would be 1/99, but in a large population the difference is insignificant and you can round. Right away, we can see a problem with this assumption. I interact regularly with perhaps a few hundred people per week or month, but the chance of me meeting a person who had just come from Australia in a typical week is extremely low. Thus, it is not at all clear what we should use for N in the model. The local population, the regional population, the national population?

The model assumes that once an S has run into an I, the rate of transmission of the virus is \beta. The total rate of decrease of S is the product of \beta and SI/N. The rate of change of I is given by the increase due to interactions with S and the decrease due to recovery/death \sigma I. These terms all have units of person per day. Once you understand the basic principles of constructing differential equations, you can model anything, which is what I like to do. For example, I modeled the temperature dynamics of my house this winter and used it to optimize my thermostat settings. In a post from a long time ago, I used it to model how best to boil water.

Given the SIR model, you can solve the equations to find how I and S will change in time. The SIR model is a system of nonlinear differential equations that does not have what is called a closed-form solution, meaning you can’t write down that I(t) is some nice function like e^{t} or \sin(t). However, you can solve it numerically on a computer or infer properties of the dynamics directly without actually solving it. For example, if \beta SI/N is initially greater than \sigma I, then dI/dt is positive and thus I will increase with time. On the other hand, since dS/dt is always negative, S will decrease in time. As I increases and S decreases, and since \sigma I slows the growth of I, at some point \beta SI/N will equal \sigma I and dI/dt=0. This is a stationary point of I. However, it is only a momentary stationary point because S keeps decreasing, which will make I start to decrease too, and thus this stationary point is a maximum. In the SIR model, the stationary point is given by the condition
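As an aside, here is what a numerical solution looks like (a minimal Euler-integration sketch in Python; the parameter values are just illustrative, chosen to give the R_0 of 3.3 discussed below):

# Simple Euler integration of the SIR model
N = 1_000_000             # population size
beta, sigma = 0.33, 0.1   # infection and recovery rates per day, so R0 = beta/sigma = 3.3
S, I = N - 100.0, 100.0   # start with 100 infected
dt = 0.1                  # time step in days

peak_I, peak_day, S_at_peak = 0.0, 0.0, S
for step in range(int(365 / dt)):
    dS = -beta * S * I / N
    dI = beta * S * I / N - sigma * I
    S += dS * dt
    I += dI * dt
    if I > peak_I:
        peak_I, peak_day, S_at_peak = I, (step + 1) * dt, S

print(f"peak number infected: {peak_I:.0f} on day {peak_day:.0f}")
print(f"S/N at the peak: {S_at_peak / N:.2f} (compare 1/R0 = {sigma / beta:.2f})")

The simulated S/N at the peak comes out very close to \sigma/\beta, which is the stationary condition derived next.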

\frac{dI}{dt} = 0 = \frac{\beta}{N}SI -\sigma I (Stationary condition)

which you can solve to get either I = 0 or \beta S/N = \sigma. The I=0 point is not a peak but just reflects the fact that there is no epidemic if there are no infections. The other condition gives the peak:

\frac{S^*}{N} = \frac{\sigma}{\beta} \equiv \frac{1}{R_0}

where R_0 is the now-famous R naught or initial reproduction number. It is the average number of people infected by a single infected person; since \beta is the infection rate and \sigma is the infection disappearance rate, the ratio is a dimensionless number. The stationary condition gives the herd immunity threshold. When the fraction of susceptibles reaches S^*/N = 1/R_0, the pandemic will begin to decline. This is usually expressed as the fraction of those infected and no longer susceptible, 1 - 1/R_0. The 70% number you have heard is because 1 - 1/R_0 is approximately 70% for R_0 = 3.3, the presumed value for Covid-19.
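As a quick illustration of how sensitive this threshold is to the value of R_0 (the alternative values here are just for comparison, not estimates for Covid-19):

# Herd immunity threshold 1 - 1/R0 for a few values of R0
for R0 in (2.0, 2.5, 3.3, 3.7):
    print(f"R0 = {R0}: herd immunity threshold = {1 - 1 / R0:.0%}")   # 50%, 60%, 70%, 73%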

A plateau in the number of new cases per day is an indication that we are at a stationary point in I. This is because only a fraction of the total infected are counted as cases, and if we assume that the case detection rate is uniform across all I, then the number of new cases per day is proportional to I. Thus, a plateau in cases means we are at a stationary point in I, which we saw above only occurs at a single instant in time. One resolution to this paradox would be if the peak is broad enough that it looks like a plateau. We can compute how broad the peak is from the second derivative, which gives the rate of change of the rate of change. This is the curvature of the peak. Taking the second derivative of the I equation in the SIR model gives

\frac{d^2I}{dt^2} = \frac{\beta}{N} (\frac{dS}{dt} I + S\frac{dI}{dt}) - \sigma \frac{dI}{dt}

Using dI/dt=0 and the formula for S^* at the peak, the curvature is

\frac{d^2I}{dt^2} = \frac{\beta}{N} \frac{dS}{dt} I =-\left( \frac{\beta}{N}\right)^2 S^* I^2 =- \frac{I^2\beta^2}{NR_0}

It is negative because at a peak the slope is decreasing. (A hill gets easier to climb as you round the top.) There could be an apparent plateau if the curvature is very small, which is true if I^2 \beta^2 is small compared to N R_0. However, this would also mean we are already at the herd immunity threshold, which our paper and recent antibody surveys predict to be unlikely given what we know about R_0.

If a broad peak at the herd immunity threshold does not explain the plateau in new global daily cases then what does? Cochrane’s theory is that \beta depends on I. He postulated that \beta = \beta_0 e^{-\alpha I/N}, where \beta_0 is the initial infectivity rate, but any decreasing function will do. When I goes up, \beta goes down. Cochrane attributes this to human behavior but it could also be a result of policy and government mandate. If you plug this into the stationary condition you get

\frac{\beta_0}{N} S^* e^{-\alpha I^*/N} -\sigma = 0

or

I^* =-\frac{N}{\alpha} \log(\frac{N\sigma}{S^*\beta_0})

and the effective reproduction number is one.

However, this is still only a quasi-stationary state because if I is a constant I^*, then S will decrease as dS/dt = -\frac{\beta}{N}SI^*, which has solution

S(t) = N e^{-(\beta I^*/N) t}      (S)

Plugging this into the equation for  I^* gives

I^* =-\frac{N}{\alpha} \log(\frac{\sigma}{\beta_0}e^{(\beta I^*/N) t}) = \frac{N}{\alpha}\log R_0 - \frac{\beta I^*}{\alpha} t

which means that I is not really plateaued but is decreasing slowly as

I^* = \frac{N}{\alpha+\beta t}\log R_0
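This slowly decaying plateau shows up in a quick numerical experiment with Cochrane’s feedback rule (again a minimal Euler sketch; only the functional form \beta = \beta_0 e^{-\alpha I/N} comes from the argument above, and the value of \alpha is made up):

import math

# SIR with behavior-dependent transmission: beta falls as prevalence rises
N = 1_000_000
beta0, sigma, alpha = 0.33, 0.1, 200.0   # alpha sets how strongly people react to prevalence
S, I = N - 100.0, 100.0
dt = 0.1

for step in range(int(400 / dt)):
    beta = beta0 * math.exp(-alpha * I / N)
    dS = -beta * S * I / N
    dI = beta * S * I / N - sigma * I
    S += dS * dt
    I += dI * dt
    if (step + 1) % 1000 == 0:   # report every 100 days
        day = (step + 1) * dt
        print(f"day {day:.0f}: I = {I:.0f}, beta = {beta:.3f}")

The number of infected rises quickly, then hovers at a slowly declining plateau, while \beta sits just above \sigma and creeps upward as S is depleted, which is the behavior worked out in the next paragraph.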

We can establish perfect conditions for a plateau if we work backwards. Suppose again that I has plateaued at I^*.  Then, S(t) is given by equation (S).  Substituting this into the (Stationary Condition) above then gives 0 = \beta(t)  e^{-(\beta I^*/N) t} -\sigma or

\beta(t) = \sigma e^{(\beta I^*/N) t}

which means that the global plateau is due to us first reducing \beta to near \sigma, which halted the spread locally, and then gradually relaxing pandemic mitigation measures so that \beta(t) is creeping upwards back to its original value.

The Covid-19 plateau is both good news and bad news. It is good news because we are not seeing exponential growth of the pandemic. Globally, it is more or less contained. The bad news is that at a hundred thousand new cases per day, it will take a long time before we reach herd immunity. If we make the naive assumption that we won’t reach herd immunity until 5 billion people are infected, then this pandemic could blunder along for 5\times 10^9/10^5 = 5\times 10^4 = 50000 days! In other words, the pandemic will keep circling the world forever, since over that time span babies will be born and grow up. Most likely, it will become less virulent and will just join the panoply of diseases we currently live with, like the various varieties of the common cold (some of which are also caused by coronaviruses) and the flu.

New paper in Molecular Psychiatry

Genomic analysis of diet composition finds novel loci and associations with health and lifestyle

S. Fleur W. Meddens, et al.

Abstract

We conducted genome-wide association studies (GWAS) of relative intake from the macronutrients fat, protein, carbohydrates, and sugar in over 235,000 individuals of European ancestries. We identified 21 unique, approximately independent lead SNPs. Fourteen lead SNPs are uniquely associated with one macronutrient at genome-wide significance (P < 5 × 10^-8), while five of the 21 lead SNPs reach suggestive significance (P < 1 × 10^-5) for at least one other macronutrient. While the phenotypes are genetically correlated, each phenotype carries a partially unique genetic architecture. Relative protein intake exhibits the strongest relationships with poor health, including positive genetic associations with obesity, type 2 diabetes, and heart disease (r_g ≈ 0.15–0.5). In contrast, relative carbohydrate and sugar intake have negative genetic correlations with waist circumference, waist-hip ratio, and neighborhood deprivation (|r_g| ≈ 0.1–0.3) and positive genetic correlations with physical activity (r_g ≈ 0.1 and 0.2). Relative fat intake has no consistent pattern of genetic correlations with poor health but has a negative genetic correlation with educational attainment (r_g ≈ −0.1). Although our analyses do not allow us to draw causal conclusions, we find no evidence of negative health consequences associated with relative carbohydrate, sugar, or fat intake. However, our results are consistent with the hypothesis that relative protein intake plays a role in the etiology of metabolic dysfunction.

A Covid-19 Manhattan Project

Right now there are hundreds if not thousands of Covid-19 models floating out there. Some are better than others, some have much more influence than others, and the ones that have the most influence are not necessarily the best. There must be a better way of doing this. The world’s greatest minds convened in Los Alamos in WWII and built two atomic bombs. Metaphors get thrown around with reckless abandon, but if there ever was a time for a Manhattan project, we need one now. Currently, the world’s scientific community has mobilized to come up with models to predict the spread and effectiveness of mitigation efforts, to produce new therapeutics, and to develop new vaccines. But this is mostly going on independently.

Would it be better if we were to coordinate all of this activity? Right now at the NIH, there is an intense effort to compile all the research that is being done in the NIH Intramural Program and to develop a system where people can share reagents and materials. There are other coordination efforts going on worldwide as well. This website contains a list of open source computational resources. This link gives a list of data scientists who have banded together. But I think we need a worldwide plan if we are ever to return to normal activity. Even if some nation has eliminated the virus completely within its borders, there is always a chance of reinfection from outside.

In terms of models, they seem to have very weak predictive ability. This is probably because they are all overfit. We don’t really understand all the mechanisms of SARS-CoV-2 propagation. The case and death curves are pretty simple, and as von Neumann or Ulam or someone once said, “give me 4 parameters and I can fit an elephant, give me 5 and I can make its tail wiggle.” Almost any model can fit the curve, but to make a projection into the future you need to get the dynamics correct, and this, I claim, we have not done. What I am thinking of proposing is to have the equivalent of FDA approval for predictive models. However, instead of a binary decision of approval or non-approval, people could submit their models for a predictive score based on some cross-validation scheme or prediction on a held-out set. You could also submit as many times as you wish to have your score updated. We could then pool all the models and produce a global Bayesian model-averaged prediction and see if that does better. Let me know if you wish to participate or if you have ideas on how to do this better.
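To make the scoring and pooling idea concrete, here is a toy sketch (the models, the held-out data, and the exponential weighting rule are all invented for illustration; a real version would use proper epidemic forecasts, probabilistic scores, and a genuinely held-out evaluation period):

import math

# Invented 'observed' daily new cases for a held-out week and three toy model predictions
observed = [100, 110, 125, 140, 150, 165, 180]
predictions = {
    "model_A": [95, 112, 120, 138, 155, 160, 185],
    "model_B": [120, 125, 130, 135, 140, 145, 150],
    "model_C": [80, 90, 100, 110, 120, 130, 140],
}

# Score each model by mean squared error on the held-out week,
# then weight it by exp(-MSE / scale) to form a crude Bayesian-style model average
scale = 100.0
weights = {}
for name, pred in predictions.items():
    mse = sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(observed)
    weights[name] = math.exp(-mse / scale)

total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}
print("model weights:", {name: round(w, 3) for name, w in weights.items()})

# Pooled forecast for the next day: a weighted average of each model's next-day prediction
next_day = {"model_A": 195, "model_B": 155, "model_C": 150}
pooled = sum(weights[name] * next_day[name] for name in next_day)
print(f"pooled next-day prediction: {pooled:.0f}")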