Among the myriad problems we are having with the COVID-19 pandemic, testing speed is one we could actually improve. The standard test for the presence of the SARS-CoV-2 virus uses PCR (polymerase chain reaction), which amplifies targeted viral RNA. It is accurate (high specificity) but requires relatively expensive equipment and reagents that are currently in short supply. There are reports of wait times of over a week, which renders a test useless for contact tracing.
An alternative to PCR is an antigen test, which checks for the presence of protein fragments associated with the SARS-CoV-2 virus. These tests can in principle be very cheap and fast, and could even be administered on paper strips. They are generally much less reliable than PCR and thus have not been widely adopted. However, as I show below, by applying the test multiple times the noise can be suppressed and a poor test can be made arbitrarily good.
The performance of a binary test is usually gauged by two quantities: sensitivity and specificity. Sensitivity is the probability that you test positive given that you actually are infected (the true positive rate). Specificity is the probability that you test negative given that you actually are not infected (the true negative rate). For a pandemic, sensitivity is more important than specificity because missing someone who is infected means you could put lots of people at risk, while a false positive just means the person falsely testing positive is inconvenienced (provided they cooperatively self-isolate). Current PCR tests have very high specificity but relatively low sensitivity (as low as 0.7), and since we don't have enough capacity to retest, a lot of infected people who do get tested could be escaping detection.
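To put a number on that, here is a back-of-the-envelope illustration in Julia (the 0.7 is the sensitivity quoted above; the 1000-person cohort is just for concreteness):

sensitivity = 0.7                    # single-test sensitivity quoted above
miss_probability = 1 - sensitivity   # probability an infected person tests negative
expected_missed = 1000 * miss_probability
println(expected_missed)             # 300.0: roughly 300 of 1000 infected people missed by a single test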
The way to make any test have arbitrarily high sensitivity and specificity is to apply it multiple times and take some sort of average. However, you want to do this with the fewest number of applications. Suppose we administer n tests on the same subject, where the single test has sensitivity q and specificity r. The probability of getting more than k positive tests if the person is positive is Q(k, n, q) = 1 - F(k; n, q), where F(k; n, p) is the cumulative distribution function of the Binomial distribution with n trials and success probability p (i.e. the probability that the number of Binomially distributed events is less than or equal to k). If the person is negative, then the probability of k or fewer positives is R(k, n, r) = F(k; n, 1 - r). We thus want to find the minimal n given a desired sensitivity and specificity, q' and r'. This means that we need to solve the constrained optimization problem: find the minimal n such that, for some 0 <= k < n, Q(k, n, q) >= q' and R(k, n, r) >= r'. Q decreases and R increases with increasing k, and vice versa for n. We can easily solve this problem by sequentially increasing n and scanning through k until the two constraints are met. I've included the Julia code to do this below. For example, starting with a test with sensitivity 0.7 and specificity 1 (like a PCR test), you can create a new test with greater than 0.95 sensitivity and specificity by administering the test 3 times and looking for at least one positive. However, if the specificity drops to 0.7, then you would need to find more than 8 positives out of 17 applications to be 95% sure you have COVID-19.
using Distributions

# Probability of more than k positive tests out of n when the subject is truly
# positive, given single-test sensitivity q
function Q(k, n, q)
    d = Binomial(n, q)
    return 1 - cdf(d, k)
end

# Probability of k or fewer positive tests out of n when the subject is truly
# negative, given single-test specificity r
function R(k, n, r)
    d = Binomial(n, 1 - r)
    return cdf(d, k)
end

# Find the minimal number of applications n (and threshold k) such that calling
# the subject positive when more than k of the n tests are positive achieves
# sensitivity >= qp and specificity >= rp
function optimizetest(q, r, qp=0.95, rp=0.95)
    nout = 0
    kout = 0
    for n in 1:100
        for k in 0:n-1
            println(R(k, n, r), " ", Q(k, n, q))
            if R(k, n, r) >= rp && Q(k, n, q) >= qp
                kout = k
                nout = n
                break
            end
        end
        if nout > 0
            break
        end
    end
    return nout, kout
end
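As a quick sanity check, calling the function on the two cases described above should reproduce those numbers (the println in the inner loop also prints the intermediate R and Q values along the way):

nout, kout = optimizetest(0.7, 1.0)   # PCR-like: sensitivity 0.7, specificity 1
# returns (3, 0): 3 tests, positive if more than 0 (i.e. at least 1) are positive

nout, kout = optimizetest(0.7, 0.7)   # specificity drops to 0.7
# returns (17, 8): 17 tests, positive if more than 8 are positive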
This reminds me of many similar examples, ranging from the Ehrenfest urn model (used to prove the 2nd law of thermodynamics), to ones which try to figure out how many times you have to flip a coin to tell if it's 'fair', or how many times you have to flip a coin before you will likely get some particular sequence of heads and tails, to one by a U Wash statistician relating the probability it will rain to whether you should carry an umbrella, which also considered whether the rain would be uniformly distributed throughout the day and geographically.
(It's known for DC and the world that deaths from COVID are not uniformly distributed over population groups and regions.)
I also like the idea of testing 1 person many times and comparing it with testing many people at one time—in analogy to statistical mechanics and ergodic theory. Does your ensemble average equal your time average? Do you get the same result flipping 1 coin many times, versus flipping many coins one time?
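As a toy sketch of that question in Julia (assuming fair, independent flips), the time average from one coin flipped many times and the ensemble average from many coins each flipped once should agree:

using Random, Statistics

Random.seed!(1)
p = 0.5          # a fair coin
N = 100_000

flips_one_coin = [rand() < p for _ in 1:N]    # one coin flipped N times
flips_many_coins = [rand() < p for _ in 1:N]  # N coins, each flipped once

println(mean(flips_one_coin), " ", mean(flips_many_coins))   # both ≈ 0.5 when flips are iid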
I guess many people would call these examples of Bayesian reasoning. (I had never heard of Bayes until I saw E. T. Jaynes's papers on this, and Bayes apparently preceded Boltzmann by decades.)
I actually sometimes try to translate these arguments into just arithmetic (whole numbers) so anyone can get the basic idea. I guess agent-based models, e.g. Schelling's on segregation or 'Sugarscape' by Axtell and others, may be easier to understand as visualizations.
I've tried to do this also for issues like UBI and reparations, and MMT (modern monetary theory, a sort of competitor to UBI, somewhat associated with the 'green new deal' and guaranteed jobs).
It seems to me the real issue is whether there is effectively going to be herd immunity (or just a continual baseline of COVID, with people having to adjust to wearing masks as they do clothes) or whether quarantining etc. can effectively isolate COVID. (The same issue may apply to the BLM and other protests: can they be contained, or will they be 'the new normal', or lead to a 'social bifurcation'?)
Here’s another perspective arguing for the same thing (I think) —
I love this idea (and wrote about it in March :D).
But do we know that test errors are approximately iid?
I can imagine that things like previous exposure to morphologically 'similar enough' coronaviruses could yield some level of false positives from cross-reactivity, i.e. correlated errors for sequential tests of an individual.
I can imagine false negatives coming from bad reagents or processing, i.e. correlated errors for everybody tested in a lab doing flawed tests.
No idea if either of these effects exist or are significant. Just thinking about how things could be non-iid and scupper or at least mute the benefits of re-testing to make a bad test good.
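Here is a minimal Monte Carlo sketch of that worry in Julia, using a purely made-up correlated-error model (a hypothetical fraction rho of infected subjects whose tests all come back negative, e.g. from persistently bad sampling) applied to the 17-test, more-than-8-positives rule from the post; the function name and numbers are illustrative only:

using Distributions, Random

# Sensitivity of the rule "positive if more than k of n tests are positive" for an
# infected subject; a fraction rho of subjects have all n tests miss (correlated
# errors), while the rest have iid errors. rho = 0 recovers the iid case.
function boosted_sensitivity(q, n, k; rho=0.0, trials=100_000)
    hits = 0
    for _ in 1:trials
        positives = rand() < rho ? 0 : rand(Binomial(n, q))
        hits += (positives > k)
    end
    return hits / trials
end

Random.seed!(1)
println(boosted_sensitivity(0.7, 17, 8))            # ≈ 0.96 with iid errors
println(boosted_sensitivity(0.7, 17, 8, rho=0.1))   # ≈ 0.87: correlated misses cap the gain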
Does anyone have access to anonymized data that could help answer the question?
Well, there is no defense against flawed testing, no matter what the test. As for whether the tests are independent, I don't know. It's not clear to me, nor to anyone I've asked, what all the sources of the fluctuations are. The cross-reactivity problem is a technology problem and should be solvable, since it was solvable for the antibody test.
Agreed re. flawed tests and that cross-reactivity is solvable. Just ‘wondering aloud’ whether the assumption of iid errors has been checked, even roughly. I’d guess that any serial dependence in testing error would be small enough to allow repeated testing to boost net accuracy a lot, but I don’t understand the details well enough to have confidence in my guess.
They are evaluating antigen tests at NIH. There does seem to be a dose-dependence effect, in that antigen tests have a floor of detection at some PCR CT (cycle threshold).