How to make a fast but bad COVID-19 test good

Among the myriad problems we are having with the COVID-19 pandemic, testing speed is one we could actually improve. The standard test for the presence of the SARS-CoV-2 virus uses PCR (polymerase chain reaction), which amplifies targeted viral RNA. It is accurate (high specificity) but requires relatively expensive equipment and reagents that are currently in short supply. There are reports of wait times of over a week, which renders a test useless for contact tracing.

An alternative to PCR is an antigen test, which tests for the presence of protein fragments associated with COVID-19. These tests can in principle be very cheap and fast, and could even be administered on paper strips. They are generally much less reliable than PCR and thus have not been widely adopted. However, as I show below, by applying the test multiple times the noise can be suppressed and a poor test can be made arbitrarily good.

The performance of a binary test is usually gauged by two quantities: sensitivity and specificity. Sensitivity is the probability that you test positive given that you actually are infected (the true positive rate). Specificity is the probability that you test negative given that you actually are not infected (the true negative rate). For a pandemic, sensitivity is more important than specificity because missing someone who is infected means you could put lots of people at risk, while a false positive just means the person falsely testing positive is inconvenienced (provided they cooperatively self-isolate). Current PCR tests have very high specificity but relatively low sensitivity (as low as 0.7), and since we don't have enough capacity to retest, a lot of infected people who are tested could be escaping detection.

The way to make any test have arbitrarily high sensitivity and specificity is to apply it multiple times and take some sort of average. However, you want to do this with the fewest number of applications. Suppose we administer n tests on the same subject. The probability of getting more than k positive tests if the person is infected is Q(k,n,q) = 1 - CDF(k|n,q), where q is the sensitivity and CDF is the cumulative distribution function of the Binomial distribution (i.e. the probability that the number of Binomially distributed events is less than or equal to k). If the person is not infected, then the probability of k or fewer positives is R(k,n,r) = CDF(k|n,1-r), where r is the specificity. We thus want to find the minimal n that achieves a desired sensitivity and specificity, q' and r'. This means that we need to solve a constrained optimization problem: find the minimal n such that there exists a k < n with Q(k,n,q) ≥ q' and R(k,n,r) ≥ r'. Q decreases and R increases with increasing k, and vice versa for n. We can easily solve this problem by sequentially increasing n and scanning through k until the two constraints are met. I've included the Julia code to do this below. For example, starting with a test with sensitivity 0.7 and specificity 1 (like a PCR test), you can create a new test with greater than 0.95 sensitivity and specificity by administering the test 3 times and looking for a single positive test. However, if the specificity drops to 0.7, then you would need to find more than 8 positives out of 17 applications to be 95% sure you have COVID-19.

 

using Distributions

# Probability of more than k positives out of n tests if the subject is infected
# (q is the sensitivity of a single test)
function Q(k, n, q)
    d = Binomial(n, q)
    return 1 - cdf(d, k)
end

# Probability of k or fewer positives out of n tests if the subject is not infected
# (r is the specificity of a single test)
function R(k, n, r)
    d = Binomial(n, 1 - r)
    return cdf(d, k)
end

# Scan n upward, and for each n scan k, until both constraints are satisfied
function optimizetest(q, r, qp=0.95, rp=0.95)
    nout = 0
    kout = 0
    for n in 1:100
        for k in 0:n-1
            println(R(k, n, r), " ", Q(k, n, q))
            if R(k, n, r) >= rp && Q(k, n, q) >= qp
                kout = k
                nout = n
                break
            end
        end
        if nout > 0
            break
        end
    end
    return nout, kout
end
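For example, these two calls just rerun the cases quoted in the text above (the println diagnostic output is omitted here):

optimizetest(0.7, 1.0)  # PCR-like test: returns (3, 0), i.e. 3 applications, look for more than 0 positives
optimizetest(0.7, 0.7)  # noisy antigen-like test: returns (17, 8), i.e. 17 applications, more than 8 positives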

Talk at SIAM Annual Meeting 2017

I gave an invited plenary talk at the 2017 Society for Industrial and Applied Mathematics (SIAM) Annual Meeting in Pittsburgh yesterday. My slides are here. I talked about some very new work on chaos and learning in spiking neural networks. My fellow Chris Kim and I were producing graphs up to a half hour before my talk! I'm quite excited about this work and I hope to get it published soon.

During my talk, I made an offhand threat that my current Mac would be the last one I buy. I made the joke because it was the first time I could not connect to a projector with my laptop since I started using Macs almost 20 years ago. I switched to Mac from Linux back then because it was a Unix environment where I didn't need to be a systems administrator to print and project. However, Linux has made major headway in the past two decades while the Mac is backsliding. So, I'm seriously thinking of following through. I've been slowly getting disenchanted with Apple products over the past three years, but I am especially disappointed with my new MacBook Pro. I have the one with the silly touch screen bar. The first thing that peeves me is that the "activate Siri" key is right next to the delete key, so I accidentally hit it and then have to dismiss Siri every five minutes. What mostly ties me to the Mac right now is the Keynote presentation software, which I like(d) because it is easy to embed formulas and PDF files into. It is much harder to do the same in PowerPoint and I haven't found an open source version that is as easy to use. However, Keynote keeps hanging on my new machine. I also get this situation where my embedded equations will randomly disappear and then reappear. Luckily, I did a quick run-through just before my talk and noticed that the vanished equations had reappeared, so I could delete them. Thus, the Keynote appeal has definitely diminished. Now, if someone would like to start an open source Keynote project with me… Finally, the new Mac does not seem any faster than my old Mac (it still takes forever to boot up), and Bard Ermentrout told me that his dynamical systems software tool XPP runs five times slower. So, any suggestions for a new machine?

Proving I’m me

I have an extremely difficult time remembering the answers to my security questions for restoring forgotten passwords. I don't have an invariant favourite movie, or book, or colour. I have many best friends from childhood, and their names come in various permutations. Did I use their first name, nickname, or full name? Even my mother's maiden name can be problematic because there are various ways to transliterate Chinese names and I don't always remember which I used. The city where I met my wife is ambiguous. Did I use the specific town per se or the major city the town is next to? Did I include the model of my first car or just the make? Before I can work my way through the various permutations, I'm usually locked out of my account forever.

As much as I appreciate and rely on computers, software, and the internet, objectively they all still suck. My iPhone is perhaps better than the alternative but it sucks. My laptop sucks. Apple makes awful products. Google, Amazon, Uber and the rest are not so great either. I can't count the number of times Google Maps has steered me wrong. The tech landscape may be saturated but there is definitely room for something better.

R vs Matlab

It has now been almost half a year since I switched from Matlab to open source software, and I've been amazed at how easy the transition has been. I had planned to replace Matlab with Python, Julia, and R, but I have found that R and Julia have been sufficient for my requirements. Maxima is also more than adequate to replace Mathematica. I have become particularly attached to R, especially after I started to use RStudio as the interface. I had only used R before as a statistics tool, but it really is a complete programming platform like Matlab. It has very nice graphics capabilities, and I find the language very nice to program in. I really like its lists, into which I can pile data of any type and any size as one object. I also like how RStudio can save your work into Projects, which keeps the whole environment and history in one place. I can then switch between multiple projects and everything comes back. The only thing I miss from Matlab is the command completion history feature, where I could easily find a previous command by just typing the first few letters. Also, I haven't quite figured out how to output R data into a text file seamlessly yet; I seem to always get extraneous row or column information. I use Julia when I want to write code that needs to loop fast, but for everything else I've been reaching for R.

Paper on new version of Plink

The paper describing the updated version of the genome analysis software tool Plink has just been published.

Second-generation PLINK: rising to the challenge of larger and richer datasets
Christopher C Chang, Carson C Chow, Laurent CAM Tellier, Shashaank Vattikuti, Shaun M Purcell, and James J Lee

GigaScience 2015, 4:7  doi:10.1186/s13742-015-0047-8

Abstract
Background
PLINK 1 is a widely used open-source C/C++ toolset for genome-wide association studies (GWAS) and research in population genetics. However, the steady accumulation of data from imputation and whole-genome sequencing studies has exposed a strong need for faster and scalable implementations of key functions, such as logistic regression, linkage disequilibrium estimation, and genomic distance evaluation. In addition, GWAS and population-genetic data now frequently contain genotype likelihoods, phase information, and/or multiallelic variants, none of which can be represented by PLINK 1’s primary data format.

Findings
To address these issues, we are developing a second-generation codebase for PLINK. The first major release from this codebase, PLINK 1.9, introduces extensive use of bit-level parallelism, O(√n)-time/constant-space Hardy-Weinberg equilibrium and Fisher’s exact tests, and many other algorithmic improvements. In combination, these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM. We have also developed an extension to the data format which adds low-overhead support for genotype likelihoods, phase, multiallelic variants, and reference vs. alternate alleles, which is the basis of our planned second release (PLINK 2.0).

Conclusions
The second-generation versions of PLINK will offer dramatic improvements in performance and compatibility. For the first time, users without access to high-end computing resources can perform several essential analyses of the feature-rich and very large genetic datasets coming into use.

Keywords: GWAS; Population genetics; Whole-genome sequencing; High-density SNP genotyping; Computational statistics

 

This project started out with us trying to do some genomic analysis that involved computing various distance metrics on sequence space. Programming virtuoso Chris Chang stepped in and decided to write some code to speed up the computations. His program, originally called wdist, was so good and fast that we kept asking him to put in more capabilities. Eventually, he had basically replicated the suite of functions that Plink performed, so he asked Shaun Purcell, the author of Plink, if he could just call his code Plink too, and Shaun agreed. We then ran a series of tests on various machines to check the speed-ups compared to the original Plink and gcta. If you do any GWAS analysis at all, I highly recommend you check out Plink 1.9.

Open source software for math and science

Here is a list of open source software that you may find useful. Some I use almost every day, some I have not yet used, and some may be so ubiquitous that you have forgotten they are even software.

1. XPP/XPPAUT. Bard Ermentrout wrote XPP in the 1980s as a dynamical systems tool for himself. It's now the de facto tool for the Snowbird community. I still find it to be the easiest and fastest way to simulate and visualize differential equations. It includes the equally excellent bifurcation continuation software tool AUTO, originally written by Eusebius Doedel with contributions from a who's who list of mathematicians. XPP is also available as an iPad and iPhone app.

2. Julia. I only learned about Julia this spring and now I use it for basically anything I used to use Matlab for. Its syntax is very similar to Matlab's and it's very fast. I think it is quickly gaining a large following and may be as comprehensive as Python some day.

3. Python often seems more like a way of life than a software tool. I would probably be using Python if it were not for Julia and the fact that Julia is faster. Python has packages for everything. There is SciPy and NumPy for scientific computing, Pandas for data analysis and statistics, Matplotlib for making graphs, and many more that I don't yet know about. I must confess that I still don't know my way around Python, but my fellows all use it.

4. R. For statistics, look no further than R, which is what academic statisticians use. It's big in Big Data. So big that I heard that Microsoft is planning to write a wrapper for it. I also heard that billionaire mathematician James Simons's hedge fund Renaissance Technologies uses it. For Bayesian inference there is now Stan, which implements Hamiltonian Monte Carlo. We tried using it for one of our projects and had trouble getting it to work, but it's improving very fast.

5. AMS-LaTeX. The great computer scientist Donald Knuth wrote the typesetting language TeX in 1978 and changed scientific publication forever. If you have ever struggled to put equations into MS Word, you'll realize what a genius Knuth is. Still, TeX was somewhat technical, and thus LaTeX was invented as a simplified interface for TeX with built-in environments that are commonly used. AMS-LaTeX is a form of LaTeX that includes commands for any mathematical symbol you'll ever need. It also has very nice equation and matrix alignment tools (see the short example after this list).

6. Maxima. Before Mathematica and Maple there was Macsyma. It was a symbolic mathematics system developed over many years at MIT starting in the 1960s. It was written in the programming language Lisp (another great open source tool, though I have never used it) and was licensed by MIT to a company called Symbolics that made dedicated Lisp machines that ran Macsyma. My thesis advisor at MIT bought one of these machines (I think it cost him something like 20 thousand dollars, which was a lot of money back then) and I used it for my thesis. I really loved Macsyma and got quite adept at it. However, as you can imagine, the Symbolics business plan really didn't pan out, and Macsyma kind of languished after the company failed. However, after many trials and tribulations, Macsyma was reborn as the open source software tool Maxima, and it's great. I've been running wxMaxima, and it can do everything that I ever needed Mathematica for, with the bonus that I don't have to find and re-enter my license number every few months.

7. OpenOffice. I find it reprehensible that scientific journals force me to submit my papers in Microsoft Word. But MS Office is a monopoly and all my collaborators use it. Data always comes to me in Excel and talks are in PowerPoint. For my talks, I use Apple Keynote, which is not open source. However, Apple likes to completely overhaul its software, so my old talks are not even compatible with the most recent version. I also dislike the current version. The reason I went to Keynote is that I could embed PDFs of equations made in LaTeXiT (donationware). However, the new version makes this less convenient. PDFs looked terrible in PowerPoint a decade ago; I have no idea whether this has changed. I have flirted with using OpenOffice for many years, but it was never quite 100% compatible with MS Office, so I could never fully dispense with Word. However, in my push to open source, I may just write my next talk in OpenOffice.

8. Plink. The standard GWAS analysis tool is Plink, originally written by Shaun Purcell. It's nice but kind of slow for some computations and was not being actively updated. It also couldn't do some of the calculations we wanted. So in steps my collaborator Chris Chang, who took it upon himself to write a software tool that could do all the calculations we needed. His code was so fast and good that we started to ask him to add more and more to it. Eventually, it did almost everything that Plink and gcta (a tool for estimating heritability) could do, and thus he asked Purcell if he could just call it Plink. It's currently called Plink 1.9.

9. C/C++. We tend to forget that computer languages like C, Java, JavaScript, Ruby, etc. are all open source software tools.

10. Inkscape is a very nice drawing program, an open source Adobe Illustrator if you will.

11. GNU Project. Computer scientist Richard Stallman more or less invented the concept of free software. He started the Free Software Foundation and the GNU Project, which includes GNU/Linux, the editor Emacs, and gnuplot, among many other things.
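Here is the AMS-LaTeX example promised in item 5: a minimal sketch using the amsmath align environment to line up a two-line derivation at the equals signs. The equations themselves are just filler for illustration.

\documentclass{article}
\usepackage{amsmath} % provides the AMS alignment environments
\begin{document}
% Each line is aligned at the & marker and numbered automatically.
\begin{align}
  f(x) &= (x+1)^2 \\
       &= x^2 + 2x + 1
\end{align}
\end{document}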

Probably the software tools you use most that are currently free (but may not be forever) are the browser and email. People forget how much these two ubiquitous things have completely changed our lives.  When was the last time you went to the library or wrote a letter in ink?

Code platform update

It's a week away from 2015, and I have transitioned completely away from Matlab. Julia is winning the platform attention battle. It is very easy to code in and it is very fast. I just haven't gotten around to learning Python, much less pyDSTool (sorry Rob). I kind of find the syntax of Python (with all the periods between words) annoying. Wally Xie and I have also been trying to implement some of our MCMC runs in Stan, but we have had trouble making it work. Our model requires integrating ODEs, and the ODE solutions from Stan (using our own solver) do not match our Julia code or the gold standard XPP. Maybe we are missing something obvious in the Stan syntax, but our code is really very simple. Thus, we are going back to doing our Bayesian posterior estimates in Julia. However, I do plan to revisit Stan if they (or we) can write a debugger for it.
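For concreteness, here is a minimal sketch of what a hand-rolled posterior estimate in Julia can look like: a generic random-walk Metropolis sampler for one parameter of a toy decay ODE. This is not our actual model; every function name, number, and prior below is made up for illustration.

# Toy forward-Euler solver for dx/dt = -a*x (stand-in for the real ODE model)
function solve_ode(a, x0, dt, nsteps)
    x = x0
    traj = zeros(nsteps)
    for i in 1:nsteps
        x += dt * (-a * x)
        traj[i] = x
    end
    return traj
end

# Gaussian log-likelihood (up to a constant) of the data given the decay rate a
function loglike(a, data, x0, dt, sigma)
    pred = solve_ode(a, x0, dt, length(data))
    return -sum((data .- pred).^2) / (2 * sigma^2)
end

# Random-walk Metropolis: propose a symmetric step, accept with the usual log ratio
function metropolis(data; x0=1.0, dt=0.1, sigma=0.1, niter=10_000, stepsize=0.05)
    a = 1.0                               # arbitrary starting value
    ll = loglike(a, data, x0, dt, sigma)
    chain = zeros(niter)
    for i in 1:niter
        aprop = a + stepsize * randn()    # symmetric random-walk proposal
        llprop = aprop > 0 ? loglike(aprop, data, x0, dt, sigma) : -Inf
        if log(rand()) < llprop - ll      # flat prior on a > 0
            a, ll = aprop, llprop
        end
        chain[i] = a
    end
    return chain
end

The accept/reject step only needs the log-likelihood difference, which is why it is easy to swap in whichever ODE solver you trust.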

The perils of Word

Many biology journals insist on receiving manuscripts in Microsoft Word prior to publication. Even though this probably violates some anti-trust law, I generally comply, to the point of painfully converting LaTeX manuscripts into Word on more than one occasion. Word is particularly unpleasant when writing papers with equations. Although the newer versions have a new equation editing system, I don't use it because once, in a past submission, a journal forced me to convert all the equations to the old equation editor system (the poor person's version of MathType). It worked reasonably well in Word versions before 2008 but has now become very buggy in Word 2011. For instance, when I double-click on an equation to edit it, a blank equation editor window also pops up, which I have to close in order for the one I want to work. Additionally, when I reopen papers saved in the .docx format, the equations lose their alignment. Instead of the center of the equation being aligned to the line, the base of the equation is aligned, making inline equations float above the line. Finally, a big problem is equation numbering. LaTeX has a nice system where you give each equation a label and it assigns the numbers automatically when you compile. This way you can insert and delete equations without having to renumber them all. Is there a way to do this in Word? Am I the only one with these problems? Are there workarounds?
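For anyone who hasn't seen the LaTeX system I mean, here is a minimal sketch; the label name and equation content are arbitrary.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Label the equation once; the number is assigned at compile time, and
% every \eqref updates automatically if equations are added or removed.
\begin{equation}
  E = mc^2 \label{eq:energy}
\end{equation}
Equation~\eqref{eq:energy} can now be cited by label anywhere in the text.
\end{document}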