Today’s New York Times has a poignant article about the cold side of randomized clinical trials. It describes the case of two cousins with melanoma and a promising new drug to treat it. One cousin was given the drug and is still living, while the other was assigned to the control arm of the trial and is now dead. The new treatment seems to work better, but the drug company and trial investigators want to complete the trial to prove that it actually extends life, which means that the control arm patients need to die before the treatment arm patients do.
Ever since my work on modeling sepsis a decade ago, I have felt that we need to come up with a new paradigm for testing the efficacy of treatments. Aside from the ethical concern of depriving a patient of a treatment just to get better statistics, I felt that we would hit a combinatorial limit where it would be physically impossible to test a new generation of treatments. Currently, a drug is tested in three phases before it is approved for use. Phase I is a small trial that tests the safety of the drug in humans. Phase II then tests the efficacy of the drug in a larger group. If the drug passes these two phases, then it goes to Phase III, which is a randomized clinical trial with many patients at multiple centers. It takes a long time and a lot of money to make it through all of these stages.
The reason we even need clinical trials is that people and conditions are not all the same. Thus, we need to test on a lot of people to see if a drug actually works. The size of a trial is set by weighing the effect of the treatment against the variability of the patient pool. The uncertainty can be reduced by enlarging the patient pool, but it only shrinks with the square root of the number of patients. Variability could also be reduced by selecting subjects that are more similar. This has been attempted recently with race-based drugs such as BiDil for heart disease in African Americans. Ultimately, the goal is to classify people according to genotype, although that will bring up a whole host of ethical issues.
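The square-root scaling is easy to see from the standard error of the mean: to halve the uncertainty in the estimated treatment effect you need four times as many patients. Here is a minimal sketch (the outcome standard deviation of 10 is just an illustrative number, not from any real trial):

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean outcome for n patients whose
    individual outcomes have standard deviation sigma."""
    return sigma / math.sqrt(n)

# Hypothetical outcome variability, in arbitrary units.
sigma = 10.0

se_100 = standard_error(sigma, 100)  # 1.0
se_400 = standard_error(sigma, 400)  # 0.5: 4x the patients, only 2x the precision
```

This diminishing return is exactly why trials get so large and expensive once the treatment effect is modest relative to patient-to-patient variability.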
To ensure that conditions are as similar as possible, we also insist that the randomized clinical trial tests the treatment against the control in the same study. I think this is something we should think more about. We should probably design better experimental and statistical methods so that we can compare results across different trials and even eras. Instead of running a control arm with an existing treatment, as in the melanoma example in the New York Times article, perhaps it should be sufficient to compare against the results obtained when that previous treatment was itself tested, or in subsequent trials since.
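The simplest version of such a cross-trial comparison is to treat the earlier trial's arm as a historical control and test whether the new response rate differs from it. A minimal sketch using a two-proportion z-test (the patient counts and response rates below are invented for illustration; a serious method would also have to adjust for drift in patient populations and standards of care between eras):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in response rates between a
    current treatment arm and a historical control arm."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 60/100 responders on the new drug vs. 40/100 in an
# older trial of the standard treatment.
z, p = two_proportion_z(60, 100, 40, 100)  # z ≈ 2.83, p ≈ 0.005
```

Of course, the hard part is not the arithmetic but establishing that the two patient pools are comparable, which is precisely the kind of methodological work this would require.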
However, the really big problem I see is that many treatments in the future will require complex combinations of drugs, either existing or novel. The lack of progress in treating chronic diseases like diabetes and arthritis is probably due to the fact that they involve the maladjustment or misalignment of many different simultaneous biological pathways. Any treatment will require slight modifications to many different pathways, as opposed to the sledge-hammer approach of banging on a single receptor as we do now. This will then require multiple drugs with carefully adjusted doses. Aside from the difficulty of finding these combinations, how will we test them? Suppose each drug is detrimental on its own and only works in combination. Suppose a novel combination of drugs was found to work in an animal model but the dose of the combination was crucial and variable. How would we test that in humans? We can’t try all combinations on everyone. These are issues that require completely new ways of thinking about treatments.
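The combinatorial limit can be made concrete with a back-of-the-envelope count. With even modest, assumed numbers (20 candidate drugs, 3-drug cocktails, 4 dose levels per drug), the number of distinct regimens already dwarfs what any trial infrastructure could enroll:

```python
from math import comb

def combination_regimens(n_drugs, k, dose_levels):
    """Number of distinct k-drug cocktails drawn from n_drugs
    candidates, with each drug tested at dose_levels doses."""
    return comb(n_drugs, k) * dose_levels ** k

# Illustrative numbers only: 20 drugs, 3 per cocktail, 4 doses each.
n_regimens = combination_regimens(20, 3, 4)  # 1140 cocktails x 64 dosings = 72960
```

At a few hundred patients per arm, testing all of these exhaustively would require tens of millions of patients, which is why exhaustive randomized testing cannot be the paradigm for combination therapies.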