I was reading an article about the controversial Dr. Oz this morning when a quote from a doctor struck a nerve. In reaction to Dr. Oz's embrace of alternative medicine, the doctor stated: "I'm guided by the evidence." That's a wonderful and comforting sentiment to any logical person. We have a methodology called science which helps us move towards the truth through a repeated, disciplined process of experimentation. This process allows us to build confidence in our opinions and actions when we have accumulated sufficient evidence or can appeal to previous authority. The problem is that evidence in medicine is rarely imbued with absolute authority, yet the dogma of medicine is that peer-reviewed journal results are the primary guide to treatment. Clinical trials should be viewed as the starting point in the practice of medicine, not the destination.
Scientific methodology was largely developed in the context of the physical sciences such as physics, chemistry, and materials science. In these branches of science, we can mostly control experimental conditions such that the test of a system in experiment A is nearly identical to the test of another system in experiment B. In medicine, however, the application of the scientific method is rarely so clean. Every person we experiment on is different, often profoundly so, and what works in one person may well fail in another.
To gain insight into what actions will help or hurt a person with a particular symptom or condition, we rely on statistics derived from analysis of cause and effect in large populations of similar people. To the extent that this property of similarity holds, these statistics can be useful information. However, given the heterogeneity of real populations, this evidence can only be a guide to action, not a prescription. The dominant approach to statistics in medicine is called "statistical hypothesis testing": we evaluate whether one group of people subject to a carefully controlled change (e.g. a drug) differs from a population not subject to that change. The goal of the test is to establish that the size of the observed difference was unlikely to happen by chance. This presumes that there are no "significant" differences between the two groups that would provide an alternative explanation for the effect. There are several consequences to this approach to generating knowledge that are not questioned often enough.
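The mechanics of such a test can be sketched in a few lines of code. This is a minimal illustration, not a real trial analysis: the group sizes, effect size, and noise level are hypothetical numbers chosen only to show how a difference in group means is weighed against chance variation (here via Welch's t-statistic).

```python
import math
import random

random.seed(0)

# Simulated outcomes for a hypothetical trial (illustrative units only):
# a control group with no drug, and a treated group whose mean response
# is shifted slightly upward.
control = [random.gauss(5.0, 10.0) for _ in range(200)]
treated = [random.gauss(8.0, 10.0) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Sample variance (n - 1 denominator).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    # Welch's t-statistic: the difference between group means, scaled
    # by the variability we would expect from chance alone.
    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

t = welch_t(treated, control)
# For large samples, |t| greater than roughly 1.96 corresponds to
# p < 0.05: the observed difference would be unlikely if the two
# groups were truly drawn from the same population.
print(f"t-statistic: {t:.2f}")
```

Note what the statistic does and does not say: it quantifies how surprising the *average* difference between groups is, and nothing more.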
The first is that most trials don't give us much insight into how a given individual will respond. We don't know who will have the side effects, who will benefit the most, or who will be harmed. Acting on the advice of a clinical trial is like rolling dice; the odds may be biased in your favor, but that is no guarantee for your specific case. The interesting part of medicine lies in understanding "how similar" we are, improving the odds, and knowing what to do when the dice don't go your way.
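A small simulation makes the dice-rolling point concrete. The numbers here are hypothetical, assuming a drug whose average benefit is genuinely positive but whose individual responses are widely spread: even then, a large fraction of individuals can end up worse off than without treatment.

```python
import random

random.seed(1)

# Hypothetical drug: average benefit of +3 units, but individual
# responses vary around that mean with a standard deviation of 10.
# (Illustrative numbers only, not from any real trial.)
responses = [random.gauss(3.0, 10.0) for _ in range(10_000)]

average = sum(responses) / len(responses)
harmed = sum(1 for r in responses if r < 0) / len(responses)

print(f"average effect: {average:+.2f}")  # positive on average
print(f"fraction harmed: {harmed:.0%}")   # yet many individuals lose
```

A trial reporting only the average would call this drug a success, while saying nothing about the roughly one-in-three patients who, in this toy model, respond negatively.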
Secondly, to get a legitimate result from statistical hypothesis testing, we need to carefully control for variables, blind the participants, and randomize over large populations to rule out other possible causes for changes in the population's average response. The costs of doing this kind of controlled population science can be staggering. As a consequence, there are huge financial forces at play that influence the social, industrial, financial, personal, and scientific tools we use to generate evidence; this influence can introduce distortions that were probably unimaginable to the early founders of the scientific method.
Commercial entities are utterly dependent on the success of trials and even government-funded or academic researchers are influenced by the need to be involved in successful research. For most researchers, keeping the grants flowing is a job requirement, and who is going to give grants (or tenure) to researchers who consistently run trials that show no effects? As a result, the scientific establishment tends to bias its investigations to research questions that are likely to have ‘yes’ answers and rarely invest the time and expense to report ‘no’ answers. Further, given the expense, the system can only afford to invest in research for which those ‘yes’ answers will result in large financial returns.
In practice, trials that result in a ‘no’ are often abandoned as not cost effective, yet the reporting of those null results could have a profound benefit for society as a whole. The combined effect of for-profit healthcare, grant processes, and the academic tenure system is profoundly damaging to our collective interests. Sadly, even the positive results we get may be suspect. This may be one area where market forces are actually counter-productive.
To the point, I was recently in a meeting with a doctor, biomedical researcher, and informatician who worked at the World Health Organization for several years helping develop a registry of clinical trials to combat exactly these problems of trial bias. She said the result of that experience convinced her that the entire system is so distorted and corrupted by money interests that almost nothing produced by the research community can be trusted. John Ioannidis has amassed evidence of this widespread distortion (especially read "Why most published research findings are false.") and provides recommendations to fix it.
The good news / bad news is that the apparatus of medical evidence generation we have relied on for the past 50 years is slowly crashing down around us. It is too expensive, too limited in scope, and too corrupt to survive. The quote in the Oz article says “guided by the evidence”, but what evidence should that be? We need to move to a model of medicine that embraces the complexity and nuance of the real world. Doctors need to be more scientific and evidence-generating in their day-to-day practice and less influenced by the pre-packaged evidence reported in journals. Even when we can trust journal-based evidence, it may not apply directly to the current situation and, as Dr. Oz argues, many of the things we want to try to improve health have no evidence at all since there is insufficient reward for research into “unprofitable” treatments (e.g. diet).
All of this ignores the fundamental and unfortunate fact that the bulk of society's investment in medical science is focused exclusively on the search for drugs to correct illness when the biggest killers of our time, stroke and heart disease, are largely social in nature (a turn of phrase I owe to Mark Hyman's story). The rise of these diseases is not fully understood, but is believed to be largely governed by changes in food supply, environment, work habits, and other behavior. It's unlikely to be a virus, genetic mutation, or other external cause. This observation should encourage us to instead make massive investments in exploring behavioral interventions and/or food policies (e.g. no more sugary drinks!) that can prevent and reverse these illnesses. This would have more impact on improving health and decreasing early mortality than any handful of drugs, and would likely cost less in aggregate than any one drug company's pipeline.
Fortunately, we don’t have to wait for the restructuring of institutional science to
start down this road. My work at New Media Medicine on Personal Experiments and collaborations with the C3N Project, Lybba, Open mHealth, and the Quantified Self crew have shown me a future where we are able to gather increasing evidence from individual patients, develop much richer personalized models of treatment effects, embrace therapies that treat the whole person (behavioral, psychological, environmental, and social as well as biochemical), and put population evidence from clinical trials into context using real-world experience working with groups of patients. In the future, the bulk of discovery about how to improve our lives may very well come from aggregation of practice experience and self-experimentation, not from traditional clinical trials.