A discussion I had earlier today reminded me of an argument I’ve had with friends in the scientific community on multiple occasions. The argument revolves around the belief that conclusions of science, such as the effect of cholesterol on heart disease, suggest specific interventions, such as reducing the dietary fat we believe causes high cholesterol. In essence, we debate the means by which new scientific evidence should be used to influence public policy and private behavior. Taking strong evidence of a causal link between a factor and an undesirable outcome as a prescription for a population-wide intervention to remove that factor is fraught with danger. There are many reasons for this, but the two most salient are confounding and the law of unintended consequences.
Confounding is an intrinsic problem in medical science, one that is much discussed though often ignored in policy debates. My favorite quote on this comes from Robins, who stated that in epidemiological contexts “there are always unmeasured confounders.” If cholesterol is tightly associated with heart disease, is it a true cause or merely co-varying evidence of a common underlying disease process? We still don’t know the answer to that today.
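The danger is easy to demonstrate with a toy simulation. In the sketch below (all numbers are invented for illustration, not real epidemiology), a hidden disease process drives both high cholesterol and heart disease. Cholesterol then looks strongly associated with cardiac events, yet a “perfect” intervention that lowers everyone’s cholesterol changes nothing, because cholesterol was never the cause:

```python
import random

random.seed(1)

def simulate(n, lower_cholesterol=False):
    """Return (overall event rate, event rate among the high-cholesterol group)."""
    events = events_high_chol = n_high_chol = 0
    for _ in range(n):
        disease = random.random() < 0.2               # latent disease process
        high_chol = disease or random.random() < 0.1  # cholesterol tracks disease
        if lower_cholesterol:
            high_chol = False                         # a "perfect" intervention
        event = disease and random.random() < 0.5     # the disease causes events
        events += event
        if high_chol:
            n_high_chol += 1
            events_high_chol += event
    among_high = events_high_chol / n_high_chol if n_high_chol else 0.0
    return events / n, among_high

overall, among_high = simulate(100_000)
treated_overall, _ = simulate(100_000, lower_cholesterol=True)
print(f"event rate overall:                 {overall:.2f}")
print(f"event rate among high-cholesterol:  {among_high:.2f}")
print(f"event rate after lowering chol:     {treated_overall:.2f}")
```

The association is real and large; the intervention effect is zero. Without a controlled experiment, nothing in the observed association alone distinguishes this world from one where cholesterol truly causes heart disease.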
Even if we have strong evidence, it does not stand up to logical scrutiny that removing the ostensible cause will have the desired effect; rarely can we modulate a single cause in human physiology without perturbing other parts of the system. Without a controlled experiment, we cannot know for certain whether the overall harm that existed prior to the intervention will be lessened afterwards. Since controlled experiments are all but impossible in large-scale population health, we need to hold our fire and carefully study not just the underlying phenomenon but everything we can about the downstream effects of the interventions under consideration. The presently known harm may very well be less than the harm resulting post-intervention. For example, we now know that removing all dietary fat, as was widely promoted starting in the 1970’s, has direct and measurable negative effects on human physiology that directly lead to an increased incidence of heart disease.
Adding fuel to the fire are extrinsic factors, wherein a policy as interpreted by individuals and the market leads to behaviors very different from those the policy intended. In a very well-written 2002 article about dietary standards, Gary Taubes quotes an exchange that happened during the hearings for establishing the original low-fat dietary standards:
> “All reformers would do well to be conscious of the law of unintended consequences,” says Alan Stone, who was staff director for McGovern’s Senate committee. Stone told me he had an inkling about how the food industry would respond to the new dietary goals back when the hearings were first held. An economist pulled him aside, he said, and gave him a lesson on market disincentives to healthy eating: “He said if you create a new market with a brand-new manufactured food, give it a brand-new fancy name, put a big advertising budget behind it, you can have a market all to yourself and force your competitors to catch up. You can’t do that with fruits and vegetables. It’s harder to differentiate an apple from an apple.”
And that is the crux of it. What is healthiest for us is fresh, whole food, but it is very hard for the market to innovate and differentiate on commodities. So we are taught that ‘low fat’ is healthy and are sold ‘low fat’ products that are low in nutrients and carry a heavy load of simple sugars.
I can see a similar law of unintended consequences playing out in the proliferation of extremely unhealthy gluten-free foods. Gluten is bad for many people, but foods labeled gluten-free are, for the most part, worse in every way than the gluten-carrying foods (bread, cereals, snack bars, etc.) they replace. They are typically made of simple starches (replacing whole wheat) with fewer nutrients and often more additives and sugar.
Baby formula is another example. Every few years we learn something new about the massive protective and beneficial effects of human breast milk on babies, yet we see wonderful advertising telling us how healthy the latest additive is for a baby’s development. This means, of course, that the benefits of those additives were unavailable in formula products for the last 50 years, with unknown impacts on infant development. What else remains missing today?
While the topic is far too broad for a proper treatment in this post, a similar effect is playing out today in the world of pharmaceuticals. The transformation of medical science toward a factory model of diagnosis and prescription has led to a huge demand for drugs. The drugs meant to help us live longer are interacting with each other, leading to increasing fatalities. It is unclear whether the potential increased longevity of geriatric drug cocktails will truly lead to increased overall quality of life for everyone, at the now-known cost of premature death for some.
As with physicists at the dawn of the nuclear age, I believe that epidemiologists and medical scientists are undergoing a cultural awakening as we come to realize how much harm can be created by the products of medical science as interpreted through the lens of public health policy. The populace and related market interpretations of the work we do can be exceedingly dangerous, even more so with the clear decline in educational quality and rise in anti-science sentiment. I’m often amazed that people worry so much about terrorists overseas when we’re far more likely to die this year from medical error in our home town than from an act of terror over the course of our lives.
So what are we, as scientists, to do? We shouldn’t stick our heads in the sand, nor should we hold back from influencing the policy debate, but we need to make sure that we look for the unanticipated influences of interventions and work with behavioral economists, industry, and advocacy organizations to understand the likely unintended consequences of changes. I think one central change that is needed is to move our notion of public policy implementation from singular ‘all in’ bets to an active, multi-pronged exploration of indicated new standards of care.
An analog to this process exists in the statistical literature: the multi-armed bandit problem. The problem is phrased as allocating a budget across slot machines in a casino, where different machines have different payout amounts and frequencies. Every patient treatment decision is akin to pulling one slot machine’s lever: there is a cost to play and some initially unknown probability of payout. Every time you pull a lever, you learn a little more about the underlying probability distributions at play and can allocate more of your money to the levers that maximize the probability of a payout. Over time, we can learn which levers are good for which kinds of patients and which outcomes inform us as to the next lever we should pull. Ideal decision making under uncertainty has been studied formally in the machine learning and AI fields for decades, leading to some powerful formalisms such as the Markov Decision Process (or the more realistic partially observable MDP).
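To make the bandit idea concrete, here is a minimal Thompson-sampling sketch. The three “treatments” and their hidden success rates are entirely made up; the point is only to show how the budget shifts toward the lever that actually pays off as evidence accumulates:

```python
import random

random.seed(0)

# Hidden payout rates for three hypothetical treatments (illustrative only).
true_success = [0.30, 0.65, 0.45]
wins = [0, 0, 0]     # Beta-posterior successes per arm
losses = [0, 0, 0]   # Beta-posterior failures per arm
pulls = [0, 0, 0]

for _ in range(5000):
    # Thompson sampling: draw a plausible success rate for each arm from
    # its posterior, then play the arm whose draw is highest.
    samples = [random.betavariate(wins[i] + 1, losses[i] + 1) for i in range(3)]
    arm = samples.index(max(samples))
    pulls[arm] += 1
    if random.random() < true_success[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

print("pulls per arm:", pulls)
print("most played arm:", pulls.index(max(pulls)))
```

Early on, all three arms get pulled while the posteriors are wide; after a few thousand rounds, almost all pulls go to the best arm, while the others still receive occasional exploratory pulls. That balance between exploiting the current best-known treatment and exploring the alternatives is exactly the multi-pronged policy posture argued for above.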
The barriers keeping us from a future where we can exploit these formalisms are both cultural (professionals are not inclined to ask the machine what the odds are) and practical — we simply don’t have the processes in place to generate, aggregate, and exploit this data. The possible benefits of such a system are legion. We could:
- disseminate the best known standard of practice
- explore the unknown consequences of many possible health recommendations
- look at treatment response as another form of diagnosis
- explore and systematically model lifestyle effects at population scale
- incorporate the massive investment that the population makes in alternative medication and treatment modalities into traditional care delivery
- identify predictors of adverse responses
- uncover new drug interactions
Most importantly, our regulatory structure inhibits our ability to share and aggregate patient data on the scale that would make this possible. I think this represents one possible grand challenge for healthcare overhaul: to prove that large-scale sharing, modeling, and data consultations at the point of care have a huge effect on improving outcomes and decreasing error. In the long haul, I think it will be considered immoral not to share medical data across institutional boundaries (responsibly and with deference to the rights of the individual), because we will know that the hidden data could have saved or improved another person’s life.
Inspirational related references:
- Cancer Commons
- Computational Cures
- Transforming trials (Science, paywalled)