The health care debate is about a lot of things: tax code arcana, economic models, the proper role of government. But it’s also about epidemiology. Prominent progressives have argued that the succession of health care reform bills put forth by Republican lawmakers since the Trump administration took office would lead to tens of thousands of deaths. Meanwhile, some conservatives have responded that such projections are overblown — and that it’s not clear whether government health plans like Medicaid make people healthier at all.
It’s a striking empirical disconnect between left and right — and one that is playing out to high drama on Capitol Hill this week as Senate Republicans successfully, albeit barely, pushed through a motion to begin debate on dismantling the health care reforms put in place by President Barack Obama just a few short years ago. Any effort to understand this intellectual and political divide — and it’s worth understanding, given the stakes — inevitably turns up a common touchstone known as the Oregon Health Insurance Experiment.
As studies of large-scale health care outcomes go, the Oregon Health Insurance Experiment was always slated to be a blockbuster: it’s the only time that researchers have managed to approximate a large, randomized, controlled trial on the effects of giving uninsured people coverage. But in the years since the OHIE results were first published, the study has also become a political flashpoint, and a sobering example of how researchers can inadvertently — and perhaps unavoidably — leave their work open to partisan spin.
The study started with a bit of good luck: In 2008, Oregon announced that it would add 10,000 people to its Medicaid program, using a lottery to decide which eligible, interested citizens would receive coverage. For the state, this was just an easy and fair way to distribute health care. But for researchers, it presented the rare chance to run a randomized, controlled trial on the effects of health care. After all, the Oregon policy naturally created a treatment group (people who won the lottery and got access to coverage) and a control (people who entered the lottery but did not win).
Between 2009 and 2010, a team of researchers led by the health care economists Amy Finkelstein, of MIT, and Katherine Baicker, an incoming dean at the University of Chicago and current professor at Harvard, did health evaluations of nearly 21,000 Oregonians who had participated in the Medicaid lottery. Then they spent years going over their data, trying to understand if and how health outcomes differed between the people who got Medicaid and the people who did not.
The results were mixed. The team did not find statistically significant gains in the three indicators of physical health they measured: blood pressure, cholesterol, and blood-sugar levels. Nor did they find that having access to Medicaid made people any less likely to go to the emergency room for care. (In fact, Medicaid recipients visited the emergency room more often.) But the Oregon team did find that people who got Medicaid had significantly lower rates of depression and were much more likely to say that they felt their health had gotten better. And of course, they were also less likely to find themselves saddled with crippling medical bills.
In any political climate, this study would have gotten attention. But in the period between 2008 and 2012, when they published their first results, a lot happened — including passage of the Affordable Care Act and its provisions for Medicaid expansion. “Of course we knew that it was an important policy issue,” Baicker told me in recalling how the Oregon study’s findings landed smack in the middle of the defining domestic policy battle of the Obama presidency. “But we had no way of knowing quite how timely and scrutinized the results were going to be.”
The researchers released most of their key findings in a major 2013 paper in the New England Journal of Medicine. In the years since, conservatives have invoked the study to suggest that Medicaid doesn’t do much for people’s health, and that it’s a waste of money. This year alone, a Cato Institute analyst used the OHIE to try to persuade the Kansas State Legislature not to expand Medicaid; the Trump administration’s Medicaid chief, Seema Verma, cited it as evidence that health care needs the kinds of reforms proposed in Republican health bills; and writers at National Review repeatedly invoked it as evidence that Obamacare probably didn’t save many lives.
Grace-Marie Turner, who runs the Galen Institute, a conservative think tank focused on health policy, said the Oregon study offered evidence of the need to overhaul the Medicaid program. “I think the Oregon Study showed us that the time is now,” she told me.
In response, some progressives have tried to discredit the study. “The study has been heralded as the gold standard because it was ‘randomized,’” wrote Topher Spiro, vice president of health policy at the liberal Center for American Progress, after the OHIE team’s big 2013 paper. “But in key respects, this study is far from the gold standard on the question of Medicaid’s effects on physical health.”
But others have argued that the OHIE actually does suggest that Medicaid works. Last month, Bernie Sanders entered an entire paper from the Annals of Internal Medicine into the Congressional Record, trying to muster evidence that programs like Medicaid can save lives. The paper drew, in part, on the Oregon study. “It never proved there was no effect,” the physician and public health scholar Stephanie Woolhandler, a co-author on the Annals paper who has advised Sanders on policy, told Undark. “It found an effect. It could not prove it was significant.”
The study’s authors themselves have taken a more measured approach to the results. “The study produced nuanced enough findings that there was a little something for everyone to hate,” said Baicker. “We were able to help dispel both the unduly optimistic view of Medicaid and the unduly pessimistic view of Medicaid.” The OHIE’s findings on depression, financial security, and self-reported health really do matter, she stressed, when we think about whether health insurance makes people better off.
Still, some critics suggest that the authors of the Oregon Health Insurance Experiment could have done more to ensure that their work didn’t become a political football. Austin Frakt, a health care economist for the Department of Veterans Affairs and a professor at Boston University who also oversees a popular blog, The Incidental Economist, has argued that the researchers should have said from the start that they probably wouldn’t be able to detect statistically significant changes in discrete metrics like blood pressure and cholesterol levels. After all, given the sample size, the study likely lacked the statistical power to register such improvements unequivocally, even if they were real and fairly large.
“Then, instead of having a paper that said, ‘Oh, gosh, this is a little bit startling that we didn’t find any of these health outcomes,’ it would be ‘We knew that we probably couldn’t detect those,’” Frakt said. (In an email, Baicker defended the research team’s methods, saying it was more informative to run the study first and then, once the actual sample sizes were known, acknowledge the limitations of the findings. Pointing out that the population of diabetics, in particular, had been smaller than expected, she added that “given the rare opportunity and the relatively low incremental cost of adding this very important health measure, I think it would have been foolish not to collect and analyze those data.”)
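To see why statistical power matters so much here, consider a simplified, purely illustrative calculation. Every number below is hypothetical, not the OHIE’s, and a basic two-proportion test is a rough stand-in for the study’s far more involved analysis; the sketch only shows how a real but modest improvement can fail to clear the significance bar at realistic sample sizes.

    # A simplified power calculation for a two-arm comparison of proportions.
    # All figures are hypothetical, chosen only to illustrate the issue Frakt
    # raises; they are not the OHIE's actual numbers or methodology.
    from scipy.stats import norm

    p_control, p_treated = 0.160, 0.145   # imagined rates of elevated blood pressure
    n_control, n_treated = 3000, 3000     # imagined number of people per arm
    alpha = 0.05                          # conventional two-sided significance level

    effect = p_control - p_treated
    se = (p_control * (1 - p_control) / n_control
          + p_treated * (1 - p_treated) / n_treated) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)

    # Probability that this genuinely real improvement clears the p < 0.05 bar:
    power = norm.cdf(effect / se - z_crit) + norm.cdf(-effect / se - z_crit)
    print(f"Statistical power: {power:.0%}")  # about 37% under these assumptions

Under these made-up assumptions, a genuine improvement would come back “not statistically significant” roughly two times out of three, which is exactly the gap between “no significant improvement” and “no improvement” that later commentary tended to blur.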
It might also have been possible, in retrospect, to predict how certain claims would be taken out of context. Probably the most quoted line from any of the Oregon study team’s papers is that “this randomized, controlled study showed that Medicaid coverage generated no significant improvements in measured physical health outcomes.” In context, that’s a fairly narrow statement about three health metrics that didn’t achieve statistical significance. Out of context, it sounds like proof that Medicaid doesn’t make people healthier.
In our interview, Baicker talked about how the researchers prepared for scrutiny. They were careful, for example, to declare all the hypotheses they planned to test and analyses they planned to perform before actually doing them, so that nobody could accuse them of cherry-picking data. She also said she has no significant regrets about the study, and while she acknowledges that the results have been misused, she chooses not to respond directly to individual claims.
“I don’t think there’s anything we can do about misuse of the data,” she said. “It’s an inherent hazard to producing things that people care about.”
Baicker has found other ways to respond, though. This June, the New England Journal of Medicine fast-tracked a literature review that she wrote with two Harvard colleagues, Benjamin Sommers and Atul Gawande, publishing it at the height of the congressional health care debate. The paper evaluated a few major studies of the relationship between insurance and health.
The conclusion of the paper was straightforward: “Arguing that health insurance coverage doesn’t improve health,” the authors declared, “is simply inconsistent with the evidence.”
Michael Schulson is an American freelance writer covering science, religion, technology, and ethics. His work has been published by Pacific Standard magazine, Aeon, New York magazine, and The Washington Post, among other outlets, and he writes the Matters of Fact and Tracker columns for Undark.