Last year, just months after she received a share of the Nobel Prize in Chemistry, Caltech professor Frances Arnold published her group’s latest research on enzyme catalysts in the journal Science. The paper described a highly precise method for creating molecules called lactams, key building blocks for certain pharmaceutical compounds. A review published in the same issue of Science lauded the study for its contribution to pharmaceutical chemistry, and Chemistry World praised the work’s potential to deliver new antibiotics at a time when health experts are gravely concerned about antibiotic resistance.
Then, in a tweet this January, Arnold announced that she was retracting the paper. Her team had been unable to replicate the results, and when Arnold revisited her student’s original lab notebooks, she found that entries and raw data were missing for key experiments, omissions that had gone undetected, she acknowledged, because she had not supervised the work closely enough.
Arnold’s public mea culpa stirred an animated discussion on Twitter. Most commenters praised Arnold for her transparency. But amid the landslide of kudos were critics who seized the opportunity to discredit the entire scientific community. “Want to know why people don’t trust science? This crap is why,” one commenter wrote. Another questioned the credibility of the entire Nobel Prize institution, suggesting that the decision to honor Arnold had been politically motivated.
Arnold, like nearly every innovator throughout history, made a mistake. In her rush to change the face of modern medicine, she exercised poor judgment, and her subsequent retraction was the correct response. Yet the public reaction to Arnold’s disclosure highlights a common misunderstanding about science. As much as we’d like to believe that published research and scientific facts are infallible, they aren’t. But the credibility of the scientific enterprise has never rested on the veracity of individual experiments; it rests on the collective advancement of knowledge and understanding. And to that end, a flawed experiment, and the subsequent discovery of the flaw, can be as valuable as a perfect study.
Take, for instance, a 2018 study published in Nature that introduced a new way of calculating the heat uptake of the world’s oceans. The study’s authors, an international team of researchers from the U.S., China, France, and Germany, calculated that the oceans were absorbing heat at a rate 60 percent greater than leading models had previously estimated. The results spurred panic among activists: If the calculations were correct, then even the most ambitious policy proposals would be powerless to stop the destruction of the global ecosystem.
But scientists were skeptical — and so were climate change critics. As Patrick Galey wrote at the science news site Phys.org: “Some Twitter users suggested the study was funded by the Democrats, that human-induced planetary warming was invented by former presidential hopeful Al Gore so he could buy a house, and that decades of evidence-based research into the phenomenon constituted ‘pseudoscience.’”
Shortly after the study was published, independent climate researcher and critic Nicholas Lewis revealed fundamental errors in the authors’ calculations; once those errors were corrected, the results were more in line with previous estimates. Within a year, the article was retracted.
Yet many climate scientists still believe that the approach described in the Nature study has the potential to revolutionize how researchers measure the heat taken up by the oceans. Ralph Keeling, who co-authored the Nature report with Laure Resplandy, said they would continue to fine-tune the approach. The botched study wasn’t the step back for climate science that some climate change skeptics have made it out to be. Rather, it was a confirmation that the checks and balances of the scientific method did their job. Ultimately, science may still gain a valuable new tool for assessing climate change.
Sometimes a study fails to deliver immediate progress but succeeds at raising the collective awareness of a topic. Such was the case with Dr. Horace Wells, who many historians believe was the first dentist to use nitrous oxide, or laughing gas, as an anesthetic during routine tooth extractions. During the 1840s, Wells conducted a handful of human trials with the anesthetic, including one on himself. In his race to share the new method with the world, he hastily planned a public demonstration in which he would extract a tooth from a test patient in front of an audience of physicians and students.
However, Wells hadn’t yet figured out how to adjust dosages to account for differences in patients’ metabolisms. And so six weeks after claiming nitrous oxide would change the face of modern dentistry, he suffered a public humiliation: His test patient suddenly screamed in pain, most likely because Wells had given him the wrong dose of nitrous oxide.
The debacle appeared to nullify Wells’ ambitious research, and he never published on the subject again. But even though his demonstration failed, Wells’ contribution to the field of dentistry and pain management endured. It has since been determined that nitrous oxide is safer than the other inhaled anesthetics being studied at the time, including ether and chloroform. Had Wells not shared his flawed methodology, his successors might never have taken up work with nitrous oxide, which is still used in dental practices today.
So, applaud Frances Arnold’s retraction of her enzyme study, but not because of her honesty, bravery, or humility. Rejoice in the glorious fallibility of human experimentation. Some mistakes inevitably slip through the cracks of peer review, and the resulting social media avalanche tends to bury the point that science is conducted by humans. But science is about asking questions, not necessarily about providing all the answers in the first attempt. One meta study found that the majority of published scientific studies are eventually superseded by later work. The path to greater understanding is not always linear. Not every idea represents the solitary lightbulb of enlightenment; some merely show the next step along the path.
Perhaps with some minor tweaks, Arnold’s methodology could be improved and salvaged, just as Keeling and Resplandy are working to do with their Nature paper. Or perhaps the basic framework adopted by Arnold’s Caltech group will be picked up and advanced by other groups, as was the case with Wells’ nitrous oxide experiments.
Plato once wrote, “Science is nothing but perception.” Maybe the true value of a high-profile mistake is that it forces us to change vantage points.
Mary Widdicks is a cognitive psychologist turned novelist and freelance journalist specializing in the psychology of parenting, mental health, and education. Her articles have been featured in The Washington Post, Quartz, Elemental, Vox, Your Teen Magazine, and more. Follow her at http://marywiddicks.com or on Twitter.
Archived comments are below.
I don’t disagree at all with the thrust of this article, but the sad fact is that the damage done by an episode like this can’t be undone. We scientists may be rational about it, but many (perhaps the majority of the population) don’t get it, and never will. As Ben Goldacre wrote in ‘Bad Science,’ the humanities graduates making up most of the media see (and present) science as simply an alternative belief system. Concealment would have been absolutely wrong, but retracting the paper is still terrible news. “Bravo!” does not belong anywhere in this story.