For Watchdog Scientists, Using Software to Fight Dubious Cancer Research
While trawling through scientific studies on cancer research in 2015, Jennifer Byrne noticed something strange. One after another, papers were describing strikingly similar experiments involving a particular gene associated with breast cancer and childhood leukemia. Byrne, a professor of molecular oncology at The University of Sydney, recognized the gene immediately because she was part of a team that cloned it two decades earlier.
The problem, she realized upon closer inspection, was that the papers, all of them from China, described the wrong nucleotide sequence (a unique series of letters that describes the makeup of a given piece of DNA) as the one used to deactivate the gene and observe the resulting effects in cancer cells. Either the experiments weren’t examining what they claimed, or they hadn’t been done as described.
“The sequence was being described as one thing, but was sometimes used as if it were something different,” Byrne says. “It’s a bit like applying the same barcode to different items in a supermarket, so you get charged for a pair of shoes when you are actually buying a bag of lettuce.”
What’s worse, each dubious paper contained the seeds of potentially more bad research.
“You can see by building up this argument, by looking at the same gene in different cancer types, these papers potentially reinforce each other,” Byrne says. “The argument starts to look more and more convincing because there are more and more papers that say the same thing.”
Shocked by her discovery, Byrne resolved to do what she could to expose academic fraud and junk science within her field. With the help of Cyril Labbé, a colleague at Grenoble Alpes University in France, she has since identified about 50 suspicious papers, despite having only limited time and resources to devote to the effort. In response, scientific journals have retracted at least 10 papers, with several more retractions confirmed to be in the pipeline. In recognition of her work, the journal Nature last year named Byrne among its “10 people who mattered.”
Despite all this, Byrne, who also leads the Children’s Cancer Research Unit at the Children’s Hospital at Westmead, Sydney, is adamant that she has only scratched the surface of a much deeper problem. Certain there is more to expose, she is working to establish a team capable of elevating her side project into a dedicated operation. Fraudulent, plagiarized, or otherwise shoddy research is, after all, a problem not just in cancer research but across all scientific disciplines, and retractions have exploded over the last two decades. During the early 2000s, about 30 retraction notices appeared each year, according to data compiled by the information firm Thomson Reuters. In 2016 alone, Retraction Watch, a blog founded in 2010 to catalog retractions and monitor scientific misconduct, recorded nearly 1,400.
To be sure, these numbers represent a small fraction of all published studies. And while a report analyzing U.S.-funded research between 1992 and 2012 estimated that the cost of scientific misconduct amounted to just a fraction of a percent of the National Institutes of Health budget over those years, its authors also noted that “the greatest costs of misconduct — preventable illness or the loss of human life due to misinformation in the medical literature — were not directly assessed.”
Having lost her mother to cancer last year, Byrne is only too aware of the stakes involved when time and resources are wasted on faulty publications. A single paper may be just a drop in the bucket, but as Byrne told Retraction Watch in January, it’s often the starting point for expensive research pipelines for cancer treatments.
“I have to do what I can to bring attention to this problem,” she says.
Acting as a watchdog is no small task, but thanks to the fast and efficient method that she and Labbé have developed for checking cancer studies, Byrne believes even a single pair of extra hands could have a big impact on the project. Using a software program developed by Labbé (currently available in a test phase), Byrne and other researchers can instantly extract a DNA sequence from a paper, check it against an established database, and verify that the sequence does what the paper claims. If things don’t match up, chances are there are fundamental problems with the research. The software also looks at text similarity and at identifiers that correspond to contaminated or otherwise misidentified cell lines.
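To make the idea concrete, here is a minimal sketch of what this kind of automated check might look like. It is not Labbé’s actual program: it assumes a paper is available as a plain-text file (the filename `paper.txt` is hypothetical), uses a deliberately simple regular expression to find candidate sequences, and queries NCBI’s public BLAST service through the Biopython library to see what each sequence actually matches.

```python
import re

from Bio.Blast import NCBIWWW, NCBIXML

# Candidate nucleotide sequences: uninterrupted runs of A/C/G/T long enough
# to plausibly be primers or gene-silencing reagents. A real tool would parse
# papers far more carefully; this regex is an illustrative assumption.
SEQ_PATTERN = re.compile(r"\b[ACGT]{18,}\b")


def extract_sequences(paper_text):
    """Return candidate nucleotide sequences found in a paper's plain text."""
    return SEQ_PATTERN.findall(paper_text)


def top_blast_hit(sequence):
    """Query NCBI's 'nt' database over the network and return the title of
    the best-matching record, or None if nothing aligns."""
    handle = NCBIWWW.qblast("blastn", "nt", sequence)
    record = NCBIXML.read(handle)
    return record.alignments[0].title if record.alignments else None


if __name__ == "__main__":
    # 'paper.txt' is a hypothetical plain-text dump of the paper under review.
    with open("paper.txt") as f:
        text = f.read()
    for seq in extract_sequences(text):
        # A reviewer would compare each hit against the gene the paper claims
        # to target; a mismatch flags the paper for closer scrutiny.
        print(f"{seq} -> {top_blast_hit(seq)}")
```

The point of such a pipeline is speed: once the sequence is pulled from the text, the database does the verification, which is why Byrne says a new volunteer can be productive within half an hour.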
“It’s not like doing experimental science, where you may have to invest a lot to start to get results,” she says. “If someone starts work, in half an hour they’ll be doing something useful, because the program that we have written can be applied and the results can be analyzed immediately.”
Still, even a small staff has a cost, and Byrne is now waiting on the outcome of a grant application and exploring several other potential sources of funding in order to expand. Part of the challenge is convincing her community of the scale of the problem, which she believes is immense. “We’re not trained to read papers and think ‘This isn’t real,’” Byrne says. Rather, scientists are “programmed to trust to some degree,” she says, in order to be able to carry out their work.
“There’s a lot more funding available for the construction or derivation of new knowledge than there is for the checking of existing knowledge, if I can put it very simply,” Byrne adds.
When it comes to the factors driving the publication of fraudulent work, Ivan Oransky, co-founder of Retraction Watch, points to a range of perverse incentives within academia. “It all comes down to publishing papers, right? If you want tenure, if you want grants, then you have to publish in as high an impact-factor journal as you can,” he says.
Whether rising retraction rates reflect more fraud or just better detection is a matter of debate. “What’s undeniably true is that we are better at finding it,” Oransky says. “Projects like Jennifer’s actually are a key part of that. There are a lot more people looking at papers, there are a lot more available online, so people are finding more issues.”
There are also simply more scientific papers being published than ever before, and much of this output comes from China, which has risen rapidly in recent years to become the world’s largest publisher of scientific research. In 2016, China produced more than 426,000 studies — a more than four-fold increase since 2003 — to overtake the United States for the first time.
Accompanying this meteoric rise, however, has been a raft of high-profile incidents of fraud that have called standards into question. Last year, the publisher Springer retracted 107 papers from the journal Tumor Biology, most of them originating from China, after it was revealed they had been submitted with fake peer reviews. The move set a record for the greatest number of retractions by a journal in a single day. Among the suspect papers identified through Byrne’s work, the common thread has been Chinese authorship.
Many observers say that an extreme emphasis on publication in China, even for clinical doctors, has encouraged fabricated, rushed, or otherwise shoddy research. Oransky describes the system of incentives, which can reward publication with promotion and even cash bonuses, as “warped on steroids.” One study published by Chinese researchers last year found that scientists in China can receive rewards of up to $165,000 for getting published in the most prestigious Western journals.
“First, the authorities at all levels in China must realize that academic misconduct is a serious problem and has severely damaged the reputation of the research and researchers in China, which I believe they do,” says Harry Xia, president of the Alliance for Scientific Editing in China. “Then, they must take seriously measures in reforming the research assessment system and impose severe punishments on those who have committed misconduct, which I hope they can do more.”
Last year, the Chinese government moved on those punishments when officials announced that they would take disciplinary action against some 400 researchers involved in peer-review fraud.
Still, it’s not entirely clear to what extent China’s contribution to academic fraud is the result of standards within the country, as opposed to its colossal size. Since 2012, more papers have been retracted over faked peer reviews from China than from any other country, according to Retraction Watch, although Xia says China’s overall rate of retractions is comparable to the global average. A comparison of retraction ratios across countries, published in the journal PLOS One in 2012, does show a spike in China in 2010, though the authors note it can be largely attributed to two individual researchers.
In any case, Byrne lays much of the responsibility closer to home. Too often, she argues, scientific journals aren’t acting as the gatekeepers they are supposed to be.
“I think journals that publish these papers that are dubious or suspect, they’re the people that are actually benefiting from these publications in the sense that that’s their revenue model,” she says. “They get paid to publish people’s work, and I think that probably journals need to also devote greater resources to then cleaning up when they don’t publish things that are sound.”
Byrne also hopes to see greater awareness among her community that the research they come across may not always be what it seems. And while she is focused on cancer research for the time being, she emphasizes that with additional resources, this sort of work could be extended to many other areas of biomedicine.
“I think that this whole process could be accelerated if more people could be encouraged to look at the kind of papers published in their fields, to see whether there are papers where things start to fall into these patterns,” she says.
John Power is a journalist whose work has appeared in some three dozen media outlets including The Guardian, The Christian Science Monitor, Quartz, The Age, The South China Morning Post, The Irish Times, The Nikkei Asian Review, and Al Jazeera. Previously based in Seoul, South Korea, he currently reports from Melbourne, Australia.
Archived comments are below.
Wouldn’t it be easier on everyone if the journals that publish these articles used the checking method described in this article before sending them to the printer?
It’s too bad we have a system of healthcare for profit in the US. It makes you not trust a whole lot of what the medical community says.
Public institutions in many countries tend to promote quantity over quality of articles to attract more funding. Most journals have now made it mandatory that the data be deposited in an online repository for open access. It’s a great move to discourage shoddy research.
Another interpretation of this “research” might be to destroy what little remains of the public’s acceptance of science and its results. Thus things like global warming and vaccinations can just be assumed to be part of some conspiracy of the elite to siphon money and vigor off from “legitimate” efforts.