Science’s Quality-Control Process Gets a Makeover
When Michelle Kahlenberg contacted researchers earlier this year for their input on a scientific paper for an immunology journal, she struggled to find anyone willing. Kahlenberg, a rheumatologist at the University of Michigan in Ann Arbor, asked 25 researchers before finding just two to weigh in on the paper, a record in her time as an editor.
“All prospective reviewers I approached had documented subject expertise in the field,” Kahlenberg says. She knew some of them personally and asked the others based on their listed expertise. Even so, her requests were met with widespread rejection; most said they were too busy.
Peer review is supposed to be academia’s gold-standard quality check: typically, two or three experts who are not authors of a study vet it independently before publication. The process relies on contributors participating almost entirely in good faith. On average, Kahlenberg says, she contacts between six and 10 researchers per manuscript.
Kahlenberg’s experience is not unique. “I get asked to do 100 to 200 reviews a year and I have several colleagues who are all crushed under the weight of reviews,” says Jay Van Bavel, a social neuroscientist at New York University. “I also know people who turn down pretty much every review request.”
But research into reviewer fatigue has been largely neglected, in part because most scholarly publishers are secretive about data underlying their peer review process — possibly because there are no clear incentives for publishers or journals to make such figures available. Doing so might even draw scrutiny.
Now, a new movement aims to improve the process: Many academics and journals are pushing to acknowledge and incentivize peer review. Reviewers are also recording their activities on new online platforms, which opens a new source of data about the process. And several new studies, some of which reveal worrying trends, are among the first looks at the nuances of peer review.
Whether these changes will help editors like Kahlenberg remains to be seen. In the meantime, much of the process remains a labor of love, carried out by academics for early exposure to new findings in their discipline and to keep the system running.
In 1831, William Whewell, a professor at the University of Cambridge, convinced the Royal Society of London, the world’s first scientific publisher, to commission public reports on manuscripts, initiating something close to peer review as we know it today. But independent quality checks only became a requirement much more recently; the high-profile journal Nature did not make them mandatory until 1973.
Perhaps, then, it’s no surprise that the mechanisms underlying the process have largely remained a mystery. A series of newly released reports looks to answer some of these questions.
According to one report — released by Publons, a website that allows researchers to record their peer review activity online — academics worldwide spend an estimated 70 million hours reviewing papers every year. Most of the burden falls on scholars based in richer countries, who are closer to journal editors, most of whom also reside in Western nations. This is despite the fact that reviewers in developing countries are more willing to accept requests and turn around reviews more quickly, the report found.
Scholarly publishers and journals are also beginning to increase transparency, possibly in response to Peer Review Week, an annual event that falls in September and celebrates how the process helps maintain the scientific record. The prominent biomedical journal eLife, for instance, made available to researchers its manuscript submission data from its 2012 launch to 2017. A study based on this data, published online in August on the preprint server bioRxiv, found women and scholars in non-Western nations to be underrepresented as peer reviewers, journal editors, and senior authors.
In September, IOP Publishing, part of the U.K.-based Institute of Physics, also released a report analyzing peer review at its journals. The report found that physicists in the United States carry out a disproportionate amount of peer review: 30 percent of invited reviewers are in the U.S., but only 10 percent of submissions are from U.S.-based authors. By contrast, academics in China are asked to review only 7 percent of papers while contributing a quarter of submissions.
This uneven distribution is a problem, says Van Bavel, the social neuroscientist at NYU: “I have done reviews on Christmas Day, New Year’s Eve, and pretty much every other day of the year. And I am not alone — the increased rate of publication is a burden that falls on a small subset of reviewers.”
Van Bavel thinks reviewers who do the most work need to be acknowledged, for example by recognizing their efforts publicly. When Publons launched in 2013, it aimed to provide a platform for academics to claim credit for reviews. The site now has more than 470,000 registered users and 2.7 million reviewer reports, both of which have tripled since last year. The website also hands out annual Peer Review Awards.
Ahmed Zakaria Hafez Mohamed, an engineer at the University of Nottingham in the U.K., was rated among the top 1 percent of peer reviewers in engineering by Publons this year. The ranking is based on the number of pre-publication peer reviews added to the platform; in the last year, for instance, Mohamed reviewed 89 papers. Peer review “is not just revising the papers,” he says, “but also the way for enhancing our own knowledge.” His review work often spills into weekends to meet deadlines.
The award “makes the often invisible, behind-the-scenes and voluntary work of peer review more visible,” says Christoph Lutz, a communications and culture researcher at BI Norwegian Business School in Oslo, who was also rated as a top reviewer by Publons this year. “It’s a nice sign of appreciation and shows that my work as a peer reviewer is being valued.”
Some journals have gone a step further: Rather than giving reviewers public kudos, why not pay them? A few years ago, the Journal of Public Economics tested the effectiveness of three options: rewarding researchers with $100 a review, setting shorter deadlines, and informing reviewers that their turnaround times would be posted publicly. The experiment found that both shorter deadlines and cash incentives sped up turnaround times, though the latter required reminders about the reward. Tenured professors, who are more likely to be financially stable, were more responsive to the public posting of their review times than to the other interventions. (Some journals, such as American Economic Review, regularly pay reviewers for timely reviews.)
A study published in September, however, suggests that non-monetary rewards — such as offering to publish reviewers’ names in the journal along with a thank you or issuing a review certificate — may not be effective, and in some cases may discourage reviewers from participating. Marco Seeber, a sociologist at Ghent University in Belgium and co-author of the study, was surprised by the results. “We tend to assume that some sort of reward spurs any behavior,” he says, but “this is a special kind of effort that is also driven by ethical commitment and pleasure.”
One source of potential reviewers with extensive knowledge and enough free time may be retired academics, noted Eleftherios Diamandis, a cancer researcher at Mount Sinai Hospital in Toronto, Canada, in a 2015 letter to the journal Nature. Small cash payments, he suggested, may make them more willing to participate. Lutz agrees: “I think paying reviewers for high quality peer review should become the norm rather than the exception.”
Not all researchers are keen on the idea, however. Thibault Derrien, a physicist at the Max Planck Institute in Hamburg, Germany, and the Academy of Sciences of the Czech Republic, thinks paying reviewers raises ethical problems. “Researchers are requested to be impartial and objective,” he says. “Being paid would too much corrupt this already fragile process.”
Paying reviewers isn’t unreasonable, at least from a financial perspective, as traditional academic publishers have notoriously high profit margins. Paywall: The Business of Scholarship, a new documentary about the scholarly publishing industry, noted that the top for-profit publishers have profit margins of 35 to 40 percent, often higher than the likes of Apple, Facebook, and Google. In recent years, some countries have reacted by cutting ties with major publishers, and funders have drawn up plans to push academia toward open access, a movement that aims to make all scholarly content freely available online and fix the power imbalance between traditional publishers and researchers.
The open-access model has implications for peer review. Many open-access journals, such as PLOS One, emphasize publishing any paper that is scientifically robust, in contrast to traditional publishing, which favors work that is novel or impactful. Although many argue the shift is good for science, it increases demand for reviewers simply because there are more papers to review. This, in turn, decreases the quality of reviews, since the most qualified reviewers are likely tied up with other requests, notes Martijn Arns, a biological psychologist and director of the Brainclinics Research Institute in the Netherlands.
In the last few years, open-access publishers have tried to streamline reviews by tweaking their processes. Some researchers also have been making their own suggestions. Arns, an open-access proponent, proposes a two-tier system: high-profile articles that are more likely to get a broad audience should get a thorough peer review before publication, while studies with less societal impact, such as those that highlight a particular method or confirm previous findings, should be reviewed after publication. This would lower the burden on reviewers, Arns says, although he notes that it might be difficult to predict which papers fit into each category.
Van Bavel suggests another option. Publishers should knock $200 off their “obscene profit,” he says, and put it toward reviewers’ subscription costs, open-access fees, or membership costs. The system already exists among book publishers, he notes. For instance, Van Bavel once received $1,500 in credits for sitting on the editorial board of an open-access journal; coincidentally, at the time, he had a paper in press at the same journal, so he used the money to pay the article processing charge. The approach, he says, “seems like a great way to give something back to the editors or reviewers who are doing the hard work to keep the journal running smoothly.”
At the Journal of Medical Internet Research, researchers can collect “karma points” for reviewing, editing, or authoring papers for the journal; authors submitting papers can claim the equivalent of their points off their article processing fees. Collabra: Psychology, the journal of the Society for the Improvement of Psychological Science, gives reviewers the option to pay themselves a small fee for refereeing papers, or to pay it forward to the journal’s waiver fund for cash-strapped scientists or to their institution’s open-access fund.
“I would like to see a system where people can put forward their names to take on reviews,” Van Bavel says. “This would allow editors to find potential reviewers and help people who might not be on the radar to have the opportunity to complete reviews.”
Dalmeet Singh Chawla is a freelance science journalist based in London.
Comments are automatically closed one year after article publication. Archived comments are below.
Honestly, as an assistant professor going up for tenure now, looking at what happened in the last 6 years, I feel like paying reviewers is quite just. I don’t make the comfortable living that full professors make, and the hours I spend reviewing papers come potentially at the cost of writing grants, doing research, publishing my own papers, etc., which over time would quite likely have paid more of my summer salary. If senior professors who are living comfortably would rather get other perks/benefits/recognitions, then so be it, but right now I am at the point where I feel like reviewing papers has robbed me, in a small but real way, of too much time that has come at some personal financial cost.
PLOS One and scientifically robust don’t really go together, based on my experience. But that is a more general problem; I recently came across 2-3 articles in Nature Comms where authors with vested interests were presenting research and results convenient to the companies with which they have ties. The whole system is riddled with flaws.
I believe the solution is relatively simple. National agencies overseeing scientific research should create a space that allows anyone, “anonymously” or not (pending registration using institutional emails or credit cards, and other safeguards to avoid hidden wars and paid-for or self-reviews), to judge and score posted articles according to a number of metrics such as scientific merit, experimental robustness, and reproducibility. This system would enable tens, hundreds, or thousands of true reviews from qualified peers working in the same field. Such a system would result in robust scores, weighted over a large number of reviewers who would be doing that for free, as reviewing the literature is part of their daily job anyway. That would also give funding and research institutions a more objective and fair tool to evaluate research proposals and hiring candidates.
How does one stop pal review? How can you stop journal editors from assigning sympathetic reviewers to their favoured ideas/authors?
PS: By ‘sympathetic reviewers’ I mean those who will rubber-stamp the review.