The job of a peer reviewer is thankless. Collectively, academics spend around 70 million hours every year evaluating each other’s manuscripts on behalf of scholarly journals — and they usually receive no monetary compensation and little if any recognition for their effort. Some do it as a way to keep abreast of developments in their field; some simply see it as a duty to the discipline. Either way, academic publishing would likely crumble without them.
In recent years, some scientists have begun posting their reviews online, mainly to claim credit for their work. Sites like Publons allow researchers to either share entire referee reports or simply list the journals for which they’ve carried out reviews. Just seven years old, Publons already boasts more than 1.7 million users.
The rise of Publons suggests that academics are increasingly placing value on the work of peer review and asking others, such as grant funders, to do the same. While that’s vital in the publish-or-perish culture of academia, there’s also immense value in the data underlying peer review. Sharing peer review data could help journals stamp out fraud, inefficiency, and systemic bias in academic publishing. In fact, there’s a case to be made that open peer review — in which the content of reviews is published, sometimes along with the names of the reviewers who carried out the work — should become the default option in academic publishing.
The open peer review model is already gathering some support. Some academic journals, including the interdisciplinary publication F1000 Research and the medical journal BMC Medicine, have been posting referee reports online for years. And a recent survey of more than 3,000 researchers found that the majority of respondents thought open reviewing should be mainstream practice — but only if the reviewers remained anonymous. But even that might be enough to provide some insights into the inner workings of what has traditionally been an opaque process.
For instance, a word-count analysis of more than 300,000 referee reports posted on Publons found that physicists tend to write shorter reviews than their counterparts in psychology, Earth and space sciences, and the life sciences. One can imagine that more sophisticated analyses could begin to tease out differences in the actual quality of reviews across scientific disciplines. But, currently, very few journals allow peer reviews to be published and reviewers to reveal their names. According to a 2017 analysis by Publons, only around 2 percent of the approximately 3,700 journals with peer review policies in its database at the time permitted the content of reviews to become public.
Peer review data could also help root out bias. Last year, a study based on peer review data for nearly 24,000 submissions to the biomedical journal eLife found that women and non-Westerners were vastly underrepresented among peer reviewers. Only around one in every five reviewers was female, and fewer than two percent of reviewers were based in developing countries. (Women and researchers based in non-Western nations were also underrepresented among journal editors and in the coveted “last author” position in group-authored papers, the position usually reserved for the most senior scientist on the research team.)
The eLife results weren’t terribly surprising; similar trends probably exist at other journals. Nevertheless, eLife did the right thing by unveiling the data and opening its review process to public scrutiny. Armed with that data, the journal can now go one step further and begin eliminating potential biases to make the process fairer. At many other journals, decisions about the peer review process will continue to be informed by little more than anecdote. One may argue that publishers and journals could — and maybe already do — analyze their peer review processes internally. But for the sake of transparency, and to justify any decisions the journal does make, the wiser option is to open up the data to the public.
Openly publishing peer review data could perhaps also help journals address another problem in academic publishing: fraudulent peer reviews. For instance, a minority of authors have been known to use phony email addresses to pose as outside experts and review their own manuscripts. More than 500 studies, most authored by scientists in China, have already been retracted due to such manipulation of peer review. Merely knowing how long it took to complete a peer review — that is, knowing when the reviewer was invited to evaluate the manuscript and when they submitted their completed report — could help give readers a sense of how rigorous the review was. Increased transparency could also dissuade peer reviewers from stealing papers and publishing them as their own, asking authors to cite their work as a quid pro quo for a positive review, and other unethical practices.
Opponents of open peer review commonly argue that confidentiality is vital to the integrity of the review process; referees may be less critical of manuscripts if their reports are published, especially if they are revealing their identities by signing them. Some also hold concerns that open reviewing may deter referees from agreeing to judge manuscripts in the first place, or that they’ll take longer to do so out of fear of scrutiny.
But a recent study of more than 18,000 reviews from five journals published by Elsevier — the world’s biggest academic publisher — found that publishing referee reports alongside manuscripts does not compromise the reviewing process, though only around 8 percent of referees chose to sign their names to their reports. Open peer review also didn’t take longer; it didn’t result in fewer referees agreeing to carry out the work; and it didn’t affect whether academics accepted or rejected papers, the study found. In light of the results, Elsevier said in February that it is considering rolling out open peer review at more of its journals.
Even when the content of reviews and the identity of reviewers can’t be shared publicly, perhaps journals could share the data with outside researchers for study. Or they could release other figures that wouldn’t compromise the anonymity of reviewers but that might answer important questions about how long the reviewing process takes, how many researchers editors must contact, on average, to find one willing to carry out the work, and the geographic distribution of peer reviewers.
Of course, opening up data underlying the reviewing process will not fix peer review entirely, and there may be instances in which there are valid reasons to keep the content of peer reviews hidden and the identity of the referees confidential. But the norm should shift from opacity in all cases to opacity only when necessary.
The change will not be easy. For the study of Elsevier’s open peer review trial, it took researchers two years of back-and-forth with legal professionals to get access to the data underlying the peer review process. To simplify the process in the future, Flaminio Squazzoni, a sociologist at the University of Milan in Italy who co-authored the original study, and his colleagues developed a standard protocol that they hope publishers will use to share peer review data. “Our experience shows that journals that share information on all aspects of the peer-review process can foster transparency and accountability in publishing, while protecting the interests of authors, reviewers, editors and researchers,” they write.
Let’s just hope that the academic publishers are paying attention.
Dalmeet Singh Chawla is a freelance science journalist based in London.
Archived comments are below.
Please see my proposal for peer-review reform.
https://www.timeshighereducation.com/opinion/peer-review-should-be-two-stage-science-first-process (Free registration required)
Here is an excerpt:
“Far too much emphasis is placed on who is proposing to do the research and the institutions with which they are associated, rather than on the actual science. Therefore, I recommend that review be conducted in two stages. Reviewers should initially receive only descriptions of the proposed research, written in the third person, with no preliminary results section or indication of the authors’ identities or affiliations. This would require that the proposal be evaluated and scored solely on the detail of its merits.”
“Accountability could be further improved by attributing each review to its author. Some might object that confidentiality allows reviewers to be more honest, fearing retaliation less. In fact, confidentiality allows reviewers more scope to favour friends, retaliate against foes and exploit their privileged access to the information in the proposal to advance their own research programmes. The National Institutes of Health reports having detected examples of these forms of misconduct.”
“Collectively, academics spend around 70 million hours every year evaluating each other’s manuscripts on behalf of scholarly journals — and they usually receive no monetary compensation and little if any recognition for their effort.”
https://www.thebookseller.com/news/elsevier-records-2-lifts-revenue-and-profits-960016
“Adjusted operating profit also grew 2% year-on-year to £942m, giving the publisher a profit margin of 37.1%, flat with last year’s 37%.”
Academics cannot figure it out?
I see the point of peer review differently — it is to (1) improve the quality of scientific information and communication, and (2) determine whether the work is of a quality that justifies sharing it with the wider scientific community.
The fact that publications have become currency for hiring, promotions, tenure, and reputation (with all the distortions such a currency system imparts) does not negate the basic fact that a publication is meant to be a public communication with others — and as such, review by your peers should be considered an integral component of that public sharing. While every system has risks — such as the one identified by CW above — public reviews also protect those who are less powerful, or who are presenting ideas heterodox to the reigning dominant views, from harmful interference. We need to right the academic ship; my views, policies, and humble opinions are formed in reaction to the distortions that have crept in, and returning to first principles might not be a bad way forward.
The point of peer review is to enable those commenting to be free from fear of retribution and thereby able to be candid in their statements. Allowing reviews to ‘go public’ would greatly hamper the ability of some (less powerful or reputed) researchers to speak openly. It seems like it is counter to the very idea of peer review that it be made open. This is why Publons has not taken off, IMHO.
I see your point.
Here’s a counterpoint. By staying anonymous, peer reviewers have, and sometimes take, the chance to (1) steal data or use it before publication and unreasonably trash a paper, and/or (2) help their friends and collaborators while hampering their enemies and competitors. I’ve seen it up close and in person. It is a real thing, though I suspect its frequency is decreasing.
In my opinion, reviews that go public would not hamper the ability of less powerful or less reputed researchers to speak openly. Times are changing. If a reviewer feels unable to speak freely and in public in peer review, there is something wrong with the system and it needs to be fixed. Maybe paper submissions should be anonymous and peer review public. If politics is any guide, sunlight (transparency) on the exercise of power tends to kill rot (corruption and abuse). Why shouldn’t that also apply to peer review?