Will AI Help or Hinder Scientific Publishing?


Last year, Mohammad Hosseini, an artificial intelligence ethics researcher at Northwestern University, worked with a team to evaluate about 500 article submissions as an editor of the journal Accountability in Research. A fraction of those, he said, appeared to have been obviously generated by artificial intelligence. “At this point, I think we have a good nose for that kind of paper because they can be very incoherent.” Excessive use of the em dash, abrupt logical jumps, and disjointed text are also telltale signs, he added.

But AI capabilities are getting better. And self-reported surveys and studies of scientific papers reveal artificial intelligence is becoming an unavoidable part of scientific publishing, both at the level of writing manuscripts as well as the peer review process.

Indeed, tools like ChatGPT can help scientists, particularly those who are non-native English speakers, make sense of a vast literature and streamline the writing process, said Roy Perlis, vice chair for research in the Department of Psychiatry at Mass General Brigham and editor in chief of the journal JAMA+AI. “I think it’s important to recognize that for some authors and scientists, this is a game changer.”

Many who work in science publishing agree that, with appropriate human oversight, AI tools can be used at different stages of the publication process. “AI has the potential to improve the quality, efficiency, and inclusivity of scholarly communications,” Renee Hoch, head of publication ethics at PLOS, wrote in an email to Undark. But while artificial intelligence can help with long-standing problems in the scientific publishing world, it also presents concerns, including compromising quality and breaching confidentiality in peer review. “Unfortunately, it can also empower bad actors to expand and expedite fraudulent activities, such as fabricating articles, datasets, and reviews,” Hoch added.

As more researchers use these tools, journals and publishers are having to grapple with this balance, said Emilio Quaia, a radiologist and the editor of the journal Tomography. “I think that AI will be an avalanche, unfortunately,” for scientific publishing, he said.


In a survey conducted by Nature of 5,000 international academics, around 8 percent of scientists reported using AI to write a first draft, translate a paper, or make summaries of other articles for use in their own paper; 28 percent reported using AI to edit their research articles. In another study, published last year in Science Advances, researchers analyzed more than 15 million biomedical abstracts, looking for excessive use of certain words associated with AI, such as “delves,” “underscores,” and “showcasing.” They found that at least 13.5 percent of abstracts published in 2024 were likely processed with language models.

Scientists have reported that they’ve found AI to be helpful in the writing process by identifying and summarizing literature that researchers may have missed, translating drafts written in the researchers’ original language, and correcting grammar, according to survey findings from Oxford University Press. In particular, it can help those who are not native English speakers, and who are at a disadvantage in the sciences, improving readability, said Hosseini.


But large language models are prone to hallucinations and, because they are trained on already published data, scientists who use them may risk inadvertently plagiarizing or misrepresenting studies. Some phrases and content provided by AI tools, Hosseini said, may even be taken verbatim from other sources.

And while data fabrication predates AI, the technology certainly makes it easier for bad actors to generate fake papers from scratch, said Perlis. Already, low-quality papers, produced by paper mills and likely written with the help of AI, have become a problem for preprint servers and across multiple areas of research, from cancer to nutrition science, quickly overloading a burdened system.

Meanwhile, scientific publishing is confronting AI in another realm: peer review. As an editor, Perlis has noticed that, since the pandemic, it has gotten more difficult to recruit reviewers willing to evaluate manuscripts; as researchers became more burned out, more began declining review requests or simply did not respond to editors’ calls. Allowing researchers to use AI could “broaden the pool of people who can contribute to science in some way,” he said. (A 2025 survey of about 1,600 scientists by the publishing company Frontiers found about half reported using AI to help conduct peer review.)


AI may also be a more neutral alternative to human reviewers, who could have certain biases against a particular hypothesis or group of researchers, Quaia said. On the other hand, because generative AI systems are usually trained on data that reflect historical biases in publishing, they could amplify those disparities in the peer review process, he added. Some research backs this up: For example, one 2025 study published on a preprint server, meaning it has not yet been peer reviewed, tested four large language models as peer reviewers and found that three exhibited a bias toward well-known authors and all four showed a bias toward authors from prominent institutions. Another preprint found that nine large language models similarly gave better rankings to papers from high-status institutions. But when specifically prompted to take gender and geographical diversity into consideration, a large language model was able to overcome these biases when identifying expert sources for peer review, another study found. These biases, Quaia said, could be limited by training algorithms on a broad spectrum of research across disciplines, regions, and demographics.

With more researchers resorting to these tools, Perlis said, editors and publishers are wondering how they can use AI to empower peer review without replacing humans in the process. Despite its potential advantages, scientists are still better at identifying the novelty of a study and how it contributes to our knowledge, Perlis said. “I think that’s really hard to automate.”


In light of increasing AI use among researchers, journals are establishing policies and strategies to safeguard the publishing process. Among the top 100 scientific journals, 87 percent provide guidance on the use of generative artificial intelligence, according to a 2024 study.

These guidelines are quite consistent, in part because organizations such as the Committee on Publication Ethics, the World Association of Medical Editors, and the International Committee of Medical Journal Editors have helped to create some cross-industry standards, said Hoch.

While fabricating content with AI is clearly not permitted, most publications and publishers, in agreement with researchers, allow AI tools to be used for data analysis and language editing, usually with a detailed explanation of the purpose and extent of use. For example, PLOS requires authors to “include the name(s) of any tools used, a description of how the authors used the tool(s) and evaluated the validity of the tool’s outputs, and a clear statement of which aspects of the study, article contents, data, or supporting files were affected/generated by AI tool usage.”

For peer review, most publishers allow the use of AI, but one of the main concerns is maintaining the confidentiality of unpublished data. For example, some of the leading journal publishers, including Springer Nature, Elsevier, and JAMA Network, ask reviewers not to upload unpublished manuscripts to generative AI tools.

Major publishers also prohibit citing AI tools as co-authors and do not permit the use of AI-assisted tools to create or alter images. Researchers, publications, and organizations have also urged caution and transparency when using AI tools. For example, the PLOS guidelines state that it falls to researchers to confirm that any content created or edited using AI is accurate and valid, that there are no concerns about potential plagiarism, and that all relevant sources are cited.

Other journal editors also emphasize the importance of human oversight. “Ultimately, this is a case where the human authors are the ones who are responsible for every word and every number in the paper,” Perlis said of JAMA.


However, some researchers may not disclose their AI use, so journals are also turning to AI detection tools. But those tools are still quite limited, said Hosseini.

The publishing system will have to continuously keep an eye on quickly developing tools and adapt accordingly, said Perlis. “AI is forcing us to evolve, on some level, every aspect of the publishing process,” he added, ultimately making the field “take another look at every aspect of how scientific knowledge gets created and presented and refined.”


Claudia López Lloreda is a senior contributor at Undark and a freelance science journalist covering life sciences, health care, and medicine.