Publisher backs ‘responsible’ AI use in peer review

Frontiers says survey showing reviewers already using artificial intelligence supports transparent, "supervised" approach

The scholarly publishing company Frontiers is encouraging researchers to adopt a "responsible, supervised" use of artificial intelligence in the peer review process for academic papers.

Frontiers published a report this month focused on "unlocking AI's untapped potential" in scholarly publishing, which includes the results of a survey it conducted on AI use by researchers around the world.

Of about 900 researchers who replied to a question about their use of AI in peer review, 2 per cent said they always use it, 8 per cent often and 17 per cent sometimes. Just over a quarter said they use it rarely, and just under half said they never use it.

Some 24 per cent of respondents said their use of AI in peer review had increased over the past year, with 70 per cent reporting no change and 5 per cent reporting a decrease.

Frontiers, which is based in Switzerland, described these results as "a key turning point" that paves the way "for a more efficient and consistent" publishing system.

Use of AI in peer review is undergoing "progressive normalisation", its report says, but it adds that this is "only the first step toward confident adoption".

The report says: "Harnessing AI purposefully is essential to shaping the industry's conversation about the future of peer review, a process long challenged by issues of sustainability, transparency, and fairness.

"Used responsibly, AI can ease these pressures and reinforce trust in the evaluation of science. Peer review is where the next wave of meaningful progress will come from."

AI as a 'collaborative tool'

In a foreword to the paper, Frontiers chief executive and co-founder Kamila Markram says AI is often used in peer review "for surface tasks, like polishing language, drafting text or handling administration, rather than for deeper analytical and methodological work where it could truly elevate rigor, reproducibility and scientific discovery".

She says the unethical use of AI to help produce low-quality papers increases the need to use AI for checks that prevent such papers being shared, while the report says use of AI could "ease reviewer fatigue, improve consistency and support integrity checks".

Based on the survey findings, the report says there is "a need to raise awareness about how AI can most empower reviewers", adding that such use should "always [be] with human oversight and accountability".

"Effective use of AI is not about replacing human judgment but about training researchers to prompt, guide and critically interpret AI outputs, transforming the technology from a mechanical assistant into a collaborative tool that enhances human creativity in research," the report says.

Its recommendations say research institutions should provide training on the opportunities and challenges of AI and set clear guidelines for its use. Funders should mandate transparency on AI use and ensure accountability, it adds.

Alignment with guidelines

Key organisations have previously published guidance on whether AI should be used in peer review.
These include a 2023 guide from the STM Association of scientific publishers, which says: "GenAI should not be used to create a review of a paper…Reviewing implies responsibilities and tasks that can be attributed to and performed only by humans."

A 2024 guideline published by the European Commission says researchers should "refrain from using generative AI tools substantially in sensitive activities…for example peer review". It adds that using AI to find background information is not substantial, but that "delegating the evaluation or the assessment of a paper is a substantial use".

Frontiers told Research Professional News its policy "is that peer review remains strictly human-led", with editors and reviewers being held responsible.

It said papers must not be uploaded to external or public AI tools, but that Frontiers "provides internal AI tools within [a] secure review environment" that help "summarise manuscripts, highlight areas for analysis and assist in drafting constructive feedback".

It said this policy "is aligned with both EU and STM guidance and is designed to sit within these frameworks", adding that STM "explicitly recognises as appropriate" the use of publisher-governed, non-public AI.

'Reflecting the reality'

"Where Frontiers adds clarity is in recognising that responsible, supervised AI support…is not the same as [delegating] peer review to AI," the company said.

"Frontiers believes the greater risk to peer review quality comes from unregulated, opaque or undisclosed AI use, which is already occurring across the research ecosystem.

"Our whitepaper reflects the reality that many researchers are already using AI tools during peer review, sometimes outside existing guidance. Frontiers' position is that clear boundaries, human accountability and well-governed, secure tools are more effective than blanket prohibitions in protecting and strengthening research integrity.

"By setting clear boundaries, maintaining human accountability, and providing secure tools with governance and monitoring, our approach is designed to protect trust, quality and reliability of peer review, while contributing to an evolving, evidence-based conversation about responsible AI use in scholarly publishing."