eLife December 23, 2025

Are peer reviewers influenced by their work being cited?

https://doi.org/10.7554/eLife.108748.4

eLife Assessment

This important study explored a number of issues related to citations in the peer review process. An analysis of more than 37,000 peer reviews at four journals found that: (i) during the first round of review, reviewers were less likely to recommend acceptance if the article under review cited the reviewer's own articles; (ii) during the second and subsequent rounds of review, reviewers were more likely to recommend acceptance if the article cited the reviewer's own articles; and (iii) during all rounds of review, reviewers who asked authors to cite the reviewer's own articles (a practice known as 'coercive citation') were less likely to recommend acceptance. However, when an author agreed to cite work by the reviewer, the reviewer was more likely to recommend acceptance of the revised article. The evidence to support these claims is convincing (https://doi.org/10.7554/eLife.108748.4.sa0). In eLife's assessment vocabulary, 'important' denotes findings that have theoretical or practical implications beyond a single subfield, and 'convincing' denotes appropriate and validated methodology in line with current state-of-the-art.

Abstract

Peer reviewers sometimes comment that their own journal articles should be cited by the journal article under review. Comments concerning relevant articles can be justified, but comments can also be unrelated coercive citations. Here, we used a matched observational study design to explore how citations influence the peer review process. We used a sample of more than 37,000 peer reviews from four journals that use open peer review and make all article versions available. We find that reviewers who were cited in versions after version 1 were more likely to make a favourable recommendation (odds ratio = 1.61; adjusted 99.4% CI: 1.16–2.23), whereas being cited in the first version did not improve their recommendation (odds ratio = 0.84; adjusted 99.4% CI: 0.69–1.03). For all versions of the articles, the reviewers who commented that their own articles should be cited were less likely to recommend approval compared to the reviewers who did not, with the strongest association after the first version (odds ratio = 0.15; adjusted 99.4% CI: 0.08–0.30). Reviewers who included a citation to their own articles were much more likely to approve a revised article that cited their articles compared to a revised article that did not (odds ratio = 3.5; 95% CI: 2.0–6.1). Some reviewers' recommendations depend on whether they are cited or want to be cited. Reviewer citation requests can turn peer review into a transaction rather than an objective critique of the article.
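To make the odds ratios quoted above concrete, the short sketch below uses invented counts (not data from this study) to show how an odds ratio and an approximate 95% Wald confidence interval can be calculated from a simple 2×2 table of recommendations by cited and non-cited reviewers. It is only an illustration: the study itself used a matched design comparing reviewers of the same article and reported adjusted 99.4% intervals, neither of which this unmatched calculation reproduces.

```python
from math import exp, log, sqrt

# Hypothetical 2x2 table of recommendations (counts are invented, not from the study).
# Groups: whether the reviewer's own work was cited in the article under review.
favourable_cited, unfavourable_cited = 120, 80
favourable_uncited, unfavourable_uncited = 300, 320

# Odds of a favourable recommendation in each group.
odds_cited = favourable_cited / unfavourable_cited
odds_uncited = favourable_uncited / unfavourable_uncited

# Odds ratio: values above 1 mean cited reviewers were more likely to be favourable.
odds_ratio = odds_cited / odds_uncited

# Approximate 95% Wald confidence interval, calculated on the log odds ratio scale.
se_log_or = sqrt(1 / favourable_cited + 1 / unfavourable_cited
                 + 1 / favourable_uncited + 1 / unfavourable_uncited)
lower = exp(log(odds_ratio) - 1.96 * se_log_or)
upper = exp(log(odds_ratio) + 1.96 * se_log_or)

print(f"Odds ratio = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

With these invented counts the odds ratio works out to 1.60, similar in size to the 1.61 reported for later article versions; the matched analysis in the paper additionally controls for differences between articles by only comparing reviewers of the same article.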
eLife digest

Peer review is an integral part of scientific publishing in which active researchers – who may also serve as editors at scientific journals – review and assess the standard of submitted research. Peer review is therefore central to science, as it determines which articles are published in high-profile journals and, in turn, influences the careers of scientists. Reviewers are expected to be relevant experts in the field of research they are reviewing. But this can sometimes create a conflict of interest, as the reviewers' own articles may be cited in the manuscript under review, which could influence their peer review. Conversely, reviewers whose work is not cited may feel that relevant work has been overlooked and may suggest their own papers be added – a practice that has been exploited in the past.

To find out whether citations influence peer review recommendations, Barnett analysed more than 37,000 peer reviews from four journals that operate fully open peer review, making all versions of submitted manuscripts and reviews publicly available. Using a matched design, he compared two or more reviewers who evaluated the same manuscript. The analysis showed that being cited in the first round of review did not increase the likelihood of a favourable recommendation. However, in the second round of review, reviewers who were cited were more likely to approve the article. In contrast, reviewers who requested a citation to their own work were much less likely to approve the article. However, a similar pattern was observed for reviewers who suggested citing work other than their own. This indicates that some citation requests may reflect legitimate concerns about missing context rather than purely self-serving behaviour.

These findings provide valuable insights into how the peer review process can be improved. Some journals – including those analysed by Barnett – have already taken steps to reduce inappropriate requests for citations from reviewers. Other journals could consider implementing automated systems to flag self-citation requests, which could help improve the fairness and integrity of peer review systems, ultimately benefiting both scientists and science.
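As a rough sketch of what such automated screening might look like, the following code flags reviews that both ask for additional citations and mention the reviewer's own surname. Everything here is assumed for illustration only: the phrases, the flag_self_citation_request function and the surname heuristic are not taken from the journals studied, and any real system would need far more robust matching plus editorial judgement, since citation suggestions can be legitimate.

```python
import re

# Phrases that often accompany a request to add references (illustrative only).
REQUEST_PATTERNS = [
    r"\bshould (?:also )?cite\b",
    r"\bplease cite\b",
    r"\badd (?:a )?(?:reference|citation)s?\b",
    r"\bmissing (?:reference|citation)s?\b",
]

def flag_self_citation_request(review_text: str, reviewer_surname: str) -> bool:
    """Return True if the review asks for extra citations and also mentions the
    reviewer's own surname - a crude signal for editors to check, not a verdict."""
    text = review_text.lower()
    if reviewer_surname.lower() not in text:
        return False
    return any(re.search(pattern, text) for pattern in REQUEST_PATTERNS)

# Example: a review asking the authors to cite the reviewer's own paper.
review = ("The discussion of citation behaviour is incomplete; the authors "
          "should also cite Smith et al. (2021) on reviewer incentives.")
print(flag_self_citation_request(review, "Smith"))  # True
```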
Introduction

In 2024, a published peer-reviewed article included this remarkable sentence: 'As strongly requested by the reviewers, here we cite some references (35-47) although they are completely irrelevant to the present work' ( ). This was a rare public example of coerced citations, where a reviewer exploits the peer review process to increase their citation counts and hence further their own career ( ). Reviewers should be relevant experts, so some suggestions to cite their articles will be appropriate. However, excessive citation requests or requests to cite unrelated articles are unethical (Committee on Publication Ethics, 2019). Coerced citations can also come from editors trying to boost their journal's ranking ( ).

Coerced citations are reported as a common problem in peer review. In author surveys, two-thirds reported pressure from peer reviewers to cite unrelated articles ( ) and 23% had experienced a reviewer that 'required them to include unnecessary references to their publication(s)' ( ). Publishers have investigated whether 'hundreds of researchers' have manipulated the peer review process to increase their own citations ( ). Some reviewers may be exploiting their power over authors, who 'have a strong incentive to […] accept all "suggestions" by the referees even if one knows that they are misleading or even incorrect' ( ).

As reviewers are often in the same field as the article's authors, they may already be cited in the article without the need for coerced citations. Reviewers who are cited may give a more favourable peer review and be more willing to overlook flaws ( ). Some authors may try to exploit this using 'referee baiting' ( ) by favourably citing a reviewer's work.

The interactions during peer review between authors and reviewers can determine whether an article is accepted ( ) and what results are included in the published version ( ). Given the importance of peer review for science, studies that examine how peer review works in practice are needed (Tennant and Ross-Hellauer, 2020). Here, we examine interactions between peer reviewers and authors using four journals that publish all article versions and all peer reviews. We had two research questions:

1. Do peer reviewers give a more or less favourable recommendation when they are cited in the article?

2. Do peer reviewers give a more or less favourable recommendation when their review includes a citation to their own articles?

Results

A flow chart of the included reviews is shown in . The final sample size was over 37,000 reviews. There were more than 3,500 articles that were not included because they had not yet been peer reviewed, especially recent articles. More than 2,000 reviewers did not have a record in and so