12. Peer Review: Anonymity Versus Identification 2

This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks to help disseminate it in more easily digestible bites. Enjoy!

This section describes part 2 of the lively debate around whether peer reviewers should be identified or remain anonymous. This is a big topic, so I’ll continue to slice it into a few smaller posts to make it easier to read. Previous sections:

  1. An Introduction.
  2. An Early History
  3. The Modern Revolution
  4. Recent Studies
  5. Modern Role and Purpose
  6. Criticisms of the Conventional System
  7. Modern Trends and Traits
  8. Development of Open Peer Review
  9. Giving Credit to Referees
  10. Publishing Review Reports
  11. Anonymity Versus Identification

Eponymous versus anonymous peer review (II)

Reviewer anonymity can be difficult to protect, as identities can be revealed, even non-maliciously: for example, through language and phrasing, prior knowledge of the research and the specific angle being taken, previous presentation at a conference, or even simple Web-based searches. Baggs et al. (2008) investigated reviewers’ beliefs and preferences about blinding. Their results showed double blinding was preferred by 94% of reviewers, although some identified advantages to an un-blinded process. When author names were blinded, 62% of reviewers could not identify the authors, while 17% could identify authors ≤10% of the time. Walsh et al. (2000) conducted a survey in which 76% of reviewers agreed to sign their reviews. In this case, signed reviews were of higher quality, were more courteous, and took longer to complete than unsigned reviews. Reviewers who signed were also more likely to recommend publication. In a study from the reviewers’ perspective, Snell & Spencer (2005) found that reviewers would be willing to sign their reviews and felt that the process should be transparent. Yet, a similar study by Melero & Lopez-Santovena (2001) found that 75% of surveyed respondents were in favor of reviewer anonymity, while only 17% were against it.

A randomized trial showed that blinding reviewers to the identity of authors improved the quality of the reviews (McNutt et al., 1990). This trial was repeated on a larger scale by Justice et al. (1998) and Van Rooyen et al. (1999), with neither study finding that blinding reviewers improved the quality of reviews. These studies also showed that blinding is difficult in practice, as many manuscripts include clues to authorship. Jadad et al. (1996) analyzed the quality of reports of randomized clinical trials and concluded that blind assessments produced significantly lower and more consistent scores than open assessments. The majority of additional evidence suggests that anonymity has little impact on the quality or speed of review, or on acceptance rates (Isenberg et al., 2009; Justice et al., 1998; van Rooyen et al., 1998), but revealing the identity of reviewers may lower the likelihood that someone will accept an invitation to review (Van Rooyen et al., 1999). Revealing the identity of the reviewer to a co-reviewer also has a small, editorially insignificant, but statistically significant beneficial effect on the quality of the review (van Rooyen et al., 1998). Authors who are aware of the identity of their reviewers may also be less upset by hostile and discourteous comments (McNutt et al., 1990). Other research found that signed reviews were more polite in tone, of higher quality, and more likely to ultimately recommend acceptance (Walsh et al., 2000). As such, the research into the effectiveness and impact of blinding, including the success rates of attempts by reviewers and authors to deanonymize each other, remains largely inconclusive (e.g., Blank (1991); Godlee et al. (1998); Goues et al. (2017); Okike et al. (2016); van Rooyen et al. (1998)).

The dark side of identification

This debate over signed versus unsigned reviews, independent of whether reports are ultimately published, is not to be taken lightly. Early career researchers in particular are among the most conservative on this issue, as they may fear that by signing overly critical reviews (i.e., those which investigate the research more thoroughly), they will become targets for retaliatory backlashes from more senior researchers (Rodríguez-Bravo et al., 2017). In this case, the justification for reviewer anonymity is to protect junior researchers, as well as other marginalized demographics, from bad behavior. Furthermore, author anonymity could potentially save junior authors from public humiliation by more established members of the research community, should they make errors in their evaluations. These potential issues are at least part of the cause of a general attitude of conservatism, and a prominent source of resistance from the research community towards open peer review (OPR) (e.g., Darling (2015); Godlee et al. (1998); McCormack (2009); Pontille & Torny (2014); Snodgrass (2007); van Rooyen et al. (1998)). However, it is not immediately clear how this widely claimed, but poorly documented, potential for abuse of signed reviews differs from what would occur in a closed system anyway, as anonymity itself provides a potential mechanism for referee abuse. Indeed, the tone of discussions on platforms where anonymity or pseudonymity is allowed, such as Reddit or PubPeer, is often problematic, with the latter even being described as facilitating “vigilante science” (Blatt, 2015). The fear that most backlashes would occur outside the peer review itself, and indeed in private, is probably the main reason why such abuse has not been widely documented. However, it can also be argued that reviewing with the prior knowledge of open identification prevents such backlashes, since researchers do not want to tarnish their reputations in a public forum.
Under these circumstances, openness becomes a means of holding both referees and authors accountable for their public discourse, as well as making editors’ choices of referees and publication decisions public. Either way, there is little documented evidence that such retaliation actually occurs either commonly or systematically. If it did, then publishers that employ this model, such as Frontiers or BioMed Central, would be under serious question, instead of thriving as they are.

In an ideal world, we would expect that strong, honest, and constructive feedback is well received by authors, no matter their career stage. Yet, there seems to be a very real perception that this is not the case. Retaliation against referees in such a negative manner can represent a serious case of academic misconduct (Fox, 1994; Rennie, 2003). It is important to note, however, that this is not a direct consequence of OPR, but rather a failure of the general academic system to mitigate and act against inappropriate behavior. Increased transparency can only aid in preventing and tackling potential abuse and publication misconduct, something which is almost entirely absent within a closed system. The Committee on Publication Ethics (COPE) provides advice to editors and publishers on publication ethics, and on how to handle cases of research and publication misconduct, including during peer review. COPE could continue to serve as the basis for developing formal mechanisms adapted to innovative models of peer review, including those outlined in this paper. Any new OPR ecosystem could also draw on the experience accumulated by Online Dispute Resolution (ODR) researchers and practitioners over the past 20 years. ODR can be defined as “the application of information and communications technology to the prevention, management, and resolution of disputes” (Katsh & Rule, 2015), and could be implemented alongside COPE to prevent, mitigate, and deal with any potential misconduct during peer review. Therefore, the perceived danger of author backlash is unlikely to be tolerated in the current academic system, and if it does occur, it can be dealt with through increased transparency. Furthermore, bias and retaliation exist even in a double-blind review process (Baggs et al., 2008; Snodgrass, 2007; Tomkins et al., 2017), which is generally considered to be more conservative or protective.
Such widespread identification of bias highlights it as a more general issue within peer review and academia, and we should be careful not to attribute it to any particular mode or trait of peer review. This is particularly relevant for more specialized fields, where the pool of potential authors and reviewers is relatively small (Riggs, 1995). Nonetheless, careful evaluation of the existing evidence, and engagement with researchers, especially those in higher-risk or marginalized communities (e.g., Rodríguez-Bravo et al. (2017)), should be a necessary and vital step prior to the implementation of any system of reviewer transparency. More training and guidance for reviewers, authors, and editors on their individual roles, expectations, and responsibilities would also have a clear benefit here. One effort currently looking to address the training gap for peer review is the Publons Academy (publons.com/community/academy/), although this is a relatively recent program and its effectiveness cannot yet be assessed.


Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)