This is adapted from our recent paper in F1000 Research, entitled “A multi-disciplinary perspective on emergent and future innovations in peer review.” Due to its rather monstrous length, I’ll be posting chunks of the text here in sequence over the next few weeks to help disseminate it in more easily digestible bites. Enjoy!
This section describes some of the limitations of the ways in which peer review has been decoupled from traditional journals, including via preprints and overlay journals. Previous sections:
- An Introduction
- An Early History
- The Modern Revolution
- Recent Studies
- Modern Role and Purpose
- Criticisms of the Conventional System
- Modern Trends and Traits
- Development of Open Peer Review
- Giving Credit to Referees
- Publishing Review Reports
- Anonymity Versus Identification
- Anonymity Versus Identification (II)
- Anonymity Versus Identification (III)
- Decoupling Peer Review from Publishing
- Preprints and Overlay Journals
- Two-stage peer review and Registered Reports
- Peer review by endorsement
Despite a general appeal for post-publication peer review and considerable innovation in this field, the appetite among researchers is limited, reflecting an overall lack of engagement with the process (e.g., Nature (2010)). Such a discordance between attitudes and practice is perhaps best exemplified by instances such as the “#arseniclife” debate. Here, a high-profile but controversial paper was heavily critiqued in settings such as blogs and Twitter, constituting a form of social post-publication peer review that occurred much more rapidly than any formal responses in traditional academic venues (Yeo et al., 2017). Such social debates are notable, but they have yet to become mainstream beyond rare, high-profile cases.
As recently as 2012, it was reported that relatively few platforms allowed users to evaluate manuscripts post-publication (Yarkoni, 2012). Even platforms such as PLOS have a restricted scope and limited user base: analysis of publicly available usage statistics indicates that, at the time of writing, PLOS articles have each received an average of 0.06 ratings and 0.15 comments (see also Ware (2011)). Part of this may be due to how post-publication peer review is perceived culturally: the name itself is something of an anathema, even an oxymoron, since most researchers consider a published article to be one that has already undergone formal peer review. At present, it is clear that while numerous platforms provide decoupled peer review services, these are largely non-interoperable. As a result, especially for post-publication services, most evaluations are difficult to discover, are lost, or are rarely available in an appropriate context or platform for re-use. To date, little effort seems to have been focused on aggregating the content of these services (with exceptions such as Publons), which hinders the recognition of decoupled review as a valuable community process and its use in further evaluation or assessment decisions.
While several new overlay journals are currently thriving, their history of success is limited, and most journals that experimented with the model returned to their traditional coupled roots (Priem & Hemminger, 2012). It is also worth noting that not a single overlay journal appears to have emerged outside of physics and mathematics (Priem & Hemminger, 2012). This is despite the fast growth of arXiv spin-offs such as bioRxiv, and the potential for layered peer review through services such as the recently launched Peer Community In (peercommunityin.org).
Axios Review was closed down in early 2017 due to a lack of uptake from researchers, with the founder stating: “I blame the lack of uptake on a deep inertia in the researcher community in adopting new workflows” (Davis, 2017). Combined with the generally low uptake of decoupled peer review processes, this suggests an overall reluctance among many research communities to step outside the traditional coupled model.

In this section, we have discussed a range of different arguments, variably successful platforms, and surveys and reports about peer review. Taken together, these reveal an enormous amount of friction against experimenting with peer review beyond the model that is typically, and incorrectly, viewed as the only way of doing it. Much of this can be ascribed to tensions between evolving cultural practices, social norms, and the different stakeholder groups engaged with scholarly publishing. This reluctance is emphasized in recent surveys: for instance, Ross-Hellauer (2017) suggests that while attitudes towards the principles of OPR are rapidly becoming more positive, faith in its execution is not. Such a divergence is perhaps to be expected, given that the rapid pace of innovation has not been matched by rigorous or longitudinal evidence that these models are superior to the traditional process at either a population or system-wide level (although see Kovanis et al. (2017)). Cultural or social inertia, then, is sustained by this cycle between low uptake and limited incentives and evidence. Perhaps more important is the general under-appreciation of the intimate relationship between social and technological barriers, an appreciation that is undoubtedly required to overcome this cycle. The proliferation of social media over the last decade provides excellent examples of how digital communities can leverage new technologies to great effect.
Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.3)