Peer Review: Reform and Renewal in Scientific Publishing
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please contact email@example.com to use this work in a way not covered by the license.
For more information, read Michigan Publishing's access and usage policy.
Types of Peer Review
For a number of decades, peer review generally operated under one of two main models: single blind or double blind. Though there were variations of approach, almost all peer-reviewed journals could be classified in this way. However, since the late 1990s, a plethora of alternatives to the traditional models has emerged. The various kinds of peer review now available do not fit neatly into discrete categories, as the types might differ in only one aspect (or in many).
The question of anonymity in peer review, for instance, differs across models. Under single-blind review, the identities of the reviewers are kept secret from the authors; under double-blind review, the identities of the authors are also kept secret from the reviewers. There is also a rarer variant, triple-blind review, where the identities of the authors are kept secret from the editors as well. Opposing these blind forms of peer review is open peer review, where the identities of the reviewers are revealed to the authors. Open peer review might then be extended to the publication of the reviewers’ names as well as the content of their reviews alongside the final article.
Another aspect of peer review is whether it should operate discretely for each journal or whether reviews can be transferred between journals to accelerate the review process and reduce redundancy. A third aspect is whether reviewers operate separately from each other and from the authors or whether the process should be more collaborative. Finally, there is the question of whether peer review should be conducted prior to publication or whether there is a role for postpublication review, either in addition to or instead of prepublication review.
One of the unexplained phenomena related to the development of peer review is the split between single- and double-blind review across subject areas. For example, across Wiley journals, 95 percent of physical science and health science journals operate single-blind peer review, 72 percent of life sciences journals are single blind, but only 15 percent of social sciences and humanities journals are. Social sciences and humanities journals are also the only subject category employing triple-blind review (1 percent). There seems to be no obvious factor or factors that should lead to a preference for author anonymity in these subject areas. Indeed, one might argue that author anonymity would be more important in fields where the potential incentives for bias were more numerous, such as for medical journals. The Wiley statistics also indicate that although the world is changing, it is changing slowly. Of their 1,593 journals, only 8 now offer some form of open peer review and only 4 others offer collaborative review. Single and double blind remain the dominant paradigms for peer review.
Anonymity of Reviewers (Single Blind versus Open)
The most common form of peer review, particularly among science journals, is single-blind review. This seems to have been the model of peer review widely adopted when peer review itself became commonplace, and this preference has not radically changed. Under single-blind review, the author does not know who the reviewer is. The key benefit of the anonymity is that it protects reviewers from criticism or the displeasure of authors and thus encourages reviewers to be candid in their evaluation of the manuscript without fear of reprisal. Those who have worked for journals will have experienced the occasional disgruntled author, whether this is a reactionary outburst that can be ignored or a more measured and/or persistent objection that might turn into a formal appeal. In either case, the reviewers are protected from becoming embroiled in this follow-up due to their anonymity.
It is not only the spontaneous form of unpleasantness from which the reviewers are protected. Peer review, as generally operated, is a process that happens in confidence and under the auspices of a specific journal. Reviewers might review several versions of the same paper, but their involvement with the paper is still limited by the requirements of the journal. However, revealing the identities of the reviewers exposes them to the potential for an open-ended and ongoing discussion with the authors outside the auspices of the journal. This is a particular concern for journals working in small fields where powerful personalities can wield excessive influence. While some reviewers might feel comfortable with this open collaboration with authors, others are likely to want some limitation on their commitments. Anonymity protects reviewers from any temptation among authors to contact the reviewers outside the confines of the formal review process.
The major criticism of blinded (or closed) review, however, is that the anonymity offered to reviewers limits their accountability for the recommendations they make, which might determine whether a paper is rejected. Given the importance of being published for career development and for securing research funding, reviewers have a significant position of responsibility. Richard Smith, former editor of the British Medical Journal (BMJ), writes, “The primary argument against closed peer review is that it seems wrong for somebody making an important judgment on the work of others to do so in secret. A court with an unidentified judge makes us think immediately of totalitarian states and the world of Franz Kafka” (Smith, “Opening”).
However, this desire to make reviewers accountable might be misguided. After all, reviewers are already accountable to the editors they serve; they are unlikely to be used for future reviews if their comments are unhelpful or derogatory. In contrast, what sort of accountability is it that opens up reviewers to possible blowback from authors without any mediation or regulation? Also, the court analogy is unjustified because authors are free to choose where they submit and are free to publish elsewhere if rejected (“Striving for Excellence”). Comparing reviewers to judges misses the fact that it is the editor who is ultimately responsible for what is published, and the identity of the editor is always known; reviewers are not judge and jury but more akin to expert witnesses. This is not to say that accountability in journal publishing is not important. It is good practice for journals to have an appeal process for investigating complaints from authors, particularly if there are accusations of misconduct by editors or reviewers. However, such processes will only be improved by the increased adoption of ethical standards across journals, not by exposing reviewers.
The ideological impulse toward transparency is not the only motivation for open peer review. When Nursing Research adopted a policy of publishing reviewer reports, the reason was not primarily ethical; they wanted to provide instruction to new authors on the workings of the editorial office. By publishing the full paper trail, authors could see how manuscripts are revised and how decisions were made (Dougherty). This kind of guidance is important for new authors; however, it does not seem to require the naming of reviewers or even the publication of real reviews. Illustrative examples, alongside the author guidelines, would serve the same purpose.
Another claimed benefit of open peer review is that it allows reviewers to take credit for their work (Groves). However, it is not evident that reviewers are seeking this kind of credit. According to a survey, the primary reasons for reviewing were duty to the academic community (91 percent agree) and the joy of improving the paper (78 percent agree). In contrast, reviewers were less likely to review if their name was published with the article (38 percent) and if their signed report was published (47 percent; Ware 8–9). Moreover, the benefits that might accrue from being identified as a reviewer on a paper are limited because they will only be apparent to the readers of that paper. If reviewers do want credit, then it is credit that they can use as evidence of their academic activities and thus further their career, such as Continuing Medical Education (CME) credit (for those working in medicine). Publons, a new initiative that aims to give reviewers credit for their reviewing activity, is not predicated on open peer review, though it does facilitate the publication of reviews (https://publons.com). In addition, ORCID, Faculty of 1000, and the Consortia Advancing Standards in Research Administration Information (CASRAI) initiated a community working group to address this question in an effort to develop a standard for reviewer recognition using ORCID identifiers (Padula).
The other main motivation for open peer review is the intuition that it might improve the quality of reviews if reviewers are required to provide their names. However, evidence from trials has not substantiated this intuition. In two trials, open peer review produced no statistically significant increase in quality compared to blinded review (van Rooyen 25; Walsh 49). In a third study, blinded reviews received a higher average rating (+0.41; 8 percent) than unblinded reviews. More significant was the increased proportion of reviews rated excellent among the blinded reviews (McNutt 1375). Anecdotally, it was noted that, on balance, open reviews were more courteous and less abusive, but since only a small minority of reviews are abusive, this was not considered to be a significant result (Walsh 48–49; McNutt 1374). Furthermore, though it does seem possible in theory that reviewers might be more motivated to provide lengthier reviews if their names are attached to them, it does not follow that the reviews will be better, primarily because so many reviewers have never received adequate (or indeed any) training on how to conduct good peer review, an issue discussed elsewhere in this book.
On the other hand, there are several problems with open peer review. Some commentators have noted the potential for junior reviewers to have their careers hindered by criticizing more senior colleagues (Walsh 50; “Striving for Excellence”). Others point to potential for damage to working or personal relationships, particularly in a small field: “I do not particularly want the author to know that I was the reviewer who pointed out the boneheaded thing he or she did” (Albanese). There is also potential for nepotism if named reviewers attempt to curry favor with senior colleagues by praising their work in review.
There are added potential problems if reviewers are named and their reviews published, as any published statement is subject to libel law. Any statement made in a review that might be considered defamatory against the author could potentially be the subject of legal action against the reviewer.
The other problem with open peer review is the potential for loss of reviewers. Ware (2008) found that reviewers were less likely to review if their names were published, and trials of peer review found that a number of reviewers will decline to participate in an open review system (Walsh; van Rooyen; Khan). Given that editors often struggle to secure reviewers, the concern about losing reviewers will certainly make editors reluctant to switch to open review unless reviewers’ attitudes change.
Anonymity of Authors (Double Blind)
The most common form of peer review among social science and humanities journals is double-blind review, where the identities of the authors are kept from the reviewers and vice versa. Though not widely practiced among science journals, double-blind review has been found to be the preferred model of a majority of respondents (56 percent), and most considered it to be effective (71 percent; Ware 18).
The principal reason for operating double-blind review is to remove the possibility of bias from the review process, so that papers are judged on their merits and not on gender, nationality, status, or other factors related to the author. There is some anecdotal evidence, and some high-profile cases, that indicates that bias does occur, but it is difficult to judge how common bias actually is. Lee et al. (2012) judged that the evidence was inconclusive as to whether double-blind peer review actually reduces bias (11). This might seem counterintuitive—surely anonymity would eliminate any possibility of bias?—but the explanation might be that in order for double-blind review to reduce bias, we would first have to assume that bias actually takes place; Lee et al. found little evidence of routine bias in peer review. Nevertheless, if bias does occur—even on a small scale—it would be preferable to remove the possibility of that bias rather than allowing it to remain. It is perhaps pertinent to ask why triple blind is not more widely used, since the possibility of bias (even subconscious bias) is likely to be an issue for editors as much as for reviewers. The probable explanation is that editors usually like to retain a high level of oversight in the editorial office, whereas triple blind would require them to divest a significant level of oversight to an administrator. Some journals have no administrator other than the editor, so triple-blind review would be impossible to implement.
The major problem identified with double-blind peer review is the practical issue of maintaining anonymity. A survey of 370 chemistry editors and editorial board members found that the most common reason for not adopting a double-blind process was that such protections were perceived to be “pointless” because the identity of the author could be all too easily guessed (Brown 133). Cho et al. found that reviewers were able to identify at least one author for about 40 percent of the papers included in their study. Experience shows that anonymizing papers can be difficult, as it involves not only removing the title page but often removing certain references (if the authors draw attention to their earlier work). Additionally, the name of the research location must be removed, and in some cases, maintaining anonymity requires rewriting certain passages in the third person. Even anonymizing the paper itself might be insufficient, as many submissions begin life as working papers or conference presentations, so that an Internet search on the manuscript title can reveal the author names. Some fields are so specialized that an engaged researcher is likely to know the topics that others in the field are working on and so are likely to make a fair guess about the identity of the author based only on the topic of the paper.
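The anonymization steps just described lend themselves to a simple automated pre-check before a paper is sent to reviewers. The Python sketch below is purely illustrative: the function name, the patterns it looks for, and the sample text are hypothetical assumptions rather than any journal's actual tooling, and such a check can only flag the most obvious cues (author names, institution names, first-person references to earlier work); it cannot catch identification by topic alone.

```python
import re

def find_deanonymizing_cues(text, author_names, institution):
    """Flag passages that may reveal author identity in a supposedly
    anonymized manuscript. Illustrative sketch only."""
    cues = []
    # Author surnames surviving in the body or references
    for name in author_names:
        if re.search(re.escape(name), text, re.IGNORECASE):
            cues.append(f"author name mentioned: {name}")
    # The research location, which double-blind review requires removing
    if institution.lower() in text.lower():
        cues.append(f"institution mentioned: {institution}")
    # First-person references to earlier work ("our previous study"),
    # which often need rewriting in the third person
    for m in re.finditer(r"\b(our|my) (previous|earlier|prior) \w+",
                         text, re.IGNORECASE):
        cues.append(f"self-reference: '{m.group(0)}'")
    return cues

manuscript = ("We extend our previous study conducted at Midtown University, "
              "following the protocol of Smith et al. (2019).")
print(find_deanonymizing_cues(manuscript, ["Smith"], "Midtown University"))
```

Running the sketch on the sample sentence flags all three kinds of cue, which illustrates the chapter's point from the other direction: even a passage this short needs several distinct repairs before it is plausibly anonymous, and a determined reviewer could still guess the authors from the topic or a web search on the title.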
There are other criticisms of double-blind peer review. Various studies have found that using a double-blind process had no effect on the quality of reviews (Ware 17; Goldbeck-Wood; Cho et al.), though Justice et al. did find an improvement in quality. Yet there seems to be no particular reason we should expect double blind to improve the quality of reviews, so this doesn’t seem like a pertinent criticism. Another criticism is that hiding the identity of the authors from the reviewers prevents reviewers from raising any conflicts of interest. However, since the reason for raising conflicts of interest is due to potential bias, any such conflicts of interest would be irrelevant if double blind was operating effectively. It is also worth mentioning that under double-blind review, editors and the editorial office can also look for conflicts. Determining conflicts of interest is not solely the responsibility of the reviewer, nor should it be.
Perhaps the only substantial objection to double-blind review, apart from the practical difficulty of operating it, is the assertion that reviewers are limited by not knowing the identities of the authors. Does knowing the identities of the authors prompt the reviewers to ask appropriate questions? For example, it might allow the reviewer to compare the prior work by the authors to determine whether the present submission was new research or derivative (Nature Publishing Group 606). However, one might ask how relevant such questions are for the review process. After all, peer review is not about judging whether the authors have made a substantial advance in their own understanding but whether their submission constitutes a substantial advance for the field.
Transferable/Portable Peer Review
One relatively new development in peer review is the idea that authors could, having been rejected from one journal, be allowed to transfer reviews, along with their manuscripts, when they submit to another journal. This does not guarantee that the second journal will not seek further reviews but is intended to speed up the review process and reduce the pressure on reviewers. After all, it is not uncommon for someone to be asked to review the same paper for two different journals.
There are various models for transferable peer review. Some journals, particularly society titles, have one or more “sister” journals that fall within the same broad subject area while having a different focus. An editor might feel that a paper is not suitable for the journal he or she edits but that it would fall within the scope of the sister journal. Another motivation, this time for publishers, is to encourage authors to remain within their “stable” of journals or encourage submissions to their open-access titles. The option to transfer a rejected paper can be attractive to an author if it is perceived as expediting the road to publication elsewhere. Peerage of Science has introduced a new model that operates peer review outside any specific journal. Instead, affiliated journals can look through articles that have already been reviewed and offer to publish them. Another example of “portable” peer review is the services offered by organizations such as Rubriq (http://www.rubriq.com/) or Axios (http://www.axiosreview.org). In these instances, papers undergo external peer review and then Rubriq or Axios provides feedback to the authors in an attempt to save time and effort for both authors and journal editorial offices. In some instances, these services might even refer authors to submit their research to a specific journal or set of journals.
One practical problem with transferable peer review is that journals often assess articles by very different criteria, meaning that reviews conducted under the auspices of one journal might not meet the requirements of another. Journals that agree to transfer papers need to have some alignment in their reviewer forms. This variation in standards can also limit the expansion of portable peer-review services if their reviews cannot meet the requirements of potential publications.
Transferable peer review breaks down some of the traditional boundaries in journal publishing, where peer review has often been viewed as something that happens in confidence under the oversight of the journal. Transferability of reviews not only suggests that reviews are a filtering device for that journal but implies a more general statement on the quality of the paper. Transferring reviews also raises some ethical considerations—most pertinently, who owns reviews? There is currently no consensus on this issue. Some authors think journals own the reviews and that reviewers have given their consent to this when agreeing to review; others think that review is a form of privileged communication and therefore still owned by the reviewer (Moylan). Whatever the case, it seems both prudent and honest to be clear with potential reviewers concerning how their reviews will be used. Transferring reviews with papers might break the confidentiality of the review and thus lead potential reviewers to be more reluctant to accept invitations to review.
Independence of Reviewers (Collaborative Review)
When journals solicit two or more reviews of a paper, they will typically approach the reviewers separately and ask them to produce independent reviews. It is rare for journals to encourage collaboration among reviewers, especially in the context of blind peer review. The rationale for this confidential approach is that two or more reviewers reporting independently can achieve greater objectivity than can the same two or more reviewers coming to a joint conclusion. However, changes in technology have encouraged some journals to try innovative, collaborative approaches to peer review that challenge this status quo.
The European Molecular Biology Organization (EMBO) journals offer reviewers the opportunity to provide feedback on each other’s reports. The editorial office then combines these comments into a single report. Another variation is operated by eLife, where there is discussion among reviewers, overseen by an editor, until a consensus emerges and a single report is generated from the discussion. After an initial independent review, Frontiers allows real-time online dialogue among authors and review editors until consensus is reached.
Collaborative review could even be employed as a mechanism to train the authors and reviewers of tomorrow. On the assumption that learning how to deconstruct a paper is a great way to train an author to construct a paper of his or her own, the journal Headache runs a monthly virtual meeting whereby the United States–based Fellows in Headache Medicine receive instruction on some aspect of evaluating a paper. Following the signing of a confidentiality agreement, the fellows then review a “live” paper collectively. The review is then written up by the meeting chair, typically a member of the editorial board who possesses both subject expertise and a strong background in methodological design and reporting. The journal reports that the discussion is usually very detailed and results in a very thorough review from multiple perspectives.
Anecdotal evidence is that reviewers enjoy the collaboration these models permit. There is also the perceived advantage that if reviewers collaborate with others, they are more likely to work with care and consideration. Authors also benefit from receiving a single report if it removes the possibility of receiving conflicting recommendations from two different reviewers.
Collaboration can feel more positive and constructive than traditional approaches to peer review. Reviewers and editors might see such collaboration as focusing peer review more on improving the paper rather than seeking reasons to reject it, and while this might seem a worthy aspiration, the reality is that busy reviewers rarely have the time or inclination to be unpaid supervisors for junior researchers. Journals, after all, are not vehicles whereby reviewers educate other reviewers, like heavily supervised student coursework, but vehicles whereby authors are able to disseminate knowledge to readers.
Public Peer Review
Another new approach to peer review takes advantage of platforms reminiscent of our familiar social media outlets, such as Facebook or comment sections in an online newspaper. Public peer review (sometimes also called “open” peer review) makes the submitted paper available to a large pool of individuals and allows them to add unsolicited comments. This review process has much in common with postpublication peer review (see the following), inasmuch as the paper is made public while the review process is ongoing. The difference between public and postpublication peer review, as we shall see, is that public peer review occurs prior to the publication of a final copyedited and typeset version—the version of record.
Experiments with public peer review have proved less than successful (Björk and Hedlund). In 2006, Nature conducted a trial in which authors were asked if they would be willing to have their manuscripts placed on an open server where comments could be placed by any reader. Meanwhile, manuscripts were also reviewed in the normal way. The trial was judged to be a failure. Of 1,369 authors, only 71 (5 percent) agreed to have their papers placed on the open server. Despite healthy traffic on the server, 33 percent of papers received no comments at all. None of the public comments left for the papers were deemed to be particularly significant or to have influenced the decision of the editors. The general reaction to the trial seems to have been indifference: “Anecdotally, potential commenters felt that open peer review is ‘nice to do’ but did not want to provide any feedback on the papers on the server” (Nature).
Those who run editorial offices will know that editors must typically approach several people before someone assents to review a paper. Reviewers, after all, are trying to fit reviewing into a busy schedule—not least their paid employment—and the more sought-after reviewers are forced to decline invitations quite regularly. Given this response rate for blind reviews solicited by editors, it is unsurprising that the response rate for unsolicited public reviews is so poor. For that matter, editors select reviewers they can trust because of their demonstrated expertise and are not interested in the random comments of those potentially unqualified to review. Here again, it seems unsurprising that the unsolicited comments of public peer review would produce such patchy results.
Björk and Hedlund (2015) raise the additional question of maintaining consistency in the peer review process, suggesting that leaving the review process to the whimsy of unsolicited reviewers does not give equal opportunity for rigorous peer assessment to all submissions (90). Some suggest that public peer review can only work if there is an incentive for reviewers to contribute, such as giving DOIs for published reviews (ScienceOpen) or awarding points for comments (PeerJ; Winker 144).
There is an intuitive appeal in the idea of public peer review, making a gesture as it does to the virtues of transparency and a democracy of opportunity for all to be heard, thus exposing manuscripts (in principle) to all peers rather than to a select few. There is, however, a paradox at the heart of public peer review: the point of publication is the dissemination of knowledge and the point of peer review is to determine what is to be disseminated. Public peer review confuses these priorities and potentially leaves neither publication nor reviewing as well off as they found them.
Postpublication Review / Dynamic Articles
Peer review developed as a prepublication process, essentially as an extension of the editor’s decision concerning what should be accepted for publication and what should be rejected (and, more commonly now, what should be revised before acceptance). However, in recent years, there have been calls from some in the research community to change this paradigm in an effort to increase transparency by opening up manuscripts to review after publication. Defined broadly, we can say that postpublication review already exists even in traditional journals, through letters to editors, commentaries, and other associated media (Winker 143). However, letters to the editor engage with the published article rather than assessing it or revising it, since traditional journal publishing has sought to maintain the version of record after publication. The status quo does permit some changes to that version of record, such as a retraction if the paper is found to be fraudulent or a correction to be published in a later issue of the journal, but the goal of the publisher is to produce a final and authoritative version of an article.
There have been a number of attempts to encourage postpublication commentary, such as those journals that set up comments pages to accompany articles. Beyond having a comments page, F1000 Prime allows members of the “faculty” to review papers recently published in a variety of journals and provide recommendations to subscribers about what they should read. The F1000 Prime model provides an additional filter for readers, over and above what is selected by editors for publication. A different approach, adopted by PubPeer, allows individuals to post anonymous comments about any published paper. PubPeer boasts some success, having exposed misconduct that has led to the retraction of some papers. On the other hand, PubPeer has also come under criticism for allowing individuals to post anonymously, which might cultivate an environment to “vent spleen at the imperfections of colleagues” (Blatt). Good journals already have policies for dealing with allegations of misconduct in a balanced and confidential manner; it is not clear what advantage there is in posting these allegations to the Internet rather than making them in confidence to the journal.
Postpublication review, more narrowly defined, undertakes peer review after publication either in addition to or instead of prepublication review. An appealing aspect of postpublication review is that it speeds up publication, allowing papers to be placed before the public within days rather than after many months. Another appealing aspect is the transparency of the review process, with readers able to see the manuscript both before and after review. There might also be some appeal in introducing some dynamism into journal publishing, allowing articles to be revised after publication.
But there are also some concerns about postpublication review. What is the status of a published paper that no one has reviewed? Readers have the apparent benefit of immediate access to the paper but no basis on which to determine whether it is worth reading. Researchers already struggle to stay on top of the literature in their fields, so publishing additional papers without any filter would only add to that difficulty. Furthermore, the current model for scientific literature depends on the philosophy that there is a static version of record, an article that can be cited and referred back to. There is a danger that making journal articles dynamic undermines that standard and would require a new understanding of the role of articles in preserving and disseminating scientific knowledge.
One example of this postpublication paradigm is F1000 Research, which brands itself an “Open Science publishing platform.” Authors are encouraged to submit their papers, regardless of whether the results are negative or inconclusive. An in-house team checks papers against their basic standards. Assuming everything is satisfactory, the paper is rapidly published online. After publication, invited experts conduct peer review. While the review process is public (or “open”), the reviews are solicited. Articles that pass peer review are indexed in PubMed and elsewhere. The article remains published regardless of the reviewer reports. Authors are encouraged to respond to reviewer comments and publish revisions. F1000 Research is, in some sense, a “halfway house.” On the one hand, publication happens before peer review. On the other hand, certain features usually associated with publication (such as indexing) are held back until after review. All versions of the article are citable, including the prereviewed version, but the versions are linked.
Some have argued that allowing articles to be revised postpublication would reduce the flood of scientific papers now being published, since authors would not need to publish new papers that update their findings (Winker 145). However, this misreads the main drivers of publication (i.e., career advancement and research funding) and misrepresents the nature of the scientific literature. Journals are not encyclopedias and articles are not definitive summaries of the state of scientific knowledge. Research articles (as opposed to review articles) report on specific findings and thus are discrete markers along the progress of knowledge. It would be a mistake to confuse the scientific literature for a Wikipedia-style outlet that can be constantly refined and updated.
Peer Review without Publishers
One of the criticisms of peer review, and journal publishing in general, is the role of commercial publishers. Given that peer review differentiates journal content from unfiltered content posted on, say, an Internet blog, undertaking peer review is one way by which publishers add value to submitted content. Obviously publishers themselves do not provide peer review, but they do often provide the infrastructure and underwrite the cost of such review. While publishers usually mandate that editors have editorial autonomy when choosing what to publish, there are still those who are uncomfortable with the role of publishers in peer review. For example, open-access advocate Jan Velterop argued at the Royal Society’s Future of Scholarly Scientific Communication Conference in 2015 that the role of publishers should be limited to the technical processes that produce a publishable paper and that peer review should be carried out by the academy (Jump). Ellison (2010) has also speculated that “new institutions may arise and perform many of the same functions as the current peer-review system more efficiently” (657).
There does not currently seem to be much impetus behind such ideas. Academic institutions are not well placed to take on the role of independent peer review. Would they review only the research of their own institutions? If so, would that constitute independent review? Or, if some new body is to take on the mantle of peer review, how is it to be funded? Publishers are able to operate peer review processes only because they generate revenue, either through subscriptions or open-access charges. Any other institution would require either public funding or other benefactors, who would bring with them their own interests and pressures.
Without Peer Review
Peer review can slow the publication of articles, its operational costs increase the cost of publication, and it makes heavy demands on reviewers’ time. Given criticisms of how effective peer review is, some have argued that science would be better off if peer review were abandoned. Genuine science must be falsifiable, after all, so inevitably what is published will never be unshakable truth. Ultimately it is future observation and experimentation that will demonstrate whether a study was right. Why, then, it might be argued, try to guarantee the validity of what is published? Just disseminate the paper and let the progress of science determine what stands and what falls.
Richard Smith, former editor of the BMJ, has argued that it would be left to readers to determine “what matters and what doesn’t” (Jump). Some even argue that “peer review has gained its sacred cow status on the basis of little evidence,” and there is the further methodological problem at the base of any study on peer review: “How do you assess whether the referee’s recommendation is right?” (Smith 10–11).
Another challenge for peer review, which is also a challenge for journals in general, is the ease of dissemination afforded by the Internet. Ellison observes a growing trend in economics for top authors to avoid traditional journals and disseminate by other means. He argues that these authors are able to draw attention to their work, and thus garner citations, by other means. While junior researchers do not have the option to depend on their reputations for readership and citations, there is an increasing use of preprint servers to disseminate articles prior to publication. There is a view that everything should be posted on a preprint server, and then readers would just wait to see what “floats to the top.” These preprint outlets allow authors to gain comments and criticism, and their work can also be cited (Curry). These other methods of dissemination challenge the centrality of journals and thus of peer review. Academia is inherently conservative, however, and the current paradigm of academic incentives (career progression, research funding, etc.), either by design or otherwise, helps maintain the centrality of journal publishing.
Despite the criticisms of peer review and the challenges of new models, research suggests that peer review will remain central to journal publishing for many years to come. A recent study of around 3,650 researchers conducted for the Alfred P. Sloan Foundation found that peer review is “increasing its influence” (Nicholas et al. 15). First, peer review is important in determining what should be cited. The study found that researchers are willing to cite conference papers, for example (especially in engineering and computer science), though such conference papers are not regarded as authoritative (17).
Second, peer review is important for authors in determining where to publish. The study found that peer review was the second-highest criterion, after relevance to the field (Nicholas et al. 17). There is also external pressure to publish in peer-reviewed journals, most obviously to obtain tenure: “The survey revealed that the more prolific the researcher in publication terms, the greater the belief that peer-reviewed journals are the most trustworthy information sources and most prestigious places to publish” (19).
Third, the question of whether a journal is peer-reviewed is the most important factor for readers in determining what to read, even above personal recommendation or impact factor. The rapid expansion of scientific output has led researchers to cling more firmly to peer review (Nicholas et al. 21). Researchers will only trust what is disseminated via social media if it is linked to traditional, peer-reviewed sources (19). It is interesting to note that the respondents did recognize the need to assess what they read for themselves—peer review does not confer the status of gospel—but peer review is important for the initial selection of what to read.
The study did find plenty of criticisms of peer review. There were concerns about peer review being slow, about its varying quality, about reviewer bias, and about the lack of transparency, among other things. But there were also concerns about newer models. Some believe that open peer review inhibits reviewers and that postpublication peer review is too easily gamed (Nicholas et al. 16). Repositories, or crowd-sourced peer review (i.e., public peer review), were not mentioned by a single respondent as an option for the future (21). There also seems to be a general uncertainty about the quality of peer review at open-access journals. Yet despite the varied criticisms of peer review, there is still no consensus about how peer review might be improved: “One of the things that struck us was the lack of any plan for a transformed scholarly communication system, even among those who strongly attacked the present one” (21).
Peer review, then, remains ingrained in academia, determining where to publish, what to read, and what to cite. Peer review is also central to academic incentives for reputation and career progression; it will not go away until these factors change.
Many of the alternatives to traditional peer review, including scrapping it altogether, are proposed by critics to address specific problems or failings, but it seems that a more basic and fundamental question is often ignored: What is the purpose of journal publishing? Who does it serve? Are journals there to serve the needs and interests of authors, to give them prestige and kudos? Do journals exist to serve publishers or their society owners, to boost subscriptions and/or article processing charges? Do journals exist to serve readers, to provide interesting and informative content? Do journals exist to serve the wider public? Without a clear idea of why we have journals, how can we determine what peer review should do and when it is succeeding?