What Is Peer Review?

The term “peer review” means different things to different people and in different contexts. In its most basic sense, peer review is simply the evaluation of one’s work by one’s peers. What distinguishes peer review in the realm of scholarly communication from simply asking for the opinion of your friends, family, or coworkers, however, is that it is a more formal mechanism—­sometimes blinded, sometimes not—­whereby an official decision is made to designate which peers should be asked to assess a body of work. Within academia and other research milieus, the identity of that additional party varies. The peer reviewer could be a grant-­awarding body, a research ethics committee, or a journal. (Though books also constitute a great deal of peer-­reviewed, research-­directed publishing, this book primarily concerns itself with the work of scientific or research journals.)

Peer review typically serves two purposes:

  1. To function as a gatekeeper, determining which papers should be accepted for publication and thus become a part of the body of literature for a specific field of study.
  2. To burnish papers, ensuring that an article realizes its full potential. Burnishing a paper also means ensuring that sufficient information is included in the published article to enable both the validation of results and the replication of the study.

This chapter will explore both of these objectives of peer review and also identify the roles and responsibilities of a peer reviewer and the commonplace expectations various stakeholders in the publication process place on peer review. Very importantly, we will question how the peer reviewers themselves have come to be trusted as appropriately qualified experts in their fields.

Peer Review as Gatekeeper

While we are primarily concerned with the modern peer review of journal articles, early scientific research was, in fact, typically communicated in books, in private correspondence, or in periodic scientific/academic meetings.

Observed in retrospect, we can see certain obvious limits to these early forms of scholarly communication in science. The book is best suited either to the publication of years of accumulated research or to the lengthy exposition of grand theories, while private correspondence restricts the distribution of new ideas only to a few recipients. Perhaps we could say that the limited distribution of research results through private correspondence suited early researchers, who might have preferred to restrict discourse to members of their rather exclusive circles; today, however, it is accepted that the journal article is the most appropriate vehicle to publish the results of a single scientific experiment or study.

For the 300 years following the launch of the Royal Society’s Philosophical Transactions in 1665, scientific journals proliferated around the world, and their formats changed little in that time. Journal publishers generated a huge volume of published material, and this established the scientific journal as the accepted medium by which researchers communicated new ideas and the results of their experiments (Björk).

Yet the development of modern peer review is not, oddly enough, tied directly to the development of these journals. While scientific journals have long assumed the role of guardians of the “official” literature, acting as curators of what could be considered valid research and establishing the standards whereby membership in their exclusive society is granted, the conduct of peer review at these journals has actually evolved considerably. The new forms of peer review reflect the maturity and complexity of various fields of study, the increasing sophistication of research methods, and those recent technological developments, related to the Internet, that allow for various new forms of peer review, such as collaborative review from researchers scattered across the globe who have never met, which would have been inconceivable until quite recently.

The earliest journals were the self-published products of learned societies whose membership comprised various practitioners, clinicians, and subject experts and whose research coalesced around these emerging journals. The notion that these journals required some mechanism of peer assessment and validation was not, however, apparent at first. Authors did not expect their work to be peer-reviewed in the manner to which modern authors are now accustomed, as Albert Einstein’s reaction to the external review of his article by Physical Review demonstrates, as noted in the previous chapter (Kennefick). These earlier journals typically handled the assessment of a paper “in house,” usually by calling on a select group of the members of the society that published a particular journal (Spier).

The advent of peer review in these early journals could be interpreted, in retrospect, as an organic response to the proliferation of articles submitted to them, with journals outsourcing the increasing burden of assessment to external peer reviewers (Burnham). This emerging mechanism of peer review also provided readers with the assurance that what they were reading had been validated, tested, and corrected by fellow experts before it was published. This validation was a welcome benefit, but it was not the sole motivation for developing the modern peer-­review process.

Many publications, indeed, continued even until the 1960s to rely on the whim of the designated editor in chief to determine which articles were selected for publication (Fyfe), but nonetheless, it remains clear that the use of experts as peer reviewers had become, by the mid-­20th century, the modus operandi by which the scientific communities gathered around particular journals came to vet the flow of research papers. Journals began to use the very same people who either published in or read the journal to collectively validate the results and provide highly technical knowledge in an ever-­specializing world.

The system of peer review that emerged in this period was one that allowed scientific and academic output to be granted a stamp of approval while also maintaining an internal, self-­regulating, and peer-­enabled form of authority. Peer review is precisely a process whereby an author’s peers were asked, as they still are today, to consider whether they believed that a manuscript deserved to be published, either in its submitted form or following successful revision based on their comments, suggestions, and corrections. This thumbs-­up / thumbs-­down component of peer review—­the acceptance or rejection of a paper—­is what we understand to be its gatekeeper function. In other words, peers ask whether the research paper under consideration should be allowed to pass through the metaphorical gates (i.e., the threshold for acceptability) barring entry into the corpus of “officially approved” published literature in any given field of study.

At this point, we need to consider a subtle distinction in what we understand the gatekeeper function to mean as applied to two (maybe three, depending on the journal) different stakeholders: the reviewer, the editor in chief, and, for those journals that operate such a workflow, an editorial board member / associate editor who can help the editor in chief select reviewers and post a recommendation or decision regarding the worthiness of a paper for publication. Peer reviewers are usually asked whether they would accept, revise (often split into some variant of minor and major revision), or reject a paper. However, the specific act of posting a decision is typically charged to only one person: the editor in chief.

Why? Primarily because the editor in chief oversees the entire publication plan for a journal. Reviewers see only the article they have been asked to review. Editors will also be conscious of economic constraints, such as editorial page budgets. Typically, publishers impose on journals a limited number of pages they can publish in a given year. Editors, therefore, must ensure that they do not accept more material than they can publish and do not exceed the budget. As a result, an overextended editor might be compelled to reject an article that in leaner years would have been accepted; a paper does not clear peer review on quality alone. The editor in chief can also stand up to interference from other agents seeking to influence the direction of the journal through its decision-making processes, such as powerful thought leaders, people with vested financial interests, governments, or those chasing improved impact factors. Conversely, weak editors can be influenced by such individuals. Most obviously, however, along with their editorial boards, editors will have determined the thresholds of acceptability for their journals, details of which few reviewers would be aware. The difference between editor-based decision making today and in the prewar years is that editors now take into consideration the opinions of peer reviewers rather than acting as the sole arbiters of what is acceptable for publication.

With the dynamic of editorial power in mind, we should acknowledge that peer review is no guarantee of quality, and peer reviewers do not always decide whether a paper should be published. To be accepted for publication in some journals represents the pinnacle of academic achievement, indicated by the journal’s demand that the paper provide very high levels of evidence, originality of thinking, or the likelihood of maximal impact on current thought. At the other end of the spectrum, however, some journals are simply looking to fill their page budgets and are content to publish a somewhat weak paper. So while reviewers often think that they have the power to determine the fate of a manuscript, ultimately, the gatekeeper is still the editor. This can, and frequently does, lead to (sometimes vocal) reviewer disgruntlement when the editor’s decision runs counter to the reviewer’s judgment.

So what do we mean precisely when we say that the peer-­review process provides a gatekeeper function? Whether as reviewers or editors (the latter of whom are almost always practicing researchers and subject experts and not full-­time professional editors), expert researchers determine whether a paper deserves to receive the publication’s stamp of approval. Publication in the journal can mean different things, but in essence, it should be understood that a paper written by experts has been judged by other experts (reviewers, editors) to be worthy of dissemination to yet more experts (readers). This does not necessarily mean that all parties agreed with the authors or accepted every finding in a paper, but if the results are preliminary and the conclusions are appropriately circumspect, a paper can be accepted for publication even if there is debate about the complete veracity of the ideas presented, the strength of the data collected, or the technique used to conduct the study.

So what function, precisely, do these gatekeepers perform in the peer-­review process? In essence, we should expect that an article published in a peer-­reviewed journal has been read by subject experts ahead of publication and that fellow experts have approved the paper for publication and also helped the author(s) correct earlier drafts. We assume all this to be true at any journal that claims to operate peer review, but as we will see later, this gatekeeper process is (potentially) being undermined as trust in the process of scholarly self-­regulation—­which has become so implicit in the entire foundation of peer review—­might slowly be eroding.

How is the gatekeeper function exercised daily at the vast majority of journals? The first step in the process usually involves having the editor (or a member of the editorial team) triage a paper to determine whether it contains any fundamental—presumably unfixable—flaws. The suitability of the article for publication at the journal to which it was submitted is also considered. Once a manuscript passes that initial phase, it makes its way to two or more reviewers, who have usually been invited to volunteer their time to provide an evaluation. Those sending the invitations to review—if they perform their role properly—will have selected the peer reviewers because their known expertise matches the subject matter of the paper.

Once the manuscript is in the hands of the reviewers, the reviewers use a somewhat standard set of criteria to determine whether they believe a paper should be published. Each journal provides its own unique set of criteria, but commonly used assessment benchmarks include the following:

  • Originality—­are new data or novel concepts presented?
  • Validity—­can the results or claims be tested and reproduced?
  • Context—­are the authors aware of other similar work, most obviously as expressed in the completeness of the references?
  • Claims—­is the tone of the discussion and conclusion in line with the results?
  • Accuracy—­is the paper free of obvious errors?
  • Synthesis—­if the article is a review of previously published work, is it comprehensive, balanced, and clearly built on a carefully designed literature search?
  • Limitations—­do limitations exist and have the authors properly acknowledged them?
  • Techniques—­if applicable, were the appropriate techniques or procedures applied correctly?
  • Ethics—­was the study ethical? Do the authors present any conflicts of interest?
  • Implications—­does the paper advance understanding? What does it contribute to a given field of study? Is it confirmatory (either in a positive or an unhelpful manner)?

Naturally, these criteria can be applied inconsistently between reviewers and across journals. As a consequence, editors are often compelled to seek the opinion of a third reviewer (or a fourth or fifth, etc.), as attitudes between reviewers can be highly divergent. Reviewers might even agree on what needs to be corrected in an initial submission but still be split heavily on the fate of the paper. One reviewer’s minor revision recommendation is another’s rejection.

The job of the editor is to make sense of these reviewer preferences before rendering the final disposition. He or she has to do this while also addressing such different factors as the educational mission of the journal, the desires of the readership, and other critical issues such as ethical legitimacy. This is the conclusive step in the peer-­review process, and it is the editors who must determine whether a particular reviewer was “too tough,” whether the reviewer was ultimately qualified for the role of assessing a given manuscript, whether the reviewer was able to account for an important methodological issue, or whether a particular reviewer might have evaluated a paper while drawing on (consciously or otherwise) different cultural constructs, personal biases, and other external influences. An editor, for example, must also gauge whether a reviewer’s brief review (sadly commonplace) was possibly indicative of having made only a cursory reading of the paper, with scant checking for validity and a general failure to contemplate a manuscript’s findings and the implications of those findings.

Burnishing the Initial Submission

Assessing and allowing a paper to proceed to publication is only one aspect of peer review. The second major component of peer review is to correct, amend, and polish a paper before it is published. Ideally, the outcome of this burnishing process is that the finished article represents the best paper possible, at least given such constraints as the limits of the study, the ability and resources of the authors, and to some extent, the limits of the journal executing peer review.

Obviously, the best possible scenario involves a paper that describes a well-­conducted study and is written up carefully and accurately with methodological particularity. Such papers typically require little more than minor revisions, and one hopes that reviewers will identify these minor problems and note possible amendments in their comments.

Other papers might be based on elegantly executed studies but have been poorly written. Poor writing is quite commonplace and typically entails a failure to describe the study methodology adequately, the questionable use of statistical techniques, or the inadequate referencing of the paper. The research might also be poorly contextualized, which hinders assessment of its significance. Equally, language might be overblown, with evident bias and use of spin. So problems do not lie wholly with grammar, syntax, spelling, the misapplication of terms, or the erroneous use of words by authors writing in their nonnative language; the reality is that authors—­with depressing regularity, it seems—­simply do not include all relevant, pertinent information to allow the reader to assess the validity of the claims a paper makes (Chan and Altman).

An example of this poor writing could be a report of a randomized controlled trial that does not describe how patients were selected. A comprehensive review article might not explain why some literature was discussed while other articles were excluded. Another example might involve a poorly described (or nonexistent) outline of the origin of an idea or a poor description of the etiology of a disease. This often happens because experts in one niche of a field believe that such information is common knowledge and that description is redundant, but this decision might render papers incomprehensible or open to misinterpretation by other readers. More often, however, researchers are simply not sufficiently skilled in the art of writing a paper and have likely received little to no training on how to write up research for publication. Peer review, in other words, is deployed not only to catch a paper’s errors and omissions but also to point out how to correct them. In such instances, as described previously, an article would typically require major revision, with requests to rewrite significant portions of the manuscript.

A third type of paper is a poorly conducted study that is, nonetheless, well written, so the reviewers believe that the authors will be able to correct some of the study’s defects. It is quite customary in such circumstances for a reviewer to request that the authors repeat their experiments. For an observational study, authors might be asked to consider including more data or reanalyzing the original data. Authors are also frequently asked to read additional materials and determine whether this new information influences their findings or the new ideas/concepts they are trying to advance. Often extra reading is recommended to help flesh out a reference section that is perhaps rather thin. In most cases, such requests following peer review demand a major revision or even a complete rewrite of the paper if the reviewers and editors believe the paper is salvageable. If, however, the reviewers believe that the problems are insurmountable, the authors are incapable of addressing them, or corrective measures might simply take too long, the paper is more likely to be rejected.

The fourth type of paper is typically an inadequate write-­up of a weak study or poorly conceived idea. Seasoned editors in any discipline, regardless of the standing of their journals, will attest to the remarkable frequency at which such material is submitted. With the proliferation of submissions to journals in recent years, editors are immediately rejecting some papers even before full peer review in an effort to avoid reviewer burnout (Watkinson). There is, after all, little point in sending out to full peer review an obviously flawed paper that cannot be rescued even with the provision of excellent peer-­review comments. In such a scenario, precious reviewer resources would have been wasted on poor papers and perfectly decent papers would not get the reviews they deserve.

Finally, a fifth type of paper might well fit into any of the categories outlined previously, but perhaps its biggest problem is that it contributes nothing new. The question of novelty is where peer review can become something of a lottery for authors. Reviewers who are more lenient or perhaps less versed in the subject matter might simply judge the paper on its own internal merits, with no reference to the body of literature into which it would fit. Equally, some reviewers might judge that such a paper is wholly unoriginal and provides no advance to our current understanding, even in the smallest of incremental ways.

So how are papers burnished and polished for publication? How does peer review ensure that papers realize their full potential? How are errors and omissions caught? Peer reviewers are typically asked to post a recommendation regarding publication (usually some variant of accept, reject, and revise), maybe answer a specific question (e.g., does the paper cite all relevant material?), possibly grade some aspect of the paper (e.g., originality, future contribution to the field), and typically provide both confidential remarks to the editors and specific comments to the author. The overwhelming majority of journals utilize online submission systems, all of which offer a basic template to capture the information delineated previously. Some journals go one step further and will offer detailed instructions on how they would like the respective comments to the editor and author to be ordered. Rather frustratingly, many editors will attest to reviewers simply blowing through these instructions and writing what they feel is important, which can lead to a huge disparity in the usefulness of any comments provided.

Remarks directed to the editor are intended to provide a summary explanation of the reviewer’s grade for a paper and a context for the reviewer’s opinions (such as his or her own experiences). On occasion, these comments can also discreetly raise issues such as ethical concerns (e.g., questioning whether the authors received Institutional Review Board approval for their study). Along with such observations, the reviewer can also assess whether he or she believes the authors will be able to revise their paper. With unfettered freedom to express opinions, the comments to editors can reveal genuine concerns that a reviewer might feel uncomfortable expressing directly to the authors for various reasons. Additionally, and intriguingly, confidential comments to the editor can sometimes reveal interdisciplinary politics, bias, and gossip, all of which means that the decision-­making process does not necessarily exist in a vacuum. The confidential comments can also act as a confessional whereby reviewers can disclose their own limitations that might somewhat impinge on their abilities to fully judge a paper, such as a lack of knowledge on a specific point.

Beyond opinion, ideally the reviewer’s comments for editors should also include a brief, noneditorialized summary of the paper and the reviewer’s interpretation of the intended objectives and outcomes of the paper. The reviewer’s factual account can reveal to the editor both the potential and the limits of a paper, but it can also disclose whether the reviewer truly understood it.

Comments to the authors are normally the central element of article peer review. These remarks are intended to help authors improve their papers, and in this context, reviewers are not supposed to pass judgment on the suitability of a paper for publication. Unfortunately, peer review can be horribly uneven. This might be due to the amount of time a reviewer gives to the assessment of the paper or his or her own true level of understanding of a topic. Consequently, remarks from reviewers range from simple statements such as “This is a nice paper that contributes to the field” to lengthy, elegantly expressed observations that provide crucial support to an author who is revising his or her paper. Obviously, the latter scenario is preferred for many reasons, not least because for authors operating without collaborators or a consultative support structure at their institution, good peer review can actually deliver a vital form of mentorship. In a reviewing system in which there is little to no uniformity in approach or training in the art of peer review, however, comments can be delivered and interpreted in a manner ranging from the antagonistic to the collegial. Ultimately, the aim is to improve the completeness of reporting, enhance the presentation of information, offer alternative perspectives, and even provide advice to authors to help them better understand their own work.

Ideally, reviewer comments to the author should begin with a summary statement of what the paper aims to achieve and state whether the reviewer believes the authors have succeeded in achieving those goals. Furthermore, a well-constructed reviewer summary should also highlight the major findings, strengths, and significance of a manuscript as well as its deficiencies. Such a summary is essential, as many authors do not fully understand the larger significance of their work, and the reviewer’s comments can help them moderate their language and style of presentation.

Having presented a summary of the paper, the best reviewer will then break a paper down section by section, commenting on the methods, results, discussion, and conclusion and also ideally differentiating between what he or she regards as major and minor issues. Comments are usually posed as short and impactful questions, suggestions, or corrections that allow the authors to clearly determine what they need to amend. The expectation is that when authors revise their papers as directed by the reviewer comments in response to the initial manuscript submission, they must also provide a detailed, point-­by-­point response to the reviewer comments—­hence the need for reviewers to clearly and succinctly outline each individual point that requires clarification, correction, or revision.

So what are some of the common critiques that help authors polish a paper? The list of effective comments to authors that follows, though not exhaustive, provides some of the essential elements of a good review. In other words, a well-­constructed peer review that is evaluating a trial, literature review, or study would ideally address these issues and provide authors with the feedback they need to amend their papers accordingly. For medical and scientific journals, articles usually are formatted along the lines of the commonly used introduction, methods, results, and discussion (IMRAD) structure, which provides the organization of the well-­constructed peer review. Specific fields, and certainly individual journals, typically provide a set of directions to reviewers to ensure that they address certain elements unique to their fields as they consider a paper.

Common components of effective comments to authors include the following:

Methods

  • Did the authors adequately report what they did?
  • Are descriptive statistics adequately reported?
  • How many subjects are in the analysis?
  • Is there a potential for bias or missing data?
  • Are the sample selection and size appropriate for the conclusions drawn?
  • Is the experimental procedure carefully described?
  • Is the context for data collection described in detail?
  • Are the inclusion or exclusion criteria for objects of study (patients, phenomena, previously published literature, etc.) effectively outlined?
  • Can the research question be answered with the study design used?
  • Does the paper provide an outcome assessment with unconventional or nonvalidated measures?

Results

  • Do the results answer the research question asked? Do the authors try to make inferences from the results for which the data, and their collection, were not designed?
  • Are the results presented properly and in a comprehensible manner?
  • Are tables and figures used effectively? Are all images of a high enough resolution to interpret?
  • What are the primary and secondary outcomes from this study?
  • Are the results credible? What evidence, as a reviewer, can you use to validate your response to that question?

Discussion

  • Do the data presented support the conclusions or contentions the authors make?
  • How are the data analyzed? Do the authors indicate which variables were analyzed by which tests, error control strategy, rationale for choice of tests, and missing data strategy?
  • Does the discussion section include a substantial discussion of limitations?
  • Are the results related to earlier studies to provide context and comparisons?
  • Does the discussion section successfully navigate from the specific (the study under review) to the general (current understanding) as a way to provide context?

Conclusion

  • Are any study limitations appropriately addressed?
  • Are pointers directing possible future research endeavors provided?

General Observations

  • Do the authors demonstrate familiarity with the subject, concept, technique, or principle?
  • Is the reference section complete and up to date? Within any review of the previous literature, did the authors explain why they included/excluded certain articles?
  • Is the study underpowered? Is there an adequately sized data set or sample to draw conclusions with confidence?
  • Is there enough information in the article to reproduce the experiment/study/trial?
  • Is the work currently under review too similar to work previously published by the author(s)? What new content/data does the paper under review present?
  • After reading the paper, ask yourself, “So what?” If the answer is hard to discern, consider whether the paper is deserving of publication.
  • Did the authors adhere to the reporting criteria outlined in the appropriate reporting guideline (e.g., CONSORT for randomized controlled trials; STROBE for observational studies)?
  • Where relevant, does the paper under review adhere to current diagnostic conventions?
  • For revised submissions, have the authors made a “good faith effort” to address the critiques of reviewers and editorial staff of the initially submitted version of the paper?
  • Would any other experiments or additional information improve the paper? How much better would the paper be if this extra work was done, and how difficult would such work be to do or provide?
  • Does the paper require reorganization to flow better and add clarity to the message the author intends to impart?
  • Does the paper require edits for language?

Who Is the Peer in Peer Review?

Many journals, especially the most established titles, will typically retain a list of “go-­to” reviewers, expanding that pool as new experts emerge or through suggestions either from reviewers who declined the invitation to review or from authors themselves (resulting in a potentially alarming conflict of interest). Once again, if those selecting potential reviewers are doing their jobs properly, the reviewers will have been adequately screened before they are asked to conduct a review.

Criteria for selecting a reviewer vary by journal, but these criteria typically include a sense of the reviewer’s demonstrated level of knowledge in a particular subject (e.g., published papers, grants awarded, lectures given, being a thought leader in the field), a track record of writing full and fair reviews, an association with a known and experienced contributor to the field if the potential reviewer is an early career researcher, and crucially, no conflicts of interest, such as an obvious vested financial interest in the results, a personal/professional connection to the authors, a standing in competitive relation to the authors, or an instance where the reviewer’s work is being challenged by the paper under review.

Are all reviewers suitably qualified to determine the fate of a paper? Are all researchers naturally potential reviewers? Quite simply, no and no. The reality, however, is that while all journals should vet a proposed reviewer thoroughly before allowing him or her to critique a paper, most journals—beyond a select few—are underresourced and under incredible pressure to provide rapid initial decisions. The result of this pressure is that some of these journals are, frankly, unconcerned about who reviews a paper and satisfied to secure anyone whose name can be retained in their submission and review system.

Considering these limitations, it is not surprising that journals often restrict their choice of reviewers to friends and acquaintances. When those familiar individuals turn down the opportunity to review, journals then cast their nets far and wide to capture a willing reviewer. Obviously, the potential for introducing error and inefficiency into the peer-review process increases the further the reviewer is from a journal's usual orbit. This tendency to rely on friends and acquaintances to critique papers naturally raises the question of what, or who, constitutes the "peer" in peer review.

Unfortunately, the journal editors and thought leaders in a particular area of study might not fully understand what makes a suitable reviewer. Visible leaders in any given field are not always expert reviewers; they might in fact be more skilled as politicians or organizers, and might not even be stellar individual researchers, having merely collaborated alongside others who actually drove major research projects forward. Other individuals might be brilliant clinicians or practitioners whose opinions are clearly needed, yet they might not be equally adept at spotting methodological flaws in a paper or the incorrect or inappropriate application of a statistical technique. The notion of being a suitably qualified reviewer also extends to the aforementioned concerns about conflicts of interest and potential bias. Because few reviewers ever embody the "perfect fit" for a paper under consideration, journals should ideally keep reviewers with a blend of backgrounds, interests, and capabilities on hand to ensure that a balanced perspective can be achieved.

In most scientific and academic fields, individuals become recognized as experts, practitioners, or competent exponents by qualifying through a formal course of study. There is, however, currently no formal mechanism of qualification to become a peer reviewer. Indeed, though some journals, learned societies, and publishers offer reviewer training, most instruction on assessing a paper is mentor driven at best and nonexistent at worst. Many reviewers simply learn on the job, absorbing the directives of the various journals they work for or observing the comments of the other reviewers assigned to a particular paper. When functioning as reviewers, regular authors can also draw on their experience as recipients of critiques of their own work, recalling which comments were especially insightful.

As a result of this lack of requisite training, there is no established set of reviewer core competencies to which a journal could refer. Such competencies would likely include some level of methodological grounding, an understanding of how to break down a paper, and, critically, the ability to write up a review effectively. As we will discuss elsewhere in this book, there is concern that being a reviewer demands more than subject expertise, and this has obvious implications for our confidence in the veracity and quality of the material that is published in peer-reviewed journals.

Journals with sufficient resources can plug such gaps in reviewers' knowledge or experience by engaging reviewers who specialize in particular statistical or methodological areas, sometimes retaining a single individual for this purpose. A statistical consultant, for example, is normally retained to examine the statistical design outlined in a paper rather than to assess the scientific content. He or she will provide feedback on the accuracy of the statistics presented, the appropriateness of the statistical technique chosen, and whether that technique was correctly applied, and will likely also assess the analytical plan. A methodology expert, on the other hand, might be employed to assess the application of the research method as well as the completeness of methodological reporting. The methodology expert will often perform specialist evaluations for specific article types; for retrospective studies, for instance, he or she might check whether data are being used in ways that conflict with the design of the original collection. These experts will also concern themselves with issues related to the presentation of results.

Such specialist reviewers are typically used as complements to subject experts, but they can make or break a paper, especially by detecting flaws or discerning overstatement of the significance of results. Depending on the journal, specialist reviewers can be deployed at different points in the peer-review process. For instance, the Journal of Sexual Medicine enacts a methodological triage before the manuscript is seen by the editors in an effort to weed out weak papers. Others, such as Headache: The Journal of Head and Face Pain, deploy the statistical or methodological consultant as an additional reviewer. When resources are precious, the specialist might be called on only to review papers that have already passed peer assessment, ahead of a decision from the editor.

Roles and Responsibilities of Peer Reviewers

Any form of self-regulation is predicated on trust in the people and the process they support, and we can say with some confidence that a lack of transparency and confidence in the peer-review process is ultimately harmful to all participants in the practices of research and publication. Yet in the absence of any formal training, many reviewers are simply unaware of their responsibilities and barely understand what they have been asked to do.

What should we be able to expect from reviewers? First and foremost, reviewers must supply a comprehensive, timely, and carefully balanced review to an author whose effort and trust in the system must be respected. Authors submit papers to journals in good faith and trust that editors and reviewers will treat their work with respect, will maintain the confidentiality of the submission, will not appropriate their ideas or data, and will not cause damaging delays—­intentionally or otherwise. Authors should be able to expect a level of courtesy from a journal, confident that they will not be insulted or undermined or, in the worst of cases, have their character challenged.

Reviewers also have a second core responsibility: they must understand their own strengths and limitations, declining the opportunity to review if they are not thoroughly qualified. Reviewers obviously have a responsibility to protect the sanctity of a field’s body of literature by ensuring that only the most deserving of papers are published. A reviewer’s failure to perform his or her role allows weak papers to infiltrate the literature, muddying what can be discerned as valuable, validated, trusted, and reliable work. A failure to review work in sufficient depth can also contribute to a very real sense of waste when weak papers are published, potentially leading to misdirected future research, wasted research funds, and wasted effort reviewing and publishing material that did not deserve to be published (Macleod et al.).

In a sense, reviewers, especially if they are also authors, have a responsibility to "pay it forward," returning the favor to other participants in the system of scholarly communication. This systemic quid pro quo is described in an editorial in Nature Neuroscience (2009) as a "civic duty" for authors ("Striving for Excellence"). For peer review to survive, authors too should dedicate time to evaluating papers, just as others have volunteered time and expertise to assess theirs. Indeed, most reviewers feel there is a benefit to doing so: 91 percent of respondents to a 2008 survey indicated that the reason they perform peer review was to "play [their] part as a member of the academic community" (Ware). Though peer review is a voluntary endeavor in all but a few cases, that fact alone does not absolve reviewers of the responsibility to participate.

Other than issues of timeliness (authors are much less patient than reviewers and editors), there does seem to be a match between what authors expect from peer review and what reviewers aim to deliver: a full and balanced assessment, polite critique, refinement of the presentation of results and ideas, confidential handling of the paper, and expertise effectively directed at improving the paper. Despite much grumbling and some recent tinkering with its delivery and openness, the durability of peer review speaks to its ultimate effectiveness and the overall satisfaction of all stakeholders in the process.