Abstract

Developments in emergent technology offer innovative solutions for facilitating a hybrid review process. We examine a combination of private–peer and open–public review uniquely suited to disseminating the scholarship of teaching and learning (SoTL), implemented through the development of the Journal of Instructional Research (JIR). An analysis of the hybrid review process, which combines the strengths of traditional peer review with an integrative public review process, revealed substantial reviewer participation that contributed to a well–rounded, engaged review. Public review feedback constructively addressed the value and relevance of the implications, methodology, content and written quality of the manuscripts; an additional layer of private–peer review further refined the manuscripts and determined their suitability for publication. This, in turn, created a space where refinement of the content, structure and design of SoTL research was achieved through an interactive process of scholarly inquiry and dialogue.


 
The Journal of Instructional Research (JIR) was created to examine the feasibility of utilizing a hybrid review process to disseminate quality SoTL research and best practices in post–secondary instruction. Recognizing that publishing one’s scholarly work in a peer–reviewed journal is an essential component of the promotion and tenure system at most universities, JIR’s review structure is based on a hybrid model that combines open–public and private–peer review. The goal was to maximize the impact and value of open–public review while ensuring adherence to academic standards through private–peer evaluation. The resultant structure of JIR is a two–stage, hybrid evaluation process comprising open–public review, interactive discussion and final, formalized private–peer review. In this study, we examine the value, relevance and utility of a hybrid review model for academic journal publishing.

The Internet and its inherent connectivity are changing not just how we play or how we access information, but also how we work together. Extending the impact of social networking technology, modern Internet–based interfaces allow collaboration for serious work (Howe, 2008). For instance, the practice of multisite, simultaneous collaboration through ‘uploading’ is “...one of the most revolutionary forms of collaboration” in a world flattened by technology (Friedman, 2008, 95). This widespread availability of communicative technologies is challenging long–standing processes and procedures, including the traditional peer–review processes of journal publication. In line with the burgeoning growth of online scholarly journals, the Journal of Instructional Research was launched to tap the power of technology to facilitate open–public review as a supplement to traditional peer review, with the aim of fostering the quality and impact of the scholarship of teaching and learning (SoTL). In this paper, we examine the benefits of a multi–layer, hybrid approach to manuscript review that combines the strengths of both open–public and private–peer review systems facilitated via the Internet.

Historically, the peer–reviewed journal process has been driven by a combination of practical and academic factors designed to ensure the distribution of quality research (Jefferson, Alderson, Wager, & Davidoff, 2002). The cost of publishing print journals is sufficiently high to warrant investment in selective processes that ensure only items of the highest quality are included for publication (Williamson, 2003). Recognizing that generalized publishers lack the necessary discipline–specific knowledge to determine quality in a given content domain, academic peers are utilized as reviewers to provide evaluative judgments about the value, relevance and worth of a research article (Poschl, 2010). Practical constraints associated with identifying content experts, facilitating document transmission and coordinating communications limit the number of individuals involved in the review of any given research article (typically 1–3 reviewers per manuscript). Complicating the process further, any peer–review process carries inherent concerns about bias (unintentional or otherwise) and, given the limited number of reviewers, an awareness that the biases of any one reviewer may skew the overall review; as such, peer review is typically conducted via a double–blind system (Hodgkinson, 2007).

While the traditional peer–review system has served the academy well, new ways of evaluating peer work are beginning to emerge (Driscoll, 2010), and these new methods are changing the traditional peer review process in many academic disciplines. While peer–reviewed journals have traditionally been subject to an anonymous review process, modern technology is creating space for new modes of knowledge production and evaluation. As discussed by Fitzpatrick (2011), advances in online publication technologies (i.e., digital archives, social networking platforms, multimedia, etc.) provide a host of opportunities that challenge our historical reliance on private, individualized communication. Focusing on crowdsourcing, the rise of social media and the Internet, the purpose of this study is to examine the viability of integrating open–public and private–peer review into a comprehensive hybrid review process to enhance interactivity and value for research within the scholarship of teaching and learning (SoTL).

The Internet and social networking sites have made the world a much smaller, more productive and collaborative place. Howe (2008) coined the term “crowdsourcing” to describe a phenomenon arising from the increased connectivity the Internet provides. The power of crowdsourcing lies in using the latent, combined potential of the masses to solve problems or otherwise produce, instead of relying on a small group of experts. By using a larger group to perform tasks, many resources that previously were unused, unavailable or underused are becoming powerful forces in social, economic, cultural, business and political arenas (Greengard, 2011). These often self–organizing communities work together to break large pieces of work into smaller chunks that can in turn be completed rapidly and with more focus. A prime example of this power is the popular online encyclopedia Wikipedia. While some may argue that Wikipedia is too open to have quality content, research shows that the site rivals the Encyclopedia Britannica in accuracy (Giles, 2005). Wikipedia, for all of its critics and real faults, is proof that crowds of people working together can not only create large amounts of information, like an encyclopedia, but also keep that information accurate.

In addition, crowdsourcing can be seen at work in areas such as marketing, astronomy, bird watching, and social groups and organizations (Howe, 2008). Howe (2008) notes that many people behind the scenes have much to contribute to an idea even though it may not be their full–time job. For instance, many excellent musicians do not make music for a living, but still have much to contribute to the field. As Howe (2008) stated, “crowdsourcing operates under the most optimistic of assumptions: that each one of us possesses a far broader, more complex range of talents than we currently express within current economic structures” (17). In this way crowdsourcing not only relies on the power of many people working together, but also draws on the unused talents of individuals coupled with micro–applications of free time.

Many times, the best resources are found in the masses, as many people working together bring their experience and knowledge to bear on problems and products. As Howe (2008) writes, “Their greatest asset is a fresh set of eyes, which is simply a restatement of the truism that with many eyes, all flaws become evident, and easily corrected” (40). In short, the more people available to do any kind of work, the more efficient and accurate it can be. However, for crowdsourcing to be effective, these crowds of collaborators must have a place to connect. Thus, the nascent tools of social networking provide the means by which large and small communities converge upon and work within their areas of interest and expertise.

Lee and Crawley (2009) defined social networking as “an adaptive structure of social relationships where individuals connected by common interests, values, experience and other factors engage in a pattern of interactions” (40). This covers a very broad spectrum of web–based resources, but it helps define the space within which crowdsourcing takes place. Along these lines, some social networking services have capitalized on providing the means for users to connect both with people they already know and with new contacts. For instance, LinkedIn and Facebook offer the chance to associate with “connections” or “friends” to whom we are tied by varying degrees of closeness.

Crawford (2011) explained that sites not so different from LinkedIn and Facebook are helping to push scientific ideas forward through open collaboration. Mechanisms for collaboration include data sharing, searchable literature databases and multiple opportunities for dialogue. Crawford (2011) notes that “Improving the ability to collaborate reduces the chances of scientists starting long experiments that others are about to complete—eliminating redundant research...” (736). Simply put, social media not only enables groups in disparate locations to work together on problems, but also reduces redundancy and facilitates information sharing.

Granted, sharing information via technology is not new, but the breadth and reach of information now available to users is greater than in the past. Libraries, for instance, use social media in many ways: social media and social networking tools have given libraries the opportunity to share data and scientific findings easily, and users have now come to expect these tools (O’Dell, 2010). These are only a few of the many uses of social media and social networking, and the practice of libraries using shared databases is now commonplace. What is not as common or accepted, however, is the use of data sharing via technology to produce and vet the content of academic journals.

Traditionally, closed peer review has been, and remains, the standard in academic publishing. Academic journals rely on the peer–review process to keep articles reliable, valid and objective. Jefferson, Alderson, Wager, and Davidoff (2002) argue, “Peer review is usually assumed to raise the quality of the end product and to provide a mechanism for rational, fair, and objective decision making” (2784). In this way, a rigorous peer review process is assumed to underpin the quality and reliability of journals (Driscoll, 2010). Traditionally, anonymous reviewers analyze scholarly articles before they are considered for publication (Hodgkinson, 2007). The more competitive and stringent the review process, the higher the perceived quality of the resultant peer–reviewed articles.

While peer review offers distinct benefits, critics note that the traditional review process is “too slow, too expensive, and too biased...” (Williamson, 2003, 16). Williamson (2003) also explained that within a traditional peer–review process, errors, fraud and misconduct are difficult to detect, and pointed to reviewer bias in particular: evidence exists that reviewers may favor authors who come from more prestigious institutions, hail from certain geographical areas, or belong to a particular gender (Williamson, 2003). Similarly, it has been noted that bias against genders, affiliations and nationalities is possible within traditional peer review processes (Rowland, 2002). These seemingly un–academic biases may be shielded by the anonymity of the peer–review process. In addition, Williamson (2002, as cited by Rowland, 2002) identified concerns about traditional peer review under the headings of “subjectivity, bias, abuse, detecting defects, and fraud & misconduct” (5). Reported abuses of traditional peer review include the rejection of a manuscript because of the personal beliefs of the reviewer, favor shown to certain sciences, and bias toward authors from prestigious institutions (Williamson, 2003); it has also been reported that some referees delayed publication in order to have something of their own published first (Rowland, 2002). Despite the theoretical value of relying on peer experts, basing decisions on the recommendations of a small number of individuals who may have a personal investment in the outcome poses its own challenges.

While the theoretical underpinnings of traditional peer review rest on reliance on experts to determine the quality and value of research for publication, the manner in which peer–review processes and systems are structured is largely a function of practical limitations. Peer review has historically involved only a handful of experts because it was only feasible to coordinate correspondence among a limited group of individuals (Williamson, 2003). But the power of technology and electronically–mediated communication has removed the original constraints that drove traditional peer review systems. Embracing the potential of crowdsourcing, open review processes have emerged in contrast to the traditional methods of peer–review publication.

Open review is a peer review process that typically uses named reviewers to evaluate submissions (Hodgkinson, 2007). Open review generally relies on the Internet and social networking tools as a platform for review, allowing multiple persons to work and comment on a single project simultaneously. Combined with digital delivery of documents, this creates a more streamlined and efficient review process than previous methods (Driscoll, 2010). Open review has emerged in a number of different forms (Hodgkinson, 2007), including:

  • Open—Also known as pre–publication review; readers are able to see reviews as they are made and to submit their own comments. Authors are also able to comment and to answer questions posed about their study.
  • Open and permissive—This differs from simple open review in the way reviewers are chosen: the author is required to solicit reviews from members of a review board. Once chosen, these reviewers complete their reviews, which are posted with both reviewer and author names attached. Readers are able to comment on the article as well, but, as in open review, readers’ comments are not used in the decision–making process, either for accepting manuscripts or for revision.
  • Community—This amalgamation of open and permissive review incorporates crowdsourcing to a greater extent. Pre–publication papers are open for review by anyone during a specified time period; not only are readers able to make comments but, in effect, the readers become the reviewers themselves. This form of open review is the most radical departure from traditional blind peer review: the reviewers are not only public and identified, but also organically self–selected. The process is built on the idea that many reviewers working together in a public forum can critique a paper from multiple angles, resulting in a better end product.

Despite these variations, open review processes share a common reliance on harnessing the community’s expertise and energies to review submissions using online technologies. Open peer review processes post submitted manuscripts in an online forum and allow, depending on the model, either preselected panelists or interested readers to critique and comment on the quality, value and worth of a manuscript.

Recognizing the value of open review, MediaCommons (see http://mediacommons.futureofthebook.org/) was launched in 2007 to explore new forms of publishing that capitalize on the power of crowdsourcing. In this context, Fitzpatrick (2009) utilized open review in publishing her text, Planned Obsolescence: Publishing, Technology, and the Future of the Academy, which examines the role of online publishing and alternative review models in contemporary higher education. Recognizing that technological availability is only one consideration, Fitzpatrick surveys the social, organizational and institutional changes necessary to embrace the future of scholarly publication in the digital era. In the spirit of the text, not only is the draft manuscript available online for public review, but there is an ongoing dialogue among author, formal reviewers, editor and open reviewers about both the content of the text and the process of the review (for their reflections on the open review process, see http://mcpress.media-commons.org/plannedobsolescence/2009/12/16/external-reviews/).

As this discussion highlights, while open review is in its infancy compared to established peer review processes, it offers several benefits. To begin, opening the review process to outsiders can create a structure in which journals are more accountable for producing a quality product (Williamson, 2003). Williamson (2003) explains that with the accountability of open review, abusive reviews are negated, inappropriate reviewers may decline to participate, reviewers receive more credit, and ideas are less likely to be stolen. The accountability of open review can be seen in the amount of time named reviewers spend on reviews as well as in the quality of those reviews: in one study, researchers found that simply making reviewers’ names visible led reviewers to check their work more diligently (Walsh, Rooney, Appleby, & Wilkinson, 2000).

Further, Poschl (2010) noted that open review improves the quality assurance of scholarly journals in other ways. First, a paper receives a full public discussion before it is considered for publication, which lends accountability to authors and may deter carelessly submitted papers. Second, more comments are made on each paper than would occur in a traditional review process. And lastly, original manuscripts can be archived for later use to benchmark or otherwise refine the review process (Fitzpatrick, 2011; Poschl, 2010; Walsh, Rooney, Appleby, & Wilkinson, 2000).

Beyond quality considerations, open review offers logistical advantages over traditional review processes. Harnad (1998) described a major benefit of Internet–based open review as its efficiency and cost–effectiveness: papers can be easily submitted and reviewed by the public via the Internet, and web–based hosting allows for easy archiving of original manuscripts. The Internet provides a widely accessible vehicle for the open review process; through the use of technology, larger pools of reviewers are available for each manuscript since it is open to many more possible eyes. Reviews can be posted, emailed or sent through password–controlled websites. This offers a distinct advantage over summative review processes: the interactive nature of the review allows authors to update and revise the original manuscript as needed to enhance the quality of the scholarly work.

Despite these advantages, open review is not without opposition (Kiernan, 2000; Maron & Smith, 2009; Sweeney, 2001). Acceptance of open review rests on more than logistical considerations, and skepticism toward the model remains (Driscoll, 2010). As with any paradigm shift, reluctance to accept open review processes may simply reflect the discomfort of pioneering a new use of technology. It is possible that as social and academic networking via Internet technology becomes more mainstream, the value of open review will gain increased acceptance.

Examining the relative strengths and weaknesses of open–public and private–peer review, it is clear that both offer value in enhancing the quality of scholarly work. Open–public review provides increased feedback and quality control through the crowdsourcing of information; private–peer review ensures adherence to academic standards through critique by established experts in the discipline. Integrating the value of each, hybrid review offers a two–stage model: a formative, open–public review dedicated to enhancing the quality of submitted scholarly work, followed by a summative, private–peer review designed to determine suitability for publication.

Purpose of the Study

While both traditional peer review and open review have costs and benefits, few have tried to merge the advantages of both models. Advances in technology, together with the growth of social media, make it an opportune time to draw on the many experts in our fields to review research papers: open peer review can leverage the “crowd” of experts behind the scenes to constructively review current research and best practices. While the open review process creates a strong form of accountability for both authors and reviewers, it can be strengthened by adding a blind review conducted by disciplinary experts. The emerging hybrid review process combines peer review and open review so that the strengths of both models are integrated and their drawbacks ameliorated. The purpose of this study was to examine the value and relevance of utilizing a hybrid (open–public and private–peer) review process to advance the scholarship of teaching and learning (SoTL). Specifically, this study examined the use of the hybrid review model in the development of the Journal of Instructional Research.

Methods

The overarching goal of the Journal of Instructional Research (JIR) is to offer SoTL researchers an opportunity for public review of their work, promoting innovative, quality research on post–secondary teaching and learning while simultaneously contributing high–quality research to the SoTL scholarly community. The journal’s focus on SoTL was selected due to increasing interest and growth in the field; this popularity has created a wide audience of potential expertise to contribute to the open–public review process. Open–public review is particularly relevant to SoTL work due to the interdisciplinary emphasis on practice–based methods of inquiry geared toward systematic transformation (Hubball & Clarke, 2010). SoTL seeks commonality between disciplines by “engaging the scholarly community in critical educational issues...” (Hubball & Clarke, 2010, para. 1). Flatt (2005) expanded on this definition: SoTL “...encourages and validates educational research ethically conducted by teachers within their own disciplines and within their own classrooms” (3). SoTL focuses on self–reflection into what can be done to improve all aspects of instructional practice, but it goes beyond simple self–reflection to extend the systematic inquiry of one’s teaching into the larger academic community. As such, submitting one’s investigations of teaching to the review, critique and application of peers extends reflective teaching into a true scholarly endeavor. In addition, the opportunity to serve as an open–public reviewer of the latest SoTL research gives educators insight into emergent research and best practices.

Typical of most SoTL journals, JIR solicits theoretical and empirically–based research articles, critical reflection pieces, case studies, and classroom innovations relevant to best practices in post–secondary instruction (including teaching, learning and assessment). Manuscripts must be supported with theoretical justification, empirical literature, evidence, and/or research; both qualitative and quantitative inquiry methods are appropriate. Per the nature of the hybrid model, manuscripts go through a two–stage review process (sketched schematically after the list below):

  1. Open–public review—Each submitted manuscript is posted on the JIR website (www.instructionalresearch.com) for public discussion during one of three scheduled discussion periods; discussion periods are held in February, June and October of each year. Papers posted for discussion are not anonymous. Authors are invited to actively participate in the review process by responding to inquiries and posting their own questions to solicit feedback. In addition to the written paper, authors have the opportunity to supplement their manuscript with a brief presentation discussing the scope, goals and purpose of their work. Copyright information is posted along with each paper to ensure ownership of all posted information. Notification of discussion periods is openly disseminated to the broader academic community. Scholars wishing to contribute to the discussion surrounding each paper must register with JIR in order to publicly post their comments; as such, neither the authors nor reviewers are anonymous in the review process. At the conclusion of each discussion period, authors have the option of having their work removed from the website (with only the title and abstract remaining) or leaving their work archived for ongoing viewing (without the possibility of additional comments). All papers posted for discussion on JIR have permanent, dated documentation remaining on the site.
  2. Private–peer review—Following the discussion period, authors are encouraged to utilize the feedback from the public review in order to enhance their SoTL work. At this point, authors may elect to submit the paper for final publication consideration with JIR or they may elect to withdraw their submission. To ensure academic rigor and adherence to disciplinary quality standards, papers submitted for final publication with JIR are subjected to another round of formalized, private–peer review by anonymous reviewers who are identified experts in the discipline. During the private–peer review process, reviewers follow traditional processes of blind review and provide feedback along with publication recommendations to the journal editor. The final decision as to whether to publish a manuscript lies with the editor. Published manuscripts then go through standard rounds of copyediting and revision prior to appearing in the print publication.
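
To make the manuscript lifecycle above concrete, the following minimal sketch models the two stages as states and transitions. It is purely illustrative: the class and state names are our own invention, not part of any actual JIR system.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    """Lifecycle states implied by JIR's two-stage hybrid review."""
    SUBMITTED = auto()
    OPEN_PUBLIC_REVIEW = auto()        # posted for a Feb/Jun/Oct discussion period
    AWAITING_AUTHOR_DECISION = auto()  # revise and resubmit, or withdraw
    PRIVATE_PEER_REVIEW = auto()       # anonymous expert review
    WITHDRAWN = auto()
    PUBLISHED = auto()


@dataclass
class Manuscript:
    title: str
    status: Status = Status.SUBMITTED
    public_comments: list[str] = field(default_factory=list)

    def open_discussion(self) -> None:
        # Stage 1: manuscript and author identities are public;
        # registered scholars post comments, and authors may respond.
        self.status = Status.OPEN_PUBLIC_REVIEW

    def close_discussion(self) -> None:
        # Discussion period ends; the author revises using public feedback
        # and chooses whether to continue toward publication.
        self.status = Status.AWAITING_AUTHOR_DECISION

    def submit_for_peer_review(self) -> None:
        # Stage 2: blind review by disciplinary experts; the editor
        # makes the final publication decision.
        self.status = Status.PRIVATE_PEER_REVIEW
```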

Participants & Procedures

To examine the utilization and effectiveness of the hybrid review process, we analyzed the review feedback for the ten manuscripts submitted for publication in the inaugural volume of JIR and the four manuscripts submitted for the initial discussion period of the second volume. In addition, the lead authors of the studies featured in the inaugural volume were surveyed (N=6 respondents) to examine authors’ perceptions of the value and relevance of the hybrid review process.

Measures

To examine the utility of the hybrid review process, we coded each manuscript review session for: 1) number of views, 2) number of comments, 3) number of unique reviewers, 4) length of comments, and 5) author participation in the review process. We then conducted a content analysis of the public review comments to determine the nature and focus of feedback. The nature, frequency and type of comments for the first and second volumes of JIR were coded separately to allow for an examination of any novelty factors that may have influenced the initial review process.
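
As a rough illustration of this quantitative coding, the sketch below computes the five indicators from a hypothetical log of review–session activity; the data structure and field names are invented for illustration and do not reflect JIR’s actual records.

```python
from statistics import mean

# Hypothetical review-session record (illustrative only).
# Each comment is a (participant_id, text) pair; "views" is the page-view count.
session = {
    "views": 224,
    "author_ids": {"author_1"},
    "comments": [
        ("rev_1", "The implications for classroom practice are well argued."),
        ("author_1", "Thank you; we will clarify the sampling frame."),
        ("rev_2", "Please check the APA formatting of your references."),
    ],
}

def session_metrics(session: dict) -> dict:
    """Compute the five indicators coded for each review session."""
    comments = session["comments"]
    participant_ids = {pid for pid, _ in comments}
    return {
        "views": session["views"],                                             # (1)
        "n_comments": len(comments),                                           # (2)
        "n_unique_reviewers": len(participant_ids - session["author_ids"]),    # (3)
        "mean_comment_words": mean(len(t.split()) for _, t in comments),       # (4)
        "author_participated": bool(participant_ids & session["author_ids"]),  # (5)
    }

print(session_metrics(session))
```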

In addition, we surveyed lead authors to examine their perceptions of the review process and their satisfaction with the quality of the reviews. The survey included nine questions in which authors were asked to rate their satisfaction with various aspects of the review process on a 5–point Likert–type scale ranging from 1 (extremely dissatisfied) to 5 (extremely satisfied).

Results & Discussion

Ten papers were submitted for the inaugural volume of the journal; all manuscripts were evaluated via the hybrid review process. For hybrid review to be successful, there must be enough public reviewers to realize the value and benefits of crowdsourcing; the first challenge thus lies simply in engaging enough potential reviewers to view and read the manuscripts. Forty–two unique reviewers participated in the open–public stage of hybrid review, producing an average of 12.7 comments per paper. This contrasts sharply with traditional journals, which use far fewer reviewers in the peer review process (typically 2–3 per manuscript).

As a secondary indicator of engagement, we examined the number of page views per manuscript. Page views per manuscript ranged from 195–470 with a mean of 276.9. Compared with the small number of blind reviewers typical of a traditional peer review process (Hodgkinson, 2007), the journal clearly succeeded in engaging scholars to interact with the pre–published manuscripts. To examine whether this high number of page views was a novelty effect or a true indicator of public interest, we conducted a similar analysis of the initial discussion period for volume two of the journal. For the four manuscripts in this discussion period, page views ranged from 191–293 with a mean of 231.75. The first two discussion periods thus produced relatively consistent levels of scholarly interest, engaging enough reviewers to produce a range of feedback perspectives.

Central to an effective review process, individuals must not only read the pre–published manuscripts but also provide feedback that enhances manuscript quality and informs the judgment of suitability for publication. An analysis of the public review feedback shows that, for volume one, the number of feedback postings per manuscript ranged from 5–20 with a mean of 12.7; for volume two, postings per manuscript ranged from 13–17 with a mean of 14.75 (per the manuscript–level counts in Table 2). Comparing page views to feedback postings, only about 4.6% (volume one) and 6.4% (volume two) of those viewing a pre–published manuscript provided written feedback. While this views–to–feedback percentage is relatively low, the resulting number of reviewers is still substantially higher than the two to three individuals providing feedback in a traditional blind peer review. Table 1 summarizes the community review activity.
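
The conversion figures follow directly from the volume averages (using the volume–two comment mean recomputed from the per–manuscript counts in Table 2); a minimal check:

```python
# Views-to-feedback conversion from the volume averages in Table 1.
volumes = {"V1": (276.9, 12.7), "V2": (231.75, 14.75)}
for vol, (avg_views, avg_comments) in volumes.items():
    print(f"{vol}: {avg_comments / avg_views:.1%}")  # V1: 4.6%, V2: 6.4%
```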

Table 1: Community Review Activity for the Journal of Instructional Research

| Volume | Submitted Manuscripts | Average Views | Average Comments | Average Word Count | Unique Reviewers | Author Participation Rate |
|---|---|---|---|---|---|---|
| V1 (complete) | 10 | 276.9 | 12.7 | 148.6 | 42 | 60% |
| V2 (1st discussion period) | 4 | 231.75 | 14.75 | 169.87 | 35 | 50% |

While some may argue that the number of reviewers matters less than their quality, the open review phase of the hybrid model points to a third consideration: engaged reviewers. Walsh, Rooney, Appleby, and Wilkinson (2000) found that named reviewers spent more time on the review process and produced higher quality responses; the accountability of being identified increased the rigor of the reviews. With this in mind, the open review process of JIR may give reviewers an incentive to produce high quality, thorough feedback on manuscripts precisely because of the loss of anonymity.

To examine feedback quality, we calculated the length of feedback postings and conducted a content analysis of the focus of feedback. As a proxy for the depth and detail of the feedback postings, we examined the length of each posting using a word count, on the logic that longer posts are more likely to provide additional detail. The length of feedback postings varied considerably, ranging from 106 to 221 words for volume one (M=148.6 words) and from 142 to 205 words for volume two (M=169.87 words). As this analysis shows, feedback comments were generally substantial and did not consist simply of quick evaluative phrases or critiques; rather, their length provides evidence of a degree of depth in the reviews. Table 2 provides a breakdown of review activity and feedback per manuscript.

Table 2: Review activity and feedback per manuscript

| Volume | Manuscript | Views | Comments | Average Word Count | Unique Reviewers | Author Participation |
|---|---|---|---|---|---|---|
| 1 | Popma | 224 | 20 | 184.4 | 18 | Yes |
| 1 | Radda, Cross, Holbeck | 286 | 13 | 115.5 | 12 | Yes |
| 1 | Wozniak, Mandernach, Wadkins | 225 | 13 | 130.4 | 11 | Yes |
| 1 | Sharp | 470 | 11 | 126.9 | 11 | Yes |
| 1 | Lamport, Bartolo | 206 | 8 | 221.0 | 8 | No |
| 1 | Greenberger | 305 | 11 | 151.7 | 10 | Yes |
| 1 | Levin-Goldberg | 453 | 20 | 106.0 | 11 | Yes |
| 1 | Meyer | 205 | 17 | 148.7 | 16 | No |
| 1 | Lamport, Hill | 200 | 5 | 118.6 | 5 | No |
| 1 | Ashton | 195 | 9 | 182.4 | 9 | No |
| 2 | Gehringer | 293 | 15 | 205.53 | 15 | No |
| 2 | Nelson, Vontz, Fritson, Forrest | 206 | 13 | 155.15 | 13 | No |
| 2 | McKnight | 191 | 17 | 142.71 | 17 | No |
| 2 | Meyer | 237 | 14 | 176.07 | 14 | No |

To more closely examine the depth, quality and focus of the feedback postings, we conducted a content analysis of the public postings to identify the main themes of the feedback. Comments were coded according to the main themes of reviewer feedback using a three–step process: first, we identified salient quotes that emerged in the feedback; second, we derived codes from this quoted material; and third, we identified broad, clear themes that represented the data set. Using a grounded theory approach, the following themes emerged: general praise, writing issues—style/communication, writing issues—APA style, methodology, implications, and theoretical relevance. Table 3 provides the distribution of feedback themes for each volume.
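
The coding itself was done by hand, but the distribution reported in Table 3 is simply a normalized tally of code frequencies. The sketch below illustrates that tally with invented comments and a naive keyword matcher standing in for the human coder; the keyword lists are illustrative, not our actual codebook.

```python
from collections import Counter

# Illustrative only: a naive keyword matcher standing in for the manual,
# grounded-theory coding of public review comments.
THEME_KEYWORDS = {
    "general praise": ["great job", "well done", "enjoyed"],
    "writing issues (style/communication)": ["grammar", "flow", "wording"],
    "writing issues (APA)": ["apa", "reference format", "citation"],
    "methodology": ["sample", "interview", "survey design"],
    "implications": ["further research", "practice", "findings suggest"],
    "theoretical relevance": ["theory", "framework"],
}

def code_comment(text: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    lowered = text.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(w in lowered for w in words)]

def theme_distribution(comments: list[str]) -> dict[str, float]:
    """Tally coded themes and normalize to percentages, as in Table 3."""
    tally = Counter(theme for c in comments for theme in code_comment(c))
    total = sum(tally.values())
    return {theme: round(100 * n / total, 1) for theme, n in tally.items()}

comments = [
    "Great job; the findings suggest several avenues for further research.",
    "Your reference format is not in correct APA style.",
]
print(theme_distribution(comments))
# {'general praise': 33.3, 'implications': 33.3, 'writing issues (APA)': 33.3}
```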

Table 3: Distribution of feedback themes

| Feedback Theme | Volume 1 | Volume 2 | Overall |
|---|---|---|---|
| General praise | 24% | 27% | 25.5% |
| Writing issues—style/communication | 10% | 13% | 11.5% |
| Writing issues—APA | 11% | 13% | 11.5% |
| Methodology | 6% | 13% | 9.5% |
| Implications | 42% | 33% | 37.5% |
| Theoretical relevance | 6% | 4% | 5% |

The majority of comments for both volumes focused on the implications of the research, followed closely by general praise. The implications category captured reviewers’ reflections on the findings and what they mean for further exploration of the topic, including discussions of possible further research and of how the concepts in the manuscript fit with current practices of teaching and learning. For instance, one reviewer added:

From what I gather, the study was based on interviews from one university. It would be interesting to see if another set of students from a different university with the same number of students and years, how that set of data would compare. I would also suggest the type of programs the students are registered, which could lead to other findings based on type of program. Then you could compare programs between schools to further expand points and data.

The broad focus and generalizability of SoTL work may have contributed to the reviewer’s confidence in addressing concerns about the implications of the research across a range of disciplines.

The next highest category in the content analysis was “general praise.” This may seem an unimportant category in a journal review process; however, it offers additional insight into the engagement of the reviewers themselves. For instance, one reviewer wrote:

I believe that this particular area has a huge impact on all areas of learning and you have done a great job of giving a comparison as well as describing the perceived outcome of what the doctoral student is learning as a result of this process. Keep up the great efforts.

This indicates that this particular reviewer was engaged in the review process. Positive comments in the “general praise” category can also serve as a mechanism for continuing the review conversation: in some cases, general praise followed or preceded serious critiques of a paper, acting as a softening agent, or linguistic repair, that allowed the conversation to continue. One example of this dialogue is as follows:

The content of the article was interesting and reflected the way most dedicated online educators feel about online teaching. I noted improper APA formatting throughout the paper and some grammatical errors, and so these issues need to be addressed before publication.

The next two emergent categories represent two types of writing feedback. First is “writing issues—APA.” Comments coded to this category addressed ways the author or authors might improve APA formatting and citations, and some reviewers applied a critical eye to APA formatting. One example: “Also, your reference formatting is not in correct APA format either. I am not trying to take away from your paper but if this is to be published, you probably need to ascertain it is in proper APA formatting.”

Next is “writing issues—style and communication,” which represents comments on writing mechanics and flow. Together, these two writing categories capture reviewers’ feedback on correcting APA style as well as on flow, structure and related issues; here reviewers moved beyond methodological concerns and praise to offer copy–editing suggestions. An example:

I do have some concerns about the APA and grammar throughout the paper. For example, numbers one through nine are still written out, as well as in the beginning of a sentence (Publication manual of the American Psychological Association, 6th Ed, 2010, 112). Also, another example of grammar or spelling is to be careful of possessiveness, such as University's vs universities.

It is also worth noting the number of reviewer comments coded to the “methodology” category. This may seem surprising in an open review format; however, because SoTL is a broadly studied and practiced field, reviewers from the community were able to make meaningful suggestions on methodology. For example, one reviewer wrote:

As a scholar-practitioner, it might be very interesting, and likely of high utility, to research the intersection of doctoral learners' perceptions of cognitive and behavioral outcomes, and those of the stakeholders of the workplaces in which they operate. Where is the disconnect? Where are the parallels? As doctoral learners emerge and transform their organizations, this interface, without doubt, will be very relevant in many aspects of their work.

These types of comments are central to the value and relevance of the open review stage of the hybrid model: each reviewer has the chance to contribute his or her own expertise. Likewise, authors benefit from feedback that covers far more of their manuscript than a few reviewers could address.

The final category to emerge was “theoretical relevance,” which entailed comments connecting the authors’ papers with research–based theories. These theories were not always introduced by the authors themselves; sometimes the connection was made by the reviewer.

Author Perceptions

To examine the viability of the public review process, we surveyed the lead authors with an emphasis on their perceptions of the value and worth of the public review process. Acknowledging the small number of respondents (N=6; 60% response rate), author feedback was overwhelmingly positive. On a scale of 1 (extremely dissatisfied) to 5 (extremely satisfied), authors’ mean satisfaction was 4.94 across the nine review criteria: overall quality of process; timeliness of publication process; quality of public review feedback; quality of peer/editorial feedback; quality of website; quality of the online version of the final publication; communication with the editor; clarity of publication process; and professionalism of interactions. Key among these dimensions is the quality of the public review feedback (M=4.83), which was rated identically to the peer/editorial feedback (M=4.83): authors were not only highly satisfied with the quality of feedback received from the public review process but rated it as comparable to the feedback received during the blind review stage.

A further examination of the open–ended responses to the author survey revealed that, despite initial concerns about public review, authors found great value in the range of public review comments. As indicated by one respondent, “I very much enjoyed it. It gives an author a wider range of perspectives and interpretations.” This reaction was echoed by another respondent who highlighted the applied value of the feedback, “The public review gave great insight as to whether or not the study had benefits to the public.” Balancing out the positive feedback, authors did express a few concerns about the equal weighting of the public and peer reviews in the final publication decision. Specifically, as explained by one author, “The public review is not as thoughtful and comprehensive as one might expect considering its weight in the revision process; one would assume more dedication from peer reviewers or editors.” Similarly, another author explained, “The public review is a helpful way to receive candid comments from peers, but I find that some may censor their criticisms due to a lack of anonymity. This is not an issue in a blind review process.”

Weighing the concerns expressed about public review against the analysis of comment quality, it is difficult to determine whether these perceptions reflect genuinely differential feedback or the general apprehension surrounding a novel means of integrating peer and public review. One experience with a public review process may not be adequate exposure to alter long–standing views on journal review. Despite the concerns expressed, it is notable that 100% of the authors stated they would be interested in participating in a public review process for a potential journal publication in the future.

Conclusions & Implications

Analysis of the data from the first volume and open review process of JIR reveals several important points that support open review processes and open the door for further research. In terms of sheer views, this study has shown substantial reviewer participation, which may have contributed to a well–rounded review process with engaged reviewers. Further, the focus of their comments shows that these reviewers constructively addressed the implications, methodology, content and written quality of the papers. This in turn created a space where refinement of the content, structure and design of research was achieved through a process of scholarly inquiry and dialogue. Drawing on other trials of crowdsourcing (e.g., Wikipedia, ornithology, astronomy), we argue that this dialogue can indeed create a quality process and product. The results also suggest that the authors themselves were satisfied with the overall process; in general, authors received feedback openly and made changes along the way, and this high level of satisfaction may indicate that they, too, were highly engaged in the open review. In this way, the combination of open public review and closed peer review can work together in a new and perhaps more robust review model that we call hybrid review.


B. Jean Mandernach, Ph.D. is Research Professor and Director of the Center for Innovation in Research and Teaching at Grand Canyon University. Her research focuses on enhancing student learning through assessment and innovative online instructional strategies. In addition, she has interests in examining the perception of online degrees, the quality of online course offerings and the development of effective faculty evaluation models. Jean received her B.S. in comprehensive psychology from the University of Nebraska at Kearney, an M.S. in experimental psychology from Western Illinois University, and Ph.D. in social psychology from the University of Nebraska at Lincoln.

Rick Holbeck, M.S., M.Ed. is Director of Online Full Time Faculty at Grand Canyon University. His research focuses on formative assessment in the online classroom to enhance student learning and best practices for online teaching and learning. Other interests include online faculty time efficiency and using technology to enhance teaching and learning. Rick received his B.S. in secondary music education from Bemidji State University (Minnesota), his M.S. in Educational Leadership from Southwest Minnesota State University, and his M.Ed. from Grand Canyon University in curriculum and instruction with an emphasis in technology. Rick is currently a doctoral student at Grand Canyon University in higher education leadership.

Ted Cross, Ed.D. is Director of the Office of Dissertations in the College of Doctoral Studies as well as an instructor at Grand Canyon University in Phoenix, AZ. Ted’s research focuses on improving online community, teaching, and research practices. Ted earned a B.A. in English from Brigham Young University, an M.A. in English from Arizona State University, and an M.S.Ed. from The University of Pennsylvania. Further, Ted holds a Post–Baccalaureate Certificate in H.R. Management from The Wharton School of Business and a doctorate in Organizational Leadership from Pepperdine University.

Bibliography

  • Crawford, Mark. “Biologists Using Social-networking Sites to Boost Collaboration.” BioScience 61, no. 9 (2011): 736. doi: 10.1525/bio.2011.61.9.18.
  • Driscoll, Margaret. “Opening Eyes: How Open Access Changed Scholarly Publishing.” The Educational Collaborative 1 (2010): 1–9. doi: 10.3138/jsp.45.4.01.
  • Fitzpatrick, Kathleen. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: NYU Press, 2011.
  • Fitzpatrick, Kathleen. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Media Commons Press [online] (2009): http://mcpress.media-commons.org/plannedobsolescence/.
  • Flatt, Jennifer M. Stolpa. “The Scholarship of Teaching and Learning.” Phi Kappa Phi Forum 85, no. 3 (2005): http://www.questia.com/magazine/1G1-139719288/the-scholarship-of-teaching-and-learning#/.
  • Friedman, Thomas L. The World is Flat: A Brief History of the Twenty-first Century. New York: Farrar, Straus and Giroux, 2008.
  • Giles, Jim. “Internet Encyclopaedias go Head to Head.” Nature 438, no. 7070 (2005): 900–01. doi: 10.1038/438900a.
  • Greengard, Samuel. “Following the Crowd.” Communications of the ACM 54, no. 2 (2011): 20–2. doi: 10.1145/1897816.1897824.
  • Harnad, Steven. “The Invisible Hand of Peer Review.” Nature [online] (1998): http://users.ecs.soton.ac.uk/harnad/nature2.html.
  • Hodgkinson, Matt. “Open Peer Review & Community Peer Review.” Journalology (2007): http://journalology.blogspot.com/2007/06/open-peer-review-community-peer-review.html.
  • Hoffmann, Leah. “Crowd Control.” Communications of the ACM 52, no. 3 (2009): 16–7. doi: 10.1145/1467247.1467254.
  • Howe, Jeff. Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business. New York: Three Rivers Press, 2008.
  • Hubball, Harry and Anthony Clarke. “Diverse Methodological Approaches and Considerations for SoTL in Higher Education.” The Canadian Journal for the Scholarship of Teaching and Learning 1, no. 1 (2010): http://ir.lib.uwo.ca/cjsotl_rcacea/vol1/iss1/2/.
  • Jefferson, Tom, Philip Alderson, Elizabeth Wager, and Frank Davidoff. “Effects of Editorial Peer Review: A Systematic Review.” The Journal of the American Medical Association 287, no. 21 (2002): 2784–86. doi: 10.1001/jama.287.21.2784.
  • Kiernan, Vincent. “Rewards Remain Dim for Professors who Pursue Digital Scholarship.” Chronicle of Higher Education 46, no. 34 (2000): 45–6. doi: 10.1177/019263650408864104.
  • Lee, Sandra Soo-Jin and LaVera Crawley. “Research 2.0: Social Networking and Direct-To-Consumer (DTC) Genomics.” The American Journal of Bioethics 9, no. 6–7 (2009): 35–44. doi: 10.1080/15265160902874452.
  • Maron, Nancy L. and K. Kirby Smith. “Current Models of Digital Scholarly Communication: Results of an Investigation Conducted by Ithaka for the Association of Research Libraries.” Journal of Electronic Publishing 12, no. 1 (2009): doi: 10.3998/3336451.0012.105.
  • O’Dell, Sue. “Opportunities and Obligations for Libraries in a Social Networking Age: A Survey of Web 2.0 and Networking Sites.” Journal of Library Administration 50 (2010): 237–51. doi: 10.1080/01930821003634989.
  • Poschl, Ulrich. “Interactive Open Access Publishing and Public Peer Review: The Effectiveness of Transparency and Self-regulation in Scientific Quality Assurance.” IFLA Journal 36, no. 1 (2010): 40–6. doi: 10.1177/0340035209359573.
  • Rowland, Fytton. “The Peer-review Process.” Learned Publishing 15, no. 4 (2002): 247–58. doi: 10.1087/095315102760319206.
  • Suber, Peter. “Timeline of the Open Access Movement.” (2009): http://www.earlham.edu/~peters/fos/timeline.htm.
  • Sweeney, Aldrin E. “E-scholarship and Electronic Publishing in the Twenty-first Century: Implications for the Academic Community.” Educational Media International 38, no. 1 (2001): 25–38. doi: 10.1080/09523980122551.
  • von Ahn, Luis and Laura Dabbish. “Designing Games with a Purpose.” Communications of the ACM 51, no. 8 (2008): 58–67. doi: 10.1145/1378704.1378719.
  • Walsh, Elizabeth, Maere Rooney, Louis Appleby, and Greg Wilkinson. “Open Peer Review: A Randomized Controlled Trial.” The British Journal of Psychiatry 176, no. 1 (2000): 47–51. doi: 10.1192/bjp.176.1.47.
  • Williamson, Alex. “What Will Happen to Peer Review?” Learned Publishing 16, no. 1 (2003): 15–20. doi: 10.1087/095315103320995041.