Abstract

The long-term sustainability of Open Source (OS) software depends on its community of developers and core users, as well as that community's stability. Assessing OS software and the community which creates it is, therefore, an essential step in using OS software for a project. In this study, surveys of OS journal management systems were reviewed to determine which were still actively maintained. Actively maintained systems were rated using QualiPSo's Open Maturity Model (OMM), an assessment tool for determining the maturity and robustness of OS software. Of the OS journal management systems mentioned in existing surveys, only Ambra, Lodel, and Open Journal Systems (OJS) are still actively maintained. Of these, OJS scored the highest OMM rating, followed by Ambra and Lodel. A new system, Janeway, was also assessed. Although OS software can carry risks, it also brings benefits to librarians, readers, and publishers of scholarly journals. Assessing OS software and getting involved in OS software communities both help ensure the long-term survival of these communities and their work.

Keywords: Open Access publishing; Open Source software; software assessment

Author bio: Stewart C. Baker is the Systems and Institutional Repository Librarian at Western Oregon University. His interests are Open Source and Open Access, web design, emerging technologies, and how libraries are adapting to the changing information landscape. Stewart is also a published haiku poet and author of speculative fiction.

Introduction

Since its start in the mid-1990s, Open Access (OA) publishing has grown considerably. There has been a similar increase in the number of Open Source (OS) systems that can be used to manage the process of running a journal—OA or otherwise. However, many of these pieces of software are no longer maintained, and a relative lack of surveys reviewing journal management software in the last decade leave would-be OA journal publishers without recent information.

This article provides a current survey of active journal management systems to address that gap, while also presenting information about software assessment, an important tool for anyone wishing to use OS software of any type. Lastly, it carries out assessments of four active OS journal publishing tools, showing both the benefits of OS assessment and describing some of the potential problems that OA publishers should consider when selecting a piece of journal management software.

It is important to note that OS software did not emerge from a vacuum, but as an alternative to software developed by commercial organizations. In Eric S. Raymond's book on OS software development, The Cathedral and the Bazaar, the commercial, private model of software development is referred to by the Cathedral metaphor in the title, where a unified agenda and "small bands" of developers working without outside input drive the project, while software developed under an OS model is the bazaar, with "a great babbling" of "differing agendas and approaches" (1999, 29-30). For the sake of clarity, this article refers to software developed by commercial organizations without the release of its source code as "proprietary," while software developed under a license which allows anyone to access and make changes to its source code is called Open Source (OS)—both usages typical in the literature on OS (West 2003; MacCormack, Rusnak, & Baldwin 2006; Zhu & Zhou 2011).

Raymond argues that OS software "systematically harness[es] open development and decentralized peer review to lower costs and improve software quality" (1999, 1). These reasons are similar to those found in Open Access (OA) declarations like the Budapest Open Access Initiative, which states that OA increases the readership and usefulness of scholarly literature while lowering price and other barriers (Chan et al. 2002). This common ground is no coincidence. Open movements (Open Science, Open Standards, etc.) stem from concerns over the licensing of intellectual work and a belief that enabling users to freely access this work can drive positive social change. Involvement in Open movements has been shown to benefit libraries by improving the publication of, access to, and long-term preservation of scholarly publications (Corrado 2005).

In the OA movement, the drive for increased access led to a large number of web-based, scholarly OA journals starting in the mid-1990s (Laakso et al. 2012). Early journals were in many cases the efforts of individual researchers or academics, and were often thought to be unsustainable or of poor quality (Björk & Solomon 2012). As bandwidth and storage increased, and modern web technologies enabled the creation of web-based authoring platforms like WordPress, publishers faced significantly lower barriers in creating a usable and sustainable online journal (Björk 2013).

Along with advances in web technology, systems were created specifically to manage scholarly journals, with features that allowed editors and publishers to easily receive and review submissions from authors, and distribute published articles (Suber 2003; Willinsky 2005). A number of these systems distribute their software under an OS license, lowering the costs and other barriers of starting a new journal compared to commercial software. The effects of these lowered barriers can be seen in the Directory of Open Access Journals (DOAJ), which lists over 14,700 journals containing nearly 5 million articles as of June, 2020 (DOAJ 2020).

Although OS software does reduce barriers, it also comes with risks. In many cases, OS software development relies on volunteer administrators and programmers, as well as community support. Just like any other software, OS software can also become obsolete, leading to unfixed bugs and potential security concerns. The potential for software to become obsolete means that published articles which analyze OS journal management systems quickly go out of date, and websites which list them are not always updated to reflect obsolete software or dead links. Articles and websites also tend to include programs that manage only a portion of the journal management process, or which are related but do not manage it at all.

The rest of this article presents a review of the literature on OS journal management software, using previous surveys' findings and original research to present a list of currently active systems as of June 2020. Next, the article describes the assessment of OS software, moving from general principles to a summary of one specific assessment tool, QualiPSo's Open Maturity Model (OMM). Finally, the article assesses and discusses four actively maintained OS journal management systems: Ambra, Janeway, Lodel, and Open Journal Systems (OJS).

Literature Review

Lists of OS Journal Management Systems

Published articles about journal management systems from the early 2000s primarily listed proprietary software. Examples of these early lists are McKiernan (2002); a white paper from Public Library of Science (PLOS) titled Publishing Open-Access Journals (2004); and Ware (2005). McKiernan's article lists only proprietary software, while the PLOS white paper and Ware list OJS amidst other, proprietary systems. The only website from this time[1] which lists OS journal management systems—a list by the Association of Research Libraries' Scholarly Publishing and Academic Resources Coalition (SPARC)—also lists mostly proprietary software. In 2002, all relevant resources on the SPARC site were proprietary; in 2004, OJS and the ePublishing Toolkit (ePubTk)—a now-defunct OS journal management system sponsored by the Max Planck Society of Germany—were added (SPARC 2004). The 2004 addition was the last change made to the SPARC site; after that date, it remained a static resource until it went offline in 2015 (SPARC 2015).

Cyzyk & Choudhury (2008) provide the first comprehensive list of available OS journal management systems, evaluating each for ease of installation and ease of use, as well as sustainability. OJS is the only system listed which still exists as of 2019; the authors also reviewed now-defunct systems DPubS—a joint project of Cornell University Library and the Pennsylvania State University Libraries and Press—and Hyperjournal—an Italian project—as well as some ancillary software.

Since 2010, two more lists of OS journal management systems have been published in articles, both in 2012, and two additional website lists created. Samuels & Griffy (2012) evaluate only DPubS and OJS on the strength of Cyzyk and Choudhury's article and "various other factors" (2012, 46). Swan & Chen (2012) provide a "brief survey" of some OA publishing tools, including OJS, HyperJournal, ePubTk, GAPworks—a German-funded project of BerliOS—DPubS, Topaz/Ambra—the software used to run PLOS, and the framework it was previously built upon, respectively—and the Drupal E-Journal module. Swan & Chen's survey appears to be the most recently published article listing OS journal management systems.

The two website lists created since 2010 are the Simmons Open Access Directory (OAD) and the OA Info page on the Directory of Open Access Journals (DOAJ). Of these, the OAD list is slightly older, and has been updated more recently, with the oldest version dating from late 2012 and the most recent update from mid-2018. All versions of the OAD list include Ambra, Lodel—the software used to run French OA journal site Revues.org—DPubS, the E-Journal Drupal module, ePubTk, GAPworks, HyperJournal, OJS, and Topaz (OAD 2012). In 2018, a new system called Janeway was added (OAD 2019).

The last version of the DOAJ site, from 2016, mentions the following installable publishing systems: Drupal's E-Journal module; HyperJournal; OJS; Ambra; and SciX's SOPS, as well as some proprietary software (DOAJ 2016). Exploration of the site in the Wayback Machine shows that these systems were also all included in December of 2013—the oldest capture on record (DOAJ 2013). In 2019, the OA Info page was replaced with the more comprehensive DOAJ Best Practices Guide, which functions more as a resource for authors and does not include a list of publishing platforms.

Other Articles and Websites

Beyond the lists above, a number of publications mention journal management systems in passing (Solomon 2008; Mullins et al. 2012; Brown 2013), discuss them as part of a broader review of OS software use in libraries (Randhawa 2008; Palmer & Choi 2014), or evaluate only a specific system such as OJS (Willinsky 2005; Owens & Stranack 2012; Edgar & Willinsky 2010).

Open Source Software Assessment

OS software has a radically different development process than most commercial products, with the replacement of in-house programmers by "developer-turned users ... cooperat[ing] under a model of rigorous peer-review and ... parallel debugging," and a much shorter release cycle than commercial software (Koch 2013, 189).

Because the success of OS software rests heavily on its community of users and developers, OS assessment methodologies must measure the software's development process and community as well as the products themselves (Koch 2013; Petrinja, Sillitti, & Succi 2014). Studies and methodologies differ on what aspects of an OS software community play a role in its sustainability. Some elements include good project management practice, an effective structure of governance with clear leadership, and how the OS software is licensed (Gamalielsson & Lundell 2014). Others are code ownership, maturity process, the structure and organization of collaboration between a project's developers, and the robustness of the developer network (Ndenga et al. 2015, 338). Measuring the way OS software changes between releases can also be a good predictor of its suitability and overall quality (Ruiz & Robinson 2013; Macho, Robles, & Gonzalez-Barahona 2017).

Stol and Babar (2010) identify twenty assessment methodologies designed for OS software, although they note that half were not "well-defined methods" so much as a "set of evaluation criteria" (2010, 2). With so many available assessment methods, each with its own criteria and focus, it can be confusing for casual OS software users to select one. A number of studies have been conducted which compare various methods. For example, Deprez and Alexandre (2008) compare OpenBRR with QSOS; Petrinja, Sillitti, & Succi (2010) compare OpenBRR, QSOS, and OMM; and Glott et al. (2010) compare OpenBRR and QualOSS. No articles exist which exhaustively compare all available methods. One additional problem is that OS software assessment methods themselves do not always outlast the projects which created them.

The framework provided by Stol and Babar, which involves answering questions about the context, intended users, process, and evaluation status of the various methods, can aid in selecting the method that is most appropriate for any given assessment (2010, 3). Their research is also useful for the chart it provides summarizing several elements of 20 different methods, including whether each was initiated by a corporation or by individual researchers and whether it is a "well-defined method" or a "mere set of evaluation criteria" (Stol & Babar 2010, 2).

Methods

Preliminary Assessment

A list of OS journal management systems was compiled from the published surveys and websites discussed in the literature review above, as well as one other OS journal management system discovered on GitHub but not mentioned in any existing list. Each system's website was accessed and reviewed to determine whether the system was an OS journal management system or ancillary software. Ancillary software was disregarded as beyond the scope of this article.

If the system was an OS journal management system, the system's website was used to gather further information. OS journal management systems were separated into two categories—historical and active—based on last release date and other indicators of an active project, such as an active community or recent signs of activity on the system's web page or source code repository. Historical OS journal management systems (Appendix A) were not further assessed, as their lack of activity makes them unlikely to be suitable choices for a new project. Active systems were assessed using QualiPSo's Open Maturity Model, as described below.
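For readers who want to reproduce this kind of activity check programmatically, the following minimal sketch (in Python, not part of the original assessment workflow) shows one way to pull coarse activity signals from the GitHub REST API; the repository name used in the example call is illustrative.

    import requests

    def activity_snapshot(owner, repo):
        """Fetch coarse activity indicators for a GitHub-hosted project."""
        base = f"https://api.github.com/repos/{owner}/{repo}"
        info = requests.get(base, timeout=10).json()
        # per_page=1 returns only the most recent commit on the default branch.
        commits = requests.get(f"{base}/commits", params={"per_page": 1}, timeout=10).json()
        return {
            "last_push": info.get("pushed_at"),            # timestamp of the most recent push
            "open_issues": info.get("open_issues_count"),  # open issues plus open pull requests
            "latest_commit": commits[0]["commit"]["committer"]["date"] if commits else None,
        }

    # Example call (repository name is illustrative): OJS's code is hosted under the PKP organization.
    print(activity_snapshot("pkp", "ojs"))

Signals like these supplement, rather than replace, a review of a project's website, release notes, and community channels.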

Methodology for Active OS Journal Management Systems

Selection of Assessment Method

For this study, QualiPSo's Open Maturity Model (OMM) was selected as an assessment method. QualiPSo, a European project which ran from 2006 to 2010, aimed to improve the quality of and establish technologies, procedures, and policies for OS software (European Commission 2011). Although OMM was designed primarily to assist people involved with the creation of OS software, its documentation notes that it can also be used by third parties interested in assessing OS software (QualiPSo 2008a). All of the OS software assessment methods reviewed above have shortcomings, and OMM is no exception. In particular, OMM has been noted as having a heavy reliance on documentation or the ability to contact a developer of the OS software (Ndenga et al. 2015, 45).

OMM was selected for this study over the other assessment methods for two reasons. First, its aim is not necessarily to determine whether a piece of OS software is suitable for a specific project, but to measure the overall "trustworthiness" of the OS software and the community behind it, as well as the maturity of that community's development processes (QualiPSo 2008a). Additionally, information on how to implement OMM is available online on the QualiPSo website, along with guidelines for carrying out assessments with OMM as an external researcher, usage scenarios, and sample questionnaires and rating algorithms.

Overview of QualiPSo's OMM

OMM can be used to assess an OS software community's maturity at a basic, intermediate, or advanced level. Each level examines a number of trustworthy elements (TWEs), defined by the model as "a specific factor or aspect of the software development process, or of product results that indirectly influence the perception of the trustworthiness of the FLOS software development process" (QualiPSo 2008b). The assessment of TWEs is what allows users to determine whether or not the software can be considered reliable.

At the basic level, the TWEs examined are those considered "essential" for reliable OS software, while the intermediate and advanced levels assess OS software development communities with more mature processes and products (QualiPSo 2008c). Table 1 describes the TWEs for the basic level, as well as what practices they measure.

Table 1 - Trustworthy Elements (TWEs) for the Basic Level of QualiPSo's OMM
TWE | Practices Measured
Licenses | Selection and evaluation of licensing practices.
Product Documentation | Provision, maintenance, and improvement of high-quality documentation.
Configuration Management | Establishment of baselines and configuration items, tracking of change requests, and establishment of configuration management records such as change logs.
Environment | Selection and maintenance of the development environment, including methodologies, communication tools, and use of Free or Libre Open Source Software (FLOSS) tools.
Maintainability and Stability | Stability and maintainability of code and design, as well as the management of these processes.
Number of Commits and Bug Reports | Ease of bug reporting by end-users, management of commits and bug reports, and improvement of the environment used to manage commits and bug reports.
Quality of Testing Process | Provision of testing, including the creation and maintenance of a testing schedule and testing process.
Standards | Use of Open Standards, quality of implemented standards, and user satisfaction of standards, as well as adoption of development processes and project independence from specific technologies.
Requirements | Management of product requirements.
Project Planning 1 | Estimation of project scope and lifecycle, and the development of a project plan.
Roadmap 1 | The creation of a roadmap for product releases.

Each TWE is given a rating from 0 (not applicable) to 3 (fully implemented) based on whether specific practices are present in the OS software community or its product, and a community is said to meet each maturity level when 90% of that level's TWEs have been fully implemented (QualiPSo 2008d). To achieve a maturity level of intermediate or advanced, a project must meet all TWEs from the level(s) below it as well as those of its current level.

Each practice also lists "LookFors," specific items or elements of that practice intended to guide in its assessment. For example, at the basic level, the "License" TWE includes practices like "Ensure that FLOS software does not contain any commercial components," with LookFors of "The project has a large percentage of FLOS software licensed components" and "Tools used in the project are FLOS software licensed" (QualiPSo 2008e).
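As a concrete illustration of that rating scale and the 90% threshold, the following minimal sketch is based on the description above (it is not code taken from the OMM documentation, and the ratings shown are hypothetical):

    def meets_level(twe_ratings, threshold=0.90):
        """twe_ratings maps each TWE of one maturity level to a rating from 0 to 3.
        The level is met when at least `threshold` of its TWEs are rated 3 (fully implemented)."""
        fully_implemented = sum(1 for rating in twe_ratings.values() if rating == 3)
        return fully_implemented / len(twe_ratings) >= threshold

    # Hypothetical basic-level ratings for an imaginary project.
    basic_level = {
        "Licenses": 3, "Product Documentation": 2, "Configuration Management": 3,
        "Environment": 3, "Maintainability and Stability": 3,
        "Number of Commits and Bug Reports": 3, "Quality of Testing Process": 3,
        "Standards": 3, "Requirements": 3, "Project Planning 1": 3, "Roadmap 1": 3,
    }
    print(meets_level(basic_level))  # True: 10 of 11 TWEs (about 91%) are fully implemented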

Assessment of OS Journal Management Systems with OMM

For the purposes of this study, four active OS journal management systems were measured against the basic level of OMM. Assessment sheets for each system were created in Google Sheets (Appendix B), modeled on the charts available in the "OMM Structure" and "OMM Assessment Results" sections of the OMM documentation website. Using these sheets, each system was assessed and rated as follows:

  1. Each practice was given a rating of 0 to 3.
  2. Each TWE was given a rating of 1 to 3, based on an average of its practices, ignoring those practices rated 0 (not applicable). This average was rounded to the nearest whole number. The total number of practices, the number of fully and partially implemented practices, and the number of practices not rated was also noted for each TWE.
  3. Each system was given an overall rating by taking the number of fully implemented practices and dividing it by the total number of rated practices, then converting the result to a percentage value, rounding down. (e.g., if a system had fully implemented 53 out of 76 rated practices, it was given a rating of 53/76, or 69%).

This methodology varies slightly from that described in the "rating algorithm" section of the QualiPSo website in two main ways. First, it records the number of implemented practices at the TWE level as well as the overall level. This aids in clarity by showing how close to full implementation each TWE is. Second, as only the basic level for each system was assessed, it does not determine an overall OMM rating.
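To make the scoring steps concrete, the sketch below (using hypothetical practice ratings for an imaginary system) implements the procedure described above: per-practice ratings of 0 to 3, TWE ratings averaged over rated practices and rounded, and an overall percentage of fully implemented practices rounded down.

    from math import floor

    # Hypothetical practice ratings for two TWEs of an imaginary system
    # (0 = not rated, 1 = not implemented, 2 = partially implemented, 3 = fully implemented).
    practices = {
        "Licenses": [3, 3, 3, 3, 3, 3, 3, 2, 0],
        "Product Documentation": [3, 3, 3, 3, 1, 1, 1, 0, 0],
    }

    def twe_rating(ratings):
        """Step 2: average of rated practices (ignoring 0s), rounded to the nearest whole number."""
        rated = [r for r in ratings if r != 0]
        return round(sum(rated) / len(rated)) if rated else 0

    def overall_rating(by_twe):
        """Step 3: fully implemented practices divided by all rated practices, as a percentage, rounded down."""
        rated = [r for ratings in by_twe.values() for r in ratings if r != 0]
        fully = sum(1 for r in rated if r == 3)
        return floor(100 * fully / len(rated))

    for twe, ratings in practices.items():
        print(twe, twe_rating(ratings))                 # Licenses 3, Product Documentation 2
    print("Overall:", overall_rating(practices), "%")   # 11 of 15 rated practices -> 73 %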

Results

OS Journal Management Systems

There were over a dozen OS journal management systems described in the literature. Many of these, however, are no longer supported, and in some cases the only way to access information about the software is by using the Internet Archive's Wayback Machine to browse deleted web pages. Additionally, a number of the systems described in the literature are not fully-fledged journal management systems, but only manage some small aspect of the process of accepting submissions, peer review, and publication of accepted papers. A list of these systems and software can be found in Appendix A.

Active OS Journal Management Systems

Of the journal systems reviewed, only four were deemed to be active: Ambra, Janeway, Lodel, and OJS (Table 2). Three of these (Ambra, Lodel, and OJS) were mentioned in prior studies. Janeway was first released in 2017. OMM assessments were carried out on all four systems.

Table 2 - Active OS Journal Management Systems
System | URL
Ambra (new) | https://PLOS.github.io/ambraproject
Janeway | https://janeway.systems/
Lodel | https://lodel.org/
Open Journal Systems (OJS) | https://pkp.sfu.ca/ojs/

None of the four systems assessed reached 100% of fully implemented practices across all TWEs. Although there were clear differences in the TWE ratings and overall score of each system, there were also some commonalities. All systems scored well in the licensing TWE, with appropriate licenses selected and no commercial components. The practices within the environment and commit TWEs also tended to be fully implemented, largely by virtue of all four systems' use of GitHub to manage code and documentation. GitHub tracks code contributions, bug reports, pull requests, and user satisfaction with bug reports, as well as serving to make environment tools and intra-project communication easier.

Another commonality was that all of the systems lacked adequate public information to assess some TWEs. This reflects more on the partial nature of OMM assessments carried out by external researchers than the maturity of the projects assessed, however, and is not necessarily an accurate reflection of the systems' robustness.

The overall rating of each assessed system, as well as its total numbers of fully implemented, partially implemented, not implemented, and not rated practices, is listed in Table 3. The sections below describe the ratings of each system in greater depth.

Table 3 - Overall OMM Assessment of Ambra, Janeway, Lodel, and OJS
System | Rating | Total Practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Not Rated Practices
Ambra | 57% | 79 | 23 | 5 | 11 | 40
Janeway | 83% | 79 | 45 | 6 | 3 | 24
Lodel | 63% | 79 | 26 | 6 | 9 | 38
OJS | 89% | 79 | 52 | 5 | 1 | 20

Ambra

Ambra is the system created and used by the Public Library of Science (PLOS). The software allows users to host multiple journals on a single platform, supports citations, PDF and XML downloading, and user registration, along with a number of other features. Ambra underwent a complete redesign starting in 2013, resulting in the release of a new version in 2016 and the OS release of this new version's code in 2017 (PLOS 2018). This article looks at the new version of Ambra, rather than its legacy version, which is no longer supported.

Of the systems assessed, Ambra scored the lowest rating, at only 57% based on 39 rated practices (Table 4). Ambra was also the system with the most practices (40) left unassessed, reflecting a sparsity of publicly available documentation which made it difficult for an outsider to determine the project's plans and processes.

Table 4 - OMM assessment of Ambra
Trustworthy Element | Rating | Total number of practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Practices not Rated
Overall Rating | 59.97% | 79 | 23 | 5 | 11 | 40
Licenses | 2.87 | 9 | 7 | 1 | 0 | 1
Product Documentation | 2.14 | 9 | 4 | 0 | 3 | 2
Configuration Management | 1.8 | 10 | 1 | 2 | 2 | 5
Environment | 3 | 6 | 3 | 0 | 0 | 3
Maintainability and Stability | 1.25 | 6 | 0 | 1 | 3 | 2
Number of commits and bug reports | 3 | 5 | 5 | 0 | 0 | 0
Quality of Testing Procedures | 0 | 10 | 0 | 0 | 0 | 10
Standards | 2.75 | 8 | 3 | 1 | 0 | 4
Requirements | 0 | 4 | 0 | 0 | 0 | 4
Project Planning | 0 | 9 | 0 | 0 | 0 | 9
Roadmap | 1 | 3 | 0 | 0 | 3 | 0

The highest scores for Ambra were for the TWEs of environment and commits, with both these areas receiving a "Fully Implemented" rating of 3 for the practices rated. However, it is worth noting that only three of the six practices in the environment section were actually rated. Additionally, as mentioned above, the use of GitHub meant that Ambra had a good score for commits and bug reports.

TWEs where Ambra scored above a "Partially Implemented" rating of 2 were product documentation and standards. At the time of review, Ambra's documentation was only available in English, and there was no evidence of any roadmaps. Documentation in general was difficult to find beyond the GitHub Wiki page set up for the project. However, that documentation was of a generally high quality, and was easy to locate from the main project page. Likewise, although there were limited data available on the standards selected by Ambra, standards could generally be determined by examining the source code and the basic information about the project that was made available on GitHub.

Two TWEs—configuration management and maintainability and stability—were rated between 1 and 2. A rating in this range suggests that, while some practices scored well, there was not enough evidence of work done across all practices for these TWEs to be considered partially implemented. In configuration management, for example, Ambra benefited from its choice of GitHub as a source code management system. Although GitHub does not quite support the complexity OMM requires of a configuration management system, such as the identification of configuration items and the establishment of baselines, it does enable developers to monitor change requests and the progress of making changes to a project's code. In terms of maintainability, the project scored poorly for its lack of explicit goal-setting and authority delegation, and for the lack of discussion about interoperability with the older, legacy version of Ambra, but did maintain older code and documentation.

The TWEs of testing procedures, requirements, and project planning were left unrated, as it was not possible to tell whether any of these internal documents existed.

Janeway

Janeway made its first release in October of 2017 (Birkbeck Centre for Technology and Publishing 2019). Although a relative newcomer, the software is maintained by the Birkbeck Centre for Technology and Publishing, University of London, and its project lead is Martin Paul Eve, a long-time advocate of OA and the co-founder of the Open Library of Humanities (OLH) (Janeway n.d.). The system was created primarily to host OLH journals, but was also built with open design principles in mind, so that developers, publishers, and the OA community would be encouraged to "comment on, make feature suggestions for, or contribute to the codebase" (Eve & Byers 2018).

Table 5 - OMM Assessment of Janeway
Trustworthy Element | Rating | Total number of practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Practices not Rated
Overall Rating | 83.33% | 79 | 45 | 6 | 3 | 24
Licenses | 3 | 9 | 9 | 0 | 0 | 0
Product Documentation | 2.55 | 9 | 6 | 2 | 1 | 0
Configuration Management | 2.57 | 9 | 5 | 1 | 1 | 2
Environment | 3 | 6 | 6 | 0 | 0 | 0
Maintainability and Stability | 2.75 | 6 | 3 | 1 | 0 | 2
Number of commits and bug reports | 3 | 5 | 5 | 0 | 0 | 0
Quality of Testing Procedures | 2.57 | 10 | 5 | 1 | 1 | 3
Standards | 3 | 8 | 3 | 0 | 0 | 5
Requirements | 0 | 4 | 0 | 0 | 0 | 4
Project Planning | 3 | 9 | 1 | 0 | 0 | 8
Roadmap | 2.66 | 3 | 2 | 1 | 0 | 0

Janeway scored second highest of the four reviewed systems, with an overall rating of just over 83% (Table 5). The system also had a relatively low number of practices that could not be rated, a good indication that the system's design principles are being followed through the release of information in an open manner.

Of the practices rated, only three had not been implemented to some degree. Two of these concerned the lack of documentation and system configuration options in languages other than English. The third, a lack of detailed testing criteria and test cases, is likely a result of another of Janeway's design principles, which prioritizes "selective regression testing" over "total testing" (Birkbeck Centre for Technology and Publishing 2019). Practices that were only partially implemented included the documentation roadmap, the updating of documentation with new version releases, and the scale of the project's overall roadmap, among others.

Lodel

Lodel is a French OA journal publishing system used primarily by OpenEdition, a Francophone site which hosts a number of books, journals, and other documents (OpenEdition 2016). Although the most recent major version of Lodel (1.0) was released in March of 2014 (Riviere 2014), minor versions have been released since, with regular commits and closed pull requests on the project's GitHub page. The software can be installed in English, French, and several other languages.

Table 6 - OMM Assessment of Lodel
Trustworthy Element | Rating | Total number of practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Practices not Rated
Overall Rating | 63.41% | 79 | 26 | 6 | 9 | 38
Licenses | 3 | 9 | 8 | 0 | 0 | 1
Product Documentation | 2.16 | 9 | 3 | 1 | 2 | 3
Configuration Management | 1.83 | 10 | 1 | 3 | 2 | 4
Environment | 3 | 6 | 5 | 0 | 0 | 1
Maintainability and Stability | 2 | 6 | 2 | 0 | 2 | 2
Number of commits and bug reports | 2.8 | 5 | 4 | 1 | 0 | 0
Quality of Testing Procedures | 0 | 10 | 0 | 0 | 0 | 10
Standards | 2.75 | 8 | 3 | 1 | 0 | 4
Requirements | 0 | 4 | 0 | 0 | 0 | 4
Project Planning | 0 | 9 | 0 | 0 | 0 | 9
Roadmap | 1 | 3 | 0 | 0 | 3 | 0

Lodel earned an OMM rating of 63% based on 41 rated practices (Table 6). As with Ambra, many of Lodel's practices—38 out of 79—could not be rated due to a lack of publicly available information. The TWEs with the highest ratings for Lodel were Environment and Licenses, both of which had a score of 3. Like Ambra, Lodel benefited in the environment TWE from its use of GitHub for its code, even though some of the practices related to environment were impossible to determine.

TWEs where Lodel scored higher than "partially implemented" were, from highest to lowest, Commits and Bug Reports, Standards, and Product Documentation. Lodel does well with tracking its commits and bug reports through GitHub, but did have some outstanding reports, which lost it a few points in this area. Again similarly to Ambra, Lodel's standards were often not explicitly documented, but were possible to determine through examination of the system's code. Finally, although Lodel's user documentation is relatively well-maintained in a GitHub Wiki, the system loses points for lack of a roadmap and the fact that some of its documentation is only available in French.

Only one of Lodel's TWEs, Maintainability and Stability, scored a "partially implemented" rating of 2. This was because neither project goals nor maintenance goals were defined anywhere in the product's documentation. Likewise, only the Roadmap TWE had a "not implemented" rating of 1. The TWEs of testing procedures, requirements, and project planning were left unrated, as it was not possible to tell whether any of these internal documents existed.

A recent comment from a Lodel developer on GitHub, stating that a roadmap was "in the progress of being redefined" and that the "release production cycle will resume shortly" (jcsouplet 2019), suggests that a re-examination of the system in the near future may well improve its OMM rating.

OJS

Open Journal Systems (OJS) is an OS journal management system maintained by the Public Knowledge Project (PKP), based out of Simon Fraser University (PKP 2014). The system is still in active development, with the most recent stable release of OJS 3 in September of 2018 and the most recent stable release of OJS 2 in January of 2019 (PKP n.d.). OJS is by far the most popular choice for OS journal management software, with 9,700 journals publishing with the software as of February 2019 (PKP 2019).

Of the OS journal management systems reviewed here, OJS came closest to reaching a fully implemented rating at the basic level, with an overall rating of 89%. Additionally, OJS had the highest number of practices that were able to be rated, with 59 of 79 rated (Table 7).

Table 7 - OMM Assessment of OJS
Trustworthy Element | Rating | Total number of practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Practices not Rated
Overall Rating | 89.66% | 79 | 52 | 5 | 1 | 20
Licenses | 3 | 9 | 9 | 0 | 0 | 0
Product Documentation | 2.88 | 9 | 8 | 1 | 0 | 0
Configuration Management | 2.75 | 9 | 6 | 2 | 0 | 1
Environment | 2.83 | 6 | 5 | 1 | 0 | 0
Maintainability and Stability | 3 | 6 | 5 | 0 | 0 | 1
Number of commits and bug reports | 3 | 5 | 5 | 0 | 0 | 0
Quality of Testing Procedures | 2.71 | 10 | 6 | 0 | 1 | 3
Standards | 2.75 | 8 | 3 | 1 | 0 | 4
Requirements | 0 | 4 | 0 | 0 | 0 | 4
Project Planning | 3 | 9 | 2 | 0 | 0 | 7
Roadmap | 3 | 3 | 3 | 0 | 0 | 0

A number of OJS's TWEs received a "fully implemented" rating of 3: Licenses, Maintainability and Stability, Commits and Bug Reports, Project Planning, and Roadmap. Notably, OJS was the only active system to include a roadmap.

With the exception of the Requirements TWE, which was unrated due to a lack of available information, all of OJS's remaining TWEs rated higher than "partially implemented." Like Ambra, Janeway, and Lodel, OJS maintains its code in GitHub, which helped it achieve high ratings in many areas. Additionally, its extensive documentation on the PKP website made information easier to find than for the other systems.

Discussion

Obsolescence of Open Source Journal Management Systems

Despite more than a dozen OS journal management systems having been in active development during the 2000s, only a handful are still active today. While some of this may be due to OA journal publishers choosing proprietary systems, the nature of OS projects seems likely to also play a role. Although OS software is not per se more or less sustainable than proprietary software, a different set of dynamics powers its long-term survival. Wheeler points out the necessity of an active development community (2015); this is echoed by Gamalielsson & Lundell, who argue that OS software often survives or fails in the long term based on the sustainability of its community (2014, 128).

Historically, OS journal management software tends to have been created by specific organizations involved in an OA project. In many cases, these projects grew out of grants or state-supported initiatives which either were not large-scale or had a particular end date. As constant focus from at least some core participants is required for OS communities to thrive, the limited scale and end date of these projects seems to have led to a lack of interest in or ability to continue maintaining and updating the software over the long term. A complementary factor was the rapid development of web technologies starting in the early-to-mid 2000s. Systems which did not have a strong core community actively updating and responding to these changes quickly became outdated and unusable.

Unlike many of the obsolete journal management systems, the three previously noted systems which were still active as of this study's date—Ambra, Lodel, and OJS—are tied to projects which are massive in scale, have no set end date, and have significant buy-in from the wider academic community as well as funding from institutions which are dedicated to the OA movement's long-term success. Janeway, while a newcomer, is similarly tied to the Open Library of Humanities, another OA project with no set end date.

PKP, for instance, in a 2011 document, forecast revenue of $1 million for OJS, with "a highly interactive community of over 3,000 registered members" (Pinto 2011, 5) and "over 9,000 OJS journals" (Pinto 2011, 1). PLOS had assets of more than $17 million at the end of 2017 (PLOS 2017), with more than 215,000 articles published in their journals (PLOS n.d.a). OpenEdition, the publisher for which Lodel was written, is smaller than the other two projects, but holds 503 journals and 6,665 books at the time of writing (OpenEdition 2019), with stakeholders including several French universities (OpenEdition n.d.).

The strong core communities and effective governance of these three systems are likely what have allowed them to respond to changes in technology over the years and remain active. Additional analysis will be necessary to determine whether the same is true of Janeway in the long term.

Assessment of Open Source Software

The difficulties of formally assessing OS software in some ways mirror the difficulties that a user may find when looking at OS software itself. There are a large number of different assessment methodologies and tools available (Stol & Babar 2010), and selecting the one that is best for a given project can be tricky. Many of these models appeared in the early-to-mid-2000s, mirroring the proliferation of OS software development in the same period, as shown above by the number of OS journal management systems. An additional difficulty is that detailed instructions for some methodologies are not easily accessible.

Once an assessment methodology or tool has been selected, it can be difficult to find all the information required to make a complete assessment. As seen in the results section above, several of the TWEs that OMM considers essential at a basic level could not be assessed for any of the journal management systems. Judging from the fact that OMM explicitly allows for partial assessments by users outside the OS community being assessed (QualiPSo 2008f), this is not uncommon, but some may find this lack of information frustrating.

In some studies, authors essentially crowdsource the assessment step, limiting their own research to the number of installations of the software under review (Goh et al. 2006, 8). Although such shortcuts may serve in the short term, they are only tenable if others have assessed the software more thoroughly. Despite the difficulties involved in assessment, the involved and technical nature of using OS software means that it should not be bypassed. Additionally, because many OS communities provide technical support for end-users, taking time before installation to become familiar with an OS community and its software can make troubleshooting and using the software easier.

It is worth pointing out that the OMM "score" presented in this study is not intended as a pass/fail type of score. Rather, it is a measure of likely indicators of a system's robustness and long-term viability. Additionally, some aspects of a system's score may matter less to some users than others. A user who is comfortable writing their own code to fix bugs, for instance, may not need to pay as much attention to the number of releases and completed bug reports a system has compared to a user who is not capable of doing so. Likewise, someone using the system to create a small-scale, short-term project may not be as concerned with whether a system has engaged in the long-term planning implied by a roadmap as someone looking to migrate a major journal to a new, OS system. Assessment is contextual, in other words, and the needs of a given project can outweigh system shortcomings.

A final thing to note is that assessment of an OS community can be used to make that community more robust. Unlike commercial software, where organizations are unlikely to respond directly to feedback about their products, end-users in an OS community can directly improve the development process. By getting involved instead of just downloading and installing their software, users can play a role in improving the sustainability and success of OS communities.

Conclusion

OS software can make specialized processes like journal management more accessible by lowering cost barriers, while its community-driven development means it can be more responsive to users' needs and more flexible than commercial software. However, these benefits are balanced by the potential fragility of OS projects—especially those with a small development team, or which are tied to initiatives with a set ending date. As such, users of OS software need to be more involved with selection and assessment than with most commercial equivalents.

An added difficulty is that assessing the maturity, longevity, and robustness of OS software projects and communities requires knowledge unlikely to be held by small journal publishers. Assessment tools and methodologies like OMM help, but users of these will still need to understand technical concepts and terms such as baselines, change requests, commits, and requirements. Although it is true that assessing OS software can be tricky, taking the time to do so can save users from having to migrate their content to another system if the one they select stops updating. Assessment gives users a greater understanding of the process of OS software development and can encourage participation in OS communities, improving their likelihood of long-term success.

In the case of journal software in particular, the proliferation of OS projects in the early 2000s—and the relatively small number of these which are still active in 2019—can make it seem as though any OS journal management software is bound to become obsolescent in a short period of time. As proven by Ambra, Lodel, and OJS, however, it is quite possible for OS journal management software to thrive in the long term so long as there is a dedicated community of developers and users, a plan for development and maintenance, and financial support.

References

Appendix A: Historical OS Journal Management Systems

System Name | URL | Last Update | Notes
Ambra (legacy) | http://www.ambraproject.org/ | 2015 | Replaced by new version of Ambra in 2017.
Annotum | https://annotum.org/ | 2016 | WordPress plugin.
DPubS | http://dpubs.org/ | 2008 |
E-Journal | https://www.drupal.org/project/ejournal | 2014 | Drupal plugin.
ePubTk | https://dev.livingreviews.org/projects/epubtk | 2014 |
GAPworks | https://sourceforge.net/projects/gapworks.berlios/ | 2014 |
Hyperjournal | https://sourceforge.net/projects/hyperjournal/ | 2013 |
Rhaptos | https://github.com/Rhaptos | Uncertain | Replaced by OpenStax, an OER repository system.
SciX Open Publishing Services (SOPS) | http://www.scix.net/sops.htm | 2004 |

Appendix B: OMM Assessment Template

This appendix presents a basic template for assessing OS software using QualiPSo's Open Maturity Model (OMM). The full template, containing additional sections for each TWE that describe how to assess relevant practices, can be accessed online at https://bit.ly/2E5idXb.

Trustworthy Element (TWE) | Rating | Total number of practices | Fully Implemented Practices | Partially Implemented Practices | Not Implemented Practices | Practices not Rated | Notes
Overall Rating | | 78 | | | | |
Licenses | | 9 | | | | |
Product Documentation | | 9 | | | | |
Configuration Management | | 9 | | | | |
Environment | | 6 | | | | |
Maintainability and Stability | | 6 | | | | |
Number of commits and bug reports | | 5 | | | | |
Quality of Testing Procedures | | 10 | | | | |
Standards | | 8 | | | | |
Requirements | | 4 | | | | |
Project Planning | | 9 | | | | |
Roadmap | | 3 | | | | |
    1. This and other websites which no longer exist were accessed via the Wayback Machine.