Historically, the value of a publication has been measured by its success in the marketplace and its impact, whether that impact be cultural or scholarly. The calculus of this value has ranged from measures as straightforward as the number of copies sold (documented most widely in “best seller” lists) or the dollars of profit generated, to the complex citation and referral counts that result in a scholarly “impact factor.” Other measures, such as prize awards and reviews, have also contributed to the assessment of a publication’s quality. Each of these measures has carried different weight depending on the disciplinary location and publication genre of the work under discussion, as well, of course, as on the interests of the assessor (for instance, a commercial publisher will measure value quite differently from a promotion and tenure review board). As with so many areas of our cultural and intellectual lives, the widespread adoption of digital technology and networked communication (with its attendant social media practices) has disrupted our metrics of publishing value and has called for a revision of the ways in which that value is calculated. In some professional and social circles, page visits, link referrals, Google ranks, presence in the Twitter universe, and other forms of social media prominence are now taken as seriously as scholarly citation and profit margins, a shift that raises questions about how scholars balance the emerging professional requirement for an online presence with the need for privacy and protected space for research. Moreover, value measures based on page visits and glances (where a quick hit might “count” the same as an extended period of study and engagement) are still in the early stages of development. While we have seen the rise of “altmetrics” and “impact stories,” weeks on the New York Times Best Seller List continue to indicate worthiness for attention, at least for the general public (witness the international interest and continued sales success of Capital in the Twenty-First Century), and the case for scholarly job security continues to be strongly influenced by citation-based measures. In addition, the increased ease of collaboration and co-authoring, even across wide spans of time and space, makes assigning authorial and impact “credit” both more compelling and more difficult. We are also still developing rubrics for calculating the broader social contribution of work that is made widely available via the Web. In the scholarly context, this revision of measures of value continues to be embedded in disciplinary practices and prejudices, contexts that significantly shape evaluation metrics.

When the Journal of Electronic Publishing invited reflections and reportage on enduring, emerging, and potential measures of publication value, we expected such discussions to be rooted in the publishing context (of value to whom, for whom?) and to address both the shortcomings and the usefulness of the metrics under discussion. While we anticipated that our contributors would be attentive to changes wrought by digital technology and networked communication, we were also interested in metrics embedded within other media cultures, both those that endure and those that are no longer current.

Our expectations were both exceeded and overturned. The response to our call for papers for this special issue was quite strong, and we received proposals for articles addressing many forms of evaluation. The majority of the articles contained herein do tilt toward “new forms,” but our authors also bring new perspectives to older forms. We are delighted to begin with a report from the Impactstory team, a group currently leading the way in altmetrics collection and reporting. They describe the current state of the art in altmetrics and its effects on publishing, and share Impactstory’s plan to build an open infrastructure for altmetrics.

Diesner et al. are less concerned with evaluating digital forms than with using digital technology to evaluate impact. In a fascinating counterpoint to the assessment of scholarly text, they turn their attention, and ours, to evaluating the impact of social documentaries in film. They report on a research project in which they are “developing, applying and evaluating a theoretically–grounded, empirical and computational solution for assessing the impact of social justice documentaries in a scalable, robust and rigorous fashion. [They] leverage cutting–edge methods from socio–technical data analytics—namely natural language processing and network analysis—for this purpose. [They] also built a publicly available software tool (ConText) that supports these routines.” While this reader, at least, could spend a long time thinking about measures for evaluating film, questions also arise quickly about whether these methods could be applied to other formats and genres and used across the publishing landscape.

Two of the articles in this issue examine less often evaluated aspects of publishing. John Duhring critiques the way in which would-be professionals are prepared for the market, scrutinizing in particular the movement from a “house” model of publishing to a studio model involving rapid-fire team development of digital publications (specifically, here, apps), and the value of immersing beginning publishers in that studio model. De Grandis and Neuman take on another kind of evaluation: the evaluation of platform models for publication. They begin with the assertion that “academic communities interested in digital publishing do not have adequate tools to help them in choosing a publishing model that suits their needs,” and go on to lay out a rubric for guiding and informing the choice of models that best meet those needs, with special attention throughout to Open Access models.

Two articles focus on more conventional forms of evaluation at a key moment of digital inflection. Belojevic, Sayers, and the INKE and MVP Research Teams are developing tools to support new forms of peer review. They prototype a plugin that “could enrich the affordances of authoring and publication platforms (e.g., Open Journal Systems, WordPress, and Scalar), expand peer review, and further contextualize the practices of networked knowledge making.” They argue that the prototype—which they call “Peer Review Personas”—“enacts strategies to transform individuated feedback into peer–to–peer networks for scholarly communication.” Camilla McKay turns to another venerable form of evaluation in a transitional moment, the book review, asserting that “given its long–standing importance, the absence of the equivalent of the book review for the world of electronic scholarship may impact its academic acceptance, especially for promotion and tenure.”

The perspectives and methods of our contributors to this special issue vary widely. They all agree, however, that evaluation is an important part of publishing: for publishers, for creators, and for audiences. And they all bring exciting and engaged perspectives to the question of evaluation. I trust that our readers will find the contributors’ work valuable by many standards, and that this will be demonstrated by a panoply of metrics, alternative and otherwise.