
    Chapter 2: Defining

    When invited to post a definition of Humanities Computing/Digital Humanities in the online forum “Day of Digital Humanities,” participants responded:

    “The digital humanities is whatever we make it to be.” George H. Williams, 2011
    “DH is best experienced as both theory and practice.” Elli Mylonas, 2010
    “. . . just one method for doing humanistic inquiry.” Brian Croxall, 2011
    “A term of tactical convenience.” Matthew Kirschenbaum, 2011
    “I’m sick of trying to define it.” Amanda French, 2011
    “With extreme reluctance.” Lou Burnard, 2011
    “I try not to.” Willard McCarty, 2011


     
    Keywords: Humanities Computing, Digital Humanities, discipline, interdiscipline, modes of engagement, 2.0 interactivity, visualization, spatialization, code


     
    In an age when many people turn to the Internet for information, keyword searching is a tempting strategy for defining a field. However, the most obvious search term—digital humanities—yields only a partial picture. It is not a recognized subject heading in the U.S. Library of Congress classification system and, Willard McCarty found, near equivalents of “Humanities Computing” appear in conjunction with other terms such as humanities, arts, philosophy, and variations of computing, informatics, technology, data processing, digital, and multi-media (Humanities Computing, 2–3, 215). Some subjects such as “arts” were also outside the scope of early print-dominated Humanities Computing. The words digital and media, Andy Engel found in doing keyword searching for this book, appear often in titles of publications, educational programs, calls for conference papers, and job descriptions. Yet, as they have gained popularity, their usefulness has been diluted (e-mail, July 13, 2010). Database sleuthing, then, is only a blunt instrument. A closer analysis of six major statements furnishes a more nuanced picture of how the field is defined. This chapter then situates definition in the context of three major disciplines where new technologies and media are changing the nature of practice—English, history, and archaeology. It closes with a reflection on three trend lines that have emerged in those disciplines and Digital Humanities writ large—visualization, spatialization, and a computational turn in the field.

    Declaring

    Statement 1

    This collection marks a turning point in the field of digital humanities: for the first time, a wide range of theorists and practitioners, those who have been active in the field for decades, and those recently involved, disciplinary experts, computer scientists, and library and information studies specialists, have been brought together to consider digital humanities as a discipline in its own right, as well as to reflect on how it relates to areas of traditional humanities scholarship.

    —Susan Schreibman, Ray Siemens, and John Unsworth, “The Digital Humanities and Humanities Computing: An Introduction,” in A Companion to Digital Humanities (Malden, MA; Oxford: Blackwell, 2004), xxiii

    Publication of a Blackwell anthology in 2004 suggested that Digital Humanities had come of age in a history that is traced conventionally to the search for machines capable of automating linguistic analysis of written texts. The year 1949 is enshrined in most origin stories, benchmarked by Father Roberto Busa’s efforts to create an automated index verborum of all words in the works of Thomas Aquinas and related authors. In the opening chapter, Susan Hockey divides the history of the field into four stages: Beginnings (1949–early 1970s), Consolidation (1970s–mid-1980s), New Developments (mid-1980s–early 1990s), and the Era of the Internet (1990s forward). Hockey is mindful of the challenge of writing the history of an interdisciplinary area. Any attempt raises questions of scope, overlap, impact on other disciplines, and the difference between straightforward chronology and digressions from a linear timeline (“The History,” 3). Willard McCarty also warns against the “Billiard Ball Theory of History,” asserting impact for some developments while consigning others to lesser or no importance (Humanities Computing, 212–13). Jan Hajic, for instance, traces the field’s emergence to 1948, citing broader scientific, economic, and political developments prior to and during World War II. Interest in natural language arose in fields distant from linguistics and other humanities disciplines, including computer science, signal processing, and information theory. The year 1948 also marks Claude Shannon’s foundational work in information theory and the probabilistic and statistical description of information contents (80).

    Nonetheless, the field has a strong historical identity with linguistics and computer-aided study of texts, signified by the early names computational linguistics and humanities computing. Typical activities included textual informatics, lemmatization, and stylometric analysis of encoded textual material that aided studies of authorship and dating. Vocabulary studies generated by concordance programs were prominent in publications and, during the period of Consolidation, literary and linguistic computing in conference presentations. Yet, papers also accounted for using computers in teaching writing and language instruction, music, art, and archaeology. Overall, emphasis tended to be on input, output, and programming, though early output reproduction was better suited to journals and books than to poetry and drama. Mathematics for vocabulary counts also exceeded humanists’ traditional skills, and computer-based work was not widely respected in humanities (Hockey, “The History,” 7–10).
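    What those activities involved is easy to see in miniature. The sketch below is purely illustrative, written in modern Python rather than reconstructing any historical program; it shows the two staples of early literary and linguistic computing, a case-folded vocabulary count and a keyword-in-context (KWIC) concordance, the kinds of routines that, at far greater scale, underlay projects like Busa’s index verborum.

        # Illustrative only: the vocabulary-count and keyword-in-context (KWIC)
        # routines that early concordance programs automated, in modern Python.
        import re
        from collections import Counter

        def vocabulary_counts(text):
            # Case-folded word-frequency table, the raw material of vocabulary studies.
            return Counter(re.findall(r"[a-z]+", text.lower()))

        def concordance(text, keyword, width=25):
            # One KWIC line per occurrence of keyword, with `width` characters of context.
            pattern = r"\b" + re.escape(keyword) + r"\b"
            lines = []
            for m in re.finditer(pattern, text, re.IGNORECASE):
                left = text[max(0, m.start() - width):m.start()].rjust(width)
                right = text[m.end():m.end() + width]
                lines.append(left + "[" + m.group(0) + "]" + right)
            return lines

        sample = "In the beginning was the word, and the word was with God."
        print(vocabulary_counts(sample).most_common(3))
        # [('the', 3), ('was', 2), ('word', 2)]
        for line in concordance(sample, "word"):
            print(line)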

    The period of New Developments was marked by several advances. By the late 1980s, powerful workstations were affording greater memory, screen resolution, color capacity, and graphical user interfaces, facilitating display not only of musical notation but also of non-standard characters in Old English, Greek, Cyrillic, and other alphabets. Both textual and visual elements could be incorporated in digital surrogates of manuscripts and documents as well (Hockey, “The History”). Expectations for quality in graphics grew, Burdick et al. also recall, as bandwidth increased, and multimedia forms of humanistic research in digital environments emerged (9, 20). And, Melissa Terras adds, unprecedented investments and development in digitization were apparent in the heritage and cultural sector, along with changes in public policy that increased availability of funding (“Digitization,” 51). The rhetoric of “revolution,” the Companion’s editors caution, was more predictive in some disciplines than others (Schreibman, Siemens, and Unsworth, “The Digital Humanities,” xxiv). Even so, an authoritative historical record could now be compiled for what they alternately called a “field” and a “discipline” with an “interdisciplinary core” located in “Humanities Computing.” That label also marked a strong orientation to tools and methods reinforced in chapters on principles, applications, production, dissemination, and archiving.

    The advent of personal computers and e-mail in the “Era of the Internet” ushered in a new relationship of humanities and technology. Burdick et al. characterize the change as acceleration of a transition in digital scholarship from processing to networking (8). The implications were evident in one of the early homes for Humanities Computing. Nancy Ide describes the period from the 1990s forward as a “golden era” in linguistic corpora. Prior to the Internet, texts for stylistic analysis and authorship studies, and corpora for general language in lexicography, were typically created and processed at single locations. Increased computer speed and capacity facilitated sharing more and larger texts while expanding possibilities for gathering statistics about patterns of language, and new language-processing software stimulated renewed interest in corpus composition in computational linguistics. Parallel corpora containing the same text in two or more languages also appeared, and automatic techniques were developed for annotating language data with information about linguistic properties. Yet, limits persisted. By 2004, few efforts had been made to compile language samples that were balanced in representing different genres and speech dialects (289–90).

    Even with continuing limits, Hockey adds, by the early 1990s new projects in electronic scholarly editions were under way, libraries were putting the content of collections on the Internet, and the Text Encoding Initiative published the first full version of guidelines for representing texts in digital form. Services were being consolidated, and theoretical work in Humanities Computing and new academic programs signaled wider acceptance. And, early multimedia combinations of text with images, audio, and video were appearing as well (“The History,” 10–16). The sea change prompted by the Internet also became the basis for new periodizations of the field. Cathy Davidson calls the time from 1991 to the dot-com bust in fall 2001 “Humanities 1.0.” It was characterized by moving “from the few to the many.” Websites and tools facilitated massive amounts of archiving, data collection, manipulation, and searching. For the most part, though, tools were created by experts or commercial interests. “Humanities 2.0” was characterized by new tools and relationships between producers and consumers of tools, fostering a “many-to-many” model marked by greater interactivity, user participation, and user-generated content. This shift was apparent in the corporate and social networking of Google and MySpace, collaborative knowledge building of Wikipedia, user-generated photo-sharing of Flickr, video-posting of YouTube, and blogs, wikis, and virtual environments. “If Web 1.0 was about democratizing access,” Davidson sums up, “Web 2.0 was about democratizing participation” (“Humanities 2.0,” 205).

    Steven E. Jones highlights a more recent, roughly ten-year timetable that gained momentum between 2004 and 2008. New digital products emerged along with social-network platforms and other developments such as Google Books and Google Maps. The change was not so much a “paradigm” shift as a “fork” in Humanities Computing that established a new “branch” of work and a “new, interdisciplinary kind of platform thinking.” Borrowing from William Gibson, Jones styles the shift an “eversion” of cyberspace, a “turning itself inside out” marked by a diverse set of cultural, intellectual, and technological changes. Eversion parallels Katherine Hayles’s conception of a new phase in cybernetics that moved from “virtuality” to a “mixed reality.” This phenomenon is not isolated to the academy: it is part of a larger cultural shift marked by emergence and convergence. The new DH associated with this shift is evident in digital forensics, critical code and platform studies, game studies, and a new phase of research using linguistic data, large corpora of texts, and visualizations documented in the latter half of this chapter in the disciplines of English, history, and archaeology. A more layered and hybrid experience of digital data and digital media, Jones adds, is occurring across contexts, from archived manuscripts to Arduino circuit boards. Conceptualized in terms of Hayles’s notion of “intermediation” of humans and machines in “recursive feedback and feedforward loops,” this experience is evident in new workflows and collaborative relationships examined more fully in chapter 6 (3–5, 11, 13, 31–32, 83, 91, 173).


     
    Statement 2 signals another benchmark event that appeared three years after the Companion was published, the inaugural issue of Digital Humanities Quarterly (DHQ):

    Statement 2

    Digital humanities is by its nature a hybrid domain, crossing disciplinary boundaries and also traditional barriers between theory and practice, technological implementation and scholarly reflection. But over time this field has developed its own orthodoxies, its internal lines of affiliation and collaboration that have become intellectual paths of least resistance. In a world—perhaps scarcely imagined two decades ago—where digital issues and questions are connected with nearly every area of endeavor, we cannot take for granted a position of centrality.

    —Julia Flanders, Wendell Piez, and Melissa Terras, “Welcome to Digital Humanities Quarterly,” Digital Humanities Quarterly 1, no. 1 (2007): ¶3

    In welcoming readers to the new journal, Flanders, Piez, and Terras resist defining the field as a discipline. They also defer the underlying question, “What is digital humanities?” Orthodoxies, codifications, and dominant practices had already formed, raising the danger of ossifying the history of a young field prematurely. They argue instead for letting definition emerge from practice, allowing submissions to represent contours of the field in Humanities Computing, other varieties of digital work, and initiatives and individuals not necessarily classified as “digital humanities.” DHQ was conceived as an experimental model. Its innovative technical architecture afforded online, open-access publication under a Creative Commons license that allowed copying, distributing, and transmitting work for non-commercial purposes. Copyright remained with authors, enabling further publication or reuse. Giving all articles detailed XML encoding also facilitated marking genres, names, and citations, while other features fostered more nuanced searching, visualization tools, and other modes of exploration and tracking the evolving nature of the field. Moreover, the editors were looking forward to testing whether the nature of argument would change with the capacity for including interactive media, links to data sets, diagrams, and audiovisual materials.
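    The practical payoff of such encoding is easiest to see in miniature. The following sketch pairs a hypothetical, TEI-flavored fragment with a few lines of Python; it is an assumption for illustration rather than DHQ’s actual schema or tooling, and it shows how explicitly tagged names and citations turn searching into a structural query instead of a lossy full-text match.

        # Illustrative only: a hypothetical TEI-flavored fragment, not DHQ's schema.
        import xml.etree.ElementTree as ET

        article = """
        <article>
          <p>As <name type="person">Willard McCarty</name> argues in
          <bibl>Humanities Computing</bibl>, method defines the field.</p>
          <p><name type="person">Susan Hockey</name> periodizes its history in
          <bibl>A Companion to Digital Humanities</bibl>.</p>
        </article>
        """

        root = ET.fromstring(article)

        # Because names and citations are explicit elements, retrieval is a
        # structural query rather than a full-text keyword match.
        people = [n.text for n in root.iter("name") if n.get("type") == "person"]
        citations = [b.text for b in root.iter("bibl")]

        print(people)     # ['Willard McCarty', 'Susan Hockey']
        print(citations)  # ['Humanities Computing', 'A Companion to Digital Humanities']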

    Mindful of the multiple organizations serving related interests by 2007, the editors also hoped DHQ would become a meeting ground and space of mutual encounter. They hoped to bridge historic constituencies of Digital Humanities represented by the sponsoring Alliance of Digital Humanities Organizations (ADHO) and closely related domains that were emerging at that point. The journal’s commitment to breadth has been borne out in the multidisciplinary scope of articles. Topics have spanned game studies and comic books, digital library resources, time-based digital media, digital editing, visual knowledge and graphics, sound, high-performance computing, copyright, endangered texts, and electronic literature, as well as teaching, learning, and curriculum and the reward system of tenure, promotion, and publication. Special clusters and numbers have also focused on project life cycles, data mining, classical studies, digital textual studies, the literary, e-science for arts and humanities, theorizing connectivity, futures of digital studies, and oral histories of early Humanities Computing.


     
    One year after the launch of Digital Humanities Quarterly, in May 2008, another benchmark of the field’s evolution appeared when the National Endowment for the Humanities elevated a program-level initiative to a full-fledged Office of Digital Humanities (ODH). Brett Bobley, director of the office, addressed the question of definition in a presentation to the National Council on the Humanities:

    Statement 3

    We use “digital humanities” as an umbrella term for a number of different activities that surround technology and humanities scholarship. Under the digital humanities rubric, I would include topics like open access to materials, intellectual property rights, tool development, digital libraries, data mining, born-digital preservation, multimedia publication, visualization, GIS, digital reconstruction, study of the impact of technology on numerous fields, technology for teaching and learning, sustainability models, and many others.

    —Brett Bobley, Director, Office of Digital Humanities, National Endowment for the Humanities, “Why the Digital Humanities?,” http://www.neh.gov/files/odh_why_the_digital_humanities.pdf

    The mission of the ODH is to support innovative projects that use new technologies to advance the endowment’s traditional goal of making cultural heritage materials accessible for research, teaching, and public programming. Elevation to a new office was widely considered a sign of maturity, signified as a “tipping” or “turning” point. In her report on DH for 2008, Lisa Spiro calls it a mark of credibility, and, in an article on “The Rise of Digital NEH,” Andy Guess remarks that what began as a “grassroots movement” was now anchored by funding agencies and a network of centers. The impact of technology on humanities, Bobley summed up, is characterized by four major game-changers:

    1. the changing relationship between a scholar and the materials studied;
    2. the introduction of technology-based tools and methodologies;
    3. the changing relationship among scholars, libraries, and publishers;
    4. the rise of collaborative, interdisciplinary work in the humanities.


     
    The ODH expanded the endowment’s support for digital work significantly. It provides funding for institutes on advanced topics and DH centers. Its Implementation Grants program supports a wide range of activities including the development of computationally based methods, techniques, or tools; completion and sustainability of existing resources often in alliance with libraries and archives; studies of philosophical or practical implications of emerging technologies in both disciplinary and interdisciplinary contexts; and digital modes of scholarly communication that facilitate peer review, collaboration, or dissemination of scholarship. The ODH also partners with other funders, branches of government, organizations, and programs abroad. And, its Digital Humanities Start-Up Grants program supports smaller-scale prototyping and experimenting. Taking the April 2013 announcement of twenty-three new recipients of Start-Up Grants as a representative set of examples, projects span digital collections of visual, textual, and audio materials from early through modern periods, a mobile museum initiative, games development, and interests intersecting with fields of medieval studies, African American studies, and film studies. Older tools of computational linguistics are also being used in new contexts and novel ones developed for topic modeling, metadata visualization, open-source access, and preservation.

    The Digging into Data Challenge, in particular, has accelerated boundary crossing between humanities and social sciences by providing funding for research using massive databases of materials, including digitized books and newspapers, music, transactional data such as web searches, sensor data, and cell-phone records. The “Big Data” initiative has also heightened the need for collaboration and inter-institutional cooperation in working with large data sets of complex topics over time, such as patterns of creativity, authorship, and culture. And, access to data on a large scale enhances prospects for interdisciplinary research and teaching by facilitating more comprehensive views. Describing the multidisciplinary scope of the project Civil War Washington, Kenneth Price lists history, literary studies, geography, urban studies, and computer-aided mapping. One of the reasons so little research had focused on the city during that period, Price speculates, was that the form of scholarship previously available could not represent adequately the complex interplay of literary, political, military, and social elements (293–94). Research on that scale, however, is expensive, rekindling debate about the relationship of humanities with commercial enterprises that set terms of access to and use of data. It has also stimulated a debate on marginalization of smaller projects in the face of “Big Humanities.”


     
    Taken together, statements 1–3 document significant developments in the institutionalization of a new field—a defining literature, a dedicated journal, and funding support. Statements 4 and 5 benchmark an added development: growing debate over the definition of the field. Read comparatively, they reveal new positionings.

    Statement 4

    Speculative computing arose from a productive tension with work in what has come to be known as digital humanities. That field, constituted by work at the intersection of traditional humanities and computation technology, uses digital tools to extend humanistic inquiry. Computational methods rooted in formal logic tend to be granted more authority in this dialogue than methods grounded in subjective judgment. But speculative computing inverts this power relation, stressing the need for humanities tools in digital environments.

    —Johanna Drucker, SpecLab: Digital Aesthetics and Projects in Speculative Computing (Chicago: U of Chicago P, 2009), xi

    Drucker distinguishes “digital humanities,” characterized by a philosophy of Mathesis, from “speculative computing,” characterized by a philosophy of Aesthesis. Her distinction is based on experiences during the 1990s and early 2000s at the Institute for Advanced Technology in the Humanities, in projects that became the core of the Speculative Computing Laboratory (SpecLab). By privileging principles of objectivity, formal logic, and instrumental applications in Mathesis, Drucker’s formulation of “digital humanities” prioritizes the cultural authority of technical rationality manifested in quantitative method, automated processing, classification, a mechanistic view of analysis, and a dichotomy of subject and object. By privileging subjectivity, aesthetics, interpretation, and emergent phenomena, “speculative computing” prioritizes questions of textuality, rhetorical properties of graphicality in design, visual modes of knowing, and epistemological and ideological critique of how we represent knowledge. Mechanistic claims of truth, purity, and validity are further challenged by a probabilistic view of knowledge and heteroglossic processes, informed by theories of constructivism and post-structuralism, cognitive science, and the fields of culture, media, and visual studies (Drucker, SpecLab, xi–xvi, 5, 19, 22–30; see also Drucker and Nowviskie).

    Drucker’s distinction elevates the aesthetics of computational work at the boundary of humanistic interpretation and computer science. In a comparable move, Burdick et al. bring a humanities conception of design—defined by information design, graphics, typography, formal and rhetorical patterning—to the center of the field framed by traditional humanities concerns—defined by subjectivity, ambiguity, contingency, and observer-dependent variables in knowledge production (vii, 92). Like Drucker, they also reconceptualize design from a linear and predictive process to generativity in an iterative and recursive process. Design, Drucker adds, becomes a “form of mediation,” not just transmission and delivery of facts. Information visualization, she notes elsewhere, becomes genuinely humanistic, incorporating critical thought and the rhetorical force of the visual (“Humanistic Theory,” 86). Not everyone, however, equates “digital humanities” narrowly with Mathesis. Drucker’s positioning of speculative computing as the “other” to DH, Katherine Hayles responded, opens up the field. Yet, her stark contrast flattens its diversity. Many would also argue they are doing speculative computing (How We Think, 26). Moreover, Drucker bypasses the boundary work of Statement 5.


     
    Statement 5 emanates from a group affiliated with UCLA’s Digital Humanities and Media Studies program. The group focused directly on the task of definition in a Mellon-funded seminar at UCLA in 2008–2009, in a Digital Humanities Manifesto 2.0, and in a March 2009 White Paper by Todd Presner and Chris Johanson on “The Promise of Digital Humanities.”

    Statement 5

    Digital Humanities is not a unified field but an array of convergent practices that explore a universe in which: a) print is no longer the exclusive or the normative medium in which knowledge is produced and/or disseminated; instead, print finds itself absorbed into new, multimedia configurations; and b) digital tools, techniques, and media have altered the production and dissemination of knowledge in the arts, human and social sciences.

    —Jeffrey Schnapp and Todd Presner, “Digital Humanities Manifesto 2.0,” http://www.humanitiesblast.com/manifesto/Manifesto_V2.pdf

    The periodization of the Manifesto and the White Paper parallels Davidson’s distinction between Humanities 1.0 and 2.0. A first wave of Digital Humanities in the late 1990s and early 2000s emphasized large-scale digitization projects and technological infrastructure. It replicated the world that print had codified over five centuries and was quantitative in nature, characterized by mobilizing search and retrieval powers of databases, automating corpus linguistics, and stacking HyperCards into critical arrays. In contrast, the second wave has been qualitative, interpretive, experiential, emotive, and generative in nature. It moved beyond the primacy of text to practices and qualities that can inhere in any medium, including time-based art forms such as film, music, and animation; visual traditions such as graphics and design; spatial practices such as architecture and geography; and curatorial practices associated with museums and galleries. The agenda of the field also expanded to include the cultural and social impact of new technologies and born-digital materials such as electronic literature and web-based artifacts. DH became an umbrella term for a multidisciplinary array of practices that extend beyond traditional humanities departments to include architecture, geography, information studies, film and media studies, anthropology, and other social sciences.

    Interdisciplinary is a keyword in the second wave, along with collaborative, socially engaged, global, and open access. Their combination is not a simple sum of the parts. Manifesto 2.0 invokes a “digital revolution,” and the White Paper calls the effect of new media and digital technologies “profoundly transformative.” The authors reject the premise of a unified field in favor of an interplay of tensions and frictions. Schnapp and Presner do not suggest that Digital Humanities replaces or rejects traditional humanities. It is not a new general culture akin to Renaissance humanism either, or a new universal literacy. They see it as a natural outgrowth and expansion in an “emerging transdisciplinary domain” inclusive of both earlier Humanities Computing and new problems, genres, concepts, and capabilities. The vision of a transdisciplinary domain parallels trans-sector Transdisciplinarity. The Manifesto pushes into public spheres of the Web, blogosphere, social networking, and the private sector of game design. At the same time, it parallels the imperative of Critical Interdisciplinarity. If new technologies are dominated and controlled by corporate and entertainment interests, the authors ask, how will our cultural legacy be rendered in new media formats? By whom and for what? Elsewhere, Presner reported being told his HyperCities project using Google Maps and Google Earth puts him “in bed with the devil” (qtd. in Hayles, How We Think, 41).

    The transdisciplinary momentum of statement 5 is further apparent in comparable declarations, notable among them the Affiche du Manifeste des Digital Humanities. Circulated at a THATCamp in Paris in May 2010, the French manifesto embraces the totality of social sciences and humanities. It acknowledges reliance on the disciplines but deems Digital Humanities a “transdiscipline” that embodies all methods, systems, and heuristic perspectives linked to the digital within those fields and communities with interdisciplinary goals. Like its U.S. counterpart, the Manifeste covers a wide scope of practices, including encoding textual sources, lexicometry, geographic information systems and web cartography, data mining, 3-D representation, oral archives, digital arts and hypermedia literatures, as well as digitization of cultural, scientific, and technical heritage. The Affiche also calls for integrating digital culture into the definition of general culture in the 21st century.


     
    Statement 6 sketches the broadest picture of the field in Svensson’s typology of five paradigmatic modes of engagement between humanities and information technology or “the digital.”

    Engaging

    Svensson’s typology builds on Matthew Ratto’s conception of “epistemic commitments.” Differing commitments influence the identification of study objects, methodological procedures, representative practices, and interpretative frameworks.

    Statement 6

    Below, I will examine five major modes of engagement in some more detail: information technology as a tool, as a study object, as an expressive medium, as an experimental laboratory and as an activist venue. The first three modes will receive the most attention. Importantly, these should not be seen as mutually exclusive or overly distinct but rather as co-existing and co-dependent layers, and indeed, the boundaries in-between increasingly seem blurry. This does not mean, however, that it may not be fruitful to analyze and discuss them individually as part of charting the digital humanities.

    —Patrik Svensson, “The Landscape of Digital Humanities,” Digital Humanities Quarterly 4, no. 1 (2010): ¶102 http://digitalhumanities.org/dhq/vol/4/1/000080/000080.html

    In Svensson’s first mode of engagement—as a tool—the field exhibits a strong epistemic investment in tools, methodology, and processes ranging from metadata schemes to project management. There is also a strong focus on text analysis, exemplified by use of text encoding and markup systems in corpus stylistics, digitization, preservation, and curation. This first mode aligns DH with the concept of Methodological Interdisciplinarity. In his book Humanities Computing, McCarty identifies method, not subject, as the defining scholarly platform of the field (5–6). The Wikipedia entry on Digital Humanities retains a strong methodological orientation. Tom Scheinfeldt argues that scholarship at this moment is more about methods than theory (125). And, posters to the “Day of Digital Humanities” online forum on the question “How do you define Humanities Computing/Digital Humanities?” associate the field strongly with “tools” and “application” of technology. McCarty and Harold Short have mapped relations in the “methodological commons” (see fig. 1).

    Figure 1: An intellectual and disciplinary map of Humanities Computing. (From Willard McCarty, Humanities Computing [London and New York: Palgrave Macmillan, 2005].)

    The octagons above the commons in figure 1, McCarty explains in his book, demarcate disciplinary groups of application. The indefinite cloudy shapes below the commons suggest “permeable bodies of knowledge” that are constituted socially, even though lacking departmental or professional aspects. Not all disciplines, however, have the same kind of relationship to the field. McCarty designates history as the primary discipline (especially history of science and technology), along with philosophy and sociology. All the rest are secondary (Humanities Computing, 4, 33, 119, 129). In a speech in March 2013, Raymond Siemens compared versions of the figure. The first version, he recalled, focused on content oriented toward digital modeling (emphasizing digitization). The second version, above, is more inclusive of media types and extra-academic partners while acknowledging process modeling (emphasizing analysis). Looking toward the future, Siemens proposed it is time to focus on problem-based modeling that moves past the rhetoric of revolution to a sustainable action-oriented agenda.

    Not all of the shapes in figure 1, it should be said, are strictly “disciplines,” underscoring the need for the fourth major term in the baseline vocabulary for understanding interdisciplinarity—interprofessionalism. The figure also has a mix of traditional disciplines and interdisciplinary areas, in the latter case including cognitive science, performance studies, cultural studies, and the history and philosophy of science and technology. In addition, the profession of engineering appears. The commons in the middle of the figure is a hub for transcending the limits of specialized domains. In a separate though complementary reflection on the relationship of interdisciplinarity and transdisciplinarity in Digital Humanities, Yu-wei Lin calls models and tools for modeling “carriers of interdisciplinarity.” Their carrying capacity fosters projects that may lead to more radical “transdisciplinary” movement beyond parent disciplines through a shared conceptual framework that integrates concepts, theories, and approaches from different areas of expertise in the creation of something new (296–97).


     
    In Svensson’s second mode of engagement—as a study object—the digital is an object of analysis with a strong focus on digital culture and transformative effects of new technologies of communication. Cyberculture studies and critical digital studies, for example, accentuate critical approaches to new media and their contexts. The scope of forms is wide, encompassing networked innovations such as blogging, podcasting, flashmobs, mashups, and RSS feeds, as well as video-sharing websites such as YouTube, social-networking sites such as MySpace, Wikipedia, and massively multiplayer online role-playing games (MMORPGs). Creating and developing tools, Svensson adds, are not prominent activities in this mode, and use of information technology does not extend typically beyond standard tools and accessible data in online environments. The difference in the first two modes illustrates how definition varies depending on where the weight of priority falls: the algorithm or critical theory. Even the most fundamental terms, such as access, are used differently. From a technical standpoint, access connotes availability, speed, and ease of use. From the standpoint of cultural analysis, it connotes sharing materials and reinvigorating the notion of “public humanities” on digital ground.


     
    In Svensson’s fourth mode of engagement—as experimental laboratory—DH centers and laboratories are sites for exploring ideas, testing tools, and modifying data sets and complex objects. This kind of environment is familiar in science and technology but is relatively new to humanities. Svensson cites the Stanford Humanities Laboratory (SHL) and his own HUMlab at Umeå University. Digital platforms such as Second Life, he adds, may function as virtual spaces for experiments that are difficult to mount in physical spaces. Svensson likens such structures to Adam Turner’s notion of “paradisciplinary” work born of exchanging ideas, sharing knowledge, and pooling resources. Turner compares modes of interaction and creativity in these spaces to the community collaboration at the heart of “hacker/maker culture.” Whether the site is a shed or a garage, “the space breathes life into the community” (qtd. in Svensson, “Landscape”). In their model of a new Artereality, Schnapp and Michael Shanks call the SHL both “a multimodal and fluid network” and “a diverse ecology of activity and interest.” Established in 2001, the Stanford Lab was modeled on the platform of “Big Science.” Activities within this collaborative environment comprise a form of “craftwork” where participants learn by making.

    Comparably, Saklofske, Clements, and Cunningham liken the space of humanities labs to “experimental sandboxes” (325), and Ben Vershbow calls the New York Public Library Lab a kind of “in-house technology startup.” The lab is occupied by “an unlikely crew of artists, hackers and liberal arts refugees” who focus on the library’s public mission and collections. Envisioned as “inherently inter-disciplinary,” their work has empowered curators “to think more like technologists and interaction designers, and vice versa.” Vershbow credits their success to being able “to work agilely and outside the confines of usual institutional structures” (80). Bethany Nowviskie further likens such spaces to skunkworks, a term adopted by small teams of research and development engineers at the Lockheed aircraft corporation in the 1940s. Library-based DH skunkworks function as semi-independent “prototyping and makerspace labs” where librarians take on new roles as “scholar-practitioners.” In the Scholars’ Lab at the University of Virginia Library, collaborative research and development has led not only to works of innovative digital scholarship but also to technical and social frameworks needed for support and sustainability. The lab was a merger of three existing centers. It opened in 2006 in a renovated area of the humanities and social sciences research library that was conducive to open communication and flexible use of space (53, 56, 61).


     
    In the third mode of engagement—as expressive medium—increased digitization has afforded unparalleled access to heterogeneous types of content and media. Much of this content is born digital in multimodal forms that can be manipulated within a single environment, including moving images, text, music, 3-D designs, databases, graphical details, and virtual walk-throughs. Some areas—such as visual, media, and digital studies—have been affected significantly, and, Svensson found, work tends to focus on studying objects rather than producing them. Nevertheless, both the third and fourth modes heighten creativity. For builders of tools, Thomas Crombez posted to the 2010 “Day of Digital Humanities,” DH is a “playground for experimentation.” Innovation has led to technological advancements in the form of new software and more powerful platforms for digital archives. It has also fostered new born-digital objects and aesthetic forms of art and literature. Posting to the 2009 forum, Jolanda-Pieta van Arnhem called DH “about discovery and sharing as much as it is about archival and data visualization.” It advances open communication, collaboration, and expression. At the same time it mirrors her own artistic process by incorporating art, research, and technology.


     
    In Svensson’s fifth mode of engagement—as activist venue—digital technology is mobilized in calls for change. He highlights several examples. Public Secrets, Sharon Daniel’s work on women in prison and the prison system, is a hybrid form of scholarship that is simultaneously artistic installation, cultural critique, and activist intervention. Daniel moves from representation to participation, generating context in a database structure that allows self-representation. She describes her companion piece, Blood Sugar, as “transdisciplinary” in its movement beyond new ways of thinking about traditional rubrics to contesting those rubrics in open forms (cited in Balsamo, 87–88). Kimberly Christen’s Mukurtu: Wampurrarni-kari website on aboriginal artifacts, histories, and images provides aboriginal users with an interface that offers more extensive access than the general public receives. And, another form of activist engagement occurs in conversations about making as a form of thinking about design and use. Preemptive Media is a space for discussing emerging policies and technologies through beta tests, trial runs, and impact assessments. Elizabeth Losh also cites the Electronic Disturbance Theater that adapted principles of the Critical Art Ensemble in virtual sit-ins, the b.a.n.g. lab at the California Institute for Telecommunications and Information Technology, the “Electronic Democracy” network’s research on online practices of political participation, and acts of “political coding” and “performative hacking” by new-media dissidents (168–69, 171).

    Svensson does not include Critical Interdisciplinarity and the “transgressive” and “trans-sector” connotations of Transdisciplinarity in the fifth mode. Yet, they can be viewed as activist modes of scholarship. Questions of social justice and democracy are prominent in cultural studies of digital technologies and new media. And, older topics of subjectivity, identity, community, and representation are being reinvigorated. Digital technologies are also sources of empowerment. Indigenous communities, for example, have used geospatial technologies to protect tribal resources, document sovereignty, manage natural resources, create databases, and build networking forums and guidebooks. Yet, the same technologies are sources of surveillance, stereotyping, and subjugation. Amy Earhart has also interrogated the exclusion of non-canonical texts by women, people of color, and the GLBTQ community. Scrutinizing data from NEH Digital Humanities Start-Up Grants between 2007 and 2010, Earhart found that only 29 of the 141 awards focused on diverse communities and only 16 on preservation or recovery of the texts of diverse communities (314).

    Distinct as they are, modes of engagement are not airtight categories. They may overlap, and even in the same mode differences arise. In an interview with Svensson, Charles Ess cites tension at a conference of the Association of Internet Researchers (AoIR) between German philosophical senses of critical theory and radical critiques from the standpoint of race, gender, and sexuality in the Anglophone tradition. Moreover, although most researchers study the Internet as an artifact rather than engaging in experimentation, in Scandinavia there is a strong tradition of design. Internet research, Ess adds, could also be considered a subset of telecom research, digital studies, or other areas when it takes on their identities. In addition, growing interest in research and instruction in multimedia art, design, and culture has aligned Humanities Computing with visual and performing arts. Svensson’s statistical tracking of the twenty to fifty most frequent words in programs of AoIR conferences from 1999 to 2008 also revealed that the focus in another example of the second mode—Internet studies—was on space, divide, culture, self, politics, and privacy phenomena, as well as cultural artifacts and processes. An activist orientation appeared that is rare in the older discourse of Humanities Computing, where the predominant focus is on databases, models, resources, systems, and editions.

    That said, DH organizations are opening up to new topics. The annual meeting of the flagship Alliance of Digital Humanities Organizations (ADHO) still emphasizes Humanities Computing over new media and cultural interests that find more space in groups such as HASTAC. Yet, a new Global Outlook::Digital Humanities (GO::DH) special interest group has formed to address barriers that hinder communication and collaboration across arts, humanities, and the cultural heritage sector as well as income levels. Scott Weingart’s analysis of acceptances to the 2013 ADHO conference reveals that literary studies and data/text mining submissions outnumbered historical studies. Archive work and visualizations also appeared more often than multimedia. Even so, multimedia beyond text, though small, was not an insignificant subgroup. Gender studies also had a high acceptance rate of 85 percent, and the program included a panel on the future of undergraduate Digital Humanities. Traditional topics of text editing, digitization, computational stylistics, and curation are still invited for the Australasian Association’s hosting of the 2015 conference, but so are arts and performance, new media and Internet studies, code studies, gaming, curriculum and pedagogy, and critical perspectives.

    Locating

    The history of Digital Humanities is painted both in broad strokes, revealing shared needs and interests, and in thin strokes, revealing distinct subhistories. Like linguists, classicists have invested in making digital lexica and encyclopedias, and they have benefited from advances in graphic capacity and language technologies that facilitate machine translation, cross-lingual information retrieval, and syntactic databases. Like literary scholars, linguists have also created electronic text editions enhanced by the ability to annotate interpretations and hyperlink resources. And, involved as they are in data-intensive work, classicists, archaeologists, and historians have all gained from increased capacity for record keeping and statistical processing. The introduction of Digital Humanities interests often generates a claim of interdisciplinary identity in a discipline. Yet, identities differ. If there is a tight relationship between a discipline and a digitally inflected study object, Patrik Svensson found in mapping modes of engagement, the work may lack strong identity as “digital humanities.” A media studies scholar interested in news narratives in online media, for example, may consider this work to be anchored within media studies rather than a separate field. In contrast, if digitally mediated language or communicative patterns in Second Life are incorporated as objects of study, a discipline may change to include digital objects and develop intersections with other disciplines and fields. The changing nature of work practices and perceptions of the role of the digital are evident in the examples of English, history, and archaeology.

    Digital Humanities and English have a long-standing relationship, which Pressman and Swanstrom attribute to the fact that many groundbreaking projects centered on literary subjects. In an oft-cited essay, Matthew Kirschenbaum identifies six reasons why English departments have been favorable homes (“What is Digital Humanities,” 8–9). The first reason is not surprising: “First, after numeric input, text has been by far the most tractable data type for computers to manipulate.” In contrast, images present more complex challenges of representation. The second reason marks the multidisciplinary scope of English. Subfields of literary and cultural studies, rhetoric and composition, and linguistics have attained separate disciplinary status, but they are still typically housed within the same department. Over time, Pressman and Swanstrom add, conception of the “literary” has expanded beyond traditional texts. In welcoming readers to an online “disanthology” of articles on literary studies in the digital age, the editors called literary studies a “confluence of fields and subfields, tools and techniques.” Given that computational approaches come from varied sources, a growing array of methodologies is engaged, and the practices of digital scholarship lead into other fields in the humanities as well as computer science and library and information science.

    In elaborating the second reason, Kirschenbaum highlights in particular the long-standing relationship of computers and composition. Teachers of writing and rhetoric, Jay David Bolter recalls, were among the earliest to welcome new technologies into the classroom, initially word processors and then chat rooms, MOOs, wikis, and blogs. These constituted new spaces for pedagogy, and research on computers and composition expanded eventually from text-based literacy and writing to include new digital media, video games, and social networking (“Critical Theory”). By 2011, the relationship to Digital Humanities was the focus of a featured panel at the annual Computers and Writing conference. Panelist Douglas Eyman called himself a “self-confessed digital humanist,” but admitted he is still puzzling over the question of fit for himself and the field of digital rhetoric. On the TechRhet Digest listserv that prompted the session, Dean Rehberger cautioned against equating DH with one area such as composition and writing, or one area subsuming the other. “The trick,” he advised, “will be to untangle the points of intersection and interaction.”

    Throughout its history, composition studies has intersected with multiple disciplines and fields, including literary studies and rhetoric, literacy studies, technology studies, and new media studies. One of those intersections, with rhetoric, is also linked with the field of communication studies. Computer-mediated communication was an early site of studies of behavior in online communities, work that continues in both communications and English departments. In a report on the emergence of “digital rhetoric,” Laura Gurak and Smiljana Antonijevic call for a new “interdisciplinary rhetoric” capable of understanding the persuasive functions of digital communications that encompass text, sound, visual, nonverbal cues, material, and virtual spaces. Digital rhetoric, they argue, must assert a new canon that draws on prior constructs while recognizing changes in the 2,000-year-old tradition that constitutes the field of Western rhetoric. “Screen rhetorics,” Gurak and Antonijevic add, are not a sidebar to studies of public discourse and public address. They are at the center of what theorists and critics should be studying, and of interest to linguists, psychologists, and others exploring human communication.

    The third reason recognizes the link between English departments and converging conversations around editorial theory and method in the 1980s, amplified by subsequent advances in implementing electronic archives and editions. These discussions cannot be fully understood, Kirschenbaum notes, without considering parallel conversations about the fourth reason—hypertext and other forms of electronic literature. By the 1990s, Bolter recalls, some critics were positioning digital media as an electronic realization of poststructuralist theory. George Landow argued that hypertext had a lot in common with contemporary literary and semiological theories, although it was aligned initially with formalist theory and print continued to dominate (“Theory and Practice,” 19–20, 26). The “revolution” envisioned by early theorists of hypertext and electronic modes of authorship beckoned a radical restructuring of textuality, authorship, and readership while fostering analysis of digital material culture. It took time, though, for more transformative practices of hypermediation and multi-modal remixing to become the object of study.

    The fifth reason stems from openness to cultural studies. English departments were early homes for related interests, fostering interactions with other interdisciplinary fields such as popular culture studies, identity fields, and postcolonial studies. The scope of study also expanded with new objects. Once confined to print, the underlying notion of a “text” expanded to include verbal, visual, oral, and other forms of expression. Indicative of this trend, the Texas Institute for Literary and Textual Studies (TILTS), affiliated with the University of Texas English Department, focused on a broadening conception of the “literary” and the “textual.” The TILTS 2011 series on “The Digital and the Human(ities)” encompassed traditional works, non-textual forms, and popular genres. Symposium 1—Access, Authority, and Identity—considered older topics of scholarly editing plus social networking, corporatization and Google, and the fracturing of knowledge and undermining of traditional canons. Symposium 2—Digital Humanities, Teaching and Learning—looked at pedagogical innovations and digitally mediated learning, new subjects of games and code, student subjectivities, born-digital materials, and multi-media composition. Symposium 3—The Digital and the Human(ities)—included automation, digital vernacular, the changing nature of argument, justice, and rights of students and of citizens. Kirschenbaum’s sixth and final reason also recognizes the rise of e-reading and e-book devices, as well as large-scale text digitization projects such as Google Books, data mining, and visualization in distant readings.

    The discipline of history also has a long-standing involvement with Digital Humanities. In his report in the Blackwell Companion, William G. Thomas identified three phases in historians’ use of computing technologies. During the first phase in the 1940s, some historians used mathematical techniques and built large data sets. During the second phase beginning in the early 1960s, the emerging field of social science history opened up new social, economic, and political histories that drew on massive amounts of data, enabling historians to tell the story “from the bottom up” rather than from the elite perspectives that dominated traditional accounts. The third and current phase is marked by greater capacity for communication via the Internet, in a network of systems and data combined with advances in the personal computer and software. Historical geographical information systems (GIS) also hold promise for enhancing computer-aided spatial analysis in not only history and demography but archaeology, geography, law, and environmental science as well. The number and size of born-digital data collections have increased as well, along with tools that enable independent exploration and interpretive association.

    Change, however, stirred debate. During the second phase, cliometrics was a flashpoint, with particular criticism aimed at Robert Fogel and Stanley Engerman’s 1974 book Time on the Cross: The Economics of American Negro Slavery. Critics questioned lack of attention to traditional methods, including narrative, textual, and qualitative analysis as well as interdisciplinary study of social and political forces. Another initiative launched in the 1970s, the Philadelphia Social History Project, assembled a multidisciplinary array of data while aiming to create guidelines for large-scale relational databases. It was criticized, though, for falling short of a larger synthesis for urban history. Other projects aggregated multidisciplinary materials. Who Built America?, for example, compiled film, text, audio, images, and maps in social history. Yet, early products were limited to self-contained CD-ROM, VHS-DVD, and print technology lacking Internet connectivity. As new technology became available, the idea of “hypertext history” arose in projects such as The Valley of the Shadow, which brought together Civil War letters, records, and other materials. Thomas speculates that the term digital history originated at the Virginia Center for Digital History, which he directed in the 1997–98 academic year. He and Edward Ayers used the term to describe the project, and in 1997 they taught “Digital History of the Civil War” and began calling such courses “digital history seminars.” Subsequently, Steven Mintz started a digital textbook site named Digital History (Thomas, 57–58, 61–63).

    Advances heralded new ways of studying and writing history. However, they also raised new questions about the nature of interpretation. In a 2008 online forum on “The Promise of Digital History,” William Thomas cautions that the fluidity or impermanence of the digital medium means scholars may never stop editing, changing, and refining as new evidence and technologies arise. Where, then, do interpretation and salience go in online projects that are continually in motion? And, what impact do technologies have on understanding history as a mode of investigation, meaning and content, and creating knowledge? Douglas Seefeldt joined Thomas in cautioning that expanded access does not answer the question of what history looks like in a digital medium. Production, access, and communication are valuable. Yet, on another level Digital History is a methodological approach framed by the hypertextual power of technologies to make, define, query, and annotate associations in the record of the past and to gain leverage on a problem. The scale and complexity of born-digital sources require more interdisciplinary collaboration and cooperative initiatives, as well as tailored digital resources and exposure for graduate students. Well-defined exemplars, guidelines for best practices, and standards of peer review are also needed. And, the focus must shift from solely product-oriented exhibits or websites toward the process-oriented work of employing new media tools in research and analysis.

    Parallel advances are also evident in the third discipline. In his report on “Computing for Archaeologists” in the Blackwell Companion, Harrison Eiteljorg II traces the history of computing and archaeology to record keeping and statistical processing in the late 1950s. Early limits of cost and access, however, impeded progress. Punch cards and tape were the only means of entering data, and results were only available on paper. Archaeologists also had to learn computer languages. By the mid-1970s, database software was making record keeping more efficient, expanding the amount of material collected and the ease of retrieving information without needing to learn programming languages. By the 1980s, microcomputers and new easy-to-use software were available, and geographical information systems (GIS) and computer-aided design (CAD) programs were enhancing map-making and capturing the three-dimensionality of archaeological sites and structures. Virtual reality systems based on CAD models also promised greater realism, but accurate representations were still limited by inadequate data. Like other disciplines, archaeology also needed more discipline-specific software and standards for use. Furthermore, the increasing abundance of information and the preservation of data collections require careful management; doubts about the acceptability of digital scholarship persist; and not enough scholars are trained in using computers for archaeological purposes. Even with notable advances, Eiteljorg concludes, the transformation from paper-based to digital recording remains incomplete.

    In a blog posting on “Defining Digital Archaeology,” Katy Meyers situates “digital archaeology” historically within the recent rise of “Digital Disciplines.” Yet, she reports, archaeologists have not engaged with the most active of them—the interdisciplinary group of Digital Humanities—or with the ways technology is changing their work. Digital technologies are widely used and integrated into the discipline, to the point that GIS, statistical programs, databases, and CAD are now considered part of the archaeologist’s toolkit. Yet there is no disciplinary equivalent of “digital humanities” that accounts comprehensively for an archaeology of digital materials, including excavation of code, analysis of early informatics, and interpretation of early web-based materials. Nor is there a conception of digital archaeology as an approach to studying past human societies through their material remains, rather than as a support tool or method. Meyers also echoes long-standing concerns about the gap between generic approaches and discipline-specific needs, in this case the limits of the Dublin Core standard for metadata. Rather than a separate discipline and approach, the digital may constitute a different specialization, akin to a focus on ceramics, lithic analysis, or systems theory.

    A recently published open-access book, Archaeology 2.0, provides an overview of new approaches taking hold in the discipline. It does not explore digital initiatives outside of North America and the United Kingdom, but it does cover a broad range of topics that cut across disciplinary and geographic boundaries. Archaeology, Eric C. Kansa notes in the introduction, has long been considered “an inherently multidisciplinary enterprise, with one foot in the humanities and interpretive social sciences and another in the natural sciences.” Technological capacity has increased because of more powerful tools for data management, platforms for making cultural artifacts more accessible, and interfaces for making communication more open and collaboration feasible. Yet these advances have compounded the challenges of archiving, preserving, and sustaining data, while creating information overload. Even with increased use of themed research blogs and field-based communication devices, the peer-reviewed scholarly journal remains dominant. And, archaeology faces unique challenges in designing computational infrastructure. It deals in longer horizons of “deep time” and in complex multidisciplinary projects whose data sets, generated by different specialists, describe intricate contextual relations. In addition, it has links to tourism and the marketing of cultural heritage, which involve commercially controlled mechanisms of communication and information sharing in both professional and public spheres.


    Looking back on the trajectory of change in these disciplines, three trendlines stand out: visualization, spatialization, and a computational turn in scholarship. Visualization is not new. Conversations about visuality occur across disciplines and fields. The label visual culture, Nicholas Mirzoeff recounts, gained currency because the contemporary era is saturated with images, from art and multimodal genres to computer-aided design and magnetic resonance imaging (1–3). The most striking development for Digital Humanities has been enhanced capacity to visualize information, fostering a “spatial” and “geographical” turn in the field facilitated by technologies such as Google Earth, MapQuest, the Global Positioning System (GPS), and three-dimensional modeling. Patricia Cohen, who covers “Humanities 2.0” for the New York Times, calls this development the foundation of a new field of Spatial Humanities. Advanced mapping tools, she recalls, were first used in the 1960s, primarily for environmental analysis and urban planning. During the late 1980s and 1990s, historical geographical information systems made it possible to plot changes in a location over time using census information and other quantifiable data. By the mid-2000s, technological advances were making it possible to move beyond restricted map formats and to add photos and texts.

    The interdisciplinary character of the spatial turn is evident in three other ways. Visualization in the humanities, Burdick et al. report, is based in large part on techniques borrowed from the social sciences, business applications, and natural sciences (42). The multidisciplinary scope of materials also renders patterns more visible. A project to create a digital atlas of religion in North America, for example, revealed complex changing patterns of political preference, religious affiliation, migration, and cultural influence by linking them geographically. David Bodenhamer, of the Polis Center, calls the results of capturing multiple perspectives “deep maps” (qtd. in Patricia Cohen). Another project, the Mapping Texts partnership of Stanford and the University of North Texas, allows users to map and analyze language patterns embedded in 230,000 pages of digitized historical Texas newspapers spanning the late 1820s through the early 2000s. Through two interactive visualizations, users can explore, for any period, geography, or newspaper title, the most common words, named entities such as people and places, and correlated words that produce topic models.
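
    The word counts, named entities, and topic models that Mapping Texts surfaces rest on standard text-mining techniques. The following is a minimal sketch of topic modeling in Python, assuming scikit-learn is available; the toy “newspaper” sentences and the two-topic setting are illustrative inventions, not the project’s actual data or pipeline.

        # Minimal topic-modeling sketch (illustrative; not the Mapping Texts pipeline).
        # Assumes scikit-learn; the toy "newspaper" texts are invented.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "cotton prices rose at the galveston wharf as shipping resumed",
            "the legislature debated railroad taxes and county elections",
            "heavy rains flooded the brazos river damaging cotton crops",
            "voters elected a new sheriff after a contested county race",
        ]

        # Build a document-term matrix of word counts.
        vectorizer = CountVectorizer(stop_words="english")
        dtm = vectorizer.fit_transform(docs)

        # Fit a small LDA model; two topics is an arbitrary choice for a toy corpus.
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        lda.fit(dtm)

        # Print the most heavily weighted words in each topic.
        terms = vectorizer.get_feature_names_out()
        for i, weights in enumerate(lda.components_):
            top = [terms[j] for j in weights.argsort()[::-1][:5]]
            print(f"topic {i}: {', '.join(top)}")

    On a corpus this small the “topics” are unstable; the value of the technique appears only at the scale of the digitized newspapers the project actually analyzes.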

    Yet, Drucker admonishes, traditional humanistic skills of cultural and historical interpretation are still needed. Mapping the Republic of Letters is a Stanford-based project that plotted geographic data for senders and receivers of correspondence, making it possible to see patterns of intellectual exchange in the early-modern world. Lines of light expose connections between points of origin and delivery in the 18th century. Drucker cautions that discrepancies of time and flow are disguised by the appearance of a “smooth, seamless, and unitary motion” (“Humanistic Theory,” 91). Nonetheless, the project renders networks visible for interpretation. Another Stanford-based initiative, the Spatial History Project, provides a community for creative visual analysis in the organizational culture of a lab environment and a wide network of partnerships and collaborations. Geospatial databases facilitate integration of spatial and nonspatial data; visual analysis then renders patterns and anomalies visible. These examples underscore the blurred boundaries between data and argument. In the HASTAC Scholars online forum on Visualization Across Disciplines, Dana Solomon calls the practice of information visualization a form of textual analysis with the potential for historicizing and theorizing a technical process. It can also be located within a broader constellation of aesthetic practice and visual representation; in the traditions of statistics, computer science, and graphic design; and in the cultural heritage industry through use of virtual reality and augmented reality in the restoration of sites.
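
    To make Drucker’s caution concrete, the hypothetical sketch below (assuming matplotlib; the cities, coordinates, and letter counts are invented) draws Republic of Letters-style lines between senders and receivers. A single stroke necessarily flattens route, duration, and delay, producing exactly the “smooth, seamless” impression she critiques.

        # Sketch of a Republic of Letters-style flow map (toy data, not the project's).
        import matplotlib.pyplot as plt

        # (sender, receiver, sender lon/lat, receiver lon/lat, number of letters)
        letters = [
            ("Paris",  "London",    (2.35, 48.85),  (-0.13, 51.51), 120),
            ("Paris",  "Geneva",    (2.35, 48.85),  (6.14, 46.20),   80),
            ("London", "Edinburgh", (-0.13, 51.51), (-3.19, 55.95),  45),
        ]

        fig, ax = plt.subplots()
        for sender, receiver, (x1, y1), (x2, y2), n in letters:
            # Line width scales with volume; the straight line hides time and route.
            ax.plot([x1, x2], [y1, y2], linewidth=n / 40, alpha=0.6)
            ax.annotate(sender, (x1, y1))
            ax.annotate(receiver, (x2, y2))
        ax.set_xlabel("longitude")
        ax.set_ylabel("latitude")
        ax.set_title("Correspondence flows (invented data)")
        plt.show()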

    The third trendline is signified by the label computational turn. David Berry calls it a third wave, extending beyond Schnapp and Presner’s first and second waves. The computational turn moves from older notions of information literacy and digital literacy to the literature of the digital and the shared digital culture facilitated by code and software. This development is evident in real-time streams of data, geolocation, real-time databases, Twitter, social media, cell-phone novels, and other processual and rapidly changing digital forms such as the Internet itself. Focusing on the digital component of DH, Berry adds, accentuates not only medium specificity but also the ways that medial changes produce epistemic ones. At the same time, it problematizes underlying premises of “normal” print-based research while refiguring the field as “computational humanities” (4, 15). The translation of all media today into numerical data, Lev Manovich also emphasizes, means that not only texts, graphics, and moving images have become computable but also sounds, shapes, and spaces (5–6).
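
    Manovich’s point can be demonstrated in a few lines. The sketch below, assuming only NumPy and using deliberately trivial examples, shows text, sound, and image each reduced to arrays of numbers.

        # All media as numerical data: three trivial demonstrations.
        import numpy as np

        # Text becomes a sequence of integer codes.
        text = "humanities".encode("utf-8")
        print(list(text)[:5])                       # [104, 117, 109, 97, 110]

        # Sound becomes sampled amplitudes: a 440 Hz tone at 8 sample points.
        t = np.linspace(0, 1, 8)
        tone = np.sin(2 * np.pi * 440 * t)
        print(np.round(tone, 2))

        # An image becomes a grid of pixel values; here a 2x2 RGB image.
        image = np.zeros((2, 2, 3), dtype=np.uint8)
        image[0, 0] = (255, 0, 0)                   # one red pixel
        print(image.shape)                          # (2, 2, 3)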

    The names culturomics and cultural analytics accentuate the algorithm-driven analysis of massive amounts of cultural data occurring in the computational turn. In the process, Burdick et al. also note, the canon of objects and cultural material broadens and new models of knowledge beyond print emerge (41, 125). The capacity to analyze “Big Data” makes it possible to construct a picture of voices and works hitherto silent or glimpsed only at a microscale and in isolated segments. The project People of the Founding Era, for instance, provides biographical information about leaders along with facts about lesser-known people, making it possible to trace how they changed over time and eventually to visualize social networks of personal and institutional relationships. It combines a biographical glossary with group study of nearly 60,000 native-born and naturalized Americans born between 1713 and 1815, as well as their children and grandchildren.
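
    Culturomics-style analysis typically tracks the relative frequency of a word or phrase across a time-sliced corpus. The sketch below, in which an invented three-snippet “corpus” stands in for millions of digitized pages, illustrates the basic computation.

        # Culturomics-style sketch: a word's relative frequency over time.
        # The year -> text mapping is invented; real analyses run over corpora
        # such as digitized books or newspapers.
        from collections import Counter

        corpus = {
            1800: "the canal trade grew while the turnpike fell into disuse",
            1850: "the railroad and the telegraph spread across the states",
            1900: "the railroad carried freight while the automobile appeared",
        }

        def relative_frequency(word: str, text: str) -> float:
            tokens = text.lower().split()
            return Counter(tokens)[word] / len(tokens)

        for year, text in corpus.items():
            print(year, f"{relative_frequency('railroad', text):.3f}")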

    Like the visual and spatial turns in scholarship, the computational turn in Digital Humanities is indicative of a larger cultural shift. In defining “Digital Humanities 2.0,” Todd Presner treats computer code as an index of culture more generally, and the medial changes it affords foster a hermeneutics of code and critical approaches to software (“Hypercities”). At the same time, the computational turn has generated new overlapping subfields of code studies, software studies, and platform studies. At the Swansea University workshop on the computational turn, Manovich dated the beginning of the movement to 2008. The use of quantitative analysis and interactive visualization to identify patterns in large cultural data sets enables researchers to grapple with the complexity of cultural processes and artifacts. New techniques, though, must be developed to describe dimensions of artifacts and processes that received scant attention in the past, such as gradual historical changes over long periods. Visualization techniques and interfaces, Manovich added, are also needed for exploring cultural data across multiple scales, ranging from details of a single artifact or process, such as one shot in a film, to massive cultural data sets/flows, such as films made in the 20th century.

    Heightened attention to the operations of code and software has also fostered Critical Interdisciplinarity in the overlapping fields of race and gender studies. Amy Earhart has questioned the ways technological standards such as the Text Encoding Initiative’s tag selection construct race in textual materials (“Can Information,” 314, 316). Jacqueline Wernimont critiqued the politics of tools and coding practices from a feminist perspective, and Tara McPherson examined the ways early design systems such as the UNIX operating system prioritized modularity and isolated enclaves over intersections, context, relation, and networks. Responding in her blog to the charge of not being inclusive, Melissa Terras addressed the way guidelines in the Text Encoding Initiative encoded sex in a document, assigning 1 for male and a secondary 2 for female. As program chair for a Digital Humanities conference, Terras also aimed to widen protocols beyond consideration of disciplines, interests, and geography to include gender equality as well as economic, ethnic, cultural, and linguistic diversity.
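
    The numeric scheme Terras discusses can be made concrete with a small sketch. The element names below follow TEI conventions, but the records are invented and the snippet paraphrases rather than reproduces the Guidelines; the point is that the meaning of 1 and 2 resides in the encoding standard, not in the document itself.

        # Sketch of numeric sex coding in TEI-style person records (invented data).
        import xml.etree.ElementTree as ET

        SEX_CODES = {"1": "male", "2": "female"}   # the contested numeric scheme

        people = ET.Element("listPerson")
        for name, code in [("Ada", "2"), ("Charles", "1")]:
            person = ET.SubElement(people, "person", sex=code)
            ET.SubElement(person, "persName").text = name

        # Reading the markup back: interpreting "1" and "2" requires the external
        # guideline, which is exactly where critiques of such standards are aimed.
        for person in people.iter("person"):
            code = person.get("sex")
            print(person.findtext("persName"), "->", SEX_CODES.get(code, "unspecified"))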

    The differing modes of engagement and practices reviewed in this chapter affirm Svensson’s conclusion: “The territory of the digital humanities is currently under negotiation.” It has evolved historically as the body of content expanded, new claims arose, and alternative constructions were asserted. And, as we are about to see, constructions of the field also took root in differing institutional cultures.

    Clustered Links for Chapter 2 in Order of Appearance