
The American Literature Scholar in the Digital Age
Part 3: Theoretical Challenges in Digital Americanist Scholarship
Digital Humanities and the Study of Race and Ethnicity
The digital revolution has promised and delivered much to students of race and ethnicity. Manuscripts, photographs, diaries, court petitions, pamphlets, short stories, sermons, poems, audio recordings, video, and more, all related to race and ethnicity in America, can be found in just an hour of trawling on the Internet. There are comprehensive projects and small, well-formed sites; there are sites with frustratingly incomplete bibliographical information that have gems for the scholar willing to search; and there are sites that do not conform to best practices in digital editing but that teachers love because the wealth of materials and friendly interface draw in high school and college students. In short, there are exciting materials now available to anyone with Internet access, but scholars of race and ethnicity do not yet get online and find themselves in a deep, comprehensive, well-linked and indexed world of materials.
In a comprehensive survey, Scholarship in the Digital Age: Information, Infrastructure, and the Internet, Christine L. Borgman acknowledges the sense of possibility that attended the dawning of the digital age and the work yet to be done. In the early days of the Internet, we anticipated a deluge of primary sources freely available on the Web, materials previously accessible only to well-funded scholars who knew how to comb through special archives. We also anticipated that once peer-review processes were established, there would be a steady flow of monographs, essays, and scholarship in forms we could not yet imagine. But there has not been a flood. Before the Internet, we thought that it was the cost of publishing (paper, printing, shipping, advertising, and overhead) that was holding us back, but, as it turns out, it is us and the size of the task—the fact that our work takes time, money, training, and knowledge. As Borgman puts it, “Scholarly information is expensive to produce, requiring investments in expertise, instrumentation, fieldwork, laboratories, libraries, and archives.” There are other costs as well, including investments in creating an infrastructure that ensures that information will be “permanently accessible.” But, as Borgman rightly insists, the “real value in information infrastructure is in the information,” and “building the content layer of that infrastructure is both the greatest challenge and the greatest payoff.”[1]
Among humanities scholars who seek to understand race and ethnicity in America, building a deep “content layer” has long been recognized as a primary task, even before the birth of the Internet. Scholars have worked hard, often without institutional support, to find, preserve, edit, and republish neglected and forgotten texts. With the social movements of the 1960s, interest in noncanonical authors grew, and the work of text recovery began to garner financial support and institutional recognition. University presses and small independent presses found that texts by writers of color sold well, and in 1973, the Society for the Study of Multi-Ethnic Literature of the United States (MELUS) was founded at the annual MLA convention. Their mission was simple: “Locate the ‘lost’ texts. Publish the important ones, with English translations, if needed, by our own MELUS press.”[2] What followed over the next 20 years was profound. Anthologies such as Bernd Peyer’s The Elders Wrote: An Anthology of Early Prose by North American Indians (1982) introduced writers many students had never read, and critical studies such as William Andrews’s To Tell a Free Story: The First Century of Afro-American Autobiography, 1760–1865 (1986) provided careful analysis of texts previously ignored. In 1981, Jean Fagan Yellin verified Harriet Jacobs’s authorship and published Incidents in the Life of a Slave Girl, a text now widely taught; in 1986, Dexter Fisher published an edition of Zitkala-Sa’s American Indian Stories; and in 1990, Vintage brought out, in one volume, William Wells Brown’s Clotel, Frances Harper’s Iola Leroy, and Charles Chesnutt’s The Marrow of Tradition. Contemporary writers of color also garnered increased attention, as Mary Jo Bona and Irma Maini have noted.[3] Between 1982 and 1988, Pulitzers were awarded to Alice Walker’s The Color Purple, August Wilson’s Fences, Rita Dove’s Thomas and Beulah, and Toni Morrison’s Beloved. Other awards went to Louise Erdrich’s Love Medicine, Bharati Mukherjee’s The Middleman, and Other Stories, Amy Tan’s The Joy Luck Club, and David Hwang’s play M. Butterfly. The teaching canon took on a new shape in 1989 with the publication of The Heath Anthology of American Literature, and within a few years, the Norton Anthology offered a more diverse collection of writers. It may be easy, now, to forget what was at stake in the canon debates, but as Paul Lauter notes, canon debates are, in the end, about “who has power in determining priorities in American colleges” and “whose experiences and ideas become central to academic study.”[4]
The Internet revolution, coming on the heels of the canon expansion, has the potential to help democratize the canon by leveling the publishing playing field, increasing access to texts, and perhaps challenging the very notion of center and margin. Patricia Keefe Durso suggests that the Web is particularly hospitable to the outsider paradigm of multiethnic literature and to features—such as fragmentation, multilinearity, and intertextuality—that Gloria Anzaldúa, Gerald Vizenor, Ramón Saldívar, and others identify as central to ethnic literatures. Durso also hypothesizes that the Internet’s “nonhierarchical structure encourages and facilitates interaction with a text’s history and politics.”[5] Such interactions may be particularly important for texts whose social, political, and cultural contexts are not well known. Stephen Pulsford describes the new digital era as “post-Norton” and insists that the Internet “challenges the authority of the anthology” by replacing the canon with a town hall cacophony in which there is no privileged voice.[6] In short, the Internet has the potential to make a powerful contribution to the projects of recovery, canon expansion, and increased and enriched engagement with voices on the margins.
In 2005, when Durso sought to quantify the Internet’s role in undoing the canon, she found that a Google search produced 32,000 hits for Zora Neale Hurston and 161,000 for Henry James, five times as many for James as for Hurston. In early 2009, a Google search yielded 653,000 hits for Hurston and 4 million for James, six times as many for James as for Hurston. But the loss in parity is less significant than the twentyfold increase in hits for Hurston, and the fact that the 2009 search results include the Library of Congress’s digital collection of 10 mostly unpublished and unproduced plays by Hurston and 19 sound recordings of Hurston singing Florida folk songs, as well as sites with electronic versions of Hurston’s works and commentary by fans and scholars. Thus, although the numbers of hits for William Wells Brown (96,400), Harriet Jacobs (125,000), Zitkala-Sa (39,600), and Samson Occom (23,100) do not compare to those for Walt Whitman (7.4 million), Nathaniel Hawthorne (1.9 million), or Emily Dickinson (612,000), the fact that information about and texts by these writers are available to anyone with Internet access is worth celebrating.
In addition to contributing to the expansion and perhaps dismantling of the canon, the Internet also makes an important, though sometimes less visible, contribution to our understanding of ethnicity, because every Web site plays a part, explicitly or implicitly, in shaping how we preserve and transmit the nation’s and the continent’s cultural heritage in the digital age. All scholars working in the humanities have to decide what is collected, how it is preserved, what labels it receives, what commentary to offer, what texts and contexts are worthy of study and juxtaposition, what interface or apparatus is appropriate, and a host of other questions. Although scholars working in print have long grappled with these questions, digital scholars confront complex and distinctly unfamiliar technological questions, they engage audiences with more varied expectations, and they seek to disseminate their work via institutions and economic contexts that are rapidly changing. In short, the Internet offers the possibility of a radical break with the past, a chance to preserve and represent the cultural record in new ways, and an opportunity to think differently about race and ethnicity. The survey that follows identifies a handful of the many projects that are making good on this promise of recovery, increased access, innovative scholarship, and new frameworks for race and ethnicity studies. It also describes the fragile funding and institutional contexts that support much of this work.
North American Slave Narratives is an excellent example of what is possible in the work of recovering and increasing access to little-known texts with substantial institutional investment of time, money, and personnel over many years.[7] It is a well-organized, well-designed scholarly site that welcomes all users, without charge. This collection is part of a larger digital publishing initiative, Documenting the American South, that offers 11 thematic collections and draws on the collections of the University of North Carolina and other academic libraries.[8] Edited by William Andrews, a pioneer in slave narrative studies, and supported by a project director, a project manager, a cataloger, a preservationist, nine contributing library staff, 15 contributing graduate students and librarians, and an editorial board of 25 scholars that oversees the entire Documenting the American South project, the North American Slave Narratives collection earned an early NEH digitization grant of $111,000. Not surprisingly, given the resources (people, expertise, and money) invested, the collection is excellent: it offers full-text searchable texts of “all known extant narratives written by fugitive and former slaves” (except a few of the earliest that cannot be found and the few that have only recently been published and thus are still under copyright). The collection includes materials from more than 70 repositories, and for each text, the site provides an HTML file, an XML-TEI source file, and an image of the title page and of all original illustrations. Some narratives are also accompanied by a summary and useful contextual and historical information.
Equally impressive is The Church in the Southern Black Community, a collection supported by a 1998 Library of Congress/Ameritech National Digital Library Competition grant for $74,500 and the expertise of 12 scholars and librarians.[9] The site offers about 100 works, “including autobiographies, sermons, church reports, religious periodicals, and denominational histories” relating to the church experience of Southern African Americans. The collection is supplemented by a carefully crafted index that identifies “descriptions written by slaves of religion and religious practice during the period of slavery” that are embedded within the wide range of texts in the collection. Both the slave narrative and the religion collection also include image indexes that direct the visitor to images of nineteenth-century African American writers and religious leaders. Given the paucity of images of nonwhite peoples in many versions of U.S. history and the objectification of the black body in U.S. culture, these images go a long way toward diversifying the visual record and putting faces to voices and experiences. More generally, the contributions made by these two digital collections are noteworthy: of the more than 500 texts available, fewer than half would typically be available in print at a major research university library, and as few as 10 or 20 are available to the general reading public via bookstores and public libraries. In 2002, upon the occasion of the thousandth text being added to Documenting the American South, Librarian Joe Hewitt noted that 60 percent of more than 1,500 comments over two and a half years came from the general public.[10]
Notably, both of these collections within Documenting the American South were completed eight years ago. Because they are well-built databases, more materials can be added, but they represent an early push, often with financial and technical help from the Library of Congress and the NEH, to spearhead precisely what is called for in the 2006 report Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for Humanities and Social Sciences. As the report notes,
The emergence of the Internet has transformed the practice of the humanities and social sciences—more slowly than some might have hoped, but more profoundly than others may have expected. Digital cultural heritage resources have become a fundamental data-set for the humanities: these resources, combined with computer networks and software tools, now shape the way that scholars discover and make sense of the human record, while also shaping the way those understandings are communicated to students, colleagues, and the general public. But we will not see anything approaching complete digitization of the record of human culture, or the removal of legal and technical barriers to access, or the needed change in the academic reward system, unless the individuals, institutions, enterprises, organizations, and agencies, who are this generation’s stewards of that record, make it their business to ensure that these things happen.[11]
Not surprisingly, well-funded libraries at research institutions have been able to make the greatest headway in moving us toward the goal of completeness.
The Library of Congress American Memory collection is a particularly useful example of what is achieved when public and private funds are dedicated to a comprehensive project aimed at making a significant contribution to the “complete digitization of the record of human culture” that the ACLS calls for, work that will take the commitment of “this generation’s stewards.” American Memory began as a pilot project in 1990. The Library of Congress “identified audiences for digital collections, established technical procedures, [and] wrestled with intellectual-property issues.” In 1994, the Library of Congress turned from CD-ROMs to the Internet and launched the National Digital Library Program, drawing on $5 million from Congress and $45 million from private funding. The Library of Congress has also supported digital work at other libraries and hosted projects. American Memory’s mission is to systematically digitize “some of the foremost historical treasures in the Library and other major research archives” and to make these materials “readily available on the Web to Congress, scholars, educators, students, the general public, and the global Internet community.”[12]
Through this commitment to digital preservation and access and to building the more than 100 collections now in American Memory, the Library of Congress has made a significant contribution to ensuring that scholars and the general public will be able to generate, for years to come, fresh and provocative understandings of race in America. For example, the African-American Pamphlet Collection provides access to the 351 titles collected for the “Exhibit of Negro Authorship” that W. E. B. DuBois curated for the 1900 Paris Exposition. Slaves and the Courts, 1740–1860 provides page images of more than 100 pamphlets and books dealing with legal contests related to slavery. The Frederick Douglass Papers at the Library of Congress allows anyone with Internet access a chance to scour the more than 7,400 items that were in Douglass’s personal library at his home in Anacostia, Washington, DC; and Born in Slavery: Slave Narratives from the Federal Writers’ Project, 1936–1938 offers images of the typescript pages for more than 2,300 narratives and more than 500 photographs of former slaves collected by the Federal Writers’ Project.
American Memory also takes seriously what “access” means. As Adam Banks notes in Race, Rhetoric, and Technology: Searching for Higher Ground, owning a computer does not guarantee digital access; real access must be “material, functional, experiential, and critical.”[13] Owning a computer and being able to click on a link is only the first and perhaps most easily addressed issue in assuring a real democracy of knowledge. Having intellectual access is much harder. American Memory extends a welcome to all visitors and seeks to facilitate access for the nonspecialist. The site works well, offering good searching capabilities (full text, keyword, subject, author, or title) as well as browsing by topic, time period, type of material, and place. In addition, secondary materials provide historical context, site overviews, and teaching materials. The “Learning Page” offers extensive help to teachers who want to use the more than seven million primary source documents available through American Memory. The chronological site map, lesson plans, and activities provide increased intellectual access to the collections, as they offer questions and ideas that lead the user into a collection or to specific materials and that indicate the kinds of questions that the archive might address.
Often, digital collections created by academic libraries have at their center an original print collection. Thus, the digital collection recapitulates the original rationale, whether that is the papers in Frederick Douglass’s library at the time of his death, the pamphlets collected for the Paris Exposition, or the idiosyncratic habits of a particular collector, librarian, or library. Sites created by individual scholars typically claim a more comprehensive principle. For example, Loren Schweninger’s Race and Slavery Petitions Project seeks to provide searchable abstracts for all legislative and county court petitions related to slavery. Similarly, The Atlantic Slave Trade and Slave Life in the Americas: A Visual Record, a handsome site recently published by Jerome S. Handler, Senior Scholar at the Virginia Foundation for the Humanities, and Michael Tuite of the Digital Media Lab at the University of Virginia, offers access to over 1,225 images associated with the Atlantic slave trade. Print copies of these images are not necessarily rare. For example, some come from periodical literature such as Harper’s Weekly, which can be accessed at Making of America, or from slave narratives, travel accounts, and books commonly held by research libraries. But together, the clarity of the collection rationale, a good subject index that increases the possibility of targeted and meaningful access, the quality of the images, the commitment to including images from Africa and Europe, and the reach across a wide range of libraries ensure that the collection offers a comprehensive visual record that is compelling to view and a meaningful contribution to efforts to broaden and deepen our understanding of slavery.
What becomes evident with a close examination of sites such as the University of North Carolina’s North American Slave Narratives, the Library of Congress’s African-American Pamphlet Collection, or Handler and Tuite’s The Atlantic Slave Trade and Slave Life in the Americas is the significant intellectual value-added that these digital archives provide, the very work that Borgman notes depends on time-consuming, expert scholarship. They have been created with careful attention to indexing, bibliographic accuracy, and a scholarly apparatus that provides information about the contents and the purpose of the archive and commentaries or essays that help a wide range of users engage the archive effectively. In addition, such sites have deep value-added if they are encoded well. At its simplest, encoding is the tagging of each document and the parts of each document so that the on-screen visual representation captures the information embedded in the original print design (layout, font, and spacing). However, more sophisticated tagging is now standard, and the Text Encoding Initiative, an international consortium, has developed a widely accepted and flexible “markup language for representing the structural, renditional, and conceptual features of texts.”[14] In the process of tagging a document or an entire collection of documents in TEI-XML, digital scholars have to grapple with fundamental questions about the print materials and decide what should be tagged and how. Such decisions, it turns out, are not trivial or obvious. In creating The Complete Writings and Pictures of Dante Gabriel Rossetti: A Hypermedia Archive, for example, Jerome McGann and his colleagues discovered that they implemented their markup schema differently from one another and thus learned “what we didn’t know about the project.”[15]
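What such decision making looks like in practice can be suggested with a small, hypothetical TEI fragment. The element names below are standard TEI, but the sentence and the two encodings of it are invented for illustration and are not drawn from any of the collections discussed here. Faced with a phrase rendered in dialect, one encoder might record only its typography, while another might mark it as a linguistically distinct utterance:

   <!-- Encoding A: the dialect is captured only as a feature of the page -->
   <p>He said, <hi rend="italic">I’s gwine home,</hi> and turned away.</p>

   <!-- Encoding B: the dialect is marked as reported speech in a distinct register -->
   <p>He said, <said><seg xml:lang="en-x-dialect">I’s gwine home,</seg></said> and turned away.</p>

The second encoding lets a reader search for dialect passages as such; the first preserves only how the words looked in print. Neither choice is self-evidently correct, and a team of encoders can easily diverge without realizing it, which is exactly the kind of discovery McGann describes.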
The editors of The Revised Dred Scott Case Collection tell a similar story about learning more about the materials as they encoded documents related to Dred Scott’s suit for freedom. In the courts for 11 years, Dred Scott took up critical questions about personhood, asking, “Who would count in the law of the land as a citizen, a political agent, an individual, a human being?”[16] The decision, written by Chief Justice Taney, swept away a large body of legal work that had made distinctions between the legal standing of various classes of people of color—slaves, former slaves, and free blacks—in diverse settings such as civil courts, criminal courts, state courts, federal courts, and other social, commercial, and legal venues. As a result, the Taney decision contributed to the reification and naturalization of both the Constitution and race, suggesting that law and racial categories were not open-ended discourses but, rather, closed systems with “logically deducible rules.”[17]
The story of the creation of The Dred Scott Case Collection began with an appreciation for the significance of Dred Scott in U.S. history, and the project directors were eager to make accessible 85 documents that had been discovered in a civil courthouse in St. Louis, the site of the first petition filed by Scott in 1846. Published in 2000, the site was immediately popular and had more than 150,000 hits in a few weeks.[18] In 2006, recognizing that the site did not comply with newer standards and that its functionality was limited, Digital Library Services at Washington University in St. Louis, the home of the site, proposed using TEI-XML encoding instead of HTML. In migrating to TEI encoding, the project staff discovered that the criteria they had used to encode document titles were inadequate and that even standard TEI was “limited in its ability to reflect the structure of legal documents.”[19] But as an extensible markup scheme, TEI-XML allowed the editors to create a tag library that was more appropriate (allowing multiple dates and a range of authors—court, witness, notary, etc.). Significantly, in doing this work, the editors discovered that the 85 documents were, in fact, 78, since some were embedded in others and not appropriately considered separate documents. In addition, as the scholars tagged the documents in TEI, they acquired a deeper understanding of every line and abbreviation in each document, and they discovered that the documents pointed to an additional 25 texts, which they were able to locate. The site now offers 111 documents, all of which are full-text, searchable, and accompanied by high-resolution images of the originals. Given the importance of Dred Scott, having access to the earliest documents in the case allows scholars of race and U.S. law to hear a broader range of arguments that were adduced and challenged in the construction and deconstruction of such critical notions as legal standing, personhood, and states’ rights.
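A rough sketch can suggest what such an extended tag library might look like. The fragment below is illustrative only: the elements are standard TEI, but the type and resp values are the kind of project-specific vocabulary an extension would have to define, and the bracketed names are placeholders rather than transcriptions from the collection.

   <!-- One court filing, described so that several hands and several dates can coexist -->
   <titleStmt>
      <title type="supplied">Petition to sue for freedom</title>
      <respStmt><resp>petitioner</resp><name>Dred Scott</name></respStmt>
      <respStmt><resp>witness</resp><name>[name as signed]</name></respStmt>
      <respStmt><resp>notary</resp><name>[name as signed]</name></respStmt>
   </titleStmt>
   <date type="composed" when="1846"/>
   <date type="filed" when="1846"/>
   <date type="endorsed" when="1846"/>

A description built this way keeps the court, the witnesses, and the notary in their distinct relations to the document and lets a single filing carry the several dates it accumulated as it moved through the court, the sort of structure the editors found standard bibliographic tagging could not easily express.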
As McGann notes, “when a book is produced it literally closes its covers on itself,” and as a result, print editions are, inevitably, “instantiated arguments” about the various instances and the distinct authoritative value of each item in what is often “a vast, even bewildering array of documents.” The digital archive, by contrast, is intended to be “open to alterations of its contents.”[20] In fact, it was such openness that allowed the editors of The Dred Scott Case Collection to revisit the materials and revise their understanding of the very nature of some of the documents. For literary scholars, digital environments provide an appealing alternative to the single authoritative edition. As Daniel Ferrer, the director of the Institut des Textes et Manuscrits Modernes, suggests, the digital collection offers “an unlimited number of paths through the documents; it allows instant juxtaposition of facsimiles, transcriptions, and commentaries (which can be as long as necessary, in various depths of accessibility, so as not to stifle the manuscript themselves); and it welcomes dialogic readings, with unlimited possibilities of reordering, additions of new documents, and changes of reading.”[21] For scholars of race and ethnicity, unbounded collections and increased opportunities to add and reorder texts should help with the work of upending canonical hierarchies. But as we become excited about the openness of digital archives and increased access to manuscripts and multiple versions of a text, we must also ask whose work will receive this kind of attention. The texts and authors that get selected for this kind of intensive textual recovery in the digital world depend, as Rachel Blau DuPlessis reminds us, “upon extra-textual debates about value, canon, audience, and even sometimes market that cannot be ignored.”[22]
Two important digital projects in African American literature—the Digital Schomburg African American Women Writers of the Nineteenth Century and Chris Mulvey’s Clotel: An Electronic Scholarly Edition—offer useful examples of the role economic forces can play in the digital editing and publishing of writers of color. The Schomburg Center began as part of the Division of Negro Literature, History, and Prints of the 135th Street branch of the New York Public Library. It now has more than 10 million items, including remarkable holdings for many major African American writers, and the Center is aggressive in building its collection, even though it sometimes has had to pass on items that have attracted intense bidding from private collectors. In 1988, the Center published a 33-volume edition of 58 works by African American women first published between 1773 and 1920. Widely praised, The Schomburg Library of Nineteenth-Century Black Women Writers changed the landscape for scholars who study race, ethnicity, gender, and literary aesthetics. Although the Schomburg Library is now out of print, the Center has made the texts available at the Digital Schomburg. Creating the digital versions was an expensive undertaking, since it was essential to have each text double- or triple-keyed because dialect is common in many of the texts.[23] Completed in 1999, the digitization of the texts complied with TEI guidelines at that time. The searches work well, the texts are edited well, and the project makes texts available to those who may not have access to the print series, which was surely a purchase beyond the budgets of many small public libraries. Unfortunately, the corporation behind the software used for the project went out of business in 2002, and migrating to newer and better interfaces will require money and additional technical and literary expertise. As DuPlessis notes, “texts themselves—their creation and their subsequent publication—are part of social processes and bear the marks of those processes.”[24] These works by African American women writers had limited runs and limited distribution in the nineteenth century, went out of print quickly, were recovered only with the concerted effort of dedicated scholars, and now exist in the digital environment in a fragile state. They may yet again disappear from view if, as the 2006 ACLS report previously cited says, we do not make it our business to ensure that our diverse cultural heritage is digitized and thus a part of how “scholars discover and make sense of the human record.”
The costs and challenges of sustaining a site’s interoperability with new platforms and software and also of designing an aesthetic and highly functional interface, including markup schema that conform to best practices, have led some scholars to turn to digital publishers. Those not affiliated with digital centers find valuable help with technical issues as well as marketing and long-term management services through such programs as the University of Virginia’s Rotunda project, which is funded by the Andrew W. Mellon Foundation and the University of Virginia and dedicated to the “publication of original digital scholarship along with newly digitized critical and documentary editions in the humanities and social sciences.”[25] It is true, of course, that such a choice typically means that the project is not freely available on the Web. But for some, this is an acceptable cost of getting what they hope is a guaranteed future for their digital scholarship.
This is the choice Chris Mulvey made in publishing Clotel: An Electronic Scholarly Edition with the University of Virginia’s Rotunda Press. William Wells Brown’s Clotel has a complex publication history: Brown published four very different versions between 1853 and 1867. It also has a complex relationship with other texts, since Brown quotes, borrows, and some would say plagiarizes from a wide range of sources, including Lydia Maria Child’s short story “The Quadroons,” abolitionist tracts, newspaper articles, congressional debates, slave narratives, and poems.[26] Mulvey first approached the Electronic Text Center at the University of Virginia about creating an electronic edition of Brown’s novel in 2001. The project, according to Matthew Gibson, posed a “sizeable challenge,” since Mulvey wanted to “mark up regions of contextual similarity” across the different versions “without necessarily privileging any one version” and wanted to make it possible to use the site for “uninterrupted reading” without losing the option of comparing the texts side by side.[27] The result is a stable, well-functioning site that offers “the full extant texts of the novel’s four versions,” with full-text searching, parallel reading displays, and “line-by-line annotations and textual collation.”[28] The price for access ranges from $420 for high schools and individuals to $845 for research universities, plus an annual maintenance fee. While this price limits access, the expectations of purchasers and the income may bolster the University of Virginia’s commitment to maintaining and updating the site as technological changes require.
While this discussion of economic contexts underscores the role of the market in shaping what appears and disappears on the Internet, it is equally important for scholars to recognize the power they have to shape the questions, courses, syllabi, and research agendas that, in turn, can ensure that the digital revolution does not simply recapitulate the biases and limitations of the print world. Thus, although American Memory currently has 17 collections dedicated to African American materials, only six dedicated to Native American materials, and one focused on Chinese American history, we can hope this will change as scholars challenge narrow definitions of America. Notably, one of the early recipients of a grant from the Library of Congress was the University of Washington’s American Indians of the Pacific Northwest, a collection of 2,300 photographs, 1,500 pages from the annual reports of the Commissioner of Indian Affairs to the Secretary of the Interior from 1851 to 1908, six Indian treaties negotiated in 1855, 89 articles from the Pacific Northwest Quarterly and other University of Washington publications, and 10 introductory scholarly essays. More recently, hemispheric studies has been able to attract substantial funding. In 2007, the Maryland Institute for Technology in the Humanities and Rice University’s Fondren Library and Humanities Research Center were awarded almost a million dollars, which will be matched by the schools, to develop an online site that will integrate an existing multilingual digital collection (the Early Americas Digital Archive) with a new archive of multilingual materials to be developed by scholars at Rice University. Named in honor of José Martí’s 1893 essay, the Our Americas Archive Partnership explicitly seeks to challenge “the nation-state as the organizing rubric for literary and cultural history of the Americas.”[29]
The Our Americas Archive Partnership also provides a glimpse of an increasing interest in digital tools and the role these tools might play in race and ethnicity studies. Perhaps inspired by the radical questioning that led scholars to challenge narrow nationalist notions of culture and thus to launch hemispheric studies, the project directors proclaim that their goal is to “develop new ways of doing research” and to create “a new, interactive community of scholarly inquiry” through the adaptation of tools such as geographic visualization, social tagging, and tag clouds. Excitement about digital tools is common among digital enthusiasts who prophesy the emergence of a scholarship that is interactive, collaborative, open-ended, visual, and more likely to allow innovation in race and ethnicity studies.
One of the most impressive examples of born-digital scholarship that uses the medium to challenge how we think about race is Wendy Chun’s Programmed Visions. Published in 2007 in Vectors: Journal of Culture and Technology in a Dynamic Vernacular, an international electronic journal supported by the University of Southern California’s School of Cinema and Television, the site is part of a book project, Programmed Visions: Software, DNA, Race, in which Chun explores the paradoxical proliferation of images in the last twenty years just as there has been increasing doubt about the power of the image to index reality. Much of Chun’s book focuses on programming languages, computation and information theory, and stored memory programming, but she also suggests that there are important similarities between software and race as powerful forms of visual ideology.
Chun’s site focuses on the ways in which race works as an archive, as a category used to create meaning, even as the very notion of race as a meaningful category has been undermined. The result is a site that challenges our desire for an easy or invisible interface. As the editor of Vectors explains,
The digitization initiatives that drive so much of contemporary online culture—from Google Books to our local universities—envision the virtual archive as a kind of seamless information machine bringing the riches of the world to a screen near you with a quick tap of the finger. Such archives privilege transparency, accessibility, standardization, interoperability, and ease of use, lofty goals all, and quite useful when confronted with reams of data. But . . . [this project] urges you to shift your line of vision and to think about the larger stakes our frenzy of digitization might likely conceal.[30]
Chun’s site eschews the usual navigational tools—menu bars, an index, a “search this site” function, or even “breadcrumb trails” that mark the path taken. The site rejects the usual virtues associated with a digital archive—completeness, coherence, and transparency. Instead, it offers snippets rather than whole texts, and everything is on the move, as portions of texts float across the screen, beyond the control of the user’s mouse. The words of Toni Morrison, W. E. B. DuBois, Frantz Fanon, Octavia Butler, court cases, and scientific treatises collide in “an archivist’s nightmare” of opacity and chaos. The site frustrates our expectations that we can move from micro to macro, from close-ups to overviews, from one well-bounded text to another, each with familiar bibliographical information. A map is slowly created that allows the user to recall snippets already viewed, but bringing faint text fully into view is not possible with just a mouse click. As one user suggests, the site “refreshes our awareness of the interface as something coded and constructed,” bringing to our attention “how naturalized” interfaces have become. As a result, the site links “opacity to a complex figuring of the systematic production of race as a category of power/knowledge and, most importantly, inextricably links race (as archive) to our understanding of visuality, whether opaque or transparent or somewhere in between.”[31] Samira Kawash notes in Dislocating the Color Line, a text included in Chun’s archive, that the concept of race is “predicated on an epistemology of visibility,” even as visibility is “an insufficient guarantee of knowledge.”[32] Chun makes the insufficiency of visibility an integral part of her Web site and thus unsettles the clarity that race, archives, software, and Web sites seem to promise.
A very different kind of born-digital scholarship, one that taps the ease of publication and collaborative spirit many have hoped the Internet would foster, can be found in Cary Nelson’s Modern American Poetry Syllabi (MAPS). The site grew out of Nelson’s experience of editing the Anthology of Modern American Poetry for Oxford University Press, and it is a good example of how the Internet may indeed explode the boundaries of the traditional anthology. Richard Powers enthusiastically describes MAPS as “a living, breathing conversation between hundreds of poets, scholars, and readers” and a “clearinghouse for some of the best criticism on the best poets of our time.”[33] Significantly, the site also offers an impressive introduction, intentionally or not, to the multiethnic landscape of American poetry, and pages such as “Japanese American Concentration Camp Haiku” or those on Louise Erdrich include images from the American Memory collections and the University of Washington’s American Indians of the Pacific Northwest.
Surely our scholarship has changed as a result of the digital revolution and the materials now available, which are far more extensive than this survey can convey. But the change is hard to quantify. In addition to the significant body of primary sources available on the Internet for no fee, there are large databases such as those offered by Alexander Street Press in Caribbean literature, Latino literature, North American immigrant diaries and letters, North American Indian personal writings, and African American music, to name only a few of their collections. But a review of bibliographies in the journals American Literature and MELUS suggests that although scholars may be working with digital versions of primary sources, they are not often citing the online version. Librarians also know that full-text databases of scholarly journals are heavily used by scholars and that the world of secondary sources as well as primary sources has expanded, perhaps exponentially, for the scholar who has access to a university Web portal. JSTOR, Project Muse, Academic Search Premier, and other full-text databases deliver scholarly articles in a matter of seconds to teachers and scholars of American literatures, and a 2006 survey by Ithaka indicates that 63 percent of faculty are willing to see their libraries cancel print subscriptions as long as the electronic version remains available.[34]
Some speculate that as scholars do more work online, the expectation for seamless navigation will increase. Scholars will expect to be able to move effortlessly from freely available pages in a copyrighted book on Google, to scholarly journals in a subscription database, to online archives of digitized images and well-edited transcriptions of rare primary sources. The economic contexts that will make this possible are not yet clear. But while we watch individual contract negotiations and major court battles seek compromises between business models, which must inevitably meet costs and generate profits, and the commitment of libraries to serve the public good through free access to as much knowledge as their budgets allow, we should also note that the scholarly production of digital archives and born-digital scholarship is deepening and widening.[35] This is good news for race and ethnicity studies. Although the habits, biases, power centers, and economics that shaped print over the last 500 years are also shaping the digital world, this survey suggests there are more diverse materials available to a “worldwide web” of students, teachers, and scholars than ever before. Postmodern theories played an important role in undoing positivist assumptions about race and ethnicity and idealized notions about well-bounded texts. Now, by increasing the availability of materials and by welcoming marginalized voices and perspectives, digital scholarship should, in the not-too-distant future, have a profound impact on the stories and histories we tell about race and ethnicity in the Americas.
Notes
1. Christine L. Borgman, Scholarship in the Digital Age: Information, Infrastructure, and the Internet (Cambridge: MIT Press, 2007), 227.
2. Katharine Newman, “MELUS Invented: The Rest Is History,” MELUS 16, no. 4 (Winter 1989–90): 101.
3. Mary Jo Bona and Irma Maini, introduction to Multiethnic Literature and Canon Debates (Albany: State University of New York Press, 2006).
4. Paul Lauter, Canons and Contexts (New York: Oxford University Press, 1991), xi.
5. Patricia Keefe Durso, “It’s Just Beginning: Assessing the Impact of the Internet on U.S. Multiethnic Literature and the Canon,” in Bona and Maini, Multiethnic Literature and Canon Debates, 213.
6. Stephen Pulsford, “Literature and the Internet: Theoretical and Political Considerations,” in Literature and the Internet: A Guide for Students, Teachers, and Scholars, by Stephanie Browner, Stephen Pulsford, and Richard Sears (New York: Routledge, 2000), 171, 185.
7. See http://docsouth.unc.edu/neh/.
8. See http://docsouth.unc.edu/.
9. See http://docsouth.unc.edu/church/.
10. Joe A. Hewitt, “DocSouth 1000th Title Symposium, March 1, 2002,” University of North Carolina, Chapel Hill, http://docsouth.unc.edu/support/about/jahewitt.html.
11. Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for Humanities and Social Sciences (New York: American Council of Learned Societies, 2006), 1, available at http://www.acls.org/uploadedFiles/Publications/Programs/Our_Cultural_Commonwealth.pdf.
12. “About American Memory,” Library of Congress, http://lcweb2.loc.gov/ammem/about/index.html.
13. Adam Banks, Race, Rhetoric, and Technology: Searching for Higher Ground (New York: Routledge, 2006), 135.
14. “TEI Guidelines,” Text Encoding Initiative, http://www.tei-c.org/Guidelines/.
15. Jerome McGann, Radiant Textuality: Literature after the World Wide Web (New York: Palgrave, 2001), 91.
16. Sara B. Blair, “Changing the Subject: Henry James, Dred Scott, and Fictions of Identity,” American Literary History 4, no. 1 (Spring 1992): 38.
17. Dred Scott v. John F. A. Sanford, opinion of Chief Justice Taney, U.S. Supreme Court, December term, 1856, 21; Morton J. Horwitz, The Transformation of American Law, 1780–1860 (Cambridge: Harvard University Press, 1977), 259. For further discussion, see Blair, “Changing the Subject,” 41.
18. See “Washington University Acquires Lost Documents from the Dred Scott Case,” Journal of Blacks in Higher Education 31 (Spring 2001): 59.
19. “About the Dred Scott Case Collection,” Washington University, http://digital.wustl.edu/d/dre/about.html.
20. McGann, Radiant Textuality, 69, 80, 71.
21. Daniel Ferrer, “Production, Invention, and Reproduction: Genetic vs. Textual Criticism,” in Reimagining Textuality: Textual Studies in the Late Age of Print, ed. Elizabeth Bergmann Loizeaux and Neil Fraistat (Madison: University of Wisconsin Press, 2002), 92.
22. Rachel Blau DuPlessis, “Response: Shoptalk—Working Conditions and Marginal Gains,” in Loizeaux and Fraistat, Reimagining Textuality, 56.
23. See Howard Dodson, introduction to Digital Schomburg African American Women Writers of the Nineteenth Century, http://digital.nypl.org/schomburg/writers_aa19/intro.html; Thomas P. Lucas, “Editorial Methods for Creation of the Digital Schomburg Editions,” Digital Schomburg African American Women Writers of the Nineteenth Century, http://digital.nypl.org/schomburg/writers_aa19/editorial.html. Contrary to Dodson’s report that the series is no longer available in print, the Oxford University Press online site suggests that it will accept orders for any of the volumes, perhaps to be filled by print on demand.
24. DuPlessis, “Response,” 85.
25. “About Rotunda,” University of Virginia Press, http://rotunda.upress.virginia.edu/index.php?page_id=About.
26. For commentary on the publication history of Clotel and its use in teaching and in scholarly studies, see Ann duCille, “Where in the World Is William Wells Brown? Thomas Jefferson, Sally Hemings, and the DNA of African-American Literary History,” American Literary History 12, no. 3 (Autumn 2000): 452–54. For more on Brown’s use of other texts, see Robert Levine’s “Cultural and Historical Background,” in Clotel, or The President’s Daughter, Bedford Cultural Edition (New York: Macmillan, 2000).
27. Matthew Gibson, “Clotel: An Electronic Scholarly Edition,” University of Virginia, http://mustard.tapor.uvic.ca/cocoon/ach_abstracts/xq/xhtml.xq?id=152.
28. Chris Mulvey, ed., Clotel: An Electronic Scholarly Edition (Charlottesville: University of Virginia Press, 2006), http://rotunda.upress.virginia.edu:8080/clotel/.
29. Our Americas Archive Partnership, Humanities Research Center, Rice University, http://culture.rice.edu/americas.html. The preliminary site can be found at http://oaap.rice.edu/.
30. Tara McPherson and Steve Anderson, editors’ introduction to Programmed Visions, in Vectors, http://www.vectorsjournal.org/index.php?page=7&projectId=85.
31. Tara McPherson, “Reprogramming Vision,” in Vectors forums, http://www.vectorsjournal.org/forums/?viewId=397.
32. Samira Kawash, Dislocating the Color Line: Identity, Hybridity, and Singularity in African-American Narrative (Stanford: Stanford University Press, 1997), 130.
33. “About MAPS,” Modern American Poetry Site, http://www.english.uiuc.edu/maps/about.htm.
34. Roger C. Schonfeld and Kevin M. Guthrie, “The Changing Information Services Needs of Faculty,” Educause, July/August 2007, 9.
35. For an analysis of one court battle within the larger context of libraries’ commitment to serving the public good and the obligation of corporations to serve the interests of their shareholders, see Robert Darnton, “Google and the Future of Books,” New York Review of Books, 12 February 2009.
Design and Politics in Electronic American Literary Archives
This essay explores the political implications of digital literary archives. Its focus is on the institutional involvements and choices made by electronic resource builders, largely in the academy and largely using technologies that involve XML (such as TEI, the Text Encoding Initiative’s standards for tagging literary texts). The word archive is here used broadly, to indicate projects that present American literature electronically and their associated storage, delivery, and community-hosting technologies (databases, interfaces, wikis, and the like).[1] Taking up a few important free archival projects—including the Walt Whitman Archive and the Our Americas Archive Partnership—the essay will discuss questions of political involvement and meaning facing American literary archives today through the lens of the internal and external commitments such endeavors must make. By internal, I mean, loosely, the sorts of ties necessary to generate and sustain an archival project (which may well be multi-institutional and transnational); by external, I mean those means by which such an archive takes its place in the larger world. Language, economics, and collaboration all emerge as important political categories as archives shape and position themselves among the different models of access available today. I argue that part of the work of responsible online American literary archival projects is to engage with these politics consciously and explicitly, even as, in turn, experimentation with the potential of electronic storage and delivery shifts the coordinates of political possibility in ways that cannot be anticipated.
Building a literary archive on a digital platform is difficult work. For most of us, it has required learning another language (or two); mastering the differences among programming languages, scripting languages, and markup languages; encountering a world of standards organizations and their thousand-page guidelines; trying to find hundreds of thousands of dollars for humanities projects; and then figuring out how to justify all this to our colleagues in the academy. We may be driven by the ideals of a new scholarly form—one, for example, that will change the boundaries of the academy and bring previously hidden documents to an international public. But in the on-the-ground building of a project, it can be easy to accept certain disciplinary norms and consequently to make XML-based literary archives regenerate scholarly structures and priorities that we might hope to transform. Given the pace and scope of the production of digital cultural resources in the United States as compared to the rest of the world, American literary projects may be particularly susceptible to such pressures. This essay hopes to offer perspectives on the conditions in which digital archives of American cultural materials are built, suggesting questions we might routinely make part of our analyses of them.
This essay offers a précis, rather than an exhaustive or synthetic panorama. There are many other political layers that could be pursued here, including the ones taken up elsewhere in this volume. In the first place, as John Lavagnino has observed, the very use of XML is not always appropriate for a digital literary project, for formal or technical reasons.[2] Melville’s Marginalia Online, for example, uses Adobe PDF and a regularized symbolic set to present marginalia, rather than XML stylesheets or actual page scans in free image formats.[3] XML may be unappealing for theoretical reasons: it imposes a hierarchy on a text, so it stands in a fundamental tension with the argument that imaginative literary works make meaning through inherently unstable structures. Even when XML functions relatively smoothly with a literary archival project, there persists a tension between text and image that is not in tune with the formal equality of those elements in some genres (such as children’s books) and certainly within the multimedia World Wide Web interface. “Indeed, computationally speaking, the divide between image and text remains all but irreconcilable,” Matthew Kirschenbaum points out, and the chasm between ASCII text and bitmap images “in turn reflects and recapitulates certain elemental differences in the epistemology of images and text.”[4]
If only it were just epistemology at stake. N. Katherine Hayles’s warning that electronic resources—“the prostheses joining humans and machines”—profoundly shape our identities, not just our representations, should inform any discussion about the potential of the digital to liberate or constrain us.[5] A responsible literary archive-building practice will both engage this ontological condition and heed Jerome McGann’s warning that every act of remediation is an act of interpretation. The challenge then becomes to shape editorial policy with a kind of self-consciousness particular to digital storage and delivery. “Literary works do not know themselves, and cannot be known, apart from their specific material modes of existence/resistance,” McGann writes. “They are not channels of transmission, they are particular forms of transmissive interaction.”[6] This is no less true when the material modes of existence take the form of a server, XML, stylesheets, a Web browser, and a reader’s computer. Most literary scholars still understand the book and its materiality better than they do the many transmissive states of the electronic text, so it can be difficult to see how form and politics get linked to each other on the way to producing an electronic literary object.
In many ways, this is a long-standing difficulty playing out in a new arena. In this essay I focus on the same kinds of questions Raymond Williams brought to the attention of literary scholars a long time ago—questions about the context of production of literature and how it influences the way human beings relate to each other through texts. “The form of social relationship and the form of material production are specifically linked,” Williams wrote, but not “in some simple identity.”[7] Indeed, the material and social conditions for digital work are changing so quickly that the Marxist base-superstructure analytical approach cannot make clear sense of them; what is more, the multinational and multilingual nature of our expanded audience demands attention to translation no less than to economics. The economic stakeholders in digital projects are numerous and can shift rapidly. So, too, can the sources of labor and institutional relations that make an archive possible. Given users’ increasing ability to download and “repurpose” data, the line between a product and raw material is blurry (especially in the case of free-access archives). Access remains a crucial area of thinking about the political because, while dreams of universal access fuel much academic Web development, there are problems with both the ideals and the pragmatics of digital access. Literary editing is starting to become more like history writing in terms of its audience. Suddenly, much larger audiences, from beyond higher education, are able to access our richly marked-up texts. But a bigger audience usually means one less interested in the rich markup—that is, in the theoretical “angle” of the editing. If we want to keep and inform that audience, then, we must build not just new scholarly archives but new scholarly interfaces. Before, presses handled distribution and interface design, but now that the model for going public is less the book and more, perhaps, the museum, those processes Williams stressed in his analysis have come increasingly under the control of those who create scholarly content.
Ann Stoler argues that we should regard archives as places where knowledge is produced, not just stored or displayed; what gets kept and how it gets marked as evidence gives form to power, shaping the imagination of those who use an archive. As both editors and designers, we encode protocols of power in the systems by which our literary past is circulated and accessed.[8] In what follows, I describe important features of the landscape of contingency in which literary archives grow today, both internally, as projects shape themselves, and externally, as they take place in the digital resource realm. The distinction is merely intended as a heuristic and will begin to break down as the essay proceeds. With this gesture, I hope less to prescribe an approach than to suggest important questions and elements of strategy in building scholarly resources for literary study, so that we may attend to the kinds of knowledge our archives do—and might—make.
What shape should a digital project take? This question confronts every project, initially and iteratively throughout its life. In addition to the questions about what standards to use (or to attempt to develop), there are questions about the canon. Especially given the trend toward interactive Web sites, with user-contributed and user-manipulable content—collectively known as “Web 2.0”—a generation gap may be emerging that maps onto an epistemological shift from author-based literary studies to network-based literary studies. The design of each resource makes an argument about the canon and what humanities “does,” even about the university and its role in society. Meredith McGill implies as much in her critique of the Walt Whitman Archive in a 2007 forum in PMLA. The archive, she writes, adheres “surprisingly closely to normative ideas of the author and the work.” Why focus on Whitman (and in particular his poetry) instead of, say, transcendentalism, or American writers, or queer poets, or alternative spiritualists?[9] The boundaries of an archive are inscribed at many levels, from the way it presents itself on the Web and argues for funding to the degree of interoperability with other electronic resources built into its code.
Beyond the implications of choosing a shape for a literary resource is the question of where to lodge it institutionally. This can be much like trying to find a publisher for a scholarly monograph; one crucial difference is the importance of sustainability to a digital project. Servers, code, and software all require maintenance, and even the least-interactive project will receive suggestions for revision from users that must be vetted. Internal funding for such projects and their maintenance varies from institution to institution, as do the strings attached. Extramural federations and funding can help a project achieve some latitude, but local administrative, library, and faculty interests will still put pressure on it.[10] Perhaps most important to younger faculty initiating new forms of literary research, the degree to which digital work can be assessed as a positive contribution to a tenure and promotion case varies by department, school, and university administration. Here political goals can collide: in some situations, innovating in the form of humanities work might tempt one to shape a digital project around a single, canonical author. Archiving authors with both a firm place on syllabi and an audience beyond academia makes attracting funding, student labor, and attention (both within the field and from media) easier. Focusing on a theme instead of a single author may mean a longer start-up time, as more institutions, repositories, and area specialists may be involved. At the same time, pace McGill, focusing on a single author can provide models, software, experience, and a core community for other kinds of digital humanities work, as it has in the case of many of the excellent sites fostered by the University of Virginia’s Institute for Advanced Technology in the Humanities. Taken together, these factors subtly create a landscape of difference with respect to where and how innovation can thrive in the digital humanities and where it cannot.
The labor models for archives are features of that landscape, too. Digitization is extraordinarily expensive. To save money, many projects outsource transcription and other forms of capture to overseas companies with ambiguous employment and compensation ethics. Here Marxist critiques of burgeoning global distributions of labor and new forms of alienation make odd bedfellows with nationalist critiques of offshoring jobs. The cheapness and speed of overseas digitization have, however, made archiving Page 233of certain kinds and at certain scales possible where otherwise it would not be. Since the early 1990s, medical records digitization has been performed in India, occasionally causing controversy about confidentiality and accuracy. Along the same lines, Janet Gertz argues that with inexpensive digitization offshore, the main reasons a project would perform digitization in-house are quality control and conservation of original documents. But there are also questions about how placing digitization outside the intellectual labor matrix of a project affects the self-awareness and creative development of a scholarly resource. Often the feedback between transcribers or encoders and project directors can change the encoding scheme or even some of the basic intellectual structures of a project. As a student in the 1990s, doing transcription and basic encoding of Whitman documents, I learned a lot about textual structures that I had not encountered in seminars; when I discussed those observations with the project directors, new areas of concern or future development would sometimes emerge.[11]
Then again, not all schools have graduate students to perform (and learn by performing) this sort of work. The term that has been used recently as a panacea for many of the challenges, both internal and external, facing the digital humanities is collaboration. Long argued as the key to transforming the humanities’ genius-in-the-tower, single-author model of production, collaboration is a necessity in the digital realm. It thus seems to offer an advantage that balances the difficulties of articulating literary critics, library experts, computer technicians, and code wizards together. But the necessity of collaborating on digital projects should not obscure the ways that old structures persist, shaping the rewards of such work. Two of the most frequently named inspirations for collaborative authorship in the humanities are the natural sciences and the Web 2.0 practices just mentioned.[12]
The science model relies on a relatively clear division of labor underlying attribution for published scholarly work. Coauthorship is triggered by conventions in the research process; within subfields of the natural sciences, the particular significance of first authors, second authors, and so on is recognized. Underlying that division of labor is a relatively clear funding structure, channeled through principal investigators who head research laboratories. Not attributing authorship properly when working under federal funding gets a researcher in big trouble. So while it is true that collaborations in the sciences only a few decades ago tended to be small—two or three researchers at the most—the Rosalind Franklin scandals are few and Page 234far between these days.[13] (Data theft and fictionalization, unfortunately, are not; nor, as many graduate students would be quick to point out, are the triggers for authorship anything more than relatively clear.) Most humanists are unfamiliar with the role of the principal investigator as simultaneous mentor and funding source, and there are no broad, government-funded “training grants” for graduate students in the humanities as there are for the sciences. Having them would encourage more widespread use and understanding of the many potentials of electronic mediation in humanities work. Absent these material foundations, collaboration in the humanities must borrow selectively from the sciences, with a realistic sense of the disjunctions that remain.
Web 2.0 models of collaboration are thrilling. Having hundreds of contributors create a humanities “event,” online or otherwise, is inspirational, creative, and at times revelatory.[14] But realistically speaking, those who end up getting the credit—in the form of promotion, tenure, book contracts, board positions, grants, and speaking invitations—are those who design such events.[15] In this, we risk repeating the old theory-versus-content hierarchical divisions within the humanities. Theorists are stars; content specialists can never be. Collaborative projects sometimes have long lists of contributors (often heterogeneous with respect to academic rank), but when an article or news coverage is generated to talk about the project, only one or two people are consulted or are officially named coauthors. What is worse, the power dynamics of collaboration are often difficult to see through the hype. Using online collaboration tools to elicit responses to a draft of an essay, for example, seems the perfect embodiment of collaborative practice. But can an unknown graduate student make this move and get the same level and quality of response as an established scholar? The music industry offers a reasonable analogy here: Radiohead can give away its records for a price the customer chooses and both survive and be described by the media as innovative. But can the little-known swamp rock band The Levees do it and even get attention?
Some of what has been called “Web collaboration” is not quite as radical as it seems. Wikis, for example, are frequently cited as exemplary collaboration tools with great potential to change humanities authorship models. But wikis are not collaborative tools by nature; rather, they are iterative ones. One version is replaced by another. The authors of successive versions may not know each other, agree on changes, or agree on the final product’s “correctness,” Page 235much less claim a real stake in the final product. Contributions can “disappear” entirely from a casual reader’s perspective, relegated to the log or discussion. Wikis can be collaborative in certain circumstances, and they are certainly radical as an iterative authorship model. But until coauthored articles—even, perhaps, massively coauthored articles—in leading journals in the humanities become common, little will have changed in our profession on this point. It is to social relations, as much or more than to technologies, that we must look to encourage or analyze collaboration.
Rather than trying to find a “model” in response to the current trends, I suggest that we develop ethics of collaboration. Models often risk re-creating the very hierarchies that have made it hard for digital humanities to become a widespread practice beyond the handful of institutions, such as the University of Maryland and the University of Nebraska, that have invested substantial material and reputational resources in it. Collaborations can suit the conditions of a particular electronic project and its material basis while responding knowledgeably to the market conditions of academic work. This may mean that students contribute only in specific ways or to specific sections of an essay or a digital project yet still receive coauthorship credit. In some cases, it may mean that all of the collaborators shape a work equally and, thus, that the ideas of the person who originated the project morph into a different form (something that almost never happens in science, where there are generally only one or two authors who shape the overall objectives and conclusions of a paper).[16] Once people begin contributing, they should also get some control, whether or not they are leaders on a publication. Graduate students often lead innovation in digital projects, and they need credit, not just acknowledgment. The process of authorship comes to the fore in this approach and might itself become part of the considerations in promotion and tenure cases.
If grappling with the canon, university politics, and the ethics of collaboration offers challenges to the genesis of a project, others haunt it as it takes its place in the larger world. Between federal, state, local, university, and private funding sources, support and audiences flow with sometimes competing political visions or notions of what digitized literature can do in the world. Many of the questions about archival politics rotate around two issues: selection and access. Questions of selection include debates about what gets digitized and why, as well as how resources should be allocated Page 236for digitization. Questions of access proliferate, because it is here that the liberal ideal of free access to information is lodged. The expansion of copyright laws has made it difficult (or simply expensive) for public archives of twentieth-century media to be built, as most releases from 1923 onward are under copyright. Siva Vaidhyanathan, Lawrence Lessig, and James Boyle have written eloquently on the secondary effects of such extensions, including the degree to which prosecutions initiated by groups like the Recording Industry Association of America cause academic entities to be overly and often needlessly cautious about reproducing materials.[17]
Alterations of copyright laws will be assisted by evidence that scholarly digital projects leverage the freedom of the Internet to advance research and enhance pedagogy. A start toward this has been made with the creation of the easy-to-use Creative Commons licenses, which offer literary archives ways of expanding their integration with secondary materials that scholars designate as reproducible for educational purposes. These licenses help protect the intellectual property of the scholars building rich academic resources, while at the same time facilitating sharing of those resources. While our code has theoretically been protected all along, Creative Commons licenses help establish expectations on the Web about rights and usage; they also allow us to share our recent publications online while preventing their unregulated use for commercial purposes. Underlying all of these intellectual property issues is the question of the “digital divide,” of who has access to the Internet, preceding the question of whether resources on the Internet should be made available for free or may be aggregated and gated for profit by groups like Elsevier. For much of the world, once the World Wide Web has been reached, the question will be, what language should the content be in? It is with this issue that I would like to begin, working my way back to those of selection, funding, and free versus gated resources.
American literary archives are, for the most part, still monolingual. Elsewhere I have argued for the importance of translation enterprises for this field, given U.S. linguistic demographics, the history of non-Anglophone publishing in North America, and the importance for linguistic diversity of counteracting tendencies toward “global English” as well as English-language-only initiatives within Anglophone countries.[18] When literary archives tackle translation issues, they usually take care to focus on the cultural nuances of language. This is significant because efforts toward Page 237automatic translation attract a great deal of funding and attention in the digital world. Google’s translation engine is probably the one known best by Web users; as a tool for limited applications, it is a time-saver, but it partakes of an old ideal of a universal language, or of conceptual equivalence across languages, that is problematic.[19] Also, literary archives are sensitive to the fact that, at least at the moment, the codes used to “tag” objects in literary archives are largely in English, with logics rooted in Western media. There are projects to internationalize code, which would catalyze the spread of standards across linguistic fields, but the question remains whether that spread will induce changes in the structure of the code itself or, at least, in standards such as TEI.
The Our Americas Archive Partnership (OAAP) offers a promising, ambitious approach to translation in an American archival project.[20] In content and organization, the OAAP is a transnational, hemispheric undertaking. Rather than absorbing or generating all of its content, it federates archives by porting heterogeneous data sources into a central database and query portal, through which users pass to the original repositories when they have found a document. It is thematic in focus, organized around the topic of the development of nation-states in the Americas. Necessarily, the contents of such an archive are multilingual; the OAAP has a translator on staff and plans to translate documents from and into Spanish, English, Dutch, French, and Portuguese. At the infrastructural level, the project will develop search technologies and protocols to address the difficulty of searching across different languages. This will demand taking into account historical and regional variations in orthography and other aspects of language, since the OAAP will involve documents reaching back to the seventeenth century and across the continents of North and South America.
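To give a concrete sense of what such search protocols involve, the following is a minimal sketch, in Python, of normalizing historical orthography before indexing. It is not the OAAP’s actual technology; the variant table and examples are invented for illustration and show only the general principle that a seventeenth-century spelling and its modern counterpart can be reduced to the same search key.

```python
import unicodedata

# Hypothetical table of historical variants: the long s and the u/v
# interchange common in early modern printing. A real system would need
# language- and period-specific tables built by specialists.
VARIANTS = {
    "ſ": "s",
    "v": "u",
}

def normalize(token: str) -> str:
    """Reduce a word to a crude search key: lowercase, fold known
    historical variants, and strip diacritics (e.g., é -> e)."""
    token = token.lower()
    token = "".join(VARIANTS.get(ch, ch) for ch in token)
    decomposed = unicodedata.normalize("NFD", token)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# An older "reuolucion" and a modern "revolución" collapse to the same key,
# so a single query can reach both spellings.
assert normalize("reuolucion") == normalize("revolución")
```

A production system would need far richer tables and would have to decide, case by case, whether collapsing variants erases distinctions that scholars of a particular language or period care about.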
But the archives of American nation formation will also be laced with documents featuring the hundreds of indigenous languages of the Americas. Some indigenous activists might claim, in fact, that the revolutionary era is far from over in some places in the Americas and that digital resources can play an important role in shaping political movements today—assuming those resources can be found and accessed. Questions of translation of and searching in indigenous languages have an impact on what Timothy B. Powell describes as “the struggle to identify and correct the narrative of the Vanishing Indian that lies hidden beneath the glossy surface of search engines and hyperlinks.”[21] While Powell concludes that teaching American Page 238literature can be enhanced by the use of such resources, questions have been raised in Australia about the appropriateness of outside access to indigenous databases. Elizabeth Povinelli argues that the Western orientation of searching, with its belief in the completeness and clarity of information access (Stewart Brand’s “information wants to be free”), should be interrogated when it comes to indigenous cultural resources, whose subjection to colonial expropriation could be extended into the digital realm.[22] To what degree should the generation of cultural resource databases be constrained by the protocols of the groups represented therein? What would interfaces informed by indigenous information protocols look like, and might American literature read differently through them? Implicit in these questions is another one, about who should be involved in the creation and curation of archives. The Aboriginal Voice National Recommendations, a report from a Canadian panel of First Nations representatives under the aegis of the Crossing Boundaries National Council, explicitly indicates that funding for information technology development, including electronic cultural repositories, should be structured so that it both helps link First Nations people with Canadians and strengthens self-determination through the generation of resources and networks within indigenous groups. Indigenous nations with active or potential electronic presence, whether officially recognized or not, bring the vexed questions of sovereignty together with the more familiar issues of access and intellectual property in digital humanities works.[23]
In the past, the audiences for literary archives were comparatively small. What does it mean to create a scholarly resource whose audience numbers not in the thousands but, over the not-so-long run, the millions? This shift of scale means that questions about the politics of digitization have been asked increasingly frequently in public forums beyond the academy. Anthony Grafton, in a 2007 New Yorker article titled “Future Reading,” posed the question of digitization of textual resources using a familiar rhetorical gesture: Will physical texts disappear with the Google Books revolution? Is it, in fact, a revolution? From a historian’s perspective, of course, revolutions are few and far between, so the obvious answer is no. From Grafton’s perspective as a historian of books at Harvard, the library seems a permanent fixture. Grafton briefly mentions some shortcomings of digitization, including the fact that preservation efforts have been largely limited to print, texts in English, narrative or reference works (rather than Page 239government records, private works, or other manuscripts), and books out of copyright.[24] But the ethics of the archive and access to it concern him little. The ideology of “democratic access” is just that: an idea, a political platform, not something a reasonable person would consider actualizable.
Illuminating, though, is Grafton’s insistence that the history of ways of finding information, ways of organizing it, is disjointed, heterogeneous, and likely to remain so. He reveals a critical symmetry between scholarly calls for widespread free access to information and the rhetoric of private companies promising to make information universally accessible. Google, one expects, has more to gain materially from such rhetoric (or its realization) than do scholars. “It’s not likely that we’ll see the whole archives of the United States or any other developed nation online in the immediate future,” Grafton points out, “much less those of poorer nations.” This is not news (especially in the wake of the Google scandal in China), but Grafton helpfully sees that an important reason we will not see complete digitization is that electronic archives constitute “not a seamless mass of books, easily linked and studied together, but a patchwork of interfaces and databases.” The challenge under the circumstances, he argues, is “to chart the tectonic plates of information that are crashing into one another and then to learn to navigate the new landscapes they are creating.”[25]
Some of the most significant scholarly digitization efforts are doing just that. Grafton cites the open-access All Patents Initiative as a boon to historians—but he does not mention SparkIP.com, a commercial research interface for the patent database built by the same folks who built the free interface. SparkIP is aimed at researchers and inventors—largely pharmaceutical companies and biotech manufacturers—who want to know what has not yet been discovered or patented, as much as what has been. The search algorithm for SparkIP is complex; it sorts by user search terms, but it also crawls through the patent files searching for commonalities, establishing links between patents based on semantic and referential links between documents. A strong link, for example, is forged when two patents cite the same two sources in their bibliographies. Thus it is possible not only to see how research clusters around certain topics but also where links have not yet been made. In a structurally similar way, researchers working on the Semantic Web are trying to come up with a metadata system to link heterogeneous bodies of digitized information through a set of umbrella categories that dynamically change as new information goes online, independent Page 240of how that new content is formatted. Groups like the Networked Infrastructure for Nineteenth-Century Electronic Scholarship (NINES) have, on a small scale, linked previously independent scholarly archival projects, across a range of disciplines, through an interface called Collex, which offers Web 2.0–style user tools for collecting, annotating, and sharing sources. The OAAP promises a similar federation of resources under the rubric of the development of nations and nationalism in the Americas.[26]
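The linking principle described above for SparkIP, in which documents are coupled by shared citations, can be illustrated with a short sketch. SparkIP’s actual algorithm is proprietary and far more elaborate; the data, names, and threshold below are invented for illustration only.

```python
from itertools import combinations

# Hypothetical bibliographies: each patent maps to the set of sources it cites.
citations = {
    "patent_A": {"smith2001", "jones1998", "wu2005"},
    "patent_B": {"smith2001", "jones1998"},
    "patent_C": {"lee2010"},
}

def coupling_links(docs, min_shared=2):
    """Yield pairs of documents that cite at least `min_shared` of the same
    sources, along with the shared citations that link them."""
    for (a, refs_a), (b, refs_b) in combinations(docs.items(), 2):
        shared = refs_a & refs_b
        if len(shared) >= min_shared:
            yield a, b, shared

for a, b, shared in coupling_links(citations):
    print(f"{a} <-> {b} via {sorted(shared)}")
# Only patent_A and patent_B are linked; patent_C sits in unclustered space,
# the kind of gap a researcher scanning for open territory would notice.
```

The pairs that do not appear in the output are as interesting as those that do: they mark the places where research has not yet clustered.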
So areas of scholarly research are being brought together by some digitization efforts, not just tectonically separated. Still, building those bridges is expensive. Google’s economic and institutional power is an important aspect of the financial context within which literary archival or analytical projects develop today. There is no common standard for choosing what should get digitized and what should not. Indeed, private entities like the Mellon Foundation have quite different priorities than does, say, the National Endowment for the Humanities. While generative in many ways, this diversity means that there is little conversation in major venues for literary scholarship about why some digital resources get created, funded, and promoted and why others do not. Literary archives in particular face the challenge of raising more money than is customary for humanities projects; raising funds means tying a project to the politics of donors.
Google itself, as the biggest developer of search technologies and the engine most used by students in North America, offers potential political dilemmas to scholarly partners. Leaving aside the questions of copyright, comparative linguistic uniformity, and selection raised by critics of its book-scanning program, Google offers economical solutions for digital challenges that are tempting, building itself into the scholarly infrastructure, in bits and pieces, through APIs.[27] The Whitman Archive’s search engine is Google-based, temporarily solving a problem faced by many archives, which is that designing search queries and interfaces for richly tagged data sources is difficult, expensive, and time-consuming. The mass digitization project of Google Books seems also to solve a problem for major research libraries struggling to decide what portions of their budgets should go to digitization, which can appear to be a bottomless sinkhole for staff time and funds. Google Maps is beginning to appear all over the terrain of digital humanities archives, as visualization becomes more and more the focus of funders and promoters of electronic scholarship. Yet Google’s collaboration with China’s censorship practices is out of step with the ideals of many of Page 241the projects that use Google’s tools. It may be time for scholarly archives to start finding collective solutions to the economies of developing searches; this is a matter of prioritization, not scarcity of options.
Some examples of the kinds of questions we should be asking about American literary archives in the digital age may be helpful. Rather than pick on other projects out there, I will start by critiquing the Walt Whitman Archive, at which I have worked for over a decade (and where many of the contributors to this volume also had their digital humanities apprenticeships). The Whitman Archive features translation, original page scans, standards-based markup, and free access, and it has drawn high-level attention in literary studies recently. It was the subject of an entire forum in a recent issue of PMLA, which does not usually devote many pages to digital work. But the Whitman Archive exemplifies and, to an extent, struggles with some of the problems I have just outlined.
It is surprising to see, among the criticisms of the Whitman Archive in the PMLA forum, no mention of the fact that its XML is unavailable for download, that its search engine cannot use the deep markup we have used, or that it lacks user accounts or other community-hosting capabilities. Each of these issues is crucial in assessing a scholarly resource, not because there is an ideal configuration of these elements, but because each contributes to the shape and argument of a project. The staff of the Whitman Archive have debated each of these issues for years and have at times had hard choices to make about them. Offering our XML remains under discussion: we do give away the code for our Spanish translation edition, under a Creative Commons license, and will, I hope, build on that precedent in the future. The search engine is a more difficult problem, because developing nonproprietary search capabilities is expensive, difficult, and time-consuming. The Whitman Archive has tried several approaches without finding an adequate solution and continues to try new ones. It may be partly because of the time and expense of trying to develop a rich search interface that the archive has not prioritized user accounts and interfaces for community interaction.
Less visible than these issues is the fact that the Whitman Archive contains a set of identifiers, called “Work IDs,” that users cannot see because they are embedded in the XML tags. Together with the Document Type Definition (which defines the tags we use and their hierarchical relationships), the Work IDs materialize, in a meta-structure, the intellectual axis Page 242of the archive. Ultimately, this will be one of the most powerful aspects of the archive, because it will allow users to see the relationships—established by the editors—among different objects in the archive. It is bound also to be controversial. The public information about the Work ID structure explains that “a ‘work’” is defined as “the abstract idea of a poem or book, etc. We name the work according to the last instance published in Whitman’s lifetime.” The “etc.” is a clue to the slipperiness of the definition of the “work.” What if something was not published? What establishes a subset of Leaves of Grass as worthy of a unique identifier? What if Whitman’s contemporaries thought he wrote a piece, but we have later learned he did not? Meredith McGill is righter than she could know in her critique of the Whitman Archive when she says that “the effect of the archive’s design is to streamline Whitman’s writing so that it begins with, gravitates toward, or orbits around the masterwork Leaves of Grass.”[28] When more prose is online, this will seem less the case, but the poetry may still predominate, since it is less the selection of texts and more the Work ID structure (which will name each poem or poem draft but not, say, each paragraph of a prose text) that encourages the eschatological orientation McGill criticizes. With this and other effects of the process of remediating Whitman’s oeuvre in mind, as we continue to encode more documents and reference them from each other, the definition of the Work ID will be refined or, perhaps, kept deliberately, productively loose. It will be important for us to make public that definition, its dynamism, and how it came into being.
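A minimal sketch may clarify how such embedded identifiers work. The element and attribute names below (doc, workID) are hypothetical, not the Whitman Archive’s actual tag set, but they show how an editorial judgment that two documents instantiate the same abstract “work” becomes machine-readable structure.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical catalog: each <doc> carries a workID attribute naming the
# abstract work it is judged to instantiate. The names are invented; they
# are not the Whitman Archive's actual identifiers.
xml_source = """
<archive>
  <doc file="poem-draft-1.xml" workID="leaves-of-grass.song-of-myself"/>
  <doc file="poem-draft-2.xml" workID="leaves-of-grass.song-of-myself"/>
  <doc file="letter-1860.xml" workID="correspondence.1860-03-17"/>
</archive>
"""

# Group documents by the work they are editorially judged to belong to.
works = defaultdict(list)
for doc in ET.fromstring(xml_source).iter("doc"):
    works[doc.get("workID")].append(doc.get("file"))

for work, files in works.items():
    print(work, "->", files)
# Two drafts resolve to one abstract work; which relationships a user can
# later see depends entirely on how the editors define "work."
```

Whatever definition of “work” the editors settle on is thereby built into every relationship the interface can later display.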
In making these brief critical comments, I believe I am embodying what I consider to be one of the Whitman Archive’s strongest points: it takes shape through conversation and difference of opinion, rather than a truly “unified” editorial theory. The appearance of editorial unity on a collaborative enterprise—including the print-based ones of the past—is partly an illusion of context and analytical framing; it does not grow solely out of the actions of editors. In truth, because the Whitman Archive has become an institution, a publishing venue, an editorial project, a preservation system, a laboratory for new information systems, and a training space, heterogeneity of approach is not just salutary but necessary. Different sections of the archive have different interfaces, which means different frameworks of interpretation are posited and encouraged. This heterogeneity is expensive and time-consuming to sustain, so we have implicitly, if not explicitly, made it a priority. In his response to Folsom’s essay in PMLA, Jonathan Page 243Freedman criticizes the “treatment of The Walt Whitman Archive as a product of inspired editorship by Folsom and his colleagues and elevation of database into a self-maintaining . . . genuinely collective, genre-transcending human agency.”[29] This perception, I would argue, results from the fact that Folsom and Price are the dominant voices representing in print the Whitman Archive’s enormous staff. In fact, there is considerable disagreement within the archive about its approach and potential. What makes the archive a good collaboration is neither its editorial unity nor its databaseness but the disagreements about how to think about it and how to build it. A key advantage of the structure of the Whitman Archive going forward is that it is a rare collaboration in the humanities that fosters dissent within its bounds in order to help answer difficult questions both about the material and about the politics and economics of new literary archives.
Still, the bounds of that collaboration might be imagined wider. The construction of the Whitman Archive might systematically extend beyond the academy, might break down the (admittedly strategic) distinction I have made in this essay between the inside and the outside of an archive. The simple way of describing what the Whitman Archive might do would be to say that it could move from a Web 1.0 (content-focused) model to a Web 2.0 (interaction-focused) model. Yet users have been wrangling and mangling our data at the Whitman Archive ever since we put it out there; selections from our texts, our images, and even our background graphics can be found mashed up all over the Web. The distinction, then, needs more elaboration. Web 1.0 is not over, first of all—most of the world’s manuscripts, much of its print and architecture, much of its sheet music, and so on remain undigitized. The distinction between rich markup of data and simple mass capture is one of the most important ones to keep in mind in assessing the political importance of digital archives. Weak digitization, such as the unchecked transcriptions generated through Optical Character Recognition (OCR) in Google Books or completely untranscribed images or sound files, does not move the humanities forward. So the question is not merely how to “unscrew the doors themselves from their jambs,” as Whitman put it in 1855, but how to do so in a way that takes specific advantage of the value added by scholarly labor.[30]
There are a few basic things that the Whitman Archive can do to broaden its potential uses and impact in this light. It can make its XML freely available to users (who might make their own modifications or create Page 244their own stylesheets) under a Creative Commons license. It might even develop tools that allow users to modify stylesheets in a modular way, to look at primary texts in different ways, better to exploit the archive’s XML markup. At the least, providing searches that use that markup makes sense. To create some sense of community at a time when such functionality is increasingly de rigueur on the Web, a public forum might be provided, or user accounts that allow for the caching, annotation, and sharing of archive content. The utility of audience review has limits, admittedly: most of the people in our audience will not be interested in, for example, the intricacies of marking up the ink color of one of Whitman’s marginal notations on an obscure newspaper article about trains; some of the people in our audience, while loveable, perhaps kind, and sincere, have interests to promote that are persistently off-topic. Still, at least two things are worth recalling. First, standardized markup gives us the power to represent the same content in multiple ways, so we can have a spectrum of avenues into the material, among which users can toggle for different archival “feels.” Second, there are ways of reaching out to audiences that will give structure to their participation without predefining what comes out of a collaboration.[31] In the era of mass digitization, whether the resources are made by scholars or by Google, creative interface design, cheap tools for analyzing vast bodies of data (e.g., data and text mining, topic mapping, the Semantic Web, and similar approaches), and carefully cultivated, integrative relationships between archives and audiences will shape humanities scholarship.
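What “searches that use that markup” might mean can be suggested with a brief sketch. The elements below (del, add) follow TEI conventions, but the fragment and the revision it records are invented for illustration; this is neither the Whitman Archive’s schema nor its search engine.

```python
import xml.etree.ElementTree as ET

# An invented TEI-like transcription fragment recording a revision.
transcription = """
<line>the <del rend="overstrike">first</del>
  <add place="supralinear">earliest</add> draft</line>
"""

root = ET.fromstring(transcription)
deleted = [el.text.strip() for el in root.iter("del")]
added = [el.text.strip() for el in root.iter("add")]
print("struck out:", deleted)
print("added above the line:", added)
```

A search engine that strips the tags sees only a run of words; one that reads them can answer questions, such as showing every passage the poet struck out, that the markup was created to make answerable.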
Tim Powell’s work developing a resource database and interface with the Eastern Band of Cherokee Indians at the Digital Library of Georgia offers a good closing example of the multilayered nature of archival politics. For Powell, beginning to address questions about how digital archives create knowledge involves politics at three levels. Obviously, at the national level, making available the Cherokee archive helps expose a history of official policies of dispossession and their effects. At the level of the generation of the archive itself, there are local, disciplinary politics, since the archive contents are no longer solely in the hands of content experts. “Allowing the students to write for a website designed to accompany the archives turned out to be a very rewarding method of putting politics into practice and, in a small but meaningful way,” Powell stresses, “improving the teaching of Cherokee culture.” Finally, at the level of professional humanities work, Powell points to a politics of archive building that is familiar to me and to Page 245many other early career scholars who are involved in this work. Powell’s work with the Digital Library of Georgia had no official designation for the first five years, “nor,” he says, “did much of this work ‘count’ on my curriculum vitae or annual report.” He did it anyway, because it offered “a supportive community and, although this took many years to acknowledge, a growing awareness of how digital technology’s power allowed me to realize a political vision that I had written about in books but never fully implemented in academe.”[32]
“We are in the midst of an event of very large proportions,” proclaims Mark Poster, “an emergence that is best studied closely and incorporated into one’s political choices.” “In this conjuncture,” he emphasizes, “discourses that rhetorically paralyze the spirit are especially noxious, however realistic and wise they might appear.”[33] The questions facing developers of American literary resources in electronic form are daunting, but the opportunities to change the world, as Poster suggests, are as great as the challenges. The discourses that Poster emphasizes are certainly important, and so are actual technologies that unparalyze resources, bringing them into relation with others. So, too, are the economics of access: I would venture that there is no encoding choice we have made at the Walt Whitman Archive that is as significant as our decision to keep the archive freely available to all visitors.
Kevin Hearle would agree with Anthony Grafton that the revolution brought about by electronic access is not much of a revolution. Hearle has argued that for independent scholars trying to use university resources, the old open-stacks, no-login-necessary system was better.[34] Others have made this case about the expense of e-journals and their model of subscription access, in which libraries never actually own the materials for which they have paid. This could be regarded as an opportunity lost more than an injustice—universities have long been institutions that protect access to their knowledge resources more or less jealously, depending on the school. So American literary archives, if they embrace the open-access model, can potentially make a formal argument for a different kind of humanities, the digital equivalent of what Whitman famously called “the new life of the new forms” in his preface to the first edition of Leaves of Grass.[35] Implicitly and explicitly, such archives begin to pose the question of whether the entire social field surrounding literary study, including the role of academic authority and the relationships between “fans” and “experts,” should be redefined Page 246using Web 2.0 approaches, public outreach initiatives, and sustainable funding strategies (including both federal and private capital partnerships). This level of political engagement and change is often no longer solely in the hands of slow-moving academic institutions but is in the hands of small groups of editors, historians, and archivists themselves, who will be not just telling literary history but making the spirit of a new cultural future.
Notes
For conversations that shaped this essay, I thank Paolo Mangiafico, Matt Kirschenbaum, Bethany Nowviskie, Erica Fretwell, Bart Keeton, Janine Barchas, Kevin Webb, Terry Catapano, Brian Bremen, Cole Hutchison, Lars Hinrichs, Ken Price, Daniel Pitti, Jerome McGann, Johanna Drucker, Andy Jewell, Amy Earhart, Vanessa Steinroetter, Rachel Price, Ed Gomes, Deborah Jakubs, and the Bibliography and Textual Studies Group at the University of Texas at Austin.
1. For a discussion of the semantic difficulties surrounding such work, see Kenneth M. Price, “Edition, Project, Database, Archive, Thematic Research Collection: What’s in a Name?” Digital Humanities Quarterly 3, no. 3 (Summer 2009), http://www.digitalhumanities.org/dhq/vol/3/3/000053/000053.html.
2. John Lavagnino, “When Not to Use TEI,” in Electronic Textual Editing, ed. Lou Burnard, Katherine O’Brien O’Keeffe, and John Unsworth (New York: Modern Language Association, 2006).
3. Melville’s Marginalia Online, http://www.boisestate.edu/melville/index.html; see also Jennifer Howard, “Call Me Digital,” Chronicle of Higher Education 52, no. 24 (17 February 2006): A14.
4. Matthew G. Kirschenbaum, “Editor’s Introduction: Image-Based Humanities Computing,” Computers and the Humanities 36 (2002): 3–6, at 4.
5. N. Katherine Hayles, My Mother Was a Computer: Digital Subjects and Literary Texts (Chicago: University of Chicago Press, 2005), 64.
6. Jerome J. McGann, The Textual Condition (Princeton: Princeton University Press, 1991), 11.
7. Raymond Williams, Marxism and Literature (Oxford: Oxford University Press, 1977), 163.
8. Ann Laura Stoler, Along the Archival Grain: Epistemic Anxieties and Colonial Common Sense (Princeton: Princeton University Press, 2009).
9. Meredith McGill, “Remediating Whitman,” PMLA 122, no. 5 (October 2007): 1592–96, at 1593; Ed Folsom and Kenneth M. Price, eds., Walt Whitman Archive, http://www.whitmanarchive.org.
10. Examples of such federations include both umbrella groups like HASTAC and ones that serve smaller interest groups, such as NINES.
11. Janet Gertz, “Vendor Relations,” in Handbook for Digital Projects: A Management Tool for Preservation and Access, ed. Maxine K. Sitts (Andover: Northeast Document Page 247Conservation Center, 2000), 151–52. For more on this and other aspects of outsourcing digitization, see the essays in the 2003 report by the National Initiative for a Networked Cultural Heritage, “The Price of Digitization: New Cost Models for Cultural and Educational Institutions,” http://www.ninch.org/forum/price.report.html (accessed 29 August 2009); Daniel J. Cohen and Roy Rosenzweig, Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web (Philadelphia: University of Pennsylvania Press, 2005), especially 103–7. New NEH initiatives for international digital humanities partnerships might help address some of the questions about development raised here, by allowing scholars to leverage the infrastructural strengths of their respective regions.
12. See, e.g., Cathy Davidson, “Humanities 2.0: Promise, Perils, Predictions,” PMLA 123, no. 3 (May 2008): 707–17.
13. Biophysicist Rosalind Franklin’s X-ray diffraction images contributed to the “double helix” model devised by James Watson and Francis Crick to show the architecture of DNA. The question of whether she should have been included as a coauthor of the resulting publications has been hotly debated. See Brenda Maddox, Rosalind Franklin: The Dark Lady of DNA (New York: HarperCollins, 2002).
14. See, e.g., the discussion and annotation of the Ithaka report “Scholarly Publishing in a Digital Age” using CommentPress, hosted by the Institute for the Future of the Book, at http://scholarlypublishing.org/ithakareport/ (accessed 14 October 2008).
15. For critiques of the structurally similar business model based on “user-generated content,” see Tiziana Terranova, “Free Labor: Producing Culture for the Digital Economy,” Social Text 8, no. 2 (2000): 33–58; Andrew Lowenthal, “Free Media vs. Free Beer,” Transmission, March 2007, http://www.transmission.cc/node/86 (accessed 9 October 2008).
16. Legal definitions of joint copyright resulting from collaboration may be useful; see Paul Goldstein, International Copyright: Principles, Law, and Practice (New York: Oxford University Press, 2001); effectively, to warrant joint ownership, the contribution a person makes to a work must be copyrightable on its own—an original expression of some kind, as differentiated from, say, proofing. Collaboration on digital projects may strain these definitions going forward; debates about the copyright status of Wikipedia entries seem a preview of this. See http://en.wikipedia.org/wiki/Wikipedia:Copyrights for the latest state of copyright at Wikipedia.
17. Siva Vaidhyanathan, The Anarchist in the Library (New York: Basic Books, 2004); Lawrence Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (New York: Vintage, 2002) and Web site and blog, http://www.lessig.org/; James Boyle, The Public Domain: Enclosing the Commons of the Mind (New Haven: Yale University Press, 2008).
18. See Matt Cohen, “Untranslatable? Making American Literature in Translation Digital,” Modern Language Studies 37, no. 1 (Summer 2007): 43–53.
19. See also the Defense Advanced Research Projects Agency Web site, Page 248http://www.darpa.mil, for a list of projects. Those with an interest in linguistics or translation will be particularly interested in the Global Autonomous Language Exploitation (GALE) Program; see http://www.darpa.mil/ipto/programs/gale/gale.asp. On the use of open information sources, including Google, for government intelligence gathering, see Robert O’Harrow Jr., “Even Spies Go to Trade Conferences,” Washington Post, 13 September 2008, D01.
20. See the OAAP Web site, http://oaap.rice.edu/. I am a member of the advisory board for this project. It is funded by an Institute of Museum and Library Services National Leadership grant, which supports work at Rice University and the University of Maryland.
21. Timothy B. Powell, “Digitizing Cherokee Culture: Building Bridges between Libraries, Students, and the Reservation,” MELUS, Summer 2005, par. 4.
22. Elizabeth Povinelli, “Recognizing Digital Divisions, Circulating Socialities” (talk given at Duke University, 26 November 2007); Stewart Brand, The Media Lab: Inventing the Future at MIT (New York: Penguin, 1987), 202. For an interface to aboriginal materials that formally challenges the Western ideal of transparency, see Chris Cooney and Kim Christen, “Digital Dynamics across Cultures,” Vectors, Spring 2006, http://vectors.usc.edu/index.php?page=7&projectId=67.
23. Aboriginal Voice National Recommendations: From Digital Divide to Digital Opportunity, Crossing Boundaries Papers, vol. 5 (November 2005), available at http://knet.ca/documents/Aboriginal-Voices-Final-Report-Vol5_Doc_051122.pdf.
24. Anthony Grafton, “Future Reading: Digitization and Its Discontents,” New Yorker, 5 November 2007, 50–54. For a somewhat panicky but usefully specific argument about what is not getting digitized, see Katie Hafner, “History, Digitized (and Abridged),” New York Times, Sunday Business section, 11 March 2007, 3.1, 3.8–9.
25. Grafton, “Future Reading,” 53.
26. See SparkIP, http://www.sparkip.com; NINES, http://www.nines.org; Collex, http://nines.org/collex.
27. APIs, or application programming interfaces, allow applications to access operating systems, libraries, or services. See Grafton, “Future Reading”; Clive Thompson, “Google’s China Problem (and China’s Google Problem),” New York Times Magazine, 23 April 2006, http://www.nytimes.com/2006/04/23/magazine/23google.html; Jean-Noël Jeanneney, Google and the Myth of Universal Knowledge: A View from Europe, trans. Teresa Lavender Fagan (Chicago: University of Chicago Press, 2006); Siva Vaidhyanathan, “The Googlization of Everything,” http://www.googlizationofeverything.com/.
28. Walt Whitman Archive, “Encoding Guidelines,” http://segonku.unl.edu/whitmanwiki/pmwiki.php/Main/EncodingGuidelines (accessed 30 October 2008); McGill, “Remediating Whitman,” 1594. Folsom’s response to McGill respecting the generic priorities of the Whitman Archive suggests caution about assessing electronic resources as if they are fixed, rather than evolving, entities; see Ed Folsom, “Response,” PMLA 122, no. 5 (October 2007): 1608–12, at 1611.
29. Jonathan Freedman, “Whitman, Database, Information Culture,” PMLA 122, no. 5 (October 2007): 1596–1602, at 1601.
30. Walt Whitman, Leaves of Grass (New York, 1855), 29, quoted from the Walt Whitman Archive, http://www.whitmanarchive.org/published/LG/1855/whole.html (accessed 1 October 2008).
31. The Advanced Papyrological Information System (APIS), extending the work of the Duke Data Bank of Documentary Papyri, has been developing a suite of tools that will distribute the labor of creating a scholarly resource even more widely, allowing owners of ancient documentary papyri to contribute transcriptions, images, and bibliographical descriptions to a centralized database. See http://www.columbia.edu/cu/lweb/projects/digital/apis/index.html.
32. Powell, “Digitizing Cherokee Culture,” par. 4, par. 8.
33. Mark Poster, Information Please: Culture and Politics in the Age of Digital Machines (Durham: Duke University Press, 2006), 268.
34. Kevin Hearle, “Degrees of Difference,” American Periodicals: A Journal of History, Criticism, and Bibliography 17, no. 1 (2007): 118–21.
Encoding Culture: Building a Digital Archive Based on Traditional Ojibwe Teachings
The advent of digital technology is undoubtedly changing our understanding of the origins and story lines of American literary history, as Randy Bass suggests.[1] This interpretive shift offers a critically important opportunity to think more carefully about the place of Native American expressive culture as an integral, albeit long-neglected, part of “American literature.” While most anthologies in the field now include an opening section on indigenous Page 251origins—irresponsibly reducing thousands of years of precolonial storytelling to a few pages—the selections are invariably limited to stories that fit within the parameters of the white printed page. Rather than reviewing this history of exclusion yet again, I will assume here that the field is ready to acknowledge that indigenous stories are indeed part of American literary history, whether they appear in the form of the oral tradition, rock art, narratives woven in wampum belts, or pictographic images inscribed on birch bark.[2] This may be an overly generous assumption. Nonetheless, my point is to demonstrate how digital technology can be utilized to extend the formal boundaries of the field and to create exciting new interpretive opportunities by taking seriously, at long last, the idea that the Ojibwe “epistemology of beginnings” is an intellectually valid interpretive paradigm.[3] In doing so, the Gibagadinamaagoom digital archive (http://gibagadinamaagoom.info/), whose name means “to bring to life, to sanction, to give authority,” devotes itself to sanctioning the intellectual sovereignty of indigenous wisdom carriers, so that the question of whether American literary history begins with the Puritans or Columbus becomes moot as we set off in search of much deeper origins, wondering whether we are “worthy to translate wind” or to record “knowledge [that existed] long before humans.”[4]
Although victory over Eurocentrism was declared long ago, the field of American literature—particularly in its new instantiation as digital archives devoted to the subject—continues to struggle to achieve greater cultural diversity. This is not to say that the digital archives devoted to canonical American authors are not intrinsically valuable and highly sophisticated. To the contrary, they have set the standard for this new form of literary criticism and greatly inspired the work being done on the Gibagadinamaagoom project. Amanda Gailey articulates the present dilemma in her paper “Digital American Literature: Some Problems and Prospects.” Describing “the strange relationship between the selective canon of print literature and the body of texts digitized by digital libraries and digital scholarly editions,” Gailey writes,
Digital scholarly editions in American literature tend to focus conservatively on highly canonical authors (such as Whitman and Dickinson), and foreground compositional histories by displaying manuscript drafts, applying markup that highlights authorial process, etc. This approach asserts an author-centered view of literature and has resulted in the digitization Page 252of minutiae by a few great authors while the major works of slightly less canonical authors (such as Poe) have been altogether neglected.[5]
From the perspective of the Ojibwe wisdom carriers with whom I work, the concern obviously extends well beyond the exclusion of Edgar Allan Poe, although Gailey’s point is well taken. Again, my energies here are devoted not to another critique but to an affirmation of new media’s potential to integrate cultural codes and digital codes and to expand the scope of American literature beyond “the book.”
To be fair, the current focus on canonical authors derives not from a lack of critical imagination but from all-too-real constraints that continue to confine digital scholarship. As Jerome McGann writes in “Culture and Technology: The Way We Live Now, What Is to Be Done?”: “Digital scholarship—even the best of it . . . [is] typically born into poverty—even the best funded ones. Ensuring their maintenance, development, and survival is a daunting challenge.”[6] Given the enormous expenditures of time, expertise, and money needed to build a state-of-the-art digital archive, it is simply more financially feasible to digitize texts that have already been carefully edited in paper form. The problems grow exponentially when one endeavors to design an archive of traditional Ojibwe knowledge manifest as pictographs etched on birch bark, drums, ceremonial regalia, and treaty minutes, which are enlivened by the stories of Anishinaabe chi-ayy ya agg (Ojibwe wisdom carriers).[7] The Gibagadinamaagoom project received an NEH grant in 2007, which enabled us to create several prototypes. It is still, however, very much a work in progress. Even at this early stage of development, we have learned a great deal that I hope will be of interest to digital Americanists and to the Ojibwe students who will use these digital exhibits to learn their language and to revitalize their culture.
Even though the term interdisciplinary is frequently bandied about by university presidents, deans, and faculty, this popular notion rarely translates into working with tribal historians, literary artifacts housed in museums, digital curators from the library, and humanities scholars. Yet this is precisely the partnership that needs to be engaged if we are to think beyond the legacy of print culture and to trace these story lines back to their indigenous origins. More specifically, the present essay will focus on the thought process that created a digital exhibit about one specific Ojibwe artifact Page 253housed in the Penn Museum, where I work: a pictograph of animikii (thunderbird) inscribed on a birch bark case. Using digital video, flash animation, and three-dimensional imaging, in conjunction with stories told by Ojibwe chi-ayy ya agg (wisdom carriers), the goal of the Gibagadinamaagoom digital archive is to bring this object to life and to listen intently to the stories it has to tell.[8]
As already mentioned, in the Ojibwe language, Gibagadinamaagoom (Gee-bag-ah-DEEN-ah-ma-GOOM) means “to bring to life, to sanction, to give authority.” The archive dedicates itself to sanctioning the intellectual sovereignty of Ojibwe chi-ayy ya agg, who possess authority, conferred on them by the tribe, to tell stories that bring empowered objects (artifacts) and history to life. From an Ojibwe perspective, digital technology is valuable because its interactive qualities allow viewers to ask the elders about their history, to look into their eyes, and to hear chi-ayy ya agg speak in their own language and on their own cultural terms. Three-dimensional imaging, in turn, creates greater access to artifacts housed in museums that might otherwise never be seen by students growing up on Ojibwe reservations, and it significantly expands the meaning of the description “literary text.”[9]
There are, however, many dimensions of this dynamic interchange that are simply not possible to explain within the margins of the white page. The University of Michigan Press’s decision to publish The American Literature Scholar in the Digital Age both in print format and on the digitalculturebooks open source Web site creates a unique opportunity to demonstrate how digital technology makes it possible to give the literary text a highly sophisticated cultural and spiritual context, an interpretive framework that would not be available without the full partnership of the Ojibwe wisdom carriers. Thus, the digital and paper-based versions of this creative diptych work together to tell a single story—how digital technology can more accurately and artistically represent the indigenous origins and spiritual story lines of expressive culture on these continents.
Materiality and Spirituality
In all honesty, it is not easy to explain the relationship between the ancient symbol of animikii (thunderbird), birch bark media, and XML codes. I make no pretense to having “mastered” these complexities. Yet I do believe Page 254that there is something very special about this moment in history and the convergence between digital technology’s unique tools and the Ojibwe wisdom keepers’ willingness to work with this new technology to preserve the old ways.[10] Ironically, whereas the Ojibwe elders working on the project have been quick to grasp digital technology’s unique powers, cybertheorists seem to be struggling to imagine how digital and cultural codes can be effectively integrated. Tara McPherson, writing in a recent special issue of Vectors: Journal of Culture and Technology in a Dynamic Vernacular, notes, “I am continually amazed by how easy it is to hold these two types of work [race and digital media] apart and have come to believe that the very forms of electronic culture encourage just such a partitioning.”[11] In “Cultural Difference, Theory, and Cyberculture Studies,” Lisa Nakamura makes a similar point: “Where is race in this picture? . . . The only way to explain this glaring omission [in cyberculture critique] is through a theory of mutual repulsion.”[12] Perhaps David Silver put it best in his introduction to Critical Cyberculture Studies (2006), when he wrote, “Critical cyberculture studies [now] approaches cultural difference . . . front and center, informing our research questions, frameworks, and findings. The bad news is that we have a long way to go.”[13]
Based on my own experience working with Ojibwe wisdom keepers, I have not found that the cultural, spiritual, and digital dimensions of the archive tend toward the type of “partitioning” and “mutual repulsion” McPherson and Nakamura describe. This surprising insight can perhaps be traced to a deeper set of meanings about “history” and “technology.” To the chi-ayy ya agg, digital technology does not represent a radical break with the past—as implied by the term postmodernism. Rather, the tribal historians working on the project see this new technology as part of an ancient continuum, wherein the Ojibwe have long (for thousands of years) embraced new technology—whether it be carving new kinds of projectile points or accepting the gift of the dance drum from the Dakota—to revitalize their culture. Perhaps, then, the problem that McPherson and Nakamura describe is not necessarily embedded within “the very forms of electronic culture” but derives from certain perceptions about digital technology.
My hope that this problem can be overcome in the near future has been galvanized by Matthew Kirschenbaum’s brilliant new book Mechanisms: New Media and the Forensic Imagination. As Kirschenbaum points out, new Page 255media has been haunted by the widely held view that digital technology constitutes a postmodern phenomenon. (Significantly, both McPherson and Nakamura cite postmodernism as a cause of the problem they seek to overcome.) Postmodernism’s problematic legacy rests on two interrelated assumptions: (1) because electronic texts are infinitely reproducible, the cultural dimensions that characterize the “original” are lost in endless repetition; (2) because any and all content in digital archives ultimately ends up encoded in a “universal language” of zeroes and ones, new media’s capability of representing cultural specificity is inherently compromised.
Mechanisms addresses both of these perceptions directly and, through a minutely detailed analysis that Kirschenbaum calls “computer forensics,” reveals a far more complicated and heterogeneous terrain of electronic textuality. Challenging the “postmodern argument about the digital simulacrum—copies without an original,” Kirschenbaum focuses relentlessly on the inscription mechanisms of the hard drive, to prove that “electronic objects can be algorithmically individualized” to such a degree that the bitstreams that encode data are “in fact a more reliable index of individualization than DNA testing.”[14] This specificity productively counters the problematic “narrative” that any and all content is “reinscribed as the universal ones and zeroes of digital computation.” Kirschenbaum’s intent is to demonstrate how “forensic and formal materiality” restores digital technology’s reliability for presenting highly specified information.[15] In light of Kirschenbaum’s findings, I would argue that it is indeed possible to translate the inscription of animikii (thunderbird) on wiigwaas (birch bark) into digital form without sacrificing the cultural, historical, and spiritual integrity of the original.
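Kirschenbaum’s point about algorithmic individualization can be glimpsed, in a deliberately simplified way, through ordinary cryptographic hashing: two copies of “the same” text, saved with different encodings or line endings, render identically on screen yet carry different bitstreams, and each bitstream can be fingerprinted. The sketch below is my own illustration rather than anything drawn from Mechanisms; the file names and the choice of SHA-256 are assumptions made purely for demonstration.

```python
import hashlib

def bitstream_fingerprint(path: str) -> str:
    """Return a SHA-256 digest of a file's raw bytes (its bitstream)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Two copies of "the same" text saved with different encodings and line
# endings look identical when rendered, yet their bitstreams differ.
with open("animikii_utf8.txt", "w", encoding="utf-8", newline="\n") as f:
    f.write("animikii (thunderbird)\n")
with open("animikii_utf16.txt", "w", encoding="utf-16", newline="\r\n") as f:
    f.write("animikii (thunderbird)\n")

print(bitstream_fingerprint("animikii_utf8.txt"))   # one digest
print(bitstream_fingerprint("animikii_utf16.txt"))  # a different digest
```

The example only gestures at what Kirschenbaum means by forensic materiality, but it makes the basic claim tangible: at the level of the bitstream, digital objects are specific, not interchangeable.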
Because Kirschenbaum’s work concentrates so intently on the formal and forensic materiality of computer systems, detailed questions about cultural specificity fall outside the parameters of his analysis. I hope to reintroduce the question of culture by going back to a moment early in Mechanisms where Kirschenbaum recounts the intellectual origins that influenced his own understanding of “materiality,” namely, the following passage from Johanna Drucker’s The Visible Word: Experimental Typography and Modern Art, 1909–1923.
The force of stone, of ink, of papyrus, and of print all function within the signifying activity—not only because of their encoding within a cultural Page 256system of values whereby a stone inscription is accorded a higher stature than a typewritten memo, but because these values themselves come into being on account of the physical material properties of these different media. Durability, scale, reflectiveness, richness and density of saturation and color, tactile and visual pleasure—all of these factor in—not as transcendent and historically independent universals, but as aspects whose historical and cultural specificity cannot be divorced from their substantial properties.[16]
This understanding is quite abstract, as some of my nonacademic readers will surely be quick to point out to me. Hopefully, translating Drucker’s theoretical insights into a more culturally specific manifestation of Ojibwe epistemology can help bridge the worrisome gap between academic prose and the incarnation of animikii (thunderbird) seen in figure 1. The “force” Drucker associates with the media manifests itself here in the materiality of the birch bark and the inscription of animikii, which are interrelated “because of their encoding within a cultural Page 256system.” More specifically, birch bark is associated with Ojibwe traditional spiritual archives inscribed on sacred scrolls.

Fig. 1. Thunderbird on birch bark, University of Pennsylvania Museum of Archaeology and Anthropology. (Photograph by David McDonald.)
The birch bark media depicted in figure 1 invokes the sacred Midewiwin scrolls—an indigenous inscription mechanism and a form of precolonial archives still maintained by the tribe.[17] No scrolls are depicted in the Gibagadinamaagoom archive, however, because the chi-ayy ya agg (wisdom keepers) who form the Board of Permission Givers for the project have deemed such sacred material inappropriate for use in a digital archive. In this sense, animikii serves as a central image for the project, both as a spiritual messenger who carries stories from Creator’s world and as a powerful protector who guards the tribe’s most sacred pictographic writings on wiigwaas (birch bark).[18]
A Digital Archive Dreams of Thunderbirds
When we can look at an eagle and see it not only as beautiful but also as incarnate of thunderbird, who carries messages to Gitche Manidoo—if we can do this, we realize we are encumbered with the power to understand. But our downfall is lack of humility.[19]
What does it mean to be “encumbered with the power to understand”? From what I have been taught, it is a process that begins with a profound sense of humility—the realization that a PhD does not confer on an academic the right to appropriate this knowledge for publication or self-promotion. Understanding requires a sincere willingness to listen and to wait patiently for meanings to unfold. The reader should bear in mind that what follows is a highly imperfect translation of animikii’s story and that further clarification should be sought from tribally authorized Ojibwe wisdom keepers. This version is neither “true” nor “definitive.” I have been authorized only to say what animikii (thunderbird) means to me at this particular moment in time. I want to begin, then, by stating unequivocally that because I am a novice in Ojibwemowin (the Ojibwe language), my understanding of such powerful symbols is limited, though I will share with you what I know.
To use the first person in the previous sentence, “I want to begin . . . ,” Page 258constitutes the first misstep—“our downfall is lack of humility”—for the story does not begin with me, the author of this essay. According to the Ojibwe “epistemology of beginnings,” the story originates with animikii (thunderbird), oshkaabewis (messenger or translator) to Gitche Manidoo (Creator). The story that follows begins within the familiar framework of chronological time and then gradually shifts to the spiritual temporality of Ojibwe storytelling.
In the winter of 2006, I was hired to be the first director of the Penn Center for Native American Studies. Frankly, I was overwhelmed—not because of the professional honor of working at one of the nation’s oldest and most prestigious academic museums, but because I knew that many of the Indian artifacts housed there were immensely powerful sacred objects too often obtained under legally questionable pretenses. At that point, I had been working with Larry Aitken for about six years, so I was respectfully aware that these artifacts are animate beings, capable of telling stories to Native American wisdom keepers, trained in traditional ways. I also knew that no one employed by the museum possessed these kinds of credentials—although I hasten to add that the staff deeply appreciates this form of knowledge, working assiduously to bring indigenous people to work with the collections and to repatriate sacred objects through the imperfect system put in place by NAGPRA (the Native American Graves Protection and Repatriation Act, passed in 1990). I invited Larry Aitken to perform a sacred pipe ceremony in the courtyard of the museum, to honor these empowered objects and to acknowledge these animate spirits. I am fully aware that an Ojibwe opwaaganinini (sacred pipe carrier) cannot justifiably represent any tribe other than his own, but it was the most meaningful gesture I could make, given my own limited understanding of such complex spiritual matters.
In the winter of 2007, the Gibagadinamaagoom project was awarded an NEH grant. The grant paid for Larry and David McDonald, lead videographer on the project and the head of DMcD Productions, to come to the Penn Museum. We created a short film, Weweni (Be Careful), about Larry’s interaction with deweigan (drum), which is the subject of a digital exhibition in the “Ask the Elders” section of the Gibagadinamaagoom site. In the spring of 2008, Nyleta Belgarde, dean of White Earth Tribal College (WETC) and the principal investigator for the NEH grant, came to Penn to oversee digital imaging of Ojibwe artifacts from the museum and the Page 259training of WETC staff members. It was Nyleta who first noticed the birch bark case with the image of animikii, flanked by mashkode-bizhiki (buffalo). Nyleta looked carefully at the card, which recorded information from the museum database, and observed that the metadata was incorrect. The first image was described as a “fish.” Looking at the artifact, then at the card, Nyleta said she was pretty sure the inscribed figure was a thunderbird, but acknowledged that she was not an elder and so could not say definitively.
From an Ojibwe perspective, the story just told might be considered intellectually impoverished, because of its overreliance on facts—chronological dates, individual names, and institutional resources—and its underrepresentation of the active role played by the spirit world. One might more accurately say that the story begins with animikii, who realized Nyleta was a reliable messenger (oshkaabewis) and entrusted her with a message to present to the chi-ayy ya agg (wisdom keepers) working on the Gibagadinamaagoom project—Andy Favorite (sacred pipe carrier, White Earth Band of Ojibwe), Dan Jones (language keeper, Fond du Lac Tribal and Community College), and Larry Aitken (sacred pipe carrier, Leech Lake Band of Ojibwe). As Larry explained later, the thunderbird appeared at this precise historical moment because animikii sensed our need for guidance, thus anticipating an important phase of the project.
When Larry Aitken came to the museum several months later, the embodiment of animikii inscribed on wiigwaas gave him the opportunity to explain, in terms of Ojibwe epistemology, the relationship between a wisdom keeper and the empowered object.
In the old days, the [pictographic form of writing used by the Ojibwe] was only one form of meaning. Actually, the invisible forces are speaking to the wisdom keeper. The empowered object recognizes a wisdom keeper and how to talk to them. The wisdom keeper is startled, surprised by the force nudging them, trying to contact the wisdom keeper. The human imagination thinks this cannot be. This feeling is not self-doubt as much as human insecurity about this higher level of thinking that goes beyond writing or the visual.[20]
Larry’s strikingly honest account relates how the wisdom keeper himself is “startled, surprised by the force nudging [him].” This candor illuminates still more dimensions of what Drucker identifies as the culturally specific Page 260“force” associated with media. More specifically, we begin to see how pictographic writing on birch bark scrolls, when understood at a “higher level of thinking that goes beyond writing or the visual,” invokes “invisible forces,” which can then be translated by a skilled wisdom keeper into digital media (e.g., videotape).
Simply watching the videotape does not, however, begin to explain the epistemological complexity of this exchange. To understand this “content,” the viewer must be provided with interpretive context, which must also be encoded into the site. When I asked Larry how we might achieve this, he explained, “We tend to focus too much on content, rather than spiritual context. You need to realize where the content originates. You need to become part of history.”[21] So we set off in search of origins that go deeper into history than the digital media itself or even the birch bark medium on which animikii is inscribed, back to a sense of origins rooted in Ojibwe cosmology and the symbolic significance of the seven sacred directions:
- East (Waabanong): new beginnings, small birds, yellow.
- South (Zhawonong): warmth/healing, small mammals, white.
- West (Ninagaabiin’inong): gift of sadness, flash of Creator’s power, large hoofed animals, red.
- North (Kiiweinong): purification/cleansing, large birds, black.
- Mother Earth (Nimaamaa-aki): mother of the four orders of the earth and all living things.
- Ancestor’s Realm (Mishomis): the grandfathers that dwell on top of the earth.
- Above World (Ishpiming): Creator’s world, star world, sun and moon world.[22]
The preceding list is, admittedly, a vastly oversimplified sketch of a knowledge system so sophisticated it would take a lifetime of study with a qualified Ojibwe wisdom keeper to understand fully. Yet it provides a helpful, albeit incomplete, context for interpreting how Larry set about engaging the “invisible forces” that spoke through the image of animikii inscribed on wiigwaas (birch bark).
Larry began by addressing animikii in Ojibwemowin (the Ojibwe language), finally pausing to explain in English, “It is important to know that when you see a symbol on anything, it becomes alive, to teach you something.” Page 261Here I must admit to not possessing adequate training to understand whether the invocation of Ojibwe cosmology played a role in bringing animikii to life or whether proceeding through the seven sacred directions was a form of ceremonial oratory, which allowed animikii to recognize Larry as chi-ayy ya agg (an Ojibwe wisdom keeper). In any case, here is an excerpt of the transcription:
East is first and the color of yellow. . . . [Creator] said, when you want to know new things . . . look to the East. Then you look to the South. . . . The color of the South is white. . . . What do you get from the South? Healing and warmth. Not warmth in weather, but warmth in friendship. And you look to the West, the color is red. It is for the sun going down. . . . What gift do we get from the West? Sadness and sorrow. . . . But it’s also a little display of Creator’s power, through thunder and lightning.[23]
This is quite a remarkable moment, for it invokes so many ancient and powerful stories that it becomes difficult, perhaps even counterproductive, to disentangle them in the name of explication. The movement from East to South to West invokes the direction of prayer, this being the beginning of the proper sequence in which prayers are offered to the seven sacred directions and/or the four cardinal points. Each of the seven directions is also associated with the Seven Grandfathers, whose gifts are considered to be the ancestral origins of sacred knowledge. The movement from East to West also invokes the oral epic of Waynaboozhoo, the Ojibwe cultural hero in many origin stories, and the historic migration of the Ojibwe people from the East Coast to the Great Lakes region, as foretold by prophecy.[24]
Upon reaching the West, in his oratorical progression, Larry then began telling a story about the origins of Ojibwe history. It was a time when the people had become spiritually lost, angering Creator, who threatened to destroy the world. Migizi (bald eagle) bravely took it upon himself to fly to Creator’s world (Ishpiming). Creator spared migizi, who was at risk of being burned into ash by the sun. Impressed with his courage in having come so far, Creator transformed migizi into animikii, so that he could fly past the sun. Migizi pleaded with Creator to spare the people. Finally relenting, Creator explained, “When [the people] see giant, invisible thunderbirds, they will surely see my eyes. Now, fly back and tell the people on earth Page 262. . . I will send them teachers . . . to teach them the good way, . . . to teach honesty, morality, legality.”[25]
Although the symbolism here is more difficult to discern, this story completes the cosmological cycle. More specifically, the story begins with Larry addressing animikii, discussing the gifts associated with the East (Waabanong), South (Zhawonong), and West (Ninagaabiin’inong)—discreetly moving in the direction of prayer as determined by traditional codes of conduct. Migizi, a large bird associated with the North (Kiiweinong), then flies from Mother Earth (Nimaamaa-aki), through Ancestor’s Realm (Mishomis), to Creator’s world (Ishpiming). A literary reading of the narrative sequence suggests greater depths. The first part of the story emanates from Larry. As he describes the “display of Creator’s power, through thunder and lightning,” in the West, the imagery invokes animikii, embodied here by a birch bark pictograph with lightning coming out of his eyes. At this point, animikii takes over the storytelling, relating how migizi transforms into animikii, enters Creator’s world, and returns with the promise that teachers, like Larry, will come. The two stories—one told from the memory of chi-ayy ya agg (wisdom keepers) and the other from the spirit world—become one. Encumbered with the power to understand, we are now prepared to take up the question of how such eminently powerful stories can be translated into digital codes.
Cultural Codes and Digital Codes: Reprogramming American Literature
Our human shortcoming is to have animate objects not known to the academy as storytellers and wisdom keepers. If you work with us, bizindam [listen], you accept the body of Ojibwe knowledge and infuse it into your own work, affected by original modality. If you listen to stories, you will be instilled with responsibility.[26]
Having heard the story of the cosmology told by both an Ojibwe wisdom keeper and an “animate object,” the challenge becomes how to infuse digital technology with “the body of Ojibwe knowledge” and the “original modality.” I turn in this section to more practical matters concerning the integration of Ojibwe epistemology into the design of the interface of the Gibagadinamaagoom archive (what you see on the screen), the archive’s Page 263metadata (how content is described digitally), its database (how the digital material is stored), and its navigation system (how the user moves throughout the site). Although the Gibagadinamaagoom site is obviously unique (because it adheres so closely to the traditional codes of the culture being archived), our hope is that it may serve as a model for other culturally specific archives and, in so doing, play a meaningful role in diversifying the digital humanities.
Interface and Navigation
The Gibagadinamaagoom digital archive has been carefully designed so that visitors find themselves immersed in an Ojibwe worldview the moment they enter the site. At the top of the home page, the viewer encounters the archive’s powerful and daunting name: Gibagadinamaagoom. An audio link has been provided beside the title, so that the viewer can hear the word pronounced by a fluent speaker and can learn the English translation: “to bring to life, to sanction, to give authority.” An elder thus brings the Ojibwe language to life, while the site design implicitly sanctions the authority of the wisdom keepers to speak in their own language and to guide the viewer throughout the site. In doing so, we are challenging the myth of a “universal” digital language of zeroes and ones and the assumption that all archives should conform to “standards” created by those outside the culture being digitized. This problematic, though undertheorized, notion inhibits a fuller discussion of whether, for example, Dublin Core’s emphasis on “author,” “title,” and “publication date,” which clearly derives from print culture, implicitly imposes non-Indian descriptors shaded with an ethnocentrism that does not fully acknowledge Ojibwe epistemology. This is not to say that Dublin Core standards cannot be modified, which is what we are currently compelled to do in order to be eligible for most grants. Rather, our hope is to instigate a robust interrogation of whether an Ojibwe archive would be better served by a more culturally sensitive metadata initiative. This recalibration may be over the horizon at present, but the Gibagadinamaagoom project nevertheless continues to explore systematic approaches to the writing of metadata based on such standards as Ojibwe cosmology, the vicissitudes of “authorship” as understood within the communal context of the oral tradition, and a more culturally accurate understanding of time as freed from the constraints of chronology.
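To make the stakes of this discussion more concrete, the sketch below shows one way a conventional Dublin Core record might be supplemented with culturally specific qualifiers. It is a hypothetical illustration of my own, not the Gibagadinamaagoom project’s actual encoding: the gib namespace URI and the element names (direction, wisdomKeeper, teacherLineage) are invented for demonstration and sit alongside the standard Dublin Core elements that grant guidelines typically require.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
GIB = "http://example.org/gibagadinamaagoom/terms/"  # hypothetical namespace

ET.register_namespace("dc", DC)
ET.register_namespace("gib", GIB)

record = ET.Element("record")

# Conventional Dublin Core elements, as most grant guidelines expect.
ET.SubElement(record, f"{{{DC}}}title").text = "The Story of Thunderbird"
ET.SubElement(record, f"{{{DC}}}type").text = "MovingImage"
ET.SubElement(record, f"{{{DC}}}language").text = "oj"  # ISO 639 code for Ojibwe

# Hypothetical culturally specific qualifiers layered alongside them.
ET.SubElement(record, f"{{{GIB}}}direction").text = "Waabanong (East)"
ET.SubElement(record, f"{{{GIB}}}wisdomKeeper").text = "Larry P. Aitken"
ET.SubElement(record, f"{{{GIB}}}teacherLineage").text = "Jimmy Jackson"

print(ET.tostring(record, encoding="unicode"))
```

Even this small sketch makes the tension visible: the conventional elements describe the file, while the added qualifiers begin, however imperfectly, to describe the story’s lineage.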
The home page also includes a flash animation slide show, featuring Page 264a series of digital photographs carefully composed into a visual narrative. The sequence includes pictures of dawn breaking over a lake in northern Minnesota, a bald eagle soaring against the blue sky of the Ojibwe’s ancestral homeland, animikii (thunderbird) inscribed on birch bark, and Larry Aitken with his arms outspread like an eagle as he tells the story of bald eagle’s transformation into animikii, while collecting medicine near his home on the Leech Lake reservation. In one sense, the visual narrative reflects the cosmological story recounted in the previous section of this essay—beginning in the East (Waabanong), recalling how eagle is transformed into animikii, and depicting one of the teachers sent by Creator to restore spiritual balance. On another interpretive level, the flash animation sequence implicitly establishes this new digital archive as part of a cultural continuum that carries on in the spirit of older, indigenous archives. These include the knowledge possessed by the Seven Grandfathers/seven directions; the birch bark scrolls used by the Ojibwe to preserve their own tribal histories; the oral tradition in connection with the practice of Native medicine; and the oldest archive of all, the knowledge kept by Nimaamaa-aki (Mother Earth). The viewer undoubtedly will not be able to understand all of these meanings simply by watching a sequence of slides. This visual narrative is not necessarily meant for the viewer, however, but is perhaps better understood as a way of invoking and paying respect to the “invisible forces” that are part of the archive’s living spirit. In this sense, the “force” associated with the older media—birch bark scrolls, oral tradition, migizi (eagle) as oshkaabewis (messenger)—is translated into new media.[27]
At the bottom of the home page are two video clips, designed to act as spiritual and practical guides for the forthcoming journey into Ojibwe cosmology. The first is of Jimmy Jackson, a distinguished medicine man for whom Larry Aitken served as an oshkaabewis (interpreter or messenger) for 17 years.[28] The prayer, asking for protection and guidance from Creator, is spoken in the Ojibwe language, without translation or transcription. The wisdom keepers on the Board of Permission Givers for the site felt that this was appropriate because it tacitly informs the viewer that some parts of the Ojibwe cosmology cannot be rendered in English and will not be shared with outsiders. For non-Indian viewers, part of being “encumbered with the power to understand” means learning to accept that the Ojibwe wisdom keepers maintain sovereign control of their own history and that, Page 265hence, the sacred dimensions of Ojibwe cosmology will not necessarily be translated, although they will be observed. Yet this does not preclude outsiders from learning about Ojibwe culture. Jimmy Jackson’s prayer is meant to prepare the viewer for the journey that lies ahead, in accordance with traditional codes of conduct maintaining that such a spiritual journey should always begin with prayer.
The second video instructs the viewer about the importance of offering asemaa (tobacco) before asking an elder for assistance, engaging the ancestors, or embarking on a spiritual journey. Larry Aitken appears in the traditional role of oshkaabewis (messenger or translator). Here again, multiple interpretive layers are at play. On the one hand, Larry acknowledges his indebtedness to Jimmy Jackson, who taught him so much about medicine and traditional practices. In doing so, the site strives to replicate traditional protocol, which teaches that one should always begin by thanking the elders or ancestors who originally conveyed the story to the storyteller. The fact that Jimmy Jackson passed away many years ago reminds us that we remain connected to the spirit world and to the ancestors, whose knowledge lives on through the wisdom keepers. While this epistemological connection may seem quite foreign to some viewers, it is interesting to note how effectively the digital media conveys these meanings. The dynamic vitality of Jimmy Jackson’s video reinforces the idea that his spirit is alive and plays a fundamentally important role in the teaching of future generations.
The videos of Anishinaabe chi-ayy ya agg (Ojibwe wisdom keepers) implicitly convey another unique aspect of the site’s navigation system. Whereas most other digital archives of American literature actively encourage the viewer to search the content guided by their own scholarly interests, Gibagadinamaagoom works on the assumption that the viewer needs guidance to navigate their way through the seven sacred directions of Ojibwe cosmology. This is in keeping with the way traditional Ojibwe archives operated, in the sense that an initiate would be carefully taught about traditional codes of conduct, prayers, song cycles, and the interpretive techniques in order to understand the pictographs on birch bark scrolls. As Larry explains, the searcher must come to terms with the fact that they “cannot own this knowledge, but [if they follow traditional codes of conduct, they can] stir a wisdom keeper into presenting that body of knowledge.”[29] The wisdom keeper must, in turn, accept their identity as a
Page 266visionary and as a carrier and interpreter of knowledge. This higher state of consciousness is not given to everyone equally. Not everyone can read the hieroglyphs [inscribed on the birch bark scrolls]. A visionary must search for ways to explain the invisible forces’ touch, without seeming “special” or aloof.[30]
Gibagadinamaagoom’s relationship to the historical continuum of traditional Ojibwe archives thus gives new meaning to the technical term search. Archives derived from print culture conceive of searching in relation to the editorial history of the book’s index, now expanded into today’s powerful keyword search engines.[31] Within the spiritual context of Ojibwe cosmology, however, the term search takes on the connotation of a quest for knowledge guided by wisdom keepers, whose insights derive from traditional teachings and their understanding of “invisible forces.”
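To see how this redefinition of search might register in the site’s structure, one can imagine navigation encoded as an ordered, guided path rather than a flat keyword index. The following sketch is purely illustrative and is not the site’s actual code; the stage names echo the seven directions listed earlier, but the data structure and function are my own assumptions.

```python
from typing import Optional, Tuple

# An ordered, guided path: the viewer is led from one stage to the next,
# each opened by an elder's video, rather than jumping in through keyword search.
GUIDED_PATH = [
    ("Waabanong (East)", "new beginnings"),
    ("Zhawonong (South)", "warmth and healing"),
    ("Ninagaabiin'inong (West)", "gift of sadness, flash of Creator's power"),
    ("Kiiweinong (North)", "purification and cleansing"),
    ("Nimaamaa-aki (Mother Earth)", "the four orders of the earth"),
    ("Mishomis (Ancestor's Realm)", "the grandfathers"),
    ("Ishpiming (Above World)", "Creator's world, star world"),
]

def next_stage(current: Optional[str]) -> Optional[Tuple[str, str]]:
    """Return the stage that follows `current`, or the first stage if None."""
    if current is None:
        return GUIDED_PATH[0]
    names = [name for name, _ in GUIDED_PATH]
    i = names.index(current)
    return GUIDED_PATH[i + 1] if i + 1 < len(GUIDED_PATH) else None

print(next_stage(None))                # the journey begins in the East
print(next_stage("Waabanong (East)"))  # and moves next to the South
```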
Metadata and Database
The most difficult technical challenge in constructing the Gibagadinamaagoom archive in accordance with traditional codes of conduct has been the question of how to create the metadata and the database (i.e., describing the content so that it can be searched and structuring how that data is stored). This involved intense negotiations among Ojibwe wisdom keepers, the videographer, administrators of the Ojibwe Quiz Bowl (which uses the material developed by the Gibagadinamaagoom project to educate Ojibwe high school students about their own language and culture), Web designers, and the head of the Schoenberg Center for Electronic Text and Image at Penn. After more than a year of discussion, we decided that the best way to infuse the archive with the spirit of the “original modality” was to build the site around the seven sacred directions of Ojibwe cosmology. (Please see the digital version of this essay on digitalculturebooks to find a link to the navigation system.)
Before we turn to a fuller discussion of the complexities of the Ojibwe cosmology, it is important to lay out the problem at hand more concretely. This is perhaps most clearly illustrated by applying a standard library metadata system to one of the stories told in the previous section:
- Title: “The Story of Thunderbird” [videorecording] / Weweni Consultants; A DMcD Production; Directed by David McDonald; Page 267screenplay by Larry P. Aitken; produced by Timothy B. Powell
- Publisher: [United States]: Weweni Consultants, 2008
- Description: Visual Material Videorecording
- Library of Congress Subject Headings: Chippewa Indians[32]
This is accurate information by library standards but culturally misleading by Ojibwe standards. The Library of Congress heading “Chippewa” is many years out of date—an anglicized corruption of Ojibwe no longer in use. To say that the medium of the story is a “videorecording” is certainly accurate but problematically truncates a deeper, Ojibwe sense of media history. The digital version of the story derives from older forms of media, such as archives of birch bark inscriptions and the oral tradition, which date back hundreds of years and include multiple authors whose names are not readily available to librarians. It is true that the video was copyrighted by Weweni Consultants (a limited liability company founded by Larry Aitken to protect the intellectual property created by the Gibagadinamaagoom project) in 2008, but this chronological date distorts the depth of history involved with the intellectual ownership of the story and elides questions about the cultural sovereignty of indigenous storytelling. Finally, to credit Larry Aitken, Dave McDonald, and Tim Powell is necessary, if anyone hopes to find the video in a library (or if one of the three is seeking tenure or promotion in the academy), but to respect Ojibwe traditional teachings, credit must also go to the medicine man Jimmy Jackson, who trained Larry to be a wisdom keeper and who helped establish a precedent for using video technology to convey traditional teachings, when done in close consultation with elders properly vested with authority by the tribe. There is also the even more challenging question of how to credit animikii, the spirit of the thunderbird, as the originator of the story. Through this example, the need for a metadata system that more accurately describes the media in terms of its tribal genealogy and cosmological origins comes more clearly into focus, even if the solutions are not yet readily apparent.
The hidden cultural dimensions of space and time also still need to be considered much more carefully. In the library system, for example, the descriptor “United States” as the site of publication reveals the assertion of nationalism that implicitly challenges the existence of the Ojibwe Nation and its rightful claims of sovereignty. Here again, solutions remain distant, although the need to involve intellectual property lawyers and tribal leaders Page 268becomes evident. This assertion of national identity can, of course, be traced back to the epistemology of colonialism, although that is outside the parameters of this particular essay.[33] The date “2008” also has its roots in the European colonization of the continents, though more subtly disguised here by the Newtonian myth that time is constituted by mathematical precision and, therefore, remains culturally neutral.[34] The way the library system identifies place and date problematically distorts a more culturally accurate understanding of how space and time function within Ojibwe epistemology as embodied by the cosmology of the seven directions.
What we are trying to describe in the construction of the Gibagadinamaagoom database is the way that the story lines trace the nonnationalistic space of the four cardinal directions and establish a powerful connection between Nimaamaa-aki (Mother Earth), Mishomis (Ancestor’s Realm), and Ishpiming (Creator’s world). More specifically, what is left out of the Western-based metadata system is the all-important relationship between the stories and the spirituality inherent in the Ojibwe knowledge system. We have attempted to rectify this oversight by mapping this spiritual geography onto the database and by designing the navigation system so that the viewer quite literally moves through the seven sacred directions. We feel strongly that the metadata system of the Gibagadinamaagoom project needs to be able to describe accurately the role played by Jimmy Jackson, as an ancestral presence who provides guidance about how technology can be utilized to explain traditional teachings, and to locate places such as Ishpiming, Mishomis, and Nimaamaa-aki as integral sites of the story that the database encodes. In short, we are trying to describe the space and time encompassed by the stories themselves, in addition to external factors such as copyright and publication dates.
We have self-consciously worked against both the notion that chronology is culturally neutral and the illusion that a great temporal distance separates Waynaboozhoo’s time, at the beginning of Ojibwe history, from our own.[35] No chronological date can be assigned to the day that migizi decided that he needed to fly to Creator’s world to restore spiritual balance to the Anishinaabeg (the people), yet time is still an integrally important part of the story. To create metadata that more accurately describes the way time works in the story about animikii that Larry relayed on 13 October 2008 at the Penn Museum, it is imperative to understand both the chronological date and the temporality of “origin stories” as understood within an Page 269Ojibwe epistemology of beginnings. These two moments—the day migizi set off for Creator’s world and the day we filmed Larry telling the story—are not separated by a vast temporal distance but inextricably intertwined by the act of storytelling in the hands of a skilled and knowledgeable wisdom keeper.
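One way to picture what it means to describe the space and time encompassed by the stories themselves, alongside external factors such as copyright and publication dates, is a record that carries both registers side by side. The sketch below is my own illustration, not the project’s actual schema; every field name is invented for demonstration, and the values simply restate details already given in this essay.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class StoryRecord:
    """Illustrative record pairing chronological and cosmological description."""
    title: str
    recorded_on: date               # external, chronological time
    recorded_at: str                # external, institutional place
    storyteller: str
    teacher_lineage: List[str]      # the elders who conveyed the story
    spirit_originator: str          # the story's source in Ojibwe epistemology
    cosmological_places: List[str]  # where the story itself moves
    cosmological_time: str          # origin-story temporality, not a calendar date

thunderbird = StoryRecord(
    title="The Story of Thunderbird",
    recorded_on=date(2008, 10, 13),
    recorded_at="Penn Museum, Philadelphia",
    storyteller="Larry P. Aitken",
    teacher_lineage=["Jimmy Jackson"],
    spirit_originator="animikii (thunderbird)",
    cosmological_places=["Nimaamaa-aki", "Mishomis", "Ishpiming"],
    cosmological_time="epistemology of beginnings (no chronological date)",
)

print(thunderbird)
```

Such a record does not resolve the deeper questions of sovereignty and credit raised above, but it makes visible what a conventional catalog entry leaves out.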
The metadata schema and the database structure we have created thus inscribe a sacred landscape that allows animikii and other oshkaabewisag (messengers) to move freely between the realm of the ancestors and this world. In doing so, we offer a spatiotemporal paradigm that, if acknowledged by Americanists, would perhaps allow us to free ourselves of the deeply problematic concept of periodization and our seemingly endless obsession with nationalism, postnationalism, and transnationalism. It is a sacred landscape that is distinctly Ojibwe yet still part of American literary history. Sadly, many scholars of American literature have become caught up in the belief that inventing neologisms with the prefix post- (e.g., postmodernism, postcolonialism, post-American) can propel the country beyond its monocultural past. Rather than talking to ourselves in a theoretical language that we barely understand and that the rest of the world finds impenetrable, my hope is that we can learn to listen more carefully to the original occupants of the land, to value the spiritual dimensions of storytelling, and to think much more carefully about the role that these eminently powerful stories can play in healing historical wounds.
Completing the Circle
One of the great joys of my personal life and most rewarding engagements of my professional life has been the opportunity to work with Jimmy Jackson, Larry Aitken, Andy Favorite, Nyleta Belgarde, Dan Jones, Florence Foy, and David and Barbara McDonald. Poignantly, to pursue the Gibagadinamaagoom project, I made the decision to give up tenure in the English Department at the University of Georgia to accept a job as the director of Digital Partnerships with Indian Communities at the University of Pennsylvania Museum of Archaeology and Anthropology. As Jerome McGann has so eloquently described the situation of digital humanists at a time when projects such as the Gibagadinamaagoom digital archive do not count for tenure or promotion in English departments around the country, “The Jordan will not be crossed until scholars and educators are prepared Page 270not simply to access archived materials online, but to publish and peer-review online—to carry out the major part of our scholarly and educational intercourse in digital forms.”[36] So I conclude this essay while metaphorically standing in the middle of the river Jordan, looking back with heartfelt sadness at the field of American literature’s unwillingness to recognize the origins of American Indian literature or the promise of digital technology and looking forward to continuing to do work that directly benefits Ojibwe students on the reservations of northern Minnesota. My greatest hopes are no longer for academic recognition for this work but for the grandson of Jimmy Jackson, Anthony James Belgarde, who is now maintaining the Quiz Bowl Web site, where the material for the Gibagadinamaagoom project is presented to help Ojibwe high school and tribal college students learn their own remarkably powerful language and to preserve their vibrant and living culture, so that seven generations in the future, we may finally understand digital technology not as a postmodern phenomenon but as part of the great continuum of Anishinaabe history.
Notes
The first person pronoun in this essay refers to Tim Powell; any and all mistakes are his responsibility. Larry Aitken inspired, participated in, and helped write the stories told here.
The Ojibwe spelling has been provided by Professor Aitken and does not, in all cases, conform to the double vowel orthography that has become the standard in the academy.
1. Randy Bass, “New Canons and New Media: American Literature in the Electronic Age,” Heath Anthology of American Literature’s Online Resources, http://www9.georgetown.edu/faculty/bassr/heath/editorintro.html (accessed 29 November 2008).
2. I have addressed these issues at some length in the following articles: Timothy B. Powell, with storytelling by Freeman Owle and digital technology by William Weems, “Native/American Digital Storytelling: Situating the Cherokee Oral Tradition within American Literary History,” Literature Compass (Blackwell) 4, no. 1 (2007); Timothy B. Powell, “Recovering Pre-Colonial American Literary History: The Seneca ‘Origin of Stories’ and the Maya Popol Vuh,” in The Literatures of Colonial America: An Anthology, ed. Susan Castillo and Ivy Schweitzer (New York: Blackwell, 2005).
3. Anyone who undertakes such work owes a debt to Vine Deloria Jr. My own research has been deeply influenced by Vine Deloria Jr., God Is Red: A Native View of Religion (Golden, CO: Fulcrum, 2003); Edward Benton-Banai, The Mishomis Book: The Page 271Voice of the Ojibway (Hayward, WI: Indian Country Communications, 1988); Thomas Peacock and Marlene Wisuri, Ojibwe Waasa Inaabidaa: We Look in All Directions (Afton, MN: Afton Historical Society Press, 2002); Basil Johnston, The Manitous: The Spiritual World of the Ojibway (Minneapolis: Minnesota Historical Society, 2001); Basil Johnston, Ojibway Heritage (Lincoln, NE: Bison Books, 1990); Winona LaDuke, Recovering the Sacred: The Power of Naming and Claiming (Cambridge, MA: South End Press, 2005); Anton Treuer, ed., Living Our Language: Ojibwe Tales and Oral Histories (Minneapolis: Minnesota Historical Society, 2001); Gerald Vizenor, The Everlasting Sky: Voices of the Anishinabe People (Minneapolis: Minnesota Historical Society, 2000); Jace Weaver, That the People Might Live: Native American Literatures and Native American Community (New York: Oxford University Press, 1997); Keith H. Basso, Wisdom Sits in Places: Landscape and Language among the Western Apache (Albuquerque: University of New Mexico Press, 1996); Julie Cruikshank, The Social Life of Stories: Narrative and Knowledge in the Yukon Territory (Lincoln: University of Nebraska Press, 1998).
4. Personal conversation with Larry P. Aitken, 16 October 2008.
5. Amanda Gailey, “Digital American Literature: Some Problems and Prospects” (synopsis for paper delivered at the London Seminar in Digital Text and Scholarship, School of Advanced Study, Institute of English Studies, University of London), http://lists.digitalhumanities.org/pipermail/humanist/2009_May/000439.html.
6. Jerome McGann, “Culture and Technology: The Way We Live Now, What Is to Be Done?” New Literary History 36, no. 1 (2005): 77.
7. I have focused, in this essay, mostly on the potential benefits of digital technology in relation to recognizing the intrinsic value of Ojibwe cultural expression. There are, obviously, many problems that arise in attempting to translate traditional archives into digital archives. These need to be discussed at much greater length, but that is another essay entirely. There are three particular issues on which we have been working that I would like to raise here, albeit briefly: (1) the issue of intellectual property rights in the context of Ojibwe sovereignty, (2) questions about the accuracy of the translation of an empowered object into a digital image, (3) the issue of how representative the views of the wisdom keepers working on the Gibagadinamaagoom project are in relation to the larger Ojibwe community. Briefly, our ever-evolving response is, first, that we are working diligently to address the question of what we call “the sovereignty of storytelling.” A Board of Permission Givers was formed as part of the NEH grant. We have formed a Limited Liability Company, Weweni (Take Care), which will hold the intellectual property rights to all of the digital exhibits and videos produced as part of the project. Second, without getting bogged down in decades of academic arguments about the simulacrum of postmodernism, there are obviously many interpretive dimensions that get lost in translation. To take just one example, to speak with a wisdom keeper is to engage in symbolic exchange, so to speak, wherein the stories are shaped by the interaction between the wisdom seeker and the wisdom keeper. This has been lost in the sense that the stories recorded on videotape cannot possibly take into account how the Page 272story would change if told to the viewer. The wisdom keepers are aware of this problem and have made the decision to record their stories in digital form for the greater good of cultural preservation and revitalization. Third, it should be clearly stated that the wisdom keepers make no claim to represent anyone other than themselves. Stories told by different bands of the Ojibwe Nation and even within the different bands vary widely—for example, concerning the colors associated with the four cardinal points. We are grateful to the wisdom keepers for sharing their views, but I want to state unequivocally that these views do not represent the Ojibwe in all of their regional, linguistic, and cultural complexity.
8. Concerning the goals of the Gibagadinamaagoom project, our primary intent, at the outset, was to create educational material that could be used in the Quiz Bowl extramural competition created by Itasca Community College. See http://www.nativequizbowl.info/.
9. There has not been a great deal written on American Indian culture and digital technology. I am especially grateful to my colleagues at the American Indian Library Association. See http://www.ailanet.org/default.asp (accessed 4 December 2008). Some of the works that have been helpful to me include Loriene Roy and Peter Larsen, “Oksale: An Indigenous Approach to Creating a Virtual Library of Education Resources,” D-Lib Magazine 8, no. 3 (March 2002); “Tribal Archives, Libraries, and Museums: Preserving Our Language, Memory, and Lifeways,” a grant project sponsored by the Arizona State Museum and the University of Arizona, http://www.statemuseum.arizona.edu/aip/leadershipgrant/ (accessed 4 December 2008); Neil Blair Christensen, Inuit in Cyberspace: Embedding Offline Identities Online (Copenhagen: Museum Tusculanum Press, 2003); Mark Christal, Loriene Roy, and Antony Cherian, “Stories Told: Tribal Communities and the Development of a Virtual Museum,” in Collaborative Access to Virtual Museum Collection Information: Seeing Through the Walls, ed. Bernadette G. Callery (New York: Haworth, 2004), copublished as Journal of Internet Cataloging 7, no. 1 (2004): 65–88.
10. My thinking here has been inspired by Jerome McGann, Radiant Textuality: Literature after the World Wide Web (New York: Palgrave, 2001), xiii.
11. Tara McPherson, “Editor’s Introduction,” Vectors: Journal of Culture and Technology in a Dynamic Vernacular 3, no. 1, http://www.vectorsjournal.org/ (accessed 2 December 2008).
12. Lisa Nakamura, “Cultural Difference, Theory, and Cyberculture Studies: A Case of Mutual Repulsion,” in Critical Cyberculture Studies, ed. David Silver and Adrienne Massanari (New York: New York University Press, 2006), 32.
13. David Silver, “Introduction: Where Is Internet Studies?” in Silver and Massanari, Critical Cyberculture Studies, 8.
14. Matthew G. Kirschenbaum, Mechanisms: New Media and the Forensic Imagination (Cambridge, MA: MIT Press, 2008), 54, 56.
15. Kirschenbaum, Mechanisms, 6, 15.
16. Johanna Drucker, The Visible Word: Experimental Typography and Modern Art, 1909–1923 (Chicago: University of Chicago Press, 1994), quoted in Kirschenbaum, Mechanisms, 9–10.
17. For more on Midewiwin societies, see Benton-Banai, Mishomis Book; Frances Densmore, Chippewa Customs (Minneapolis: Minnesota Historical Society, 1979); Ruth Landes, Ojibwa Religion and the Midewiwin (Madison: University of Wisconsin Press, 1968); Michael Angel, Preserving the Sacred: Historical Perspectives on the Ojibwa Midewiwin (Winnipeg: University of Manitoba Press, 2002).
18. For a sense of what these scrolls look like, see Selwyn Dewdney, The Sacred Scrolls of the Southern Ojibway (Toronto: University of Toronto Press, 1975).
19. Personal conversation with Aitken, 16 October 2008.
20. Personal conversation with Aitken, 16 October 2008.
21. Personal conversation with Aitken, 16 October 2008.
22. Personal conversation with Aitken, 16 October 2008. While most Ojibwe bands agree that the colors associated with the four cardinal points are yellow, white, black, and red, there is no consensus concerning which color corresponds to which direction.
23. Personal conversation with Aitken, 16 October 2008.
24. For more on Ojibwe history from an Ojibwe perspective, see Benton-Banai, Mishomis Book; Johnston, The Manitous; Johnston, Ojibway Heritage; Peacock and Wisuri, Ojibwe Waasa Inaabidaa; Thomas D. Peacock and Linda Miller Cleary, Collected Wisdom: American Indian Education (Boston, MA: Allyn and Bacon, 1998); Thomas Peacock and Marlene Wisuri, The Four Hills of Life: Ojibwe Wisdom (Afton, MN: Afton Historical Society, 2006).
25. Personal conversation with Aitken, 16 October 2008. For a fuller account of this story, see Benton-Banai, Mishomis Book.
26. Personal conversation with Aitken, 16 October 2008.
27. Again, many significant aspects of Ojibwe storytelling are lost in this translation. See note 7 for a fuller discussion. More scholarly work needs to be done on this subject, and I encourage others to continue to pursue these questions.
28. The original videotape of Jimmy Jackson was copyrighted in 1987 by the University of Minnesota, Duluth, University Media Resources. We are grateful to the producer of Interview with Jim Jackson, Iver Bogen, for permission to use the video on the Gibagadinamaagoom site.
29. Personal conversation with Aitken, 16 October 2008.
30. Personal conversation with Aitken, 16 October 2008.
31. For more on the history of search engines, see I. H. Witten, Web Dragons: Inside the Myths of Search Engine Technology (Boston: Morgan Kaufmann, 2007); Richard Rogers, Information Politics on the Web (Cambridge, MA: MIT Press, 2004).
32. The basis of the metadata schema is Franklin: Penn Libraries Catalog, http://www.library.upenn.edu/ (accessed 4 December 2008).
33. For more on this subject, see LaDuke, Recovering the Sacred; Linda Tuhiwai Smith, Decolonizing Methodologies: Research and Indigenous Peoples (London: Zed Books, 1999).
34. For a fuller discussion of this issue, see Wai Chee Dimock, Through Other Continents: American Literature across Deep Time (Princeton, NJ: Princeton University Press, 2006), chap. 6.
35. For a fuller discussion of the problems associated with chronological time, see Johannes Fabian, Time and the Other: How Anthropology Makes Its Object (New York: Columbia University Press, 2002).
36. Jerome McGann, “The Marketplace of Ideas” (talk given at the University of Chicago, 23 April 2004), http://www.nines.org/about/bibliog/mcgann-chicago.pdf (accessed 4 December 2008).

