
    Hacking Institutions

    The Absent Presence: A Conversation

    Brian Croxall didn’t have enough money to attend the annual convention of the Modern Language Association (MLA) in 2009 in Philadelphia. He was supposed to give a talk at the meeting, but instead another attendee, Sheila Cavanagh, read his candid paper about his situation to a large audience. His plight sparked widespread discussion.

    The Absent Presence: Today’s Faculty

    This year was to be my fourth year in a row attending MLA. I spoke in 2006, interviewed for jobs in 2007, spoke and interviewed in 2008, and had hoped to speak and interview for jobs this year as well. When the job interviews did not materialize, I made the difficult decision to not attend the convention given the financial realities of being an adjunct faculty member. I regretted not having the chance to speak—especially on a panel titled “Today’s Students, Today’s Teachers: Economics”—but the panel chair volunteered to deliver my paper in absentia.

    I’m sorry that I can’t be delivering these comments in person, and I thank Professor Cavanagh for her willingness to read them on my behalf. Hearing talks delivered by the person who did not write them is only slightly better than having to be the person who is reading a talk she didn’t write, so I’ll be brief. At the same time, however, I can think of no more appropriate way for me to give a talk in a panel titled “Today’s Students, Today’s Teachers: Economics” than in this manner.

    After all, I’m not a tenure-track faculty member, and the truth of the matter is that I simply cannot afford to come to this year’s MLA. I know that we as a profession are increasingly aware of the less-than-ideal conditions under which contingent faculty members—and graduate students—labor while providing more than half of the instruction that undergraduates receive across the nation: a fact that the Chronicle of Higher Education and other publications have reported on frequently.[1] If we are talking about “today’s teachers,” then more of them look like me—at least in a professional sense—than look like the people who will be on the dais at the presidential address later on this evening. That means that most of the students in America are also taught by people who are like me. In a very real sense, I—and the people situated in a similar professional and economic quandary—are today’s teachers of today’s students. And for the most part, we’re not at the MLA this year.

    Again, I’m not at the MLA this year because it’s not economically feasible. I had hoped to be here for job interviews—as well as to speak as a member of this panel discussion. This was my third year on the job market, and I applied to every job in North America that I was even remotely qualified for: all forty-one of them. Unfortunately, I did not receive any interviews, despite having added two articles accepted by peer-reviewed journals, five new classes, and several new awards and honors to my curriculum vitae. According to my records, applying to those forty-one jobs cost me $257.54. I was prepared to pay the additional expenses of attending the MLA—$125 for registration, $279.20 for a plane ticket, approximately $180 for lodging with a roommate: a total of $584.20—out of pocket so that I could have a chance of getting one of those forty-one jobs. I was even luckier than most faculty—remember, most of today’s faculty are contingent—in that my institution was willing to provide me with $200 support to attend conferences throughout the academic year. But once it became apparent that I wasn’t going to be having any interviews, I could no longer justify the outlay of $400 out of a salary that puts me only $1,210 above the 2009 Federal poverty guidelines. (And yes, that means I do qualify for food stamps while working a full-time job as a professor!)

    I can’t imagine that I’m alone in this dilemma of not attending this year’s convention due to finances and the anemic job market. After all, as the New York Times reported on December 17, 2009, the number of listings in the MLA’s job information list was down 37 percent from 2008’s numbers—the sharpest decline since MLA started tracking job ads in 1974. It’s not like 2008 was a banner year, however. The listings a year ago were down 26 percent from what they had been in 2007.[2]

    Landing a job in the professoriate has been difficult for well over a decade, but the recent economic crisis has forced—or allowed, if we’re feeling cynical—administrators to trim budgets so that fewer and fewer tenure-track faculty are hired. What this means is that more and more contingent faculty are employed to teach the increasing number of students who are matriculating at the nation’s universities. So . . . perhaps it’s not that employment is going down for humanists with the PhD. Rather, it is sustainable employment that is evaporating. (I’m looking at you, California.) After all, the demand for contingent faculty labor will probably rise sharply as the number of students enrolling in colleges rises due to the nation’s recent economic crisis. Since we can’t expect other schools to be as generous as mine with travel funds to contingent faculty, there should be fewer and fewer faculty members at the MLA in the future because fewer and fewer of the nation’s faculty will be able to afford to get here.

    “But”—the administrators say—“the MLA is only a conference, one where people read papers at each other. What difference does it make whether you attend or not?” Such questions are of course misleading since it’s not as if my department is willing to give me more money to travel to other conferences instead of the MLA. So the problem of not being able to afford to attend the MLA is really the problem of attending any conference, other than a local one. And attending conferences is critical for one’s scholarship since it allows one to hear the latest research in one’s field. I especially appreciate how large the MLA is since I can find opportunities to attend panels that represent the full 150 years of American literature that my research covers. Attending this conference—or others—keeps me abreast of the latest scholarship, and helps me produce scholarship that pushes the state of my fields forward. As one of today’s teachers, attending conferences helps me be more prepared to teach today’s students these new developments, preparing them to be more effective readers of literature, whether they are English or biostatistics majors. Moreover, it is at conferences that I am most likely to have the opportunity to meet with old and new colleagues whose work intersects most closely with my own. Schools only need so many Shakespeare scholars; not so the MLA! Yet attending conferences isn’t just about seeing old friends; the relationships formed with colleagues at conferences again help us produce scholarship. For example, the panel that I spoke on last year has resulted in a book-length collaboration among the four panelists, none of whom had met previously. When the majority of faculty—who are, again, contingent faculty—cannot attend the MLA, or any other conference, it results in a faculty that cannot advance; that does not, in other words, appear to be doing the things that would warrant their conversion to the tenure track. 
Our placement as contingent faculty quickly becomes a self-fulfilling event.

    But having a faculty majority composed of contingent faculty means a lot more than just conferences being less and less well attended. In my case, it means that my students cannot easily meet with me for office hours since contingent faculty don’t really have offices. It means that they do not get effective, personal mentoring because I have too many students. It means that I cannot give the small and frequent assignments that I believe teach them more than a “three-paper class” because I do not have time to grade ninety students’ small and frequent assignments. It means that the courses they can take from me will not be updated as frequently as I think is ideal because I will be spending all of my spare time looking for more secure employment—or working a part-time job. In other words, when we shortchange (pun intended) today’s teachers—the majority of us who are, finally and for the last time, contingent and not present at this year’s MLA—we simultaneously shortchange today’s students. And those students will be that much less likely to become literature professors in the future. Why should they? It’s not currently a sustainable profession; but even more so, they will have had that many fewer chances to have those interactions with teachers that lead to today’s students wanting to become tomorrow’s teachers.

    Be Online or Be Irrelevant: Brian Croxall, the MLA, and Social Media

    One of the much talked-about items at this year’s MLA was Brian Croxall’s paper, or nonpaper, titled “The Absent Presence: Today’s Faculty.” I say nonpaper because Brian, who is currently on the job market and an adjunct faculty member, didn’t attend the MLA; instead, he published his paper to his own website. For several reasons, Brian’s paper hit a nerve. Indeed, the Chronicle of Higher Education picked up the story—a piece that for a few days was listed as the most popular story on the Chronicle’s website.[3] His paper became, arguably, the most talked-about paper of the convention.

    In part, Brian’s story is a story of the rise of social media and its influence. If you imagined asking all of the MLA attendees, not just the social-media enabled ones, what papers/talks/panels were influential, my guess is that Brian’s might not make the list, or if it did, it wouldn’t top the list. That is because most of the chatter about the paper was taking place online, not in the space of the MLA.

    Let’s be honest: at any given session you are lucky if you get over fifty attendees; assuming the panel Brian was supposed to be on was well attended, maybe one hundred people actually heard his paper being read. But the real influence of Brian’s paper can’t be measured this way. The real influence should be measured by how many people read his paper who didn’t attend the MLA. According to Brian, views of his blog jumped 200–300 percent in the two days following his post; even being conservative, one could guess that over 2,000 people gave his paper more than a cursory glance. Brian tells me that in total, since the convention, there have probably been close to 5,000 unique views. Five thousand people: that is half the size of the convention.

    So, if you asked all academics across the United States who were following the MLA—reading the Chronicle, following academic websites and blogs—what the most influential story out of MLA was, I think Brian’s would have topped the list, easily. Most academics would perform serious acts of defilement to get a readership in the thousands, and Brian got it overnight.

    Or, not really . . . Brian built that readership over the last three years.

    As Amanda French argues on her blog, what social media affords us is the opportunity to amplify scholarly communication.[4] As she points out in her analysis (interestingly enough, Amanda was not at MLA, but she was still tweeting about the MLA during the conference), only 3 percent of the people at MLA were tweeting about it. Compare that to other conferences, even other academic ones, and this looks rather pathetic. Clearly MLAers have a long way to go in coming to terms with social media as a place for scholarly conversation.

    What made Brian’s paper so influential/successful is that Brian had already spent a great deal of time building network capital. He was one of the first people I followed on Twitter, and was one of the panelists at last year’s MLA-Twitter panel. He teaches with technology. I know several professors who borrow/steal his assignments. I personally looked at his class wiki when designing my own. Besides having a substantial traditional CV, Brian has a lot of street cred in the digital humanities/social-networking/academic world. More than a lot of folks, and deservedly so. It isn’t that he just “plays” with all this social media, he actually contributes to the community of scholars who are using it, in ways that are recognized as meaningful and important.

    In this regard, I couldn’t disagree with Bitch Ph.D. more—someone with whom I often agree—when she claims on her blog that, “Professor Croxall is, if I may, a virtual nobody.”[5]

    Not true. Unlike Bitch Ph.D., he is not anonymous or even pseudonymous; his online identity and real-world identity are the same. He is far from a virtual nobody. Indeed, I would say he is one of the more prominent voices on matters digital and academic. He is clearly a “virtual somebody,” and he has made himself a “virtual somebody” by being an active, productive, important member of the “virtual academic community.” If he is anything, he is a “real nobody” but a “virtual somebody.” In the digital world, network capital is the real coin of the realm, and Brian has a good bit of it, which, when mustered and amplified through the network capital of others (Kathleen Fitzpatrick, Dan Cohen, Amanda French, Matt Gold, and Chuck Tryon—all of us tweeted about Brian’s piece), brings him more audience members than he could ever really hope to get in one room at the MLA.

    Therefore, Brian isn’t a “virtual nobody,” and he isn’t a “potential somebody”—he is a scholar of the digital humanities—one who ought to be recognized. But here is the disconnect. Brian has a lot of coin in the realm of network capital, but this hasn’t yielded any coin in the realm of brick-and-mortar institutions. If we were really seeing the rise of the digital humanities, someone like Brian wouldn’t be without a job, and the fact that he published his paper online wouldn’t be such an oddity; it would be standard practice. Instead, Brian’s move seems, in the words of Bitch Ph.D., “all meta- and performative and shit”—when in fact it is what scholars should be doing. The fact that a prominent digital scholar like Brian doesn’t get even one interview at the MLA means more than that the economy is bad and that tenure-track jobs are not being offered; it means that universities are still valuing the wrong stuff. They are looking for “real somebodies” instead of “virtual somebodies.”

    This is the brilliance of Brian’s paper, content notwithstanding: he made his material more relevant than all the other papers that weren’t published, and he engaged the outside—even if it was a paper that was a lot of inside baseball on the workings of the academy—because he opened his analysis and thinking to a wider audience and, as Amanda French and Bitch Ph.D. remark, did it with a real-time spin that enhanced the level of content and delivery. The real influence should be measured by how many people read his paper who didn’t attend the MLA. Or maybe the real influence of his paper should be measured by how many nonacademics read his paper. Scholars need to be online or be irrelevant, because our future depends upon it; but more importantly, the future of how knowledge production and dissemination takes place in the broader culture will be determined by it.

    Reflections on Going Viral at the MLA

    Recently, I’ve had to come to grips with the fact that I’ve quite likely peaked. The paper that I was supposed to read at the 2009 Modern Language Association’s convention went viral.

    When I chose at the last minute not to attend the conference, given my lack of job interviews, insufficient travel funds, and the low salary of a visiting professor, I rewrote the paper that I had planned to present at a panel on “Today’s Students, Today’s Teachers: Economics” to talk about “The Absent Presence” of people who, like me, could not afford to attend conferences. I sent it to the panel chair to read on my behalf, posted it to my blog, and mentioned on Twitter that I had done so. The result was shocking. Within twenty-four hours, some 2,000 people had read my paper, spurred in no small part by an article in the Chronicle of Higher Education, a blog post by the anonymous academic blogger Bitch Ph.D., and countless mentions on Twitter and other blogs. By the end of the convention, my blog had received over 7,000 page views.

    The scope of going viral became more apparent when I returned to campus a week later, for the start of the semester, to discover that every colleague I ran into had read the piece. Instead of being heard by a small group of people who attended the panel at which I was to speak, my paper had been read by more people—and colleagues!—than I could ever reasonably expect to read any article or book that I might write in the future. So there it is: I’ve had my fifteen minutes.

    It’s a compelling narrative: A “virtual nobody,” as Bitch Ph.D. put it, comes out of nowhere, takes one of the biggest academic conferences by storm, and gets noticed by thousands. He rides off triumphantly into the sunset and even gets to write a follow-up for the Chronicle. But if there’s one thing that I learned in graduate school, it’s that every narrative can—and probably should, if you’re looking to get published—get deconstructed. On reflection, it seems to me worthwhile to explore one thing that was said about my paper, and one thing that was repeatedly said to me about my paper.

    First is the suggestion that my paper was, as the Chronicle put it, possibly the “most-talked-about presentation” at the conference. But let’s be honest: The number of people talking about my paper in Philadelphia could only have been very small. After all, the chair informs me that there were approximately thirty-five people who attended the panel. Far more people certainly attended Catherine Porter’s presidential address and discussed her call to reconsider the importance of translations and those who create them. My paper could not have been anything more than a blip on the conversational radar. It seems certain that practically no one at the real MLA was talking about my paper. How could they have? They hadn’t heard it.

    Instead, my paper and the response it generated happened at a virtual MLA. I’m not talking about a conference taking place in Second Life, but rather the real-time supplement to the physical conference that was conducted via social-media tools. The crowd presenting at the virtual MLA was considerably smaller than the approximately 7,400 scholars who came to Philadelphia. For example, Amanda French estimated that only 256 people used Twitter with the official #mla09 hashtag, based on data from the tweet-storage service TwapperKeeper. And while it’s nearly impossible to tell how many people blogged about the MLA, one can reasonably assume that they were fewer than those using Twitter, since participation on Twitter takes less time than blogging.

    But if the number of those participating in the virtual MLA was so much smaller, how did so many people read my paper? The difference is that it is only the number of people presenting at the virtual MLA that is small; the audience is much, much larger. The virtual MLA requires no registration fee or travel, and when you lower those bars via social media, anyone can attend. That includes not only people like me, who couldn’t afford the real MLA, but also scholars from outside the field of literary studies. My website’s views really started spiking when my paper was tweeted by two historians: Dan Cohen, Director of the Roy Rosenzweig Center for History and New Media at George Mason University, and Jo Guldi, a junior fellow at the Harvard Society of Fellows. But it’s not only people without funds to travel or academics outside the field who attend the virtual MLA; it really can be anyone. Curious onlookers who might want to know what exactly it is that literature professors do can suddenly find out; it is that group that caused my paper to go viral.

    The virtual MLA suggests a few things about humanities scholarship in the twenty-first century. First, scholarship will be freely accessible online. Online scholarship not only is the next logical step for publication, but also presents a way to address an expanding audience. The much-discussed crisis in the humanities has at its origin the question of what—if anything—the humanities are good for. It has been difficult to answer that question, in part because our scholarship is frequently inaccessible, published in small journals, or contained in subscription-only databases. Making our work freely accessible—whether in open-access journals or on our own websites—means that more people will be able to see what we are doing. While I’m not naive enough to think that access alone will make people see why the study of film or history matters, it seems certain that, as David Parry—an assistant professor of emerging media and communications at the University of Texas at Dallas—recently put it, humanities scholars must “be online or be irrelevant.”

    Second, scholarship in the age of the virtual MLA will become increasingly collaborative and participatory. We all know that collaboration in the humanities is made difficult by institutional pressures associated with tenure and promotion. Moving scholarship online lowers some other practical barriers to collaboration. Moreover, cooperation will not only be with our colleagues down the hall. We need to be ready to work with knowledgeable hobbyists—aka independent scholars—and to share credit with those partners. We may find that the focus of our work shifts a bit in response to engagement with people outside academia. And, again, we may find that what we as humanities scholars do will be better understood and valued.

    Let me extract myself from the unlikely role of futurist and focus on what was said to me in the days following my paper’s going viral. In blog comments, on Twitter, via e-mail messages, and even in real life, people repeatedly told me that they hoped the exposure I was receiving would lead to some new career opportunities for me. I naturally appreciated such wishes, and must confess to having thought something similar myself.

    But upon further reflection, I think that such hopes—mine included—miss the point of my paper.

    What caught people’s attention was not so much my personal experience, but rather how it reflected that of an ever-increasing portion of today’s faculty members. While I would certainly like to have more secure employment, the conversion of just one person from contingent faculty to the tenure track will not change any of the conditions that prevented me and other members of the new faculty majority from attending the MLA conference. Naturally, almost everyone who wished me well would have expressed similar thoughts to the rest of the nation’s non-tenure-track faculty members had they the venue to do so. I found myself wondering, then, if my paper really had put me in the position of an Everyman, as the Chronicle suggested. Were the calls for someone to do something for Brian Croxall reflective of a faint hope that saving Everyman could result in saving the entire profession?

    As wonderful as it would be for the wasteland of academic career opportunities to be saved by the revivification of some Eliotic Adjunct King, it just can’t work that way. The problems of contingent academic labor are systemic, and perhaps cannot be adequately addressed by a single department or even a university, let alone the blogosphere.

    But one solution is to make sure that those who are applying to graduate school know very, very clearly what they are getting into. No one at my undergraduate alma mater told me in 2001 about the realities of the job market, and it certainly wasn’t in the interest of the university that accepted me for graduate study to do so. If we humanists want to be humane, we ought to level with our undergraduates.

    By chance, I just received an e-mail message from someone who attended my college and is interviewing as a candidate in my graduate department. She wanted to know what she could do to prepare. What did I do? I answered her questions as best I could. I also pointed her to several articles by Thomas H. Benton in the Chronicle that outline the risks of graduate school in the humanities, and I mentioned a paper by Brian Croxall. That guy may have peaked, but he made a good point.


    1. For example, Audrey Williams June, “Nearly Half of Undergraduate Courses Are Taught by Non-Tenure-Track Instructors,” Chronicle of Higher Education, December 3, 2008, http://chronicle.com/article/Non-Tenure-Track-Instructors/1380/.

    2. Tamar Lewin, “At Colleges, Humanities Job Outlook Gets Bleaker,” New York Times, December 18, 2009, http://www.nytimes.com/2009/12/18/education/18professor.html.

    3. Jennifer Howard, “Missing in Action at the MLA: Today’s Teachers of Today’s Students,” Chronicle of Higher Education, December 29, 2009, http://chronicle.com/article/Missing-in-Action-at-the-ML/63276/.

    4. Amanda French, “Make ‘10’ Louder; or, The Amplification of Scholarly Communication,” blog entry, Amandafrench.net, December 30, 2009, http://amandafrench.net/blog/2009/12/30/make-10-louder/.

    5. Bitch Ph.D., “Auld Lang Syne,” December 29, 2009, http://bitchphd.blogspot.com/2009/12/auld-lang-syne.html.

    Uninvited Guests: Twitter at Invitation-Only Events

    Invitation-only gatherings are often designed as specific interventions in a certain scene or subdiscipline, and therefore a lot of care goes into identifying and recruiting participants who are either positioned to make a desired intellectual contribution to the immediate proceedings, or to synthesize and take the work of a group forward after the lights go out in the auditorium. Other events are imagined as learning experiences or sites for advanced training, and participants may be identified—and excluded—based on level of need, or on the relative merit of their applications to attend.

    Organizers know—and generally regret—that pragmatic concerns and financial constraints result in the exclusion of a multitude of interesting people and perspectives. Closed events are not crafted with the goal of keeping “the wrong people” out, but of bringing enough—or, more accurately, a manageable number—of the right people in. These things need to be worth the investments they require, both of funds—often quite scarce for humanities undertakings—and other “costs of opportunity,” including the work the organizing group is therefore not engaged in, and the invaluable time and energy of all participants.

    But goal-oriented, laser-like focus and a predetermined guest list naturally put an event in danger of over-determined—predictable, excessively conservative, even tedious—conversations and outcomes. This is a risk of which good organizers are conscious, and against which they press. The most common way to work within attendance constraints and still leave a crack in the door is to think of invited participants as ambassadors of certain communities. Many symposium attendees will adopt a representative stance even without being asked to, as soon as they realize that they are the only—whatever: literary theorist/material-culture expert/digital historian/etc.—in the room. And some moderators will make desired personae explicit. (I use that word deliberately, because this kind of representation is necessarily masquerade, and no one seriously thinks it compensates for absence—however, ritual and performative aspects of academic interaction are often particularly highlighted at smallish events.)

    At the same time, there’s room elsewhere to ramble, and ways to include a broader set of voices. Traditional professional-society meetings are rarely closed, but typically finance “openness” through membership and conference fees and—often—by sacrificing the degree of attention to product and coherence that can be paid at a smaller, more carefully crafted gathering. Or you could build your own conference, on the fly. In our DIY U, Edupunk era, we’re experiencing an explosion of “unconferences.” The premier model in the humanities is THATCamp (The Humanities and Technology Camp), which originated at the Roy Rosenzweig Center for History and New Media at George Mason University. This is a do-it-yourself digital-humanities conference, at which a hat is passed for donations, only the loosest practicable vetting of attendees is done, and participants collaboratively set the discussion and demonstration agenda at an opening session and “vote with their feet” thereafter; that is to say, they take continual responsibility for their own conference experience by freely floating—at any point—to other scheduled sessions or spontaneously creating new sessions that strike them as more useful. (Some of my most productive and stimulating professional experiences of the past few years have taken place at unconferences.) Many events are now streaming passive audio and video live, and experimenting with venues like Second Life as substitutes for the expense of physical presence and embodied interaction. In the past year, I have even unexpectedly “attended” an event or two that combined live streaming with the DIY sensibility, when a local participant realized the proceedings would be of interest to a larger group, called out, “Anybody mind if I broadcast this?,” and set up a spontaneous Ustream.

    And then there’s the pervasiveness of Twitter. Invitation-only gatherings like those I have described typically have associated Twitter hashtags, which are themselves a public invitation to aggregate perspectives and join in conversation. A hashtag is a small piece of metadata, agreed upon by Twitter users informally—by virtue of collective use—as an appropriate marker for a particular concept or moment. Some hashtags are jokes, some are prayer beads, some are signifiers for emerging perspectives and nascent online communities (see #alt-ac, the hashtag for discussions of alternative academic careers), and some mark Twitter messages as relevant to the discussion at a conference or other event.[1] Twitter has played an important and occasionally transformative role at every academic gathering I have attended since early 2008. It has provided useful—and sometimes surprising—demonstrations, for conference and meeting participants, of the engagement of broad and underrepresented communities with issues under debate. It has brought divergent perspectives helpfully into play, sharpening discussion and leading to proposals with broader reach and impact. In a time of dwindling travel budgets, it has allowed key, already well-networked community members to participate in meetings from afar, with little technical overhead and less disruption to their working lives than formal virtual participation through an interface like Second Life would require.

    Twitter also allows invited conference goers to spread a wealth of ideas being voiced behind closed doors. These ideas are shared with established but evolving networks, which—at the conferences I attend, but each one is different—largely consist of students and colleagues in higher education, and in the worlds of academic publishing; libraries; museums and archives; information technology; and humanities centers, labs, and institutes. I have seen Twitter use at academic conferences promote valuable exchange among university and K–12 educators, and contribute to and demonstrate value in the public humanities in an immediate and tangible way. If Twitter itself—as commonly used by academics—operates as a gift economy, then conference hashtags are little beacons of that generosity.

    But it’s not all sunny in closed-conference-open-Twitter land.

    There are two conflicting complaints, commonly expressed by both sets of my interlocutors—sometimes even simultaneously—in online and face-to-face communications during private conferences. The voice from Twitter cries: “Elitism! Hypocrisy! How can you be discussing—pick your poison: the public humanities, the future of scholarly communication, the changing nature of the disciplines—in a cloister? Who are these privileged few? And why weren’t we all invited to attend?” To be fair: in my experience, messages of thanks to those who have tweeted, for broadcasting the ideas of the gathering to a wider audience, far outweigh any complaints—but a strident complaint or two, often from colleagues at sadly underfunded institutions, is invariably present. It is to the complaining Twitterati that I have addressed my long preamble on the aims and necessary limitations of smaller gatherings. Sorry, guys—really. It’s usually about the money and the focus, but sometimes it’s even because they couldn’t manage to book a larger room.

    And of course my lengthy disquisition on Twitter was meant to level the playing field for those senior colleagues—yes, this divide is largely generational—who have not engaged with Twitter, and who have indicated to me how troubling they find its use in academic settings. For it is the anti-Twitter reproach from within the conference room that I most want to address.

    I suspect conference followers and participants on Twitter—whose presence Margaret Atwood likens to “having fairies at the bottom of your garden”—have no idea how magically disruptive they are. If they sense it, they may still be surprised at the character of that disruption. Several times now, I have heard the technology the Twitter community embraces and explicitly figures as democratizing and personalizing described in terms of alienation, invasion, and exclusion. These face-to-face conversations about Twitter are so fraught that delicacy cannot accord with 140-character limitations, and therefore they do not make it into the online record. Sometimes, indeed, they only come in a private, kindly meant word over drinks, or in shared taxicabs after the tweeting has ceased. Other times, the exchange gets heated and publicly awkward.

    Five problems with Twitter use at closed gatherings have been expressed to me.

    The first is dismay that its application was not evident to everyone from the outset of the event. A small group of us deliberately heightened this response at a recent gathering, when we decided to “pull the curtain” on a hashtagged Twitter conversation that had been going on unnoticed by the majority of the fairly traditional scholarly crowd. The criticism is fair: Twitter changes a conference dynamic in ways that may be invisible to some participants. For a little while yet, the possibility of its presence should probably be addressed at the outset of closed conferences, so that any requested ground rules can be discussed and agreed upon, and so that participants are aware of the option to engage. Some professional societies, such as the Modern Language Association, and membership organizations, such as the Coalition for Networked Information, have begun promoting Twitter hashtags or even publicizing them well ahead of a conference event. Regardless, you can basically assume that if people have open laptops or handheld devices at a gathering, and still seem alert, they’re note-taking or tweeting—not reading email or playing games. At least, not much.

    The second issue is related: a feeling that Twitter use is exclusionary. At the outset of a closed conference, some people may have access to it, and others may not. I have figured Twitter as a democratizing medium; however, participation in it is not universal. For most people in academic settings, this is a choice. Because accounts are free and easy to set up, the only reason you can’t rapidly remedy the problem, if you wish to, is that you may lack a laptop or smartphone. When you first set up your account—especially if you do so in the middle of a rapid-fire exchange—you are likely to be a little inept and lost. This is a sinking feeling you might recall from your early days of graduate school, or your first academic conference. It passes quickly, as you learn the lingo and cultural codes.

    Next comes the concern that Twitter damages one’s ability to engage and converse in the room, or that it lowers the level of discourse. Attentional demands may be a problem for some, as Twitter use is a learned skill. As to the latter issue, I will address only deliberate rudeness, because I worry that statements about lowered discourse are simply code for “discourse with people not like me,” and suspect that no arguments of mine will shake the foundations of that view. New-media scholar danah boyd and others have exposed rudeness in back-channel chatter as a real concern, with immediate and dreadful implications for speakers at popular conferences.[2] However, it is important to say that Twitter use does not inherently promote inattention or bad behavior.

    I’ve never witnessed a nasty backchannel in an academic setting—where we generally do share notions of fairness and propriety. More frequently, there’s a little lag between the themes expressed in a Twitter conversation and the topics being discussed in the room, which can cause participants to divide their attention, but which can also evolve as an interesting counterpoint to later discussions.

    Privacy concerns related to Twitter use at closed gatherings are a real issue. Often the greatest virtue of an invitation-only event, for participants who represent administrative units or high-profile organizations, is the opportunity to speak a little more candidly than they can in public. In my experience, Twitter users are sensitive to these moments and either moderate their observations and reportage accordingly or refrain from tweeting at all. If, as it seems, we are moving into a period in which always-on, networked communication becomes the norm, even at private academic events, it is the responsibility of participants to remain sensitive to desires for confidentiality or discretion—and, in the moment, speakers may need to make these desires a little more plain.

    Finally, the need for privacy is not the same as a wish for control. I am fairly unsympathetic to an ownership frustration I have heard from a small number of scholars, manifesting as a desire that ideas they express at conferences—even well attributed—not be circulated via Twitter. I have come to understand that this concern stems less from a kind of proprietary interest over the ideas—that is to say, it is less a matter akin to copyright—than from a sensation of the loss of control. The level of control we used to feel over the distribution and reception of scholarly statements was only ever an illusion made possible by the small scale and relative snail’s pace of print publication. It was also enabled by authority systems that—while they have performed a salutary function of filtering and quality assurance—are under scrutiny in an age of electronic text, because of their incongruence, economic instability, and cumulatively stifling effect.

    One manifestation of this lack of control is the acknowledged “telephone game” of Twitter—the degree to which repetition with a difference can lead to partial or missed understandings. Sometimes, offhand, minor points that slip right past the sanctioned, face-to-face conversation can make it big online: that’s human interaction for you. The Twittering fingers tweet, and having tweeted, twitter on; or live blog, or take notes in wikis, et cetera. Although it can be helpful when speakers are plugged in enough to be able to influence conversation in both offline and online streams—not even necessarily simultaneously—it is simply folly to think that we can control what’s being said about us on the Internet. That was never what scholarly communication was about, anyway.

    I’d offer three strategies to address concerns about the immediacy of web publishing of conference proceedings via Twitter.

    The first is something we’re always doing anyway: simply working to express our ideas as clearly as possible in the room, and to listen actively for feedback that may suggest misunderstanding or lack of conveyed nuance. Good luck with that (sincerely!).

    Perhaps a more implementable suggestion for speakers and conference participants concerned about these matters is that they publicly request their names not be attached to tweets or blog posts. This strikes me as most valid when it touches on issues of privacy and confidentiality—but be aware that when your name is used on Twitter, it is likely done in an innocent spirit of attribution. If your ideas are cited, chances are good that the writer approves of them and wishes to lend you a microphone—or at least that he or she thought your statements interesting and worthy of further discussion. If, on the other hand, your perspective is represented in a critical way and you are cited as its source, it’s probably because you are known to be on Twitter and presumed to be as able to defend yourself there as elsewhere. In other words, I have heard some anxiety expressed about personal attack, but—while contentious conversations have been opened up on Twitter in a familiar spirit of academic debate—I cannot recall ever seeing a specific hostile response—much less an ad hominem attack—directed at a colleague who lacks a presence on Twitter, or might be thought defenseless in that medium. There’s not a lot of passive aggression in an environment that trades on professional identity, necessarily precise language, clear attribution, and open exchange.

    Most of what I’ve said is relevant to public as well as invitation-only academic events—but the turmoil around conference use of Twitter over the past year has seemed most acute at private gatherings. It clearly relates to the ethos of the academic Twitter demographic—mostly consisting of tech-savvy, early-career scholars or #alt-ac professionals—and the expectations and longstanding traditions that inhere in private events. Invitation-only meetings often involve more established scholars and administrators who have paid their dues under a very different set of academic protocols and for whom networked communication is important, but not necessarily ever-present.

    These groups need to find ways to move forward together within the new norms of scholarly communication, and in a way that enhances shared work and promotes meaningful interconnectedness. Which brings me to the final strategy I’d suggest we all adopt: simply to—or continue to—participate.


    1. http://www.twapperkeeper.com/hashtag/alt-ac. Also see http://www.twapperkeeper.com/hashtag/reenx and http://tagdef.com/uvashape. Each of these references will—depending on the ebb and flow of networked conversation—lead you to current or archived tweets stemming from a referenced gathering, or maybe even indicate to you that nobody has been chatting under a particular rubric lately. I’ve taken a variety of approaches in these references to demonstrate a few ways of accessing Twitter conversations and highlight the degree to which tweets are both ephemeral, in that they are part of a fairly volatile landscape of protocols and interfaces, and capturable, as part of our cultural record. Whatever you see when going to those links is unlikely to be what I saw when I chose to publish them here—and it’s not unlikely that a link or two will break. However, the Twitter back-channel conversation for at least one of those conferences (#uvashape) is to be published by Rice University Press. Also, the Library of Congress has announced an initiative to archive the entire Twitter corpus—an amazing resource for future scholars. Library of Congress Blog, “How Tweet It Is!: Library Acquires Entire Twitter Archive,” blog entry by Matt Raymond, April 14, 2010, http://blogs.loc.gov/loc/2010/04/how-tweet-it-is-library-acquires-entire-twitter-archive/.

    2. danah boyd: apophenia, “Spectacle at Web2.0 Expo . . . from My Perspective,” blog entry by danah boyd, November 24, 2009, http://www.zephoria.org/thoughts/archives/2009/11/24/spectacle_at_we.html.


    Notes on Organizing an Unconference

    While the term “unconference” has been applied—or self-applied—to a wide variety of events, it usually refers to a lightly organized conference in which the attendees themselves determine the schedule. In most cases, unconferences attempt to avoid the traditional unidirectional paper model in favor of meaningful and productive conversations around democratically agreed-upon topics, organized into sessions. Unconferences traditionally have low registration fees and therefore run on much more conservative budgets than traditional meetings or conferences. The other thing that sets unconferences apart from traditional conferences is that they usually have far fewer attendees: it is not uncommon for an unconference to draw no more than 75–100 people.

    Although the unconference idea got its start—and is still going very strong—in the tech sphere, at events like BarCamp, Foo Camp, and BloggerCon, unconferences are becoming increasingly popular in the scholarly landscape. This is no great surprise, as many scholars are beginning to feel that traditional academic conferences and meetings are perhaps not as productive as they once were. In addition, in today’s economic climate (with many departments reducing, or even completely eliminating, travel funds), the financial burden of a traditional conference—including the often high cost of registration—has made it impossible for many scholars to attend more than one or two conferences in their field, or any at all. Hence, the often very low registration fees of unconferences make them quite appealing.

    I’m not saying there isn’t a place for traditional conferences in academia. They are important for a lot of reasons—not the least of which is their role in the tenure and promotion machine. However, unconferences fill an extremely important niche in the scholarly ecosystem. It is worth noting that several traditional conferences are planning on experimenting, or have already experimented, with unconference sessions—essentially, an unconference within a conference.

    I have been very fortunate to co-organize Great Lakes THATCamp (a regional version of The Humanities and Technology Camp), and I found it one of the most rewarding and exciting things I’ve ever done. Along the way, I learned some things that might prove useful to those adventurous souls who are thinking about organizing their own unconference—either as a stand-alone event, or as part of a traditional conference.

    “Lightly Organized” Doesn’t Mean No Organization

    Just because an unconference doesn’t have the organizational and logistical trappings of a traditional conference—lengthy paper submission/acceptance cycle, mind-boggling schedule, detailed conference program, and complete conference abstracts—doesn’t mean that little work goes into organizing one well. I was quite surprised by the number of colleagues—people unfamiliar with the unconference model—who, upon hearing that I was co-organizing Great Lakes THATCamp, said something akin to “well, I guess that means you don’t have a lot to do.” Nothing could be further from the truth. If an unconference is to be done right, it’s not just a matter of getting some rooms, setting a date, and spreading the word. “Light organization” is an art unto itself. There are things that need to be organized and controlled—there is absolutely no doubt about that. However, you can’t step over the line into over-organization and try to control every little bit of the event.

    A Venue that Facilitates Conversation

    One of the most important hallmarks of an unconference is meaningful, productive conversation—whether it takes place in large groups, small groups, or between two or three attendees. As such, unconference organizers should do their best to arrange a venue that facilitates these kinds of conversations. If you can manage it, a venue with a variety of room types and sizes is great. If all you can manage are classrooms—which might be the case if your unconference is taking place on a university campus—try to get rooms where the chairs and desks aren’t bolted to the floor, so that attendees can reconfigure the space as they see fit. If you are able, also try to find a venue that has smaller, informal conversation spaces. Conference rooms are great for this, but don’t discount two or three comfortable chairs—or even benches—strewn hither and yon in hallways and corners. Any spot where people can hang out comfortably during the day and have meaningful conversations will do.

    Remember, An Unconference Isn’t About You

    An unconference is as much about the participants themselves as it is about you. You might have organized the event, but it doesn’t belong to you. As such, you need to make sure that, whenever possible, decisions are made by the attendees themselves. In many ways, each attendee should be seen as much an organizer as you are.

    Be Flexible

    This is easily the most important thing I learned when organizing Great Lakes THATCamp: be flexible. Flexibility and fluidity are the name of the game at an unconference. Attempting to control every aspect of the event with an iron fist will probably end in disaster. If the participants want to change the overall schedule on the fly, let them—remember, the participants are as much in charge as you are. If participants decide to change the topic of a particular session midway through, don’t raise a fuss. If you need to push lunch back so that the momentum of a particularly fruitful and exciting session can continue, do so. If the way in which you planned on building the initial schedule isn’t working out, figure out a better way, and don’t be afraid to ask the attendees themselves.

    The Bottom Line

    The subtext of all of these thoughts is that you should never forget that the conversations between attendees drive an unconference. You need to do everything you can to facilitate these conversations.

    Getting the Most Out of an Unconference

    Over the past couple of years, I have been fortunate enough to attend several unconferences, both locally and nationally. I say fortunate because these experiences have opened my eyes to how amazing the unconference format can be. I cannot think of a better way to share ideas, make personal and professional connections, and generally have an extremely productive yet enjoyable time. That being said, the unconference format can be challenging and confusing, especially for those used to a more traditional conference model. Sharing some of my unconference experiences might make things a little easier.


    Participation is by far the most important factor in determining whether or not an unconference will be successful. For the organizer, it is essential to get people together who truly want to be involved. For the attendee, an unconference is one of those situations where you really get back what you put in. The best sessions by far had the feel of an engaging graduate seminar class, with contributions coming from everyone, and where there was freedom for even the topic to evolve with the discussion. In other words, everyone came to participate.

    I will also point out that while it’s completely natural to spend the majority of your preparation time on your own presentation, my experience suggests that bringing thoughtful questions to other presentations is equally important. The best thing about an unconference is that professionals are able to come together and discuss real issues face to face. So don’t lose sight of the fact that your input could be what moves someone else’s project forward—perhaps in ways they never expected. Related to this, make sure to pay attention to the other participants’ blog/website postings and comments leading up to the conference—this, of course, being dependent on the unconference having a blog or website. Knowing what other people are thinking about before the event can jump-start discussion in a powerful way.

    What to Propose?

    Another common question for prospective unconference participants is what to propose. The most important thing I learned about unconference proposals, as both a presenter and an audience member, is that interactivity is essential. No one wants to sit around and be read to, especially when it’s possible to give them a chance to react and share their own ideas.

    Along with this, it cannot be stressed enough that big ideas should be welcome—even if, as is often the case, they are challenging to define, explain, or put into practical terms. Remember that because these discussions can be free-flowing, there is no need to arrive at the unconference with predetermined conclusions. Simply asking an interesting question is all that is required.

    On the other hand, some great sessions were remarkably down-to-earth and practical. This was especially true when talking about technology, coding, implementation of new tools, etc. The point is, while big ideas are encouraged, practicality and pragmatism are also important components in many excellent proposals.

    Enjoy Yourself

    The unconference model allows for relatively informal discussions to take place. Also, because everyone is technically a presenter, many of the hierarchies found in some more traditional conferences are eased. I would advise everyone attending an unconference to take advantage of this. Make connections with people from different levels of seniority or experience. I’ve found that the more people enjoy themselves, the better the conversations flow, which, in turn, leads to better discussion and a more successful event. So have fun.

    Let’s Do It Already

    Many of us have come to loathe the rigidity, formality, and expense of traditional academic conferences. In contrast, unconferences thrive on flexibility, collegiality, and thrift. More to the point, they rely heavily on the attendees themselves—their attitudes, motivations, and work ethics—for success or failure. At unconferences, it generally doesn’t matter who says something first; what matters more is who says something thoughtful, and what that thoughtful thing is. Discovery happens through group cooperation. Insight and knowledge are not guarded for the next publication; they’re shared openly, with hopes that others can contribute to ongoing conversations that make our work better.

    This really gets to the heart of the issue: why do we attend conferences, and why do we contribute to them? Ideally, we give conference papers in hopes of sharing our research, getting recognition for such research, and getting critical feedback. We might also hope that a conference paper’s mere presence on the conference program grants it weight on CVs and in tenure reviews, even if only half a dozen people actually came to the session to hear it read.

    What if instead we start fostering systems that reward you if your unconference session spawns half a dozen projects from attendees? The focus in this case is not on what you produce yourself, but what you help others produce.

    As they now stand, academic conferences are increasingly expensive, poorly attended—not necessarily in terms of registrations, but in terms of people actually attending sessions—and rarely seem to generate the kind of innovative work needed to meet the challenges of education and scholarship today. If we want to start hacking the academy, let’s start hacking this cornerstone of academic culture by incorporating unconference elements into the programs of traditional conferences. If you’re going to an annual conference, try to organize an unconference yourself, either with the support of the organization, or on your own off-site. We should start small; test some things out; make changes when necessary. But we should start, if for no other reason than to make the work we and our colleagues do better, and to make our experiences at conferences richer and more productive.

    Voices: Twitter at Conferences

    Buried within the sense that the 140-character form trivializes our work—a complaint about condensation that might not be so far removed from faulting poetry for its failure to present extended realist narratives—is an implied concern about who it is that sees us being trivial. This is a concern that has dogged public scholarly work for eons, from those scholars who have written crossover books, to those who have written editorials for major publications, to those who have developed blogs and other online presences. Yes, Twitter is the most elliptical of these, but it’s a key form of outreach not just to our colleagues but to the broader intellectual public, and to those whom we need to support higher education. All of these public forms of writing have the potential to demonstrate what it is that we as scholars do, and why the broader culture should care about it—and until we get over our fears of talking with the broader culture, in the forms that we share with them, we’ll never manage to convince them that what we do is important.

    —Kathleen Fitzpatrick

    Twitter is one way to explain to graduate students what you do at big conferences. In addition to the actual intellectual conversation, the critical mass of faculty on Twitter means that you can see what faculty do: how often people go to panels, when they visit the book exhibit, when they need downtime, whether they’re still working on papers, and more. There’s a comfort in seeing the different ways in which faculty and graduate students inhabit the conference: There’s not just one way of participating in a conference, and so you should feel empowered to make the event as meaningful/productive for you as possible, without worrying too much about whether you’re “doing it right.”

    —Jason B. Jones

    Twitter is an invaluable ready-made network, particularly for newbies and junior scholars for whom the convention often looms like an orbital Death Star poised to suck every ion of individuality and intellectual self-worth into its all-consuming tractor beam. Twitter, by contrast, is the cantina in Mos Eisley spaceport. The “tweet-ups” are a great example of this: if you need a break, need a drink, or just need some time to turn off and chill out, you know when and where to go without the pressure and hang-ups of “Am I really invited?” “Will anyone talk to me?” Nothing in an institutionalized world is ever purely democratic or transparent of course, but I think it’s fair to say that academic rank and status are markedly less important than if, say, you try sidling up to someone at the New Literary History cash bar. Most of all what I think Twitter does at a conference is create a common narrative; or better, it’s a kind of communal narrative to which all can write simply by virtue of opening an account and invoking the hashtag. Retweets and replies define the plot and tempo. The narrative is not complete or comprehensive, but that’s not the point. Narratives are enabling precisely because they are partial representations. Who knows this better than scholars?

    —Matthew G. Kirschenbaum

    The lesson digital humanists learn, especially by using Twitter, is that scholarly conversations move quickly now, because they can; therefore, one had better be as quick as possible to join in that conversation. Monthly or quarterly journals and annual conferences used to be the way that scholars wrote among themselves, but now it’s e-mail listservs—yes, still—and, better, the much more public blogosphere and Twittersphere.

    —Amanda French

    The Entropic Library

    In the United States, over the past century, the practice of health care has transitioned from being a largely distributed and generalist profession to a much more corporatized and specialized one. It is a change that many greet with regret, despite the obvious advances in health care. One of our cultural touchstones is a romanticized image of the doctor or caregiver tending to patients in their homes, a leather satchel of crucial instruments close at hand. Still, we acknowledge a new reality—of health care as a consumer product: tranched and parsed into products designed for maximum efficiency. Home health care is considered a scarce and expensive resource. In other sectors, we see a similar trend. Local mechanics, hardware stores, and groceries are disappearing in favor of one-stop box stores. Geek Squad and Facebook are replacing specialists who used to fix computers in the home or provide websites for small businesses.

    Academic libraries are different. They are, and have been for a long time, highly centralized institutions whose services and organizational structures are often designed to reflect a certain order that is perceived to exist within the broader institution.

    Departments have liaisons, collection development often falls along disciplinary lines, and the library is treated as a destination—a physical and virtual domain—out of which the tools for scholarship will be doled. Academic libraries thus face a challenge that is the inverse of the trend in other sectors: a digital-scholarship environment that screams for decentralizing many library services. In order to do so, we must overcome considerable cultural inertia.

    In 2002, the American Library Association launched the massive Campaign for America’s Libraries. The centerpiece of the campaign was a new marketing effort built around the slogan, “@ Your Library.” According to the ALA’s website, the campaign has several purposes.

    • Promote awareness of the unique role of academic and research libraries and their contributions to society;
    • Increase visibility and support for academic and research libraries and librarians; help librarians better market their services on-site and online;
    • Position academic and research librarianship as a desirable career opportunity.[1]

    While these are mostly admirable goals, they betray the extent to which the library profession, as represented by the ALA, is willing to respond to the challenges of the digital era by simply marketing traditional services more aggressively. This approach is flawed, not because patrons do not value traditional library services, but because those services no longer reflect the character of the institutions that they serve.

    When the traditional disciplines engage more with digital technologies, the familiar practices become fragmented and less familiar—a phenomenon that Wendell Piez describes as akin to “a field where native plants and wildflowers are overtaking a tidy lawn.” This unruliness disrupts the mappings that libraries have traditionally applied to the disciplines. Instead of designing liaison, cataloging, and collection-development services that support a predictable mode of scholarly work, libraries need to support scholarship that emerges from a state of relative entropy. The new mapping, in other words, is not to make traditional library services more digital, but rather to explode them out into a complementary state of entropy.

    The entropic library is one in which the library is not only a physical destination and an institutional cornerstone, but also a gravitational force in the digital scholarly life of the campus. It is a force that is exerted by library staff acting as consultants, software developers, funders, principal investigators, data curators, and mad scientists. It acts as a resource for the university’s scholars by helping to shape and support new digital methodologies, which it channels into programmatic activities when there is a potential benefit to the wider university community. Its first concern is not to get digital things into the library as new collections, but to get the library to where the digital things are being used, and make them accessible and sustainable.

    Embracing entropy is difficult for an institution whose identity has been defined by its advocacy of order, and it can be difficult for lovers of libraries to see entropy as anything but a threat to everything that we cherish in our libraries. Our romanticized image of the library tends to be of the library as a destination. In this image we might imagine the cloistered stacks, the hours spent ingesting the wisdom in the books, and the boundless potential in the unread volumes. It is a powerful image, and it is made more poignant by the sensory associations we often have with the library: the smell of the bindings, the muted sounds in the stacks, the concentration evident on the faces of readers. It is understandable that libraries, faced with the emergence of digital technologies in the 1990s, would design services that attempt to preserve the appeal of that library. Reference areas crammed with tables, lamps, and books were transformed into computer labs, but the space retained its purpose as a destination for study and work. Card catalogs were replaced by Online Public Access Catalogs (OPACs), which were largely digital renditions of the same tools that libraries had always offered. Print-journal collections thinned as digital subscriptions became more cost-effective, although real challenges to the academic-publishing paradigm would not gain traction for at least another decade. The roles of librarians, however, largely remained the same—as gatekeepers and guides for information resources housed within and, to a limited extent, outside of the library’s physical and digital bounds.

    Creating digital surrogates for traditional services was a necessary, evolutionary step toward modernization. But there remains a chasm between the notion of the modern library as a purveyor of traditional resources delivered digitally, and the entropic library—steeped in and defined by the new digital scholarship. The entropic library needs to cultivate physical spaces in which to do scholarly work using digital media. Yet it is no longer a font from which information flows. It is a kaleidoscope of data, knowledge, and interaction, brought together by the scholarly primitives and crystallized for moments in the physical spaces that the university contains.


    1. “Welcome to the Academic and Research Library Campaign,” American Library Association, http://www.ala.org/advocacy/advleg/publicawareness/campaign@yourlibrary/prtools/academicresearch/academicresearch.

    The Wrong Business for Libraries

    Our academic libraries have been in the wrong business for about 150 years. It was in the mid- to late nineteenth century that they began to be characterized as storehouses or warehouses of information. This information-centered model is a mistake. Before then, libraries were not stand-alone collections of books, but great complexes of mental and physical activity that included museums, gymnasiums, and baths. The goal of the library was to support the great scholars of the day by providing them access to the most important sources of information, but also to everything else that was needed to turn that information into new knowledge—including a space for discourse and debate. This is not to say that we should put baths or gymnasiums back in our libraries. We simply need to rethink completely both what it is that libraries do and why they do it.

    The struggle of the academic library to stay relevant today is due to this switch from a scholar-centered model to an information-centered one. The imminent collapse of the latter model is causing tension not only across academic libraries and the field of library science, but across academia as a whole.

    Prior to the Victorian era, most academic libraries were what Matthew Battles might characterize as Parnassan—small, well-focused institutions where what mattered was not the quantity of the collections, but the quality.[1] Then our system of universities exploded, and at the same time the cost of printing went down. Libraries began to put collecting at the top of their priorities. The result was that libraries changed from circumscribed institutions that fostered the entire life cycle of scholarship to what Andrew Abbott describes as a “universal identification, location, and access machine.”[2] Where the Internet has made it possible to finally fulfill the idea of our university library as universal library (again, to use one of Battles’s terms), our academic libraries have failed. In just a few short years, Google has come much closer to the creation of a universal library than our libraries have.

    The problem is, of course, that we have spent nearly 50 years crafting this idea that our academic libraries are centers for information retrieval. Only one ALA-accredited graduate program has maintained the title “library science”; thirty have changed to “library and information science”; four put information first, but retain library—“information and library science”; and seventeen have dropped “library” altogether and are simply schools of “information science” or “information studies.”[3] Similar trends can be seen in the United Kingdom, where most recently the program at University College London has changed from the department of “information and library science” to the department of “information studies.” We don’t even produce librarians anymore—we produce information scientists.

    We librarians put all of our eggs into the “information basket” and it feels a bit late to turn back now. But the Internet has completely changed our relationship to information, and as a result, the model of library as information center is going to collapse.

    It is time for a new theory of libraries—well past time, in fact. The user—the scholar—must be put back at the center of the academic research library, but the user’s needs must be considered within the broader context of the process of scholarship. In focusing on information, academic research libraries have, in part, been trying to address what users want, not what they need. As Ranganathan stated, “the majority of readers do not know their requirements.”[4] It has long been the role of libraries and librarians to help scholars understand them.

    Any new theory of libraries must of course accommodate the increasing needs of research and scholarship for large quantities of information, but it should not privilege quantity of information above all else. As important as the information itself is, providing and supporting an environment that allows for the transformation of that information into new knowledge is essential.

    What has been forgotten, for example, is that libraries were, and should be again, inherently social places. These are spaces that offer access not just to resources, but to people—librarians, archivists, other scholars—with whom one can enter into discourse about the resources therein. An academic research library should first be seen as a collection of services that support the creation of new knowledge. From this perspective, the library is not defined by its walls or by its collections, but by those very services. The goal of a library is not, then, to provide access to information; it is to provide a space—whether literal or virtual—for the support of all aspects of the scholarship process, with information provision being just one of these services. The information commons, gateway, or storehouse should not be the goal or the fate of the academic research library.

    The library is a combination of tangible and intangible elements: a collection—physical or digital—an organizational system, and scholarship, but also the invisible environment that contributes to and connects all three. There is no library, for example, without a culture of inquiry. Everything that is done in the library (entering, lingering, reflecting) and everything the library holds (collections of objects, living things, knowledge, information, contexts, lessons, memories), when bound together by a systematic, continuous, organized knowledge structure, support the act of new-knowledge creation known as scholarship. The return on the resources invested in the library, therefore, is measured not in the size of the collection, or even in the number or satisfaction of users, but in their experiences.


    1. Matthew Battles, Library: An Unquiet History (W. W. Norton & Company, 2004).

    2. Andrew Abbott, “Publication and the Future of Knowledge,” 2008, http://home.uchicago.edu/%7Eaabbott/Papers/aaup.pdf.

    3. American Library Association, “ALA: Alphabetical Accredited List,” http://web.archive.org/web/20110605063115/http://www.ala.org/ala/educationcareers/education/accreditedprograms/directory/list/index.cfm.

    4. S. R. Ranganathan, Five Laws of Library Science, 2nd ed. (Bombay: Asia Pub. House, 1963).

    Reimagining Academic Archives

    ‘Does the past exist concretely, in space? Is there somewhere or other a place, a world of solid objects, where the past is still happening?’
    ‘Then where does the past exist, if at all?’
    ‘In records. It is written down.’
    ‘In records. And—?’
    ‘In the mind. In human memories.’
    ‘In memory. Very well, then. We, the Party, control all records, and we control all memories.’
     —George Orwell, Nineteen Eighty-Four[1]

    Archives are rarely created for the express purpose of being preserved, but develop organically as people live their—typically chaotic—lives. Archivists—many of whom serve in university archives and manuscript libraries—are dedicated to identifying, preserving, and providing access to a selective, authentic, and usable record of that messy human experience. People from all walks of life use archives to generate new ideas—or test existing ones—to confirm rights, to hold others accountable for their actions, to gain personal depth of understanding, to establish a connection with society or with the past, and to perform functions that help preserve democratic institutions, sustain civil society, or ensure social justice.

    The archivist’s charge was difficult enough to fulfill before the advent of networked computing technologies. Many people make overblown claims that a “digital dark age” is now upon us—that all of the electronic files we are creating will someday vanish. At first blush, we instinctively wonder how this could be possible: if there is one thing our lives do not lack, it is access to information. People demand, and are constantly developing, better ways to control, index, and sort massive stores of information, and few believe that it will all someday vanish, or perhaps slowly rot away.

    It is trite to say that e-mail, websites, blog entries, digital photographs, textual records, database files, and other electronic records are very susceptible to accidental loss, deletion, or decontextualization, even if we do not accept the premises of dystopian predictions that civilization will collapse after the oil runs out, or a catastrophe besets humanity. Nevertheless, records become more fragile and vulnerable as individuals, businesses, and even governments outsource data storage and management to the warm embrace of commercial vendors, ostensibly under the rubric of cost cutting and efficiency. Also, most people now create records using a wide range of tools, services, and hardware, leaving interrelated records strewn across hard drives, shared servers, social-networking sites, and cloud applications. These documents reside under the care, custody, and control of many different people and organizations—not simply the person or organization that created, and has a vested interest in, their content.

    Leaving aside the factors mentioned above, every set of electronic records is itself a constructed and contested entity. The person who creates or assembles the documents molds them into an archive through their activities, interests, and sometimes, their malfeasance, subterfuge, or inertia. Those who control its means of access also have a chilling ability to shape how that record is presented to the public, as certain citizens of the People’s Republic of China know all too well.

    However one wishes to slice or dice technical issues related to the creation and management of records, we know for certain that it is impossible to construct accurate histories without accurate and faithful evidence of people’s actions. Those who use archives can reconstruct or understand those actions only when records are maintained in an intellectually coherent fashion. The contextual relationships between the individual documents that comprise an individual or corporate entity’s intellectual output must be preserved. Similarly, future users of archives need to know how the records they are using are related to records produced by other records creators. Given these facts, what types of organizations are best placed to serve as the long-term, trusted custodian of authentic, verifiable, and accurate electronic records?

    It is tempting to think that the preservation of digital heritage can be left to those who provide the service of storing and disseminating the thoughts that we distill using keyboards, video cameras, or other digital devices. But to do this would leave the records at extreme risk of loss. At the eighth European Conference on Digital Archiving, Steve Bailey described this problem using an apt metaphor: Imagine if we had trusted the preservation of the records left by Samuel Pepys—the seventeenth-century London diarist—to those who produced his communication media: the stationer who sold him his notebooks, the tanner who sold him his vellum, and the cartographer who sold him the maps he carefully annotated.[2]

    Of course, each of the businesses Pepys patronized has long since passed gently into the night. We believe that the same fate will not await Google, Facebook, or Twitter, but even if they manage to survive, what will happen to the content stored in minor services, on contracted webhosts? Tellingly, the terms of service for nearly every free platform or low-cost web host make absolutely no promises regarding digital preservation, or even the return of content to users in case of business failure. Catastrophic business failure is hardly beyond the realm of possibility, as the collapse of Arthur Andersen demonstrates. Over a fifty-year period, Google is as vulnerable to social or economic change as the newspaper industry, and a revolt over its privacy policies alone could mortally wound it. Even now, its revenue stream relies heavily on a single source of income: advertising sales.

    The recent archiving deal announced between Twitter and the Library of Congress may or may not portend a partial solution to the problem of relying on commercial entities to preserve information needed for historical research. But let’s not kid ourselves: the Library of Congress is extremely unlikely to strike deals with every commercial entity providing social-media services, much less every web host, in the country. Other factors will undermine the effectiveness of mass archives. Users, quite understandably and predictably, have already begun to assert a—self-declared—right to remove content from the Library of Congress. The Twitter terms of service put into effect on September 10, 2009, grant Twitter express permission to make tweets available to anyone it chooses, and the disposition of public tweets made prior to this date, as well as all of the private tweets, should be an interesting issue for the California judicial system to resolve.

    Even if the mass archiving of materials from millions of records creators did not face significant legal hurdles, the methods that libraries use to catalog and make information available are not well suited to preserving the full context necessary to make individual records understandable. To oversimplify at the risk of stereotyping: libraries deal well with items, such as books, or consistent runs of uniform media, such as serials; archives deal well with aggregations of mixed media, and with preserving the contextual information that makes them understandable. While large repositories such as the Library of Congress can use cutting-edge tools to mine and repurpose large volumes of data, most tweets cannot be understood without extensive recourse to other online materials, such as blog posts or videos.

    Using their professional principles of provenance, sanctity of original order, collective appraisal, and active custodianship, archivists possess the conceptual tools to preserve and make accessible the raw materials of future history: e-mail, digital photographs, and other electronic records. Unfortunately, most archives have made little systematic progress in identifying, preserving, and providing access to electronic records.

    Why have most archives failed to effectively address electronic records issues? The reasons are many, but in the end the typical answers are that “digital preservation is hard,” and “we don’t have enough money to do it properly.”

    Nevertheless, working closely with university faculty, staff, and students, archivists must reorient archival programs toward electronic records and appropriate a set of low-cost tools and services to preserve digital information in a trustworthy fashion. The exact way in which a local archive may choose to rethink, reconceptualize, reconstruct, or re-create itself will vary and must be shaped by local context, but almost any institution can cobble this together with existing open-source software. Ultimately, traditional archives must be reimagined in an act of constructive transformation.


    1. George Orwell, Nineteen Eighty-Four (New York: Plume, 1949).

    2. Steve Bailey, “In Whose Hands Does the Future of Digital Archiving Lie?,” presented at the eighth European Conference on Digital Archiving, [formerly http://www.vsa-aas.org/de/aktuell/eca-2010/2010-4-29/].

    Interdisciplinary Centers and Spaces

    Centers of Attention

    I’ve been around digital humanities centers for a long time—fifteen years at least. I’ve worked at them—in positions ranging from part-time staff member to Fellow—consulted for them, given speeches at various openings and anniversaries, and been present at a few center funerals. So, I’m always interested in how these things get started and how they end.

    One of my favorite founding stories involves the Institute for Advanced Technology in the Humanities (IATH) at the University of Virginia, where a lot of my ideas about centers were formed. According to the story, IBM offered to donate a server to the University of Virginia—this was back when such things were much rarer, and a lot more expensive. The university naturally approached the computer science department, asking if they’d like the equipment. The department, amazingly, said “no.” They had heard, however, that there were some people over in the English and history departments who were doing things with computers. Maybe ask them.

    I’ve always imagined the server washing up on the shores of the College of Arts and Sciences and starting a strange cargo cult among a group of people who normally didn’t talk to each other much. There’s a guy in history who’s into computers, and there’s someone in English. Neither of them really knows what they’re doing, and the computer science people are too busy with serious computational matters to help out the poets. The librarians, fortunately, know more than the computer scientists about how to actually run a rack server, and so they get involved. Questions arise: Where do we put this thing? Who pays for its upkeep? Doesn’t it need, like, maintenance or witchcraft, or something? Are we really qualified to design websites, given that none of us have the faintest idea how to draw?

    That this turned into one of the most vibrant centers of intellectual activity in North America—a hugely influential research group that would be widely imitated by such contemporary powerhouses as the Maryland Institute for Technology in the Humanities and the University of Nebraska–Lincoln’s Center for Digital Research in the Humanities—should surprise no one.

    We like to marvel at the technological wonders that proceed from things like servers, but in this case—I would say, in all cases—the miracle of “computers in the humanities” is the way it forced even a highly balkanized academy into new kinds of social formations. Anyone involved with any of these big centers will tell you that they are rare sites of genuine collaboration and intellectual synergy—that they explode disciplinary boundaries and even the cherished hierarchies of academic rank. They do this, because . . . well, really because no one really knows what they’re doing. Because both the English professor and the history professor need to learn MySQL; because the undergraduate student from art history happens to be the only one who knows PHP; because actually, you do need to learn how to draw—or at least know something about design—and the designers are pleased to reveal their art to you. Because you know Java.

    These may not sound like disruptive modalities, but in an area of scholarship where coauthorship is viewed with suspicion and collaboration is rare, the idea that you couldn’t master everything necessary to create a digital archive or write a piece of software was a complete revelation. It forced scholars to imagine their activities in terms of highly interdependent groups. To succeed, you had to become like the Clerk in The Canterbury Tales: “gladly would he learn and gladly would he teach.” Working as a full-time programmer at IATH in the late 1990s—while finishing a PhD in English—not only changed the way I think about computers in the humanities, but changed the way I think about the humanities, and about higher education itself.

    Universities are designed around subject areas. But what if they were designed, like centers, around methodologies or even questions? Right now, we have English departments, and political science departments, and biology departments. These various units—made up of people who only occasionally talk to each other—band together to form things like the Graduate Certificate Program in Eighteenth-Century French Drama, or the Center for Peace Studies, or the Bioinformatics Initiative. What would it be like if that were all there was—structures meant to bring people and students together for as long as a methodology remains useful or a question remains interesting? Such entities would be born like centers—born with all the excitement and possibility of not knowing what you’re doing—of having to learn from each other what the methodologies and questions are really about. They might also die like centers. I mentioned that I’ve been at a few center funerals, and I can tell you that they don’t die the way you think—lack of funding, for example, is probably the least common reason. Mostly, they die because people move on to other questions and concerns—and what’s wrong with that? You could imagine a university in which scholars move through a number of different centers over the course of a career, and students pass through a number of them on the way to a degree—we’d have to change the names of the degrees to something vague, like “Bachelor of Arts” or “Doctor of Philosophy.”

    Years ago, while working at IATH, my dissertation director—Jerome McGann, one of the cargo cult founders—stopped me in the hallway and said, “Steve, be sure to treasure this experience. I’ve worked in this field a long time, and I can tell you: you may never see this again.” I think Jerry was right and wrong about that. He was wrong; I’ve managed to see it several times since leaving IATH, most especially at the center I’m now involved with—the Center for Digital Research in the Humanities. But he was also right. It’s easy to treasure the wrong thing about digital centers: to see the excitement brewing in a community of teachers, students, and researchers as a new opportunity for what we might do, rather than a way to affirm an amazing thing that has already happened.

    Hacker Spaces as Scholarly Spaces

    A hallmark of the hacker/maker culture is community collaboration. That community is often physically manifest in a particular space—a rented warehouse, a shed, somebody’s garage. Hacker spaces often grow out of a common need for a place to work, exchange ideas, share knowledge, and pool resources. In these cases, the community essentially exists without the space, but it is the space that breathes life into the community. Interdisciplinary practice works in much the same way. Many in academia are already interested in—and often work across—multiple disciplines, but lack a common space to facilitate both independent disciplinary work and collaborative interdisciplinary work. A hacker space.

    Such a scholarly space—of which HUMLab, the digital humanities and new-media lab at Umeå University in Sweden, serves as an excellent established example—exists not to institute interaction, but to provide a creative environment for scholars, researchers, artists, students, teachers, anyone with interest (hence paradisciplinary), to work, exchange ideas, share knowledge, and pool resources. A flexible scholar/hacker space encourages exchange of ideas, collaboration, and discovery beyond the discipline through an organic process of interaction, sharing, and learning from each other. Possibly the most valuable aspect of such a space would be the creation of a hacker/scholar/maker community in which members are free to pursue their own research and academic projects, and also to collaborate and interact with the community as a whole.

    Like a discipline, such a community would provide a living repository of common knowledge and quality practice, but instead of establishing a single shared heuristic, it would serve as a dynamic collection of varied modes of thinking and questioning. This model is certainly not for everyone, and would likely not replace the current disciplinary model, but should it? One of the strengths of the hacker/maker model is that it is not an attempt to eliminate previous models so much as it represents a drive to modify and improve upon elements of those models.

    In conjunction with a more flexible disciplinary framework, paradisciplinary scholar spaces could provide an organic—and fun—means of thinking and doing across the academic disciplinary divide. Hacking is about doing: creating, thinking, questioning, observing, learning, and teaching. Academic work is, at its heart, hacking. The scholar-hacker takes this and runs with it: breaking open previous modes of thought to see how they tick, rearranging them, adding to them, and then taping, soldering, and gluing them back together again.

    Take an Elective

    Tasked with establishing a university for Catholics in Ireland in the 1850s, Cardinal John Henry Newman distilled his understanding of the university as a place for teaching, learning, and conversation where inquiry is pushed forward. Though Newman was focused on the undergraduate education of men, by men, his insights hold import for all of us, including those of us with advanced degrees. Newman discussed the importance of exposing students to many perspectives in his essay, “The Idea of a University.”

     . . . the drift and meaning of a branch of knowledge varies with the company in which it is introduced to the student. If his reading is confined simply to one subject, however such division of labour may favour the advancement of a particular pursuit . . . certainly it has a tendency to contract his mind. If it is incorporated with others, it depends on those others as to the kind of influence which it exerts upon him. . . .

    It is a great point then to enlarge the range of studies which a University professes, even for the sake of the students; and, though they cannot pursue every subject which is open to them, they will be the gainers by living among those and under those who represent the whole circle. This I conceive to be the advantage of a seat of universal learning, considered as a place of education. An assemblage of learned men, zealous for their own sciences, and rivals of each other, are brought, by familiar intercourse and for the sake of intellectual peace, to adjust together the claims and relations of their respective subjects of investigation. They learn to respect, to consult, to aid each other. Thus is created a pure and clear atmosphere of thought, which the student also breathes, though in his own case he only pursues a few sciences out of the multitude.[1]

    Thus, this effort to produce well-rounded human beings rather than intensely specialized practitioners appeared to have significant benefits for both the students and the faculty.

    If we are to consider how we might change the practices of the academy to help us begin to move past a place of systemic dysfunction, we have to propose solutions that seem realistic to both junior and senior faculty in more traditional positions. How? Take an elective. Embrace eclecticism, and give yourself permission to dedicate some percentage of your week to learning or investigating something completely new, in the service of having more intellectual fun.

    Remember what it felt like to take an elective that truly excited you? Remember the joy of doing something just because it was fun and challenging, in and of itself? Perhaps this is a scholarly version of Google’s 20 percent rule, where employees get one day a week to work on their own projects. But since as academics we are mostly self-directed, this time should be dedicated to moving beyond the core forms of individual work that are the benchmarks of disciplinary promotion and tenure. Consider a new methodological approach. Produce work that takes a nontraditional form. Work with colleagues from other disciplines. Then, step forward and proclaim the results as being central to the future health and welfare of the academy. This elective work has the potential to enlarge the way that we think about and evaluate scholarship. Thus, it can remind the academy as a whole that the value of our work is not that it results in a monograph or a bevy of articles in major scholarly journals, but that it opens up new lines of inquiry and pushes our collective understanding of the world forward.


    1. John Henry Newman, “The Idea of a University,” Newman Reader, September 2001, http://www.newmanreader.org/works/idea/index.html, 100–101.

    Voices: Interdisciplinarity

    Many institutions pride themselves on encouraging interdisciplinary scholarship. However, the reality is that it is much easier to have a traditional, one-field identity—e.g., English, geology, physics, etc.—than it is to create and maintain an interdisciplinary identity. The very structure of most universities is based on a model of one scholar, one discipline—the unit of discipline being the department. Departments are usually walled gardens, little islands of thought and practice that are surrounded by moats filled with sharks, and patrolled by giant killer robots with instructions to kill on sight. (What? Your department doesn’t have giant killer robots?)

    —Ethan Watrall

    Debates about field definition are often less about determining what good work in a field might be than they are about turf wars—turf wars driven less by intellectual questions than by institutional and economic imperatives. I wonder about the cost of that disciplinarity; about the degree to which we are now being disciplined by our need to define the field. What conversations won’t take place, now that our structure has become officially institutionalized? I hope that we can find a way—and perhaps a way that might model a new mode of interdisciplinary affiliation for the university at large—to imagine our borders less as walled structures than as the containing elements of Venn diagrams, somehow semipermeable, allowing for overlap and intermingling, rather than producing territorial invasion and defense.

    —Kathleen Fitzpatrick

    If what the digital does is just take the old disciplines and make them digital, leaving disciplinarity and the silo structure of the university intact, it will have failed. I want to see the digital transform not just the content or practice of the disciplines, but the very idea of disciplinarity.

    —David Parry