
    Hacking Teaching

    Dear Students

    Dear students:

    I’m about to say something a college professor shouldn’t say to his students, but I care about you a lot so I’m prepared to break the code and say what needs to be said: Your college experience is likely to set back your education, your career, and your creative potential. Ironically, this will be done in the name of education. You deserve to know about this! You have what it takes to reclaim, reform, and remix your education. Don’t let college unplug your future!

    Reality Check no. 1: The Digital World Is Your Home Campus

    You already know this on some level. The campus for your education isn’t made principally of buildings and books; it’s made mostly of microchips and media. Any other school is a satellite now, subordinate to the main, digital campus where you reside and thrive. And since you grew up digital, you’ve been matriculated since the first click of a mouse button, with no need ever to graduate. Your world of learning and your world of play are seamless in the digital domain, and you are pretty much a senior on that campus, even in your teens. You spend your spare cash to get that iPhone or laptop, and you move effortlessly between virtual and physical worlds. The reality check is that physical schools and structured curricula and degree-seeking programs form a system that makes enormous demands upon you, but which is fundamentally out of sync with the fact that your identity, development, education, and success will be intimately intertwined with the digital domain.

    And why shouldn’t they be? No generation of youth has ever lived in a more exciting era than ours, nor learned in more compelling ways than are granted to you electronically today. Frontiers of opportunity have been opened for you through digital means that would make Cortés weep at how comparatively little spoil he carted off from the Aztecs. Each of you can reach across the planet, exploring the topography of our world with the ease of a soaring bird. You can befriend others from foreign places and cultures with the click of a key. You can get up-to-the-minute updates from a robot on Mars on your cell phone, or Google Alexandrian libraries with an ease that would surpass the fantasies of generations of scholars. You can be a spectator to the cosmos or to the local city-council meeting. But your new world does not leave you watching on the sidelines! You can share your lifestream, add your perspective to countless conversations, and have the world comment back—interacting with people who will value your ideas and your style. And what style! Modes of creative expression are being opened to your generation that none have known before. You can shape and share your identity in a thousand different ways, testing what you like, feeding your own passions, carving your own way. What a fantastic time to be alive!

    Reality Check no. 2: Surviving in the Real World

    Hold on. It’s one thing to trick out your avatar for the metaverse of your choice or suction Limewire for some fresh tracks, but what about earning your bread? Generations of parents and high-school counselors have convinced you that college is the answer. After all, how are you going to get a job if you can’t show that shiny sheepskin to the suit across the desk from you in the personnel department? Blogging won’t pay the bills! Maybe not.

    Reality Check no. 3: Sheepskin vs. Online Identity

    It will be a long time before a college diploma is as quaint as, say, getting a notary public’s stamp. But there is another system already competing with college, and it will start those bean counters in the tuition office sweating soon enough. This alternative to college credentials is as huge as the Stay Puft Marshmallow Man from Ghostbusters, and he’s towering over the skyline right where town meets gown: online identity.

    That’s right. Who you are and what you’ve done will in the very near future be so well documented by your online activities that a resume will be redundant. The time will come when a college degree will be suspect if not complemented by an admirable online record—and I’m not talking about transcripts. Your transcripts will consist of your lifestream: your blog, your social networks, your creative work published or otherwise represented online. Cyberspace is already more real to you than the physical space of your college campus—and it is becoming so for your future employers.

    Sincerely,

    A concerned professor

    Lectures Are Bullshit

    The following is an excerpt from Jeff Jarvis’s talk at TEDxNYED, an independent regional version of the TED conferences, with their spotlighted lectures. Jarvis took the opportunity to turn against this form of academic theater.

    Right now, you’re the audience and I’m lecturing. That’s bullshit.

    What does this remind us of? The classroom, of course, and the entire structure of an educational system built for the industrial age, turning out students all the same, convincing them that there is one right answer—and that answer springs from the lectern. If they veer from it, they’re wrong; they fail.

    What else does this remind us of? Media, old media: one-way, one size fits all. The public doesn’t decide what’s news and what’s right. The journalist-as-speaker does.

    We must question this very form. We must enable students to question the form. We should want questions, challenges, discussion, debate, collaboration, quests for understanding, and solutions. Has the Internet taught us any less?

    But that is what education and media do: they validate. They also repeat. In news, I have argued that we can no longer afford to repeat the commodified news the public already knows because we want to tell the story under our byline, exuding our ego; we must, instead, add unique value.

    The same can be said of the academic lecture. Does it still make sense for countless teachers to rewrite the same essential lecture about, say, capillary action? Used to be, they had to. But not now, not since open curricula and YouTube. Just as journalists must become more curator than creator, so must educators.

    A few years ago, I had this conversation with Bob Kerrey at the New School. He asked what he could do to compete with brilliant lectures now online at MIT. I said don’t compete, complement. I imagined a virtual Oxford based on a system of lecturers and tutors. Maybe the New School should curate the best lectures on capillary action from MIT and Stanford, or from a brilliant teacher who explains it well even if not from a big-school brand; that could be anyone in YouTube U. Then the New School adds value by tutoring: explaining, answering, probing, enabling.

    The lecture does have its place: to impart knowledge and get us to a shared starting point. But it’s not the be-all and end-all of education—or journalism. Now the shared lecture is a way to find efficiency by ending repetition, to make the best use of the precious teaching resources we have, to highlight and support the best. I’ll give the same advice to the academy that I give to news media: Do what you do best and link to the rest.

    I still haven’t moved past the lecture and the teacher as the starting point. But we must also make the students the starting point.

    At a Carnegie event at the Paley Center a few weeks ago, I moderated a panel on teaching entrepreneurial journalism, and it was only at the end of the session that I realized what I should have done: start with the room, not the stage. So I asked the students in the room what they wished their schools were teaching them. It was a great list: practical, yet visionary.

    So we need to move students up the education chain. They don’t always know what they need to know, but why don’t we start by finding out? Instead of giving tests to find out what they’ve learned, we should test to find out what they don’t know. Their wrong answers aren’t failures—they are needs and opportunities.

    But the problem is that we start at the end, at what we think students should learn, prescribing and preordaining the outcome: we have the list of right answers. We tell them our answers before they’ve asked the questions. We drill them and test them and tell them they’ve failed if they don’t regurgitate back our lectures as lessons learned. That is a system built for the industrial age, for the assembly line, stamping out everything the same: students as widgets, all the same.

    But we are no longer in the industrial age. We are in the Google age. Hear Jonathan Rosenberg, Google’s head of product management, who advised students in a blog post. Google, he said, is looking for “non-routine problem-solving skills.” The routine way to solve the problem of misspelling is, of course, the dictionary. The nonroutine way is to listen to all the mistakes and corrections we make and feed that back to us in the miraculous “Did you mean?”

    “In the real world,” he said, “the tests are all open book, and your success is inexorably determined by the lessons you glean from the free market.” “It’s easy to educate for the routine, and hard to educate for the novel,” Rosenberg adds. Google sprang from seeing the novel. Is our educational system preparing students to work for or create Googles? Googles don’t come from lectures.

    So if not the lecture hall, what’s the model? I mentioned one—the distributed Oxford—lectures here, teaching there.

    Once teaching is distributed, one has to ask: why have a university? Why have a school? Why have a newspaper? Why have a place or a thing? Perhaps, like a new news organization, the tasks shift from creating and controlling content and managing scarcity to curating people and content, and enabling an abundance of students, teachers, and knowledge: a world where anyone can teach and everyone will learn. We must stop selling scarce chairs in lecture halls and thinking that is our value.

    We must stop our culture of standardized testing and standardized teaching. Fuck the SATs. In the Google age, what is the point of teaching memorization?

    We must stop looking at education as a product—in which we turn out every student giving the same answer—and start seeing it as a process, in which every student looks for new answers. Life is a perpetual beta.

    Why shouldn’t every university—every school—copy Google’s 20 percent rule, encouraging and enabling creation and experimentation, with every student expected to make a book, or an opera, or an algorithm, or a company? Rather than showing our diplomas, shouldn’t we show our portfolios of work as a far better expression of our thinking and capability? The school becomes not a factory, but an incubator.

    From Knowledgeable to Knowledge-able

    Most university classrooms have gone through a massive transformation in the past ten years. I’m not talking about the numerous initiatives for multiple plasma screens, moveable chairs, round tables, or digital whiteboards. The change is visually more subtle, yet potentially much more transformative. I recently wrote about this in an Encyclopædia Britannica online forum.

    There is something in the air, and it is nothing less than the digital artifacts of over one billion people and computers networked together, collectively producing over 2,000 gigabytes of new information per second. While most of our classrooms were built under the assumption that information is scarce and hard to find, nearly the entire body of human knowledge now flows through and around these rooms in one form or another, ready to be accessed by laptops, cellphones, and iPods. Classrooms built to reinforce the top-down authoritative knowledge of the teacher are now enveloped by a cloud of ubiquitous digital information where knowledge is made, not found, and authority is continuously negotiated through discussion and participation.[1]

    This new media environment can be enormously disruptive to our current teaching methods and philosophies. As we increasingly move toward an environment of instant and infinite information, it becomes less important for students to know, memorize, or recall information, and more important for them to be able to find, sort, analyze, share, discuss, critique, and create information. They need to move from being simply knowledgeable to being knowledge-able.

    The sheer quantity of information now permeating our environment is astounding, but more importantly, networked digital information is also qualitatively different from information in other forms. It has the potential to be created, managed, read, critiqued, and organized very differently than information on paper, and to take forms that we have not yet even imagined. To understand the true potential of this information revolution for higher education, we need to look beyond the framework of “information.” For at the base of this information revolution are new ways of relating to one another; new forms of discourse; new ways of interacting; new kinds of groups; and new ways of sharing, trading, and collaborating. Wikis, blogs, tagging, social networking, and other developments that fall under the Web 2.0 buzz are especially promising in this regard because they are inspired by a spirit of interactivity, participation, and collaboration. It is this spirit of Web 2.0 which is important to education. The technology is secondary.

    This is a social revolution, not a technological one, and its most revolutionary aspect may be the ways in which it empowers us to rethink education and the teacher-student relationship in an almost limitless variety of ways.

    Physical, Social, and Cognitive Structures Working Against Us

    Yet there are many structures working against us. Our physical structures were built prior to an age of infinite information, our social structures formed to serve different purposes than those needed now, and the cognitive structures we have developed along the way now struggle to grapple with the emerging possibilities.

    The physical structures are easiest to see, and are on prominent display in any large “state-of-the-art” classroom. Rows of fixed chairs often face a stage or podium housing a computer from which the professor controls at least 786,432 points of light on a massive screen. Stadium seating, sound-absorbing panels, and other acoustic technologies are designed to draw maximum attention to the professor at the front of the room. The message of this environment is that to learn is to acquire information, that information is scarce and hard to find (that’s why you have to come to this room to get it), that you should trust authority for good information, and that good information is beyond discussion (that’s why the chairs don’t move or turn toward one another). In short, it tells students to trust authority and follow along.

    This is a message that very few faculty could agree with, and in fact some may use the room to launch spirited attacks against it. But the content of such talks is overshadowed by the ongoing hour-to-hour and day-to-day practice of sitting and listening to an authority for information, then regurgitating that information on exams.

    Many faculty may hope to subvert the system, but a variety of social structures work against them. Radical experiments in teaching carry no guarantees, and even fewer rewards in most tenure and promotion systems, even if they are successful. In many cases, faculty are required to assess their students in a standardized way to fulfill requirements for the curriculum. Nothing is easier to assess than information recall on multiple-choice exams, and the concise and “objective” numbers satisfy committee members busy with their own teaching and research.

    Even in situations in which a spirit of exploration and freedom exists, where faculty are free to experiment and work beyond physical and social constraints, our cognitive habits often get in the way. Marshall McLuhan called it “the rear-view mirror effect,” noting that “We see the world through a rear-view mirror. We march backwards into the future.”[2]

    Most of our assumptions about information are based on the characteristics of information on paper. On paper, we thought of information as a “thing” with a material form, and we created elaborate hierarchies to classify each piece of information in its own logical place. But as David Weinberger, in his book Everything Is Miscellaneous, and Clay Shirky, in his essay “Ontology Is Overrated,” have demonstrated, networked digital information is fundamentally different from information on paper.[3] And each digital innovation seems to shake us free from yet another assumption we once took for granted.

    Even something as simple as the hyperlink taught us that information can be in more than one place at one time, challenging our traditional space-time-based notions of information as a “thing” that has to be “in a place.” Google began harnessing the links and revolutionized our research with powerful machine-assisted searching.

    Blogging came along and taught us that anybody can be a creator of information. Suddenly anybody can create a blog in a matter of seconds—and people have responded. Technorati now reports that there are over 133 million blogs—almost 133 million more than there were just five years ago. YouTube and other video-sharing sites have sparked similar widespread participation in the production of video. Over 10,000 hours of video are uploaded to the web every day. In the past six months, more material has been uploaded to YouTube than all of the content ever aired on major network television. While such media beg for participation, our lecture halls are still sending the message, “follow along.”

    Wikipedia has taught us yet another lesson: that a networked information environment allows people to work together in new ways to create information that can rival—and even surpass—the content of experts by almost any measure. The message of Wikipedia is not “trust authority,” but “explore authority.” Authorized information is not beyond discussion on Wikipedia; information is authorized through discussion, and this discussion is available for the world to see and even participate in. This culture of discussion and participation is now available on any website through the emerging “second layer” of the web, via applications like Diigo, which allow you to add notes and tags to any website, anywhere.

    As we note and tag these sites, we are also collectively organizing them; the notion that this new media environment is too big and disorganized for anybody to find anything worthwhile and relevant is simply not the case. Our old assumption that information is hard to find is trumped by the realization that if we set up our hyperpersonalized digital networks effectively, information can find us. For example, I have set up my own Netvibes portal so that the moment anybody anywhere tags something with certain keywords I am interested in, I immediately receive a link to the item. It is like continuously working with thousands of research associates around the world.

    Taken together, this new media environment demonstrates to us that the idea of learning as acquiring information is no longer a message we can afford to send to our students, and that we need to start redesigning our learning environments to address, leverage, and harness the new media environment now permeating our classrooms.

    A Crisis of Significance

    Unfortunately, many teachers only see the disruptive possibilities of these technologies when they find students Facebooking, texting, IMing, or shopping during class. Though many blame the technology, these activities are just new ways for students to tune out—part of the much bigger problem I have called “the crisis of significance”—the fact that many students are now struggling to find meaning and significance in their education. Nothing good will come of these technologies if we do not first confront the crisis of significance and bring relevance back into education. In some ways, these technologies act as magnifiers. If we fail to address the crisis of significance, the technologies will only magnify the problem by allowing students to tune out more easily and completely. With total and constant access to their entire network of friends, we might as well be walking into the food court in the student union and trying to hold their attention. On the other hand, if we work with students to find and address problems that are real and significant to them, they can then leverage the networked information environment in ways that will help them achieve the “knowledge-ability” we hope for them.

    We have had our whys, hows, and whats upside-down, focusing too much on what should be learned, then how, and often forgetting the why altogether. In a world of nearly infinite information, we must first address why, facilitate how, and let the what generate naturally from there. As infinite information shifts us away from a narrow focus on information, we begin to recognize the importance of the form of learning over the content of learning. It isn’t that content is not important; it is simply that it must not take precedence over form. But even as we shift our focus to the “how” of learning, there is still the question of “what” is to be learned. After all, our courses have to be about something. Usually our courses are arranged around subjects. In Teaching as a Subversive Activity, Neil Postman and Charles Weingartner note that the notion of subjects has the unwelcome effect of teaching our students that “English is not History and History is not Science and Science is not Art . . . and a subject is something you ‘take’ and, when you have taken it, you have ‘had’ it.” Always aware of the hidden metaphors underlying our most basic assumptions, they suggest calling this “the Vaccination Theory of Education,” as students are led to believe that once they have “had” a subject they are immune to it and need not take it again.[4]

    Not Subjects but Subjectivities

    As an alternative, I like to think that we are not teaching subjects but subjectivities: ways of approaching, understanding, and interacting with the world. Subjectivities cannot be taught. They involve an introspective intellectual throwdown in the minds of students. Learning a new subjectivity is often painful because it almost always involves what psychologist Thomas Szasz referred to in The Second Sin as “an injury to one’s self-esteem.”[5]

    To illustrate what I mean by subjectivities over subjects, I have created a list of subjectivities that I am trying to help students attain while learning the “subject” of anthropology: (1) Our worldview is not natural and unquestionable, but culturally and historically specific. (2) We are globally interconnected in ways we often do not realize. (3) Different aspects of our lives and culture are connected and affect one another deeply. (4) Our knowledge is always incomplete and open to revision. We are the creators of our world. (5) Participation in the world is not a choice, only how we participate is our choice.

    Even a quick scan of these subjectivities will reveal that they can only be learned, explored, and adopted through practice. We can’t teach them. We can only create environments in which the practices and perspectives are nourished, encouraged, or inspired (and therefore continually practiced).

    My own experiments in this regard led to the creation of the World Simulation, now the centerpiece of my Introduction to Cultural Anthropology course at Kansas State University. As the name implies, the World Simulation is an activity in which we try to simulate the world. Of course, in order to simulate the world, we need to know everything we can about it. So while the course is set up much like a typical cultural anthropology course, moving through the same readings and topics, all of this learning is ultimately focused on one big question: “How does the world work?”

    Students are cocreators of every aspect of the simulation, and are asked to harness and leverage the new media environment to find information, theories, and tools we can use to answer our big question. Each student has a specific role and expertise to develop. A world map is superimposed on the class, and each student is asked to become an expert on a specific aspect of the region in which they find themselves. Using this knowledge, they work in 15 to 20 small groups to create realistic cultures, step-by-step, as we go through each aspect of culture in class. This allows them to apply the knowledge they learn in the course and to recognize the ways different aspects of culture—economic, social, political, and religious practices and institutions—are integrated in a cultural system.

    In the final weeks of the course, we explore how different cultures around the world are interconnected and how they relate to one another. Students continue to harness and leverage the new media environment to learn more about these interconnections, and use the wiki to work together to create the “rules” for our simulation. They face the daunting task of creating a way to simulate colonization, revolution, the emergence of a global economy, war and diplomacy, and environmental challenges. Along the way, they are exploring some of the most important challenges now facing humanity.

    The World Simulation itself only takes 75–100 minutes, and moves through 650 metaphorical years—1450–2100. It is recorded by students on twenty digital video cameras and edited into one final “world history” video using clips from real-world history to illustrate the correspondences. We watch the video together in the final weeks of the class, using it as a discussion starter for contemplating our world and our role in its future. By then it seems as if we have the whole world right before our eyes in one single classroom—profound cultural differences, profound economic differences, profound challenges for the future, and one humanity. We find ourselves not just as cocreators of a simulation, but as cocreators of the world itself, and the future is up to us.

    Managing a learning environment such as this poses its own unique challenges, but there is one simple technique that makes everything else fall into place: love and respect your students and they will love and respect you back. With the underlying feeling of trust and respect this provides, students quickly realize the importance of their role as cocreators of the learning environment, and they begin to take responsibility for their own education.

    New Models of Assessment for New Media Environments: The Next Frontier

    All of this vexes traditional criteria for assessment and grades. This is the next frontier as we try to transform our learning environments. When I speak frankly with professors all over the world, I find that, like me, they often find themselves jury-rigging old assessment tools to serve the new needs brought into focus by a world of infinite information. Content is no longer king, but many of our tools have been habitually used to measure content recall. For example, I have often found myself writing content-based multiple-choice questions in a way that I hope will indicate that the student has mastered a new subjectivity or perspective. Of course, the results are not satisfactory. More importantly, these questions ask students to waste great amounts of mental energy memorizing content instead of exercising a new perspective in the pursuit of real and relevant questions.

    Of course, multiple-choice questions are an easy target for criticism, but even more sophisticated measures of cognitive development may miss the point. When you watch somebody who is truly “in it” —somebody who has totally given themselves over to the learning process—or if you simply imagine those moments in which you were “in it” yourself, you immediately recognize that learning expands far beyond the mere cognitive dimension. These additional dimensions, as Randy Bass noted in his introduction to the January 2009 issue of Academic Commons, include “emotional and affective dimensions, capacities for risk-taking and uncertainty, creativity and invention,” and more.[6] How will we assess these? I do not have the answers, but a renewed and spirited dedication to the creation of authentic learning environments that leverage the new media environment demands that we address it.

    The new media environment provides new opportunities for us to create a community of learners with our students, seeking important and meaningful questions. Questions of the very best kind abound, and we become students again, pursuing questions we might have never imagined, joyfully learning right along with the others. In the best-case scenario, students will leave the course not with answers but with more questions and, even more importantly, the capacity to ask still more questions generated from their continual pursuit and practice of the subjectivities we hope to inspire. This is what I have called elsewhere “anti-teaching,” in which the focus is not on providing answers to be memorized, but on creating a learning environment more conducive to producing the types of questions that ask students to challenge their taken-for-granted assumptions and see their own underlying biases.

    The beauty of the current moment is that new media has thrown all of us as educators into just this kind of question-asking, bias-busting, assumption-exposing environment. There are no easy answers, but we can at least be thankful for the questions that drive us on.

    Notes

    1. Michael Wesch, “A Vision of Students Today (& What Teachers Must Do),” Encyclopædia Britannica Blog, October 21, 2008, http://www.britannica.com/blogs/2008/10/a-vision-of-students-today-what-teachers-must-do/.

    2. Marshall McLuhan and Quentin Fiore, The Medium Is the Massage (New York: Bantam, 1967).

    3. David Weinberger, Everything Is Miscellaneous: The Power of the New Digital Disorder (New York: Times Books, 2007); Clay Shirky, “Ontology Is Overrated—Categories, Links, and Tags,” http://www.shirky.com/writings/ontology_overrated.html.

    4. Neil Postman and Charles Weingartner, Teaching as a Subversive Activity (New York: Delacorte Press, 1969), 21.

    5. Thomas Stephen Szasz, The Second Sin (New York and London: Routledge, 1974), 18.

    6. Randy Bass, “New Media Technologies and the Scholarship of Teaching and Learning: A Brief Introduction to This Issue of Academic Commons,” Academic Commons, January 7, 2009, http://www.academiccommons.org/commons/essay/introduction-issue.

    Voices: Classroom Engagement

    Sometimes it seems to me that whenever things go wrong in college teaching, the first impulse of the professor is to blame the students. They aren’t prepared for class. They don’t want to grapple with the hard concepts. They don’t want to read what I assign. They do all their work at the last minute. And now come laptops, smartphones, and other digital devices. We’ve all seen it. The student with a laptop who has clearly checked out of lecture. Is he reading his email? Is she chatting with a friend? Is he playing World of Warcraft? And then there are the other students peering covertly or openly at the open screen. I’m sorry to report that laptops aren’t the problem, nor are students. Instead of blaming our students—and their laptops—for wandering off, it’s time we took a step back, looked a little more closely in the mirror, and asked ourselves why they wander. Doing so will force us to think more carefully about our own teaching practice and how we—as opposed to they—might improve.

    —Mills Kelly

    It has always seemed extremely odd and unacceptable to many of us that faculty members of most universities, while experts in their areas of research, have not received even a single hour of training in how to be effective educators. In almost any other occupation, training is an integral part of the job. Airplane pilots must log thousands of hours in simulators and in smaller planes before they are allowed to fly commercial jets. There are even federal regulations to ensure that every airplane pilot is not only trained appropriately, but can also demonstrate that the training has made him an excellent pilot. Yet for arguably the most important job—educating the next generation—no one blinks an eye at the zero hours of training logged by the pilots of our classrooms.

    —David Doria

    Faculty need to be more like hackers. The old-school conceptualization of the classroom as a place to receive knowledge has outlived its usefulness. Society in general, and today’s college students in particular, are more interested in participatory methodologies. Students already participate in their consumption of information from other sources, so why not allow—better yet, encourage—them to participate in the consumption of academic information? Furthermore, most of today’s college students have never known a time without the communications technologies that are blended into their lifestyles. There is evidence that heavy media users and multitaskers have different information-processing styles than light users. Ask any pilot and they will tell you how surprisingly well humans can adapt to situations where we need to divide our attention among various tasks. There’s an old pilot saying that “driving a car is like sleeping compared to flying.” Now, imagine your students processing information like pilots. In a typical day they are connecting, consuming, and creating in the digital space, paying attention to many things at once. Then they walk into the college classroom, where things move a lot slower and engagement demands are low (possibly near zero). Can we blame them for being disengaged?

    —Rey Junco

    Digital Literacy and the Undergraduate Curriculum

    The notion of digital literacy is sometimes criticized for being overused and having multiple definitions. Those are real problems, but they are also opportunities. I actually like the phrase for people’s familiarity with it and for that very richness of meanings, and I’ve viewed the goals of my undergraduate digital-history course through some of those definitions.

    One goal of my digital-history course is to teach the most conventional form of digital literacy: How does one find and evaluate online materials for scholarly—and nonscholarly—uses? How does one begin to sift through the massive content that is available in a systematic and/or creative way? What are the pitfalls and perils, the promises and potentialities of the online information experience?

    Another facet of digital literacy is the notion of digital identity: This is a class that, through individual and group online presence—often blogs and wikis, but many other tools are available as well—explicitly engages students in discussions of their digital identity. How should we present ourselves to the online world—personally, professionally, and intellectually, but also individually and in groups? In future iterations, the class might encourage students to create their own centralized online presence that wouldn’t necessarily be housed by the university—or restricted by a single course. We’ve been engaged recently at the University of Mary Washington in a number of discussions related to this notion of enabling students to take control of their digital identity.

    Increasingly, I have become convinced that a key, but often overlooked, aspect of digital literacy is a willingness to experiment with a variety of online tools, and then to think critically and strategically about a project, and to identify those tools that would be most useful to that project. Note that I’m not talking about training in a specific tool or even a set of tools. This is not a Microsoft Word or Blackboard skills class. This digital history class offers students a “digital toolkit” from which to choose. There certainly needs to be some basic exposure and technical support, but part of the goal is to get students to figure out how a new tool—system, software, historical process—works on their own.

    Broadening the previous point, one of my desires for students is for them to be comfortable with being uncomfortable as they try new things. Figuring out how to deal with constantly changing technology is something we all face, yet in higher education we often put students in genuinely new situations only when they first begin. Before long, they’ve got the processes and procedures down, and can churn out eight- to ten-page papers in their sleep. Yet what kind of preparation is that for the larger world? I know, I know. There are much larger philosophical and practical and even political issues at work here. But my point is simply that it’s good for college classes to shake students—and faculty—out of their comfort zone. Real learning happens when you’re trying to figure out the controls, not when you’re on autopilot.

    Finally, I think digital literacy for undergraduates in history should encompass at least some exposure to the complex new approaches to research in the discipline offered by recent advancements in computing, including text mining or GIS—if only because those methods are influencing a new generation of scholarship that students will need to understand. As they become more accessible and widely used, there will be more opportunities for students to also engage in the application of these tools in their own work.

    Three Roles for Teachers Using Technology

    Instructor as Role Model

    I think any instructor using technology, in the class or out, should think of themselves as a role model for how those technologies can be used for responsible, beneficial goals. One way I do this is to be completely transparent with students regarding my use of technology. I provide links to my blog, Twitter account, Flickr account, YouTube and Vimeo usernames, Facebook page, and my instant-messenger screennames. I encourage them to follow me, and contact me through any of these methods. I set up rules for contacting me, though, which are followed 99.9 percent of the time, and that 0.1 percent is not enough of a problem for me to change my transparency. I also show students how I’ve used my blog, Twitter feed, and other accounts to build a professional network and share information. While others warn about the ill effects of putting too much of yourself online—which can be true—I try to show students how I use technology to expand my opportunities, not limit them. Overall, I’ve had positive feedback from students about my openness. I think that I use technology and social media responsibly—though I could work on the efficiency part. Setting an example that students can follow is important if we want those students to be more critical about their use of technology.

    Instructor as Tech Support

    When utilizing social media and technology in my courses, I’ve found myself serving as the primary tech-support person when students run into trouble. With my tech background, I’m comfortable with this, but I suspect a lot of teachers are not. Explaining the technical aspects of blogging, wikis, RSS feeds, YouTube, and Flickr can eat into time spent on other things, in class and out, but I think it’s very important to take on this role. In many cases, support involves showing students how to find answers to their questions on the web, in support forums, or in other resources. In other cases, it involves taking five to ten minutes at the end of class to explain how a particular technology works. While this can be an enormous amount of work, serving as tech support has, I think, given my students more confidence in my ability to teach with and use technology (going back to Instructor as Role Model).

    For example, I have an assignment that asks students to research and write an article on Wikipedia. It’s not a big article—around 500 words—but the assignment does ask a lot of students: they must learn proper Wikipedia formatting, research the article, try as hard as they can to ensure it isn’t vandalized or deleted, and encourage other users to contribute to it. Learning these things requires a lot of my time for tech support: explaining how Wikipedia works; how to format footnotes, headings, et cetera; and how to find the guidelines to follow if a student’s article is up for deletion. This is not the kind of task I’d ask of university tech support, because the assignment is as much about learning these technical things as it is about learning collaborative writing and research. The fact that I can take on the role of tech support helps make the assignment successful.

    Instructor as Cheerleader

    Out of the three, I think the role of Instructor as Cheerleader is the most important. I really think that there’s a lack of cheerleading or positive reinforcement in higher education in general, particularly when trying to teach students to use new kinds of technology or social media. At the beginning of the semester, usually after the first class when I’ve introduced all the things we’ll be doing with computers, I get a few emails from students saying something to the effect that “I’m not good with all this computer stuff.” And they probably aren’t; I’m not convinced that this generation, any more than previous generations, is that tech savvy. But I do think every student I have is capable of becoming more proficient with technology than when they entered my class, and of learning how to use the technology they’re exposed to every day in new, meaningful, efficient ways.

    The prospect of editing a Wikipedia article, to return to that example, is a strange—and sometimes frightening—proposition for my students. Learning how to format footnotes in Wikipedia, insert images, and write the proper code for headings and bulleted lists can be daunting to many, let alone connecting with a few dozen completely unknown Wikipedians to discuss the merits of their articles as some face deletion. Encouragement and genuine interest in the success of each student’s project are imperative, as is patience. There may be some hand-holding involved as students negotiate with sometimes rude Wikipedia admins—I’ve done this—or some extra time spent during office hours explaining wiki formatting while reassuring students that they are in fact smart enough to do all this computer stuff—I’ve done this too. Pointing out successes in class, even something as simple as successfully inserting a YouTube clip into a blog post, goes a long way toward getting students invested in the assignments, and in the class as a whole.

    Results

    All of these roles help me accomplish one of my goals in class: helping my students become more savvy, more responsible consumers and producers of media and technology. I think trading off some time covering a particular historical topic to teach students how to extend learning beyond my classroom is more than worth it. In the end, I get more students interested in exploring history, and help shape more responsible users of social technology. Even if I only influence a handful of students, I’ll consider my class a success.

    Opening Up the Academy with Wikipedia

    Like an uninvited guest at a party, Wikipedia hovers at the fringes of academia. Yet the online encyclopedia’s aims are eminently academic: it collects, processes, stores, and transmits knowledge. Judging by the site’s three-million-plus articles, many of which are extensively referenced to the scholarly literature, and its popularity on the Internet, Wikipedia has been remarkably successful at promoting a culture that honors intellectual inquiry, yet it is derided by many academics.

    Still, we all use Wikipedia in one way or another—even scholars, although we might not want to admit to the fact. Most of us find it a very convenient resource. Above all, students use Wikipedia, openly or otherwise; as Alison J. Head and Michael B. Eisenberg wrote for First Monday in 2010, over half of U.S. undergraduates use it “always” or “frequently” in their research.[1] However, these students do so without necessarily knowing how the information is written and revised. They are often told not to use Wikipedia because it is “bad”—but they are not told why.

    We do not want to debate whether or not Wikipedia is a reliable source for research: we agree that it is not. However, many academics use Wikipedia as a first source on a topic with which they are unfamiliar. The extent to which Wikipedia is a credible source is one of many conversations about Wikipedia we can enter into with our students—but it is not the most interesting. Such discussions are already a de rigueur part of any research assignment, since we raise the same questions regarding other online sources such as blogs and other self-published websites. The deeper, more interesting conversations we want to foster with our students are about how, and by whom, knowledge is created and gatekept.

    We three have welcomed Wikipedia into our teaching in structured ways, as have other teachers and academics referenced in this volume. What we all share is the belief that incorporating Wikipedia into our teaching is a form of hacking the academy, giving those who contribute to Wikipedia—Wikipedians—a mechanism by which to bypass the typical, hierarchical routes of knowledge construction and to become knowledge makers themselves.

    Students who analyze Wikipedia articles and participate in their development are made aware of the construction of knowledge and the ends towards which it is put. Most students utilize Wikipedia only to find information, and therefore have little understanding of how the articles are developed, who develops them, or the oftentimes extensive discussion and review that goes into making an article. For example, many students are unaware that every article on Wikipedia has an associated “discussion” page, also known as a “talk” page. Such pages are filled with ongoing conversations about the development and revision of the articles; introducing students to them is an excellent way to begin a conversation about what knowledge is, and who makes it. For example, asking students to analyze the threads on discussion pages shows them that there are often multiple narratives about a particular historical event or person, and that these competing narratives have important political valences.

    As with any research paper, students learned the basics of researching, citing, summarizing, and quoting. However, because they were doing this on Wikipedia, unique learning experiences were offered. The premise of the project was that students had been using Wikipedia as a source without properly considering its drawbacks. So it should come as no surprise that, when seeking sources for the Wikipedia articles they were writing, students all too often made analogous mistakes of scholarship. They added information that was unsourced, poorly referenced, or even plagiarized, or they resorted to referencing other web pages and online encyclopedias.

    Yet herein lay a great benefit of the assignment. Because Wikipedia asks that assertions be referenced, students were forced to reveal their sources. These poor sources might never have been revealed, had the students been writing a term paper. Moreover, because writing on Wikipedia is a process of continual revision, they could be asked to go back and reevaluate their sources, find better ones, and try again. Even with plagiarism, there was no longer a need to make a fuss, because at no time were they handing in what purported to be a final product. They simply had to start over.

    In short, the assignment not only reveals the weaknesses in students’ research skills, but also teaches them those skills. It shows them that research—like writing—is a process, often a lengthy one. Although you might start with suboptimal sources—such as Wikipedia itself—you progress to looking for ever stronger evidence for the information at hand, or for new information that the first sources did not reveal.

    Note

    1. Alison J. Head and Michael B. Eisenberg, “How Today’s College Students Use Wikipedia for Course-related Research,” First Monday 15, no. 3 (March 2010), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2830/2476.

    What’s Wrong with Writing Essays: A Conversation

    What’s Wrong with Writing

    I have become increasingly disillusioned with the traditional student paper. Just as the only thing a standardized test measures is how well you can take a standardized test, the only thing a student essay measures is how well a student can conform to the rigid thesis/defense model that—surprise!—eliminates complexity, ambiguity, and most traces of critical thinking.

    I don’t believe that my mission as a professor is to turn my students into miniature versions of myself or of any other professor, yet that is the only function that the traditional student essay serves. And even if I did want to churn out little professors, the essay fails exceedingly well at this. Somehow the student essay has come to stand in for all the research, dialogue, revision, and work that professional scholars engage in. It doesn’t.

    The student essay is a twitch in a void. A compressed outpouring of energy—if we’re lucky—that means nothing to no one. My friend and occasional collaborator Randy Bass has said that nowhere but school would we ask somebody to write something that nobody will ever read.

    This is the primary reason I’ve integrated more and more public writing into my classes. I strive to instill in my students the sense that what they think and what they say and what they write matters—to me, to them, to their classmates, and, through open-access blogs and wikis, to the world.

    In addition to making student writing public, I’ve also begun taking the words out of writing. Why must writing, especially writing that captures critical thinking, be composed of words? Why not images? Why not sound? Why not objects? The word text, after all, derives from the Latin textus, meaning that which is woven, strands of different material intertwined together. Let the warp be words and the weft be something else entirely.

    “Captain’s log.” Photograph courtesy of Mark Sample.

    With this in mind, I am moving away from asking students to write toward asking them to weave. To build, to fabricate, to design. I don’t want my students to become miniature scholars. I want them to be aspiring Rauschenbergs, assembling mixed-media combines, all the while through their engagement with seemingly incongruous materials developing a critical thinking practice about the process and the product.

    For instance, I asked students to design an abstract visualization of an NES video game, a kind of model that would capture some of the game’s complexity and reveal underlying patterns to the way actions, space, and time unfold in the game. One student “mapped” Sid Meier’s Pirates! onto a piece of driftwood. This “captain’s log,” covered with screenshots and overlaid with axes measuring time and action, evokes the static nature of the game more than words ever can. Like Meier’s Civilization, much of Pirates! is given over to configurations, selecting from menus, and other nondiegetic actions. Pitched battles on the high seas—what would seem to be the highlight of any game about pirates—are rare, and though a flat photograph of the log doesn’t do justice to the actual object in all its physicality, you can see some of that absence of action here, where the top of the log is full of blank wood.

    What’s Right with Digital Storytelling

    As Mark Sample eloquently points out, student essays generally measure how well students conform to a standard model of essay writing far more than they measure students’ ability to think critically, explore complexity and ambiguity, and engage as learners.

    One of my goals in teaching a graduate-level digital storytelling (DST) class at George Mason University was to experiment with digital storytelling as a substantive, content-rich assignment.

    A short project early in the semester asked students to tell a story in five photos—along the lines of the Flickr group “Tell a Story with 5 Photos for Educators.”[1] One student told a tale of two goldfish bowls entitled An Escape.

    ©shutterstock/khz

    ©shutterstock/newphotoservice

    ©shutterstock/newphotoservice

    ©shutterstock/newphotoservice

    ©shutterstock/Sergej Khakimullin

    The fish leaves a crowded fishbowl to explore a solitary life. After swimming alone, though, the fish returns to the group, choosing companionship over solitude.

    But this is what happens when the pictures are rearranged.

    ©shutterstock/newphotoservice

    ©shutterstock/newphotoservice

    ©shutterstock/Sergej Khakimullin

    Leader goldfish. ©shutterstock/khz

    ©shutterstock/newphotoservice

    The images tell a very different story.

    We experimented with this in class, arranging and rearranging several sets of photos. For some of the nineteen master’s and doctoral students from the Department of History and Art History and the Higher Education Program, this was their first experience telling a story visually. As simple as it was, it started the process of shifting their thinking from a text-based world to one in which images tell stories and communicate meaning. This was one step of many on the path to creating the final project, a ten-minute digital story.

    For me, one of the successes in the class was seeing the projects grow and develop, watching students grapple with and learn to utilize the digital medium. As students developed project pitches, scripts, and storyboards and then moved into production to create rough cuts and final projects, they experimented with a process that changed their thinking about their topics, as well as about the nature of producing knowledge. The process was intentionally scaffolded to emphasize experimentation, reflection, peer feedback, and iterative learning. At each stage, students examined their purpose, intended audience, main point, and narrative arc, and received instructor and peer feedback pushing them to create stronger, more compelling pieces that engaged the digital in the storytelling.

    One student, for example, chose to explore competing scholarly interpretations of Primavera, painted by Italian Renaissance artist Sandro Botticelli in the late fifteenth century.

    The initial script read like an academic article: introducing, comparing, and contrasting academic analyses of the painting with long scholarly quotes. Through conversations, feedback, and the experience of watching a host of digital stories—the good, the bad, and the ugly—the student reexamined her approach and began to investigate strategies for maximizing the potential of DST to tell this story.

    This process also surfaced the student’s larger goals: to make art history accessible, and to empower viewers without a background in art history to ask questions about the broader context of paintings and their meaning.

    The process of creating a digital story forced the student to confront these questions, and in the end, she created a lively digital tale that put the painting into a simulated courtroom trial as Exhibit A. Art historians served as “witnesses,” explaining how and why they interpreted the painting in specific ways, presenting their credentials and the evidence for their arguments. Visually and through “testimony,” the story explored debates over the painting, such as whether the third figure from the right represented the personification of spring or the goddess Flora, and whether the figure on the far left represented Hermes or Mercury. The “jury” (viewer) was asked to evaluate the testimony and competing narratives, but also to consider the constructed nature of meaning and the process of scholarly discourse.

    Sandro Botticelli, Primavera, 1477. Used with permission by Art Resource, New York.

    DST challenged students to think in new ways, to ask new questions, and to interrogate the sources and ideas they were reading, researching, and developing. It challenged students to use their academic interests and research to tell a compelling story digitally—one that both made a clear argument, and fully utilized the tools and power of digital storytelling.

    One story by Rwany Sibaja explored the protest movement started by mothers and grandmothers of the 30,000 desaparecidos—those who disappeared during the military dictatorship of Jorge Rafael Videla. Integrating video of the protests, interviews with former military officials defending their actions, and footage of the 1978 World Cup in Argentina, the story presents a powerful historical narrative, contrasting a nation’s celebration with ongoing persecution, and exploring the complicated nature of history when examined through multiple lenses.[2]

    Another, “Re-inventing the Lecture (Or, Why Online Lectures Don’t Work, and What We Can Do About It),” by Tad Suiter, tackled the nature of the digital medium and one of the most common academic uses, posting video of lectures online. Tad’s video not only discussed weaknesses of online lectures, it demonstrated their shortcomings, investigated alternative modes of communicating and conveying information—especially emerging best practices in the video blogging (vlogging) community—and encouraged viewers to think about the potential for online pedagogy. As Tad wrote on his blog, The Leisurely Historian, “The only thing more boring than a bad lecture is a decent lecture on YouTube.”[3]

    In one last example from the class, “Multisensory Music Making: Unleash the Power of Music Within You!” examined a new theory of teaching and learning music: connecting visual and auditory stimuli to help musicians “connect with music on a deeper level,” and “expand their range of emotional and expressive playing.” A paper on this topic could not begin to capture this approach, but seeing a student play a musical phrase, tell a story about it through an image, and play it again shows how this works, transforming “expression into music.”[4]

    All of this argues for multimedia and visual literacy, but why digital storytelling?

    Primarily because it is accessible, its basic technical skills are relatively easy to teach, and it is a useful practical skill. It allows students to engage with visual and multimedia sources while researching a topic and crafting thoughtful arguments; it also creates an end product that can be shared and revised.

    Several students adapted this approach to weekly assignments, submitting vlogs in place of blog postings. The blog discussion on copyright was thoughtful and lively, but Mark Bergman’s vlog on the topic accomplished what a text-based blog could not. He explored the music involved in a recent copyright dispute over Coldplay’s song “Viva La Vida.” Guitarist Joe Satriani accused Coldplay of copyright infringement based on his 2004 song “If I Could Fly,” which led to further claims of copyright infringement by the artist formerly known as Cat Stevens based on his 1973 song “Foreigner Suite.” A written blog assignment could link to audio excerpts, but playing the excerpts one after the other engaged the reader/watcher in deciding whether the case had merit, creating a powerful example of the nature of copyright dispute.[5]

    As Jeff McClurken likes to say, making students uncomfortable, but not paralyzed, often leads them to ask new questions, explore content more deeply, and take ownership of their learning. While this DST class experienced its fair share of technical difficulties and near disasters—more laptops died during this semester than I care to count, from natural and unnatural causes—and teaching nineteen students with various technical skills introduced its own challenges, creating ten-minute digital stories focused on historical research or on teaching and learning at the college level challenged students to think in new ways, to question not only the sources they used, but how they crafted and presented their arguments.

    DST is not a silver bullet. It made students uncomfortable at different times, and for different reasons. But they all survived, emerged on the other side of the semester not only with a ten-minute digital story, but with a new appreciation for the power of iterative learning, of rethinking and questioning research, central questions, and presentation—something that doesn’t always happen with essays, even at the graduate level.

    Notes

    1. “Tell a Story with 5 Photos for Educators,” Flickr image sharing, http://www.flickr.com/groups/fivephotos/.

    2. Rwany Sibaja, Silent Voices, http://vimeo.com/11165331.

    3. The Leisurely Historian Blog, “Why Digital Lectures Don’t Work,” blog entry by Tad Suiter, May 4, 2010, http://www.leisurelyhistorian.net/why-digital-lectures-dont-work.

    4. “Multisensory Music Making: Unleash the Power of Music Within You!,” http://vimeo.com/11424032.

    5. Mark Bergman, “Copyright Vlog,” http://vimeo.com/12140910.

    Assessment versus Innovation

    Most of us think of the current emphasis on assessment as a contemporary phenomenon. In fact, the rationale for testing, grading, assessing, and evaluating in a quantified fashion goes straight back to the dawn of the assembly line and the modern office—back to the beginning of education schools and business schools. Most educational institutions, corporate HR departments, and government agencies today have adopted forms of evaluation that bear the legacy of methods designed in the early twentieth century to make evaluating the quality of people and their work as easy as inspecting a Model T as it rolls off the assembly line. The byword of the Model T was that you could have it in any color so long as it was black. One size fits all. We’re still judging as if we’re trying to ensure that uniform, efficient sizing up of human achievement, accomplishment, effort, and productivity.

    The world has changed in the last two decades, but evaluation methods have not. We have entered a new era of distributed, customizable knowledge, where tasks are shared and accomplishments are iterative—in the sense that others can emend the result, that improvement is continual, and that participation is the desired goal. That’s how the Internet was built, and how the Firefox browser and the Apache server are both sustained and maintained. Yet our prevailing methods of assessment presume nothing has changed since Ford rolled out his first automobiles, and that the goal is exactly, precisely the Model T.

    More and more, assessment is detached from the standard of excellence it is supposed to measure in some productive way. Because of the growing mismatch between the ways we work and learn today and the antiquated—and increasingly rigid—forms of assessment to which we subject ourselves and others, it’s time for a major rethinking. At my workplace, I am required to provide an assessment of those I supervise. That’s fine. But I’m also required to rank them. Since I spend the year working hard—we all do—to improve how we work together as a collaborative team, I can think of nothing more harmful to what we accomplish together than saying Person 1 is better than Person 2. That method of assessment undermines the efficiency and excellence of the team. It is also arbitrary. If I am a truly good supervisor, working throughout the year to ensure that each person performs not only to his or her potential but to the specific requirements of his or her job, then I am trying precisely not to encourage my teammates to compete against one another but to strive, together, for excellence. If one member is not performing to full potential, it is my job to say where improvement is needed, and what the path to that improvement is. It’s not even relevant to note that he or she happens not to be as good as Sarah or Johnny: that’s not aiming high enough; it’s simply aiming relative to our small group. The comparison is gratuitous and arbitrary, relevant not to the job but to who happens to work nearby. It is destructive of the management goals that, as a supervisor, I set and aspire to throughout the year.

    I recently spent time with a British scholar who noted that the new government promotion and salary guidelines require that she produce four refereed articles a year. Why four? Do two great articles count for less than four that may not be great? Is that how we measure intellectual productivity? Does one refereed book not count? This standard is harmful to the sciences, since it says that publishing four works a year matters more than the major scientific find that might, in due time, yield one hugely influential and important article—rather than four turned out to someone else’s schedule. But in her field of film studies, where a book has long been deemed more important than articles, it also means the arbitrary application of someone else’s standard to her field. It undercuts excellence in all fields.

    More and more of us experience such discrepancies. The rigidity of contemporary assessment may well turn out to be a death knell. Practices are often enforced most stringently once they have lost real utility, just before they are transformed or discarded. In the meantime, many of us are stuck with assessment methods that inhibit excellence, impede creativity, and serve as the antithesis of innovation. Such measures may well be simple and efficient. The tragedy is that, in many cases, we have reached a binary: assessment versus innovation.

    A Personal Cyberinfrastructure

    Cyberinfrastructure is something more specific than the network itself, but it is something more general than a tool or a resource developed for a particular project, a range of projects, or, even more broadly, for a particular discipline.
    —American Council of Learned Societies, “Our Cultural Commonwealth,” 2006.[1]

    Sometimes progress is linear. Sometimes progress is exponential: according to the durable Moore’s Law, for example, computing power doubles about every two years. Sometimes, however, progress means looping back to earlier ideas whose vitality and importance were unrecognized or underexplored at the time, and bringing those ideas back into play in a new context. This is the type of progress needed in higher education today, as students, faculty, and staff inhabit and cocreate their online lives.

    The early days of the web in higher education involved workshops on basic HTML, presentations on course web pages, and seed money in the form of grants and equipment to help faculty, staff, and occasionally even students to generate and manage content in those strange “public_html” folders that suddenly appeared on newly connected desktops. Those days were exciting, but they were also difficult. Only a few faculty members had the curiosity or stamina to brave this new world. Staff time was largely occupied by keeping the system up and running, and few people understood how to bring students into this world, aside from assigning them e-mail addresses during orientation.

    Then an answer seemed to appear: template-driven, plug-and-play, turn-key web applications—learning management systems—that would empower all faculty, even the most mulish Luddites, to “put their courses online.” Staff could manage everything centrally, with great economies of scale and a lot more uptime. Students would have the convenience of one-stop, single-sign-on activities, from registering for classes, to participating in online discussion, to seeing grades mere seconds after they were posted. This answer seemed to be the way forward into a world of easy-to-use affordances that would empower faculty, staff, and students without their having to learn the dreaded alphabet soup of HTML, FTP, and CSS. As far as faculty were concerned, the only letters they needed to know were L, M, S. Best of all, faculty could bring students into these environments without fear that they would be embarrassed by their lack of skill or challenged by students’ unfamiliar innovations.

    But that wasn’t progress. It was a mere “digital facelift”—Clay Shirky’s phrase for the strategies that newspapers pursued in the 1990s when they couldn’t “think the unthinkable,” and see that their entire world was about to change.[2] Higher education, which should be in the business of thinking the unthinkable, stood in line and bought its own version of the digital facelift. At the turn of the twenty-first century, higher education looked in the mirror and, seeing its portals, its easy-to-use LMSs, and its “digital campuses,” admired itself as sleek, youthful, and attractive. But the mirror lied.

    Then the web changed again: Google, Blogger, Wikipedia, YouTube, Facebook, and Twitter. The medium is the message. Higher education almost completely ignored Marshall McLuhan’s central insight: new modes of communication change what can be imagined and expressed. “Any technology gradually creates a totally new human environment. Environments are not passive wrappings but active processes. . . . The ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.”[3] Print is not advanced calligraphy. The web is not a more sophisticated telegraph. Yet higher education largely failed to empower the strong and effective imaginations that students need for creative citizenship in this new medium. The “progress” that higher education achieved with massive turn-key online systems, especially with the LMS, actually moved in the opposite direction. The digital facelift helped higher education deny both the needs and the opportunities emerging with this new medium.

    So, how might colleges and universities shape curricula to support and inspire the imaginations that students need? Here’s one idea: suppose that when students matriculate, they are assigned their own web servers—not 1-GB folders in the institution’s web space, but honest-to-goodness virtualized web servers of the kind available for $7.99 a month from a variety of hosting services, with built-in affordances ranging from database maintenance to web analytics. As part of the first-year orientation, each student would pick a domain name. Over the course of the first year, in a set of lab seminars facilitated by instructional technologists, librarians, and faculty advisors from across the curriculum, students would build out their digital presences in an environment made of the medium of the web itself. They would experiment with server management tools via graphical user interfaces such as cPanel or other commodity equivalents. They would install scripts with one-click installers such as SimpleScripts. They would play with wikis and blogs; they would tinker and begin to assemble a platform to support their publishing, their archiving, their importing and exporting, their internal and external information connections. They would become, in myriad small but important ways, system administrators for their own digital lives. In short, students would build a personal cyberinfrastructure—one they would continue to modify and extend throughout their college career—and beyond.

    In building that personal cyberinfrastructure, students not only would acquire crucial technical skills for their digital lives, but also would engage in work that provides richly teachable moments ranging from multimodal writing to information science, knowledge management, bibliographic instruction, and social networking. Fascinating and important innovations would emerge as students are able to shape their own cognition, learning, expression, and reflection in a digital age, in a digital medium. Students would frame, curate, share, and direct their own “engagement streams” throughout the learning environment. Like Doug Engelbart’s “bootstrappers” in the Augmentation Research Center, these students would study the design and function of their digital environments, share their findings, and develop the tools for even richer and more effective metacognition, all within a medium that provides the most flexible and extensible environment for creativity and expression that human beings have ever built.

    Just as the real computing revolution didn’t happen until the computer became truly personal, the real IT revolution in teaching and learning won’t happen until each student builds a personal cyberinfrastructure that is as thoughtfully, rigorously, and expressively composed as an excellent essay or an ingenious experiment. This vision goes beyond the personal learning environment in that it asks students to think about the web at the level of the server, with the tools and affordances that such an environment prompts and provides.[4]

    Pointing students to data buckets and conduits we’ve already made for them won’t do. Templates and training wheels may be necessary for a while, but by the time students get to college, those aids all too regularly turn into hindrances. For students who have relied on these aids, the freedom to explore and create is the last thing on their minds, so deeply has it been discouraged. Many students simply want to know what their professors want and how to give that to them. But if what the professor truly wants is for students to discover and craft their own desires and dreams, a personal cyberinfrastructure provides the opportunity. To get there, students must be effective architects, narrators, curators, and inhabitants of their own digital lives. Students with this kind of digital fluency will be well prepared for creative and responsible leadership in the post-Gutenberg age. Without such fluency, students cannot compete economically or intellectually, and the astonishing promise of the digital medium will never be fully realized.

    To provide students the guidance they need to reach these goals, faculty and staff must be willing to lead by example—to demonstrate and discuss, as fellow learners, how they have created and connected their own personal cyberinfrastructures. Like the students, faculty and staff must awaken their own self-efficacy within the myriad creative possibilities that emerge from the new web. These personal cyberinfrastructures will be visible, fractal-like, in the institutional cyberinfrastructures, and the network effects that arise recursively within that relationship will allow new learning and new connections to emerge as a natural part of individual and collaborative efforts.

    To build a cyberinfrastructure that scales without stifling innovation, that is self-supporting without being isolated or fatally idiosyncratic, we must start with the individual learners. Those of us who work with students must guide them to build their own personal cyberinfrastructures, to embark on their own web odysseys. And yes, we must be ready to receive their guidance as well.

    Notes

    1. P. N. Courant et al., “Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities and Social Sciences,” 2006.

    2. Clay Shirky, “Newspapers and Thinking the Unthinkable,” March 13, 2009, http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/.

    3. Marshall McLuhan, Understanding Media: The Extensions of Man (New York: McGraw-Hill, 1964).

    4. Educause Learning Initiative, “7 Things You Should Know About Personal Learning Environments,” 2009, http://net.educause.edu/ir/library/pdf/ELI7049.pdf.

    Voices: Learning Management Systems

    The problem with learning management systems lies in the conjunction of three words that should not appear together. Learning is not something that can be managed via a system. We’re not producing widgets here—we’re attempting to inspire creative thought and critical intelligence. Learning management systems have dominated online education up until now, but must they be what we rely on in the future? Having found our way out of one box, must we immediately look for another? Can we imagine no other possibilities?

    —Matt Gold

    Companies like Blackboard emerged as all-in-one solutions for managing courses online because of the relative difficulty of using the open web in the late 1990s: content delivery was unilateral, access to the web was limited, and designing and maintaining one’s own space was hard. Course-management systems fit a need. They were designed for a learning environment that posed a high threshold of difficulty for two-way participation. Yet over the next ten years the web became a far more conducive space for dynamic interaction and participation. At the same time, Internet access throughout the Western world became nearly ubiquitous, and applications offering functionality similar to that of course-management systems began to emerge at a fraction of the cost of centralized, proprietary systems.

    So, what happens? The companies that make the learning management systems gentrify the frontier; they try to assimilate the power of these new tools within a controlled space that is safe, closed, and convenient. It is a two-pronged attack—exploiting fears about student safety while promising centralized convenience and peace of mind. It recalls the artists who moved into SoHo and the Lower East Side of New York City in the 1960s and 1970s: their pursuit of an affordable and diverse alternative to mainstream logic ultimately paved the way for capital to roll in, develop, and gentrify those neighborhoods, eliminating most, if not all, of the original spaces that made them interesting and compelling to begin with.

    —Jim Groom

    Hacking the Dissertation

    When I teach, I’m constantly asking my students to work in open and collaborative spaces. I prefer student work that faces outward: wikis, Twitter, blogs, game projects, etc. Like Mark Sample, I believe that the student essay is flawed—“a compressed outpouring of energy . . . that means nothing to no one.”

    Can’t the same be said of my dissertation? To a large extent, that’s even expected. The dissertation is the large work that stands as a bridge to future research. Writing it is more a process of induction: a launching point rather than an end product. It exists, it goes before a committee, and mostly it is of vast significance only to the person writing it.

    There are several traditional venues for feedback during the dissertation-writing process. The most common is the conference presentation, a strictly scheduled event at which a portion of the work, presumably tailored into a stand-alone paper, is presented. From there, draft exchanges are possible, and social media has certainly eased the exchange of these documents. But this limited collaboration is a sidenote to the bulk of the writing process, which the website PhD Comics recently satirized as a “trip down the rabbit hole” that amounts to a personal struggle with one’s research.[1]

    That still hard-to-dismiss picture of the humanist surrounded by papers, not people and networks, stands in contrast to online communities where peer feedback can enhance a lonely process. The desire to share progress is seen even in tongue-in-cheek experiments like Is My Thesis Hot or Not?—a website where only the thesis statement is in play, and subject to user votes on the binary of “hot” or “not” with an open-comment system that can be an outlet for snark, or, more rarely, helpful criticism.[2]

    This is one of the realities of putting work in open-access environments: it can be mocked and torn apart. More likely, it will be ignored completely. The most commonly used database for academic dissertations encourages work to be put into stasis: the ProQuest UMI Dissertation database now has an open-access model for digital publication, but the work once archived sits as a PDF and cannot evolve dynamically.[3]

    There are already many projects that have experimented with open peer review and collaboration. Of those, the most successful tend to be launched by an already established academic, as with Lawrence Lessig’s collective revision, via wiki, of his book Code 2.0. Humanities dissertations have occasionally embraced dynamic digital forms: Vika Zafrin’s RolandHT was designed for the web and is conscious of that form in every aspect of its data and methodology. Zach Whalen’s blog, The Videogame Text, is a working example of the dissertation text brought into an interactive space, though the stated final goal remains a traditional book proposal.[4]

    In these and other cases of experimental publishing, the exclusivity of the book is being overthrown. Many grad students I’ve spoken with are hesitant to place their work in open access venues for fear of decreasing its value down the road: they dream—and, yes, I myself will admit to having daydreamed—of making the leap from dissertation to monograph. The reality of such leaps, of course, is that they demand transformation: take Noah Wardrip-Fruin’s recent book from MIT Press, Expressive Processing, and compare it to his earlier dissertation of the same name.[5]

    The traditional dissertation as product reflects the dominance of the book: it creates a monograph that sits in a database. The processes of the humanities are to some extent self-perpetuating: write essays as an undergraduate, conference papers as a graduate student, a dissertation as a doctoral student, and books and journal articles as a professor. Making a work open access doesn’t give it an audience, just as engaging in a dynamic project and seeking community input doesn’t make a work inherently valuable—but it does more seriously reflect the purposing of the dissertation as a launching point.

    Perhaps as all these stages of academic production are “hacked” we’ll see more dissertations embracing the models that are now experimental. I’d like to see a community form online that resembles the collaborative social networks I’ve made an object of study. For instance, a community like Fanfiction.net brings value to its many users not only by offering a place to share one’s story, but by offering a community of collaborators—other creators of content who are enthusiastic about sharing their own knowledge and opinions because they are engaged in the same processes for themselves.[6] These types of communities go a step beyond the social networks we now have as graduate students (like Gradshare and Grad Cafe) and become spaces that encourage continual revision, collaboration, and extension.[7] Embracing these models might bring some of the same challenges we see in the classroom, like sorting out the different values of individual authorship and dealing with the ever-present risks of plagiarism, but the results might produce dissertation work that can move more easily to relevance in a larger discourse. A dissertation written—and blogged, and revised, and remixed—in networked space need not be condemned to stasis.

    Notes

    1. Jorge Cham, “PhD Comics: Cecilia in Thesisland, Pt. 2: Down the Rabbit Hole,” http://www.phdcomics.com/comics/archive.php?comicid=1275.

    2. http://ismythesishotornot.com/.

    3. “ProQuest Open Access Publishing PLUS,” ProQuest, http://www.proquest.com/en-US/products/dissertations/epoa.shtml.

    4. Lawrence Lessig, Code: And Other Laws of Cyberspace, Version 2.0 (New York: Basic Books, 2006); Vika Zafrin, RolandHT, http://rolandht.org/; Zach Whalen, “Typography and Textuality: Blogging the Book Proposal,” The Videogame Text, http://www.thevideogametext.com/vgt.

    5. Noah Wardrip-Fruin, Expressive Processing: Digital Fictions, Computer Games, and Software Studies (Cambridge, MA: MIT Press, 2009).

    6. http://www.fanfiction.net/.return to text

    7. GradShare [formerly http://www.gradshare.com/answers.html]; The Grad Cafe forums, http://forum.thegradcafe.com/.

    How to Read a Book in One Hour

    As children, we are taught that reading is always linear: you start on page one and end on page three hundred and sixty-seven, and skipping pages is cheating. That is the way you read all through school, and the way most people read their whole lives. Once you get to graduate school, however, it is time to leave that childhood illusion behind.

    You are no longer reading books for the stories contained inside. You are reading them for other reasons—to understand the authors’ arguments, to see how they handle evidence, to examine how they structure their arguments, and to analyze their work as a whole. Perhaps above all, you need to understand how any given book fits into the theoretical landscape, how it speaks to other works on the subject, and its strengths and weaknesses. Plodding through a book one page at a time is not the best way to do this.

    You need to devour books—to fall on them like a hungry weasel on a fat chicken. You break their spines, rummage about in their innards for the tasty bits, and make your way to the next chicken coop. Here is how to do it.

    1. Create a clean space—a table, the book, paper, a writing utensil, and nothing else.
    2. Read two academic reviews of the book, photocopied beforehand. Don’t skip this step: these will tell you the book’s perceived strengths and weaknesses. Allow five minutes for this.
    3. Carefully read the introduction. A good introduction will give you the book’s thesis, clues on the methods and sources, and thumbnail synopses of each chapter. Work quickly, but take good notes—with a bibliographic citation at the top of the page. Allow twenty minutes here.
    4. Turn directly to the conclusion and read that. The conclusion will reinforce the thesis and have some more quotable material. In your notes, write down one or two direct quotes suitable for using in a review or literature review, should you later be assigned to write such a beast. Allow ten–fifteen minutes here.
    5. Turn to the table of contents and think about what each chapter likely contains. You may be done—in many cases in grad school the facts in any particular book will already be familiar to you; what is novel is the interpretation, and you should already have that from the introduction and conclusion. Allow five minutes here.
    6. (Optional) Skim one or two of what seem to be the key chapters. Look for something clever the author has done with her or his evidence, memorable phrases, glaring weaknesses—things you can mention to sound thoughtful when it is your turn to talk in the seminar room. Allow ten minutes, max.
    7. Put the notes and photocopied review in a file folder and squirrel it away. These folders will serve as fodder for future assignments, reviews of similar books, lectures, grant applications, etc.
    8. Miller time. Meet some friends and tell them the interesting things you just learned—driving it deeper into your memory.

    Will you learn as much using this method as you would if you spent five to eight hours reading the book conventionally? Heck no. But the real meat of the book—the thesis and key points—will actually be clearer to you using this method. Otherwise it is too easy for a graduate student to get lost in the details and miss the main points.

    This method works better with some books than others. If a book is considered especially important, or if it falls squarely within your research area, you should give it more time. And never, ever tell the professor that you read the assignment in an hour. Not even if that professor is me. I’ll flunk you.