Pastplay: Teaching and Learning History with Technology
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. The print version of this book is available for sale from the University of Michigan Press.
5. The Hermeneutics of Screwing Around; or What You Do with a Million Books
According to the world wide web, the phrase “So many books, so little time” originates with Frank Zappa. I do not believe it, myself. If I had to guess, I would have said maybe Erasmus or Trithemius. But even if I am right, I am probably wrong. This is one of civilization’s oldest laments—one that, in spirit, predates the book itself. There has never been a time when philosophers—lovers of wisdom broadly understood—have not exhibited profound regret over the impedance mismatch between time and truth. For surely, there are more books, more ideas, more experiences, and more relationships worth having than there are hours in a day (or days in a lifetime).
What everyone wants—what everyone from Sargon to Zappa has wanted—is some coherent, authoritative path through what is known. That is the idea behind “Dr. Eliot’s Five-Foot Shelf,” Mortimer Adler’s Great Books of the Western World, Modern Library’s 100 Best Books, and all other similar attempts to condense knowledge into some ordered list of things the educated should know. It is also the idea behind every syllabus, every curriculum, and most of the nonfiction books that have ever been written. The world is vast. Art is long. What else can we do but survey the field, introduce a topic, plant a seed (with, what else, a seminar). Amazon.com has a feature that allows users to create reading guides focused on a particular topic. They call it, appropriately, “Listmania.”
While the anxiety of not knowing the path is constant, moments of cultural modernity provide especially fertile ground for the creation of epitomes, summae, canons, and bibles (as well as new schools, new curricula, and new ways of organizing knowledge). It is, after all, at the end of history that one undertakes summation of “the best that has been thought and said in the world.” The aforementioned “great books” lists all belong to the early decades of the twentieth century, when U.S. cultural anxiety—especially concerning its relationship to Europe—could be leavened with a bold act of cultural confidence. Thomas Jefferson had said something similar at a time closer to the founding of the country, when he noted that “All that is necessary for a student is access to a library, and directions in what order the books are to be read.” But the same phenomenon—the same play of anxiety and confidence—was at work in the writing of the Torah, the Summa, Will Durant’s Story of Civilization, and all efforts of similar grandeur. All three of those works were written during moments, not just of rapid cultural change, but during periods of anxiety about change. “These words YHWH spoke to your entire assembly at the mountain from the midst of the fire, the cloud, and the fog (with) a great voice, adding no more”; “We purpose in this book to treat of whatever belongs to the Christian religion, in such a way as may tend to the instruction of beginners”; “I wish to tell as much as I can, in as little space as I can, of the contributions that genius and labor have made to the cultural heritage of mankind.” This essay will not aim quite so high.
Even in the very early days of the web, one felt the soul-crushing lack of order. One of the first pages I ever visited was Jerry and David’s Guide to the World Wide Web, which endeavored to, what else, guide you through what seemed an already impossibly vast expanse of information. Google might seem something else entirely, but it shares the basic premise of those quaint guides of yore, and of all guides to knowledge. The point is not to return to the more than three million pages that relate in some way to Frank Zappa. The point is to say, “Relax. Here is where you start. Look at this. Then look at this.”
We might say that all such systems rely on an act of faith, but it is not so much trust in the search engine (or the book, or the professor) as it is willingness to suspend disbelief about the yellow wood after having taken a particular road. Literary historian Franco Moretti states the situation starkly:
We’ve just started rediscovering what Margaret Cohen calls the “great unread.” “I work on West European narrative, etc.” Not really, I work on its canonical fraction, which is not even one per cent of published literature. And again, some people have read more, but the point is that there are thirty thousand nineteenth-century British novels out there, forty, fifty, sixty thousand—no one really knows, no one has read them, no one ever will. And then there are French novels, Chinese, Argentinian, American.
Debates about canonicity have been raging in my field (literary studies) for as long as the field has been around. Who is in? Who is out? How do we decide? Moretti reminds us of the dispiriting fact that this problem has no practical solution. It is not just that someone or something will be left off; it is that our most inclusive, most enlightened choices will fail against even the most generous requirements for statistical significance. The syllabus represents the merest fraction of the professor’s knowledge, and the professor’s knowledge is, in the scheme of things, embarrassingly slight.
Gregory Crane, who held a series of symposia on the general question, “What Do You Do With A Million Books?” a few years ago, rightly identifies it as an ancient calculus:
The Greek historian Herodotus has the Athenian sage Solon estimate the lifetime of a human being at c. 26,250 days (Herodotus, The Histories, 1.32). If we could read a book on each of those days, it would take almost forty lifetimes to work through every volume in a single million book library. The continuous tradition of written European literature that began with the Iliad and Odyssey in the eighth century BCE is itself little more than a million days old. While libraries that contain more than one million items are not unusual, print libraries never possessed a million books of use to any one reader.
Way too many books, way too little time.
But again, the real anxiety is not that the Library of Congress contains more than five hundred human lifetimes worth of reading material (I am using the highly generous Solon-Crane metric, which assumes you read a book every day from the day you are born until the day you die). The problem is that that much information probably exceeds our ability to create reliable guides to it. It is one thing to worry that your canon is not sufficiently inclusive, or broad, or representative. It is another thing when your canon has no better chance of being these things than a random selection. When we get up into the fourteen-million-book range, books that are known by more than two living people are already “popular.” A book like Hamlet has overcome enormous mathematical odds that ruthlessly favor obscurity; the fact that millions of people have read it might become a compelling argument for why you should read it too. But in the end, arguments from the standpoint of popularity satisfy neither the canoniclast nor the historian. The dark fear is that no one can really say what is “representative” because no one has any basis for making such a claim.
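The arithmetic behind the Solon-Crane metric is easy to check. A minimal sketch (the lifetime figure is Solon’s, via Crane; the library sizes are the ones mentioned above):

```python
# Solon's estimate of a human lifetime (Herodotus, Histories 1.32),
# read against library sizes at one book per day.
LIFETIME_DAYS = 26_250

def lifetimes_to_read(n_books: int) -> float:
    """Lifetimes needed to read n_books at one book per day."""
    return n_books / LIFETIME_DAYS

print(lifetimes_to_read(1_000_000))   # ~38.1: Crane's "almost forty lifetimes"
print(lifetimes_to_read(14_000_000))  # ~533.3: "more than five hundred"
```

Both of the chapter’s figures fall out directly: a million-book library is roughly thirty-eight lifetimes of daily reading, and a fourteen-million-book collection more than five hundred.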
Several solutions have been proposed, including proud ownership of our ignorance and dilettantism. A few years ago, Pierre Bayard famously—and with only the barest sheen of satire—exposed our condition by writing a book entitled How to Talk About Books You Haven’t Read. In it, intellectual facility is presented as a kind of trick: “For knowing how to speak with finesse about something with which we are unacquainted has value far beyond the realm of books.” It is a lesson thoroughly absorbed by anyone who stands on the right side of a Ph.D. oral exam. But amazingly, even Bayard sees this as a means toward guiding people through knowledge. “[Students] see culture as a huge wall, as a terrifying specter of ‘knowledge.’ But we intellectuals, who are avid readers, know there are many ways of reading a book. You can skim it, you can start and not finish it, you can look at the index. You learn to live with a book. . . . I want to help people organize their own paths through culture.”
At some level, there is no difference at all between Pierre Bayard and, say, Mortimer Adler. Both believe in culture. Both believe that one can find an ordered path through culture. Bayard just thinks there are faster ways to do it than starting with volume 1 of Great Books of the Western World. Indeed, Adler himself almost seemed to agree; books 2 and 3 of Great Books presented what he called a “Syntopicon.” What could such a thing be but the Cliff’s Notes to the main ideas of Western civilization? There also is not much of a difference between Bayard on the one hand and Crane and Moretti on the other. All three would like us to dispense with the silly notion that we can read everything, so that we can get on with the task of organizing our own paths through culture. It is true that the latter—as well as digital humanists generally—propose that we use computers, but I would like to argue that that difference is not as crucial as it seems.
There have always been two ways to deal with a library. The first is the one we are most used to thinking about. I am doing research on the influence of French composer Edgard Varèse on the early work of Frank Zappa. I go to the library and conduct an investigation, which might include the catalogue, a bibliography or two, the good people at the reference desk, or any one of a dozen different methods and tools. This is search. I know what I am looking for, and I have various strategies for locating it. I cannot read everything on this subject. I cannot even locate everything on this subject. But I have faith in the idea that I can walk out of the library (this afternoon, or after ten years of focused research, depending on my situation) being able to speak intelligently and convincingly on this topic.
The second way goes like this: I walk into the library and wander around in a state of insouciant boredom. I like music, so I head over to the music section. I pick up a book on American rock music and start flipping through it (because it is purple and big). There is an interesting bit on Frank Zappa, and it mentions that Zappa was way into this guy named Edgard Varèse. I have no idea who that is, so I start looking around for some Varèse. One look at the cover of his biography—Varèse with that mad-scientist look and the crazy hair—and I am already a fan. And so off I go. I check out some records and discover Varèse.
This is called browsing, and it is a completely different activity. Here, I do not know what I am looking for, really. I just have a bundle of “interests” and proclivities. I am not really trying to find “a path through culture.” I am really just screwing around. This is more or less how Zappa discovered Varèse. He had read an article in LOOK magazine in which the owner of the Sam Goody record chain was bragging about his ability to sell obscure records like The Complete Works of Edgard Varèse, Vol. 1. The article described Varèse’s music as “a weird jumble of drums and other unpleasant sounds.” The rest is history (of the sort that you can search for, if you are so inclined).
We think of the computer as a device that has revolutionized search—“information retrieval,” to use the formal term—and that is of course true. Until recently, no one was able to search the content of all the books in the library. There was no way to ask, “Which of these books contains the phrase ‘Frank Zappa’?” The fact that we can now do that changes everything, but it does not change the nature of the thing. When we ask that question—or any question, for that matter—we are still searching. We are still asking a question and availing ourselves of various technologies in pursuit of the answer.
Browsing, though, is a different matter. Once you have programmatic access to the content of the library, screwing around potentially becomes a far more illuminating and useful activity. That is, presumably, why we called the navigational framework one used to poke around the world wide web a “browser,” as opposed to, say, a “searcher.” From the very start, the web outstripped our ability to say what is actually there. Jerry and David could not say it then and Google cannot say it even now. “Can I help you?” “No, I’m just browsing.” Translation: “I just got here! How can you help me find what I’m looking for when (a) I don’t know what’s here and (b) I don’t know what I’m looking for?” The sales clerk, of course, does not need a translation. He understands perfectly that you are just screwing around. Our irritation arises not because the question is premature or impertinent, but because we are being encouraged to have a purposive experience when we are perfectly happy having a serendipitous one.
And that is absolutely not what the people who are thinking about the brave new world of large-scale digital corpora (Google Books, or the web itself) want to talk about. Consider Martin Mueller’s notion of “not reading”—an idea he puts forth during a consideration of the power of the digital surrogate:
A book sits in a network of transactions that involve a reader, his interlocutors, and a “collective library” of things one knows or is supposed to know. Felicitous reading—I adapt the term from John Austin’s definition of felicitous speech acts—is the art of locating with sufficient precision the place a given book occupies in that network at a given moment. Your skill as a reader, then, is measured by the speed and accuracy with which you can do that. Ideally you should do it in “no time at all.” Once you have oriented a book in the right place of its network, you can stop reading. In fact, you should stop reading.
Perhaps this is not “search,” classically understood, but it is about as far from screwing around as the average game theory symposium is from poker night. You go to the archive to set things right—to increase the likelihood that your network of associations corresponds to the actual one (or, as seems more likely, the culturally dominant one). That technology could assist you in this august task—the task of a lifetime for most of us—should not obscure the fundamental conservatism of this vision. The vast digital library is there to help you answer the question with which you began.
Gregory Crane imagines a library in which the books talk to each other—each one embedded in a swirl of data mining and machine learning algorithms. What do we do with a million books? His answer is boldly visionary: “Extract from the stored record of humanity useful information in an actionable format for any given human being of any culture at any time and in any place.” He notes that this “will not emerge quickly,” but one might legitimately question whether, strictly speaking, such a thing is logically possible for the class of problems traditionally held within the province of screwing around. What “useful information” was Zappa looking for (in, of all places, LOOK)? He did not really know and could not say. Zappa would have loved the idea of “actionable formats,” however. As it turns out, it took him more than a year to find a copy of a Varèse record, and when he finally did, he did not have the money to buy it. He ended up having to convince the salesman to part with it at a discount. Lucky for us, the salesman’s “network of transactions” was flawed.
How would Zappa’s adventure have played out today? LOOK Online mentions Varèse, and the “actionable format” is (at best) a click away, and at worst, over at Pirate Bay. And it is better than that. Amazon says that if you like Varèse, you might also like Messiaen’s Quartet for the End of Time, which Messiaen actually wrote in a prison camp during World War II, the fifth movement of which (the piece, not the war) is based on an earlier piece that uses six ondes Martenot, which is not only one of the first electronic instruments, but possibly the most beautiful sound you have ever heard. And I do not believe this. There is a guy in Seattle who is trying to build an ondes, and he has already rigged a ring controller to a Q125 Signal Processor. And he has got video.
This is browsing. And it is one of the most venerable techniques in the life of the mind. Ian F. McNeely and Lisa Wolverton make the point forcefully in their book, Reinventing Knowledge:
The categorization of knowledge, whether in tables, trees, or Dewey decimals, has exerted a fascination among modern-day scholars far disproportionate to its actual importance. Classification schemes are arbitrary conveniences. What matters is not whether history is grouped with poetry or with politics and what that says about the ancient mind, but simply whether such schemes make books readily and rapidly accessible to roaming encyclopedic intellects.
It is sometimes forgotten that a search engine does not need information to be organized in a way that is at all meaningful to human beings. In fact, a fully automated library—one that uses, say, search engines and robots to retrieve books—would surely not organize things according to subject. Search engines are designed so that the time it takes to locate a text string is as close to constant as possible. Linear ordering is more often a liability in such frameworks, and if we are using robots, it might make more sense to order the physical books by color or size than by subject area.
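The point about near-constant lookup time can be made concrete with a toy inverted index (the books and snippets here are entirely hypothetical, and this is a bare sketch of the idea, not any real search engine’s design): each term maps to the set of books containing it, so a query costs one hash lookup no matter how the physical volumes are shelved.

```python
from collections import defaultdict

# Hypothetical library: book ids mapped to their (tiny) full text.
books = {
    "rock_history": "frank zappa admired edgard varese",
    "gardening": "roses need sun and patient pruning",
    "biography": "edgard varese composed ionisation",
}

# Build the inverted index: term -> set of books containing it.
index = defaultdict(set)
for book_id, text in books.items():
    for term in text.split():
        index[term].add(book_id)

# One hash lookup per term, regardless of shelf order, color, or size.
print(sorted(index["varese"]))  # ['biography', 'rock_history']
```

Nothing in the index preserves or needs subject ordering, which is exactly why a fully automated retrieval system could shelve by color or size without loss.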
Libraries today try to facilitate both forms of engagement. The physical card catalogue (another technology designed to facilitate serendipitous browsing) has been almost universally replaced with the search engine, and yet the stacks themselves continue to privilege the roaming intellect. It is a sensible compromise, even if we (and more importantly, our students) are more likely to forego browsing the stacks in favor of searching. Google Books, ironically, tries to do the same thing. Its search engine undoubtedly conceives of the book as a bounded collection of strings within an enormous hash table. Yet on the sidebar, there is a list of subjects and a link labeled “Browse Books.” Clicking the latter will take you to an apparently random selection of books within “Classics,” “Magazines,” “Gardening,” “Performing Arts,” and others. It will even show you, in a manner vaguely reminiscent of Vannevar Bush’s ideas about paths in “As We May Think,” “Trending Topics” (books located by other users’ search queries).
As a search tool, Google is hard to beat. By providing lookup access to the contents of the books, it provides a facility that no library has ever been able to offer in the history of the world. Yet as a browsing tool—as a tool for serendipitous engagement—it falls far behind even the most rudimentary library. It can successfully present books on gardening, but because all categorization within Google Books is ultimately a function of search, it has a hard time getting you from gardening to creation myths, from creation myths to Wagner, and from Wagner to Zappa. It may sound perverse to say it, but Google Books (and indeed, most things like it) are simply terrible at browsing. The thing they manage to get right (search) is, regrettably, the one thing that is least likely to turn up something not already prescripted by your existing network of associations. In the end, you are left with a landscape in which the wheel ruts of your roaming intellect are increasingly deepened by habit, training, and preconception. Seek and you shall find. Unfortunately, you probably will not find much else.
What is needed, then, is a full-text archive on the scale of Google Books that is like the vast hypertextual network that surrounds it (and from which it is curiously disconnected). Hand tagging at this scale is neither possible nor desirable; ironically, only algorithmic methods can free us from the tunnel vision that search potentially induces. Without this, the full text archive becomes something far less than the traditional library.
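One way to picture what such algorithmic browsing might look like—purely as a toy illustration, with an invented association graph and no claim about how a real system would build it—is a random walk over loose subject associations, which can surface exactly the gardening-to-Zappa paths that keyword search never will:

```python
import random

# Entirely hypothetical graph of loosely associated subjects.
associations = {
    "gardening": ["creation myths", "botany"],
    "creation myths": ["wagner", "mythology"],
    "wagner": ["zappa", "opera"],
    "zappa": ["varese"],
    "botany": ["gardening"],
    "mythology": ["creation myths"],
    "opera": ["wagner"],
    "varese": ["zappa"],
}

def wander(start: str, steps: int, seed: int = 0) -> list[str]:
    """Take a seeded random walk through the association graph."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(associations[path[-1]]))
    return path

print(wander("gardening", 4))
```

No query begins this walk, and no query could have predicted where it ends; that is the sense in which only algorithmic methods can counter the tunnel vision of search.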
There are concerns, of course. A humanist scholar—of whatever discipline, and however postmodern—is by definition a believer in shared culture. If everyone is screwing around, one might legitimately wonder whether we can achieve a shared experience of culture sufficient to the tasks we have traditionally set for education—especially matters such as participation in the public square. A media landscape completely devoid of guides and standards is surely as lethal to the life of the mind as one so ramified as to drown out any voice not like one’s own. But these concerns are no sooner raised than reimagined by the recent history of the world wide web. Today, the dominant format of the web is not the “web page,” but the protean, “modded” forum: Slashdot, Reddit, Digg, Boing Boing, and countless others. They are guides of a sort, but they describe themselves vaguely as containing “stuff that matters,” or, “a directory of wonderful things.” These sites are at once the product of screwing around and the social network that invariably results when people screw with each other.
As usual, they order this matter better in France. Years ago, Roland Barthes made the provocative distinction between the “readerly text” (where one is mostly a passive consumer) and the “writerly text,” where, as he put it, the reader, “before the infinite play of the world (the world as function) is traversed, intersected, stopped, plasticized by some singular system (Ideology, Genus, Criticism) which reduces the plurality of entrances, the opening of networks, the infinity of languages.” Many have commented on the ways such thoughts appear to anticipate the hypertext, the mash-up, and the web. But Barthes himself doubted whether “the pleasure of the text”—the writerly text—could ever penetrate the institutions in which readerly paths through culture are enshrined. He wrote:
What relation can there be between the pleasure of the text and the institutions of the text? Very slight. The theory of the text postulates bliss, but it has little institutional future: what it establishes, its precise accomplishment, its assumption, is a practice (that of the writer), not a science, a method, a research, a pedagogy; on these very principles, this theory can produce only theoreticians or practitioners, not specialists (critics, researchers, professors, students). It is not only the inevitably metalinguistic nature of all institutional research which hampers the writing of textual pleasure, it is also that we are today incapable of conceiving a true science of becoming (which alone might assemble our pleasure without garnishing it with a moral tutelage).
Somewhere in there lies a manifesto for how digital humanities might reform certain academic orthodoxies that work against the hermeneutics of screwing around. Have we not already begun to call ourselves “a community of practice,” in preference to “a science, a method, a research, a pedagogy”?
But the real message of our technology is, as usual, something entirely unexpected—a writerly, anarchic text that is more useful than the readerly, institutional text. Useful and practical, not in spite of its anarchic nature, but as a natural consequence of the speed and scale that inhere in all anarchic systems. This is, if you like, the basis of the Screwmeneutical Imperative. There are so many books. There is so little time. Your ethical obligation is neither to read them all nor to pretend that you have read them all, but to understand each path through the vast archive as an important moment in the world’s duration—as an invitation to community, relationship, and play.
6. Abort, Retry, Pass, Fail: Games as Teaching Tools
Games and play have always served an educational function. Computer games are only the latest incarnation in a vast history of playful learning environments and educational game tools. Three particular threads interweave in this general introduction. First, play and games are ancient elements of human learning. The former instills basic social cues that facilitate human interaction and group cohesion, while the latter improve complex skill acquisition, abstract thinking, and peer cohesion. Johan Huizinga, who described play as an essential (although not sufficient) element to cultural development, paid tribute to this dual nature by titling his book Homo Ludens, or “Man the Player.” His oft-quoted opening line is worth citing again: “Play is older than culture, for culture, however inadequately defined, always presupposes human society, and animals have not waited for man to teach them their playing.”
Second, a simple dichotomy between “play” and “game” belies the complexity that exists between them. Roger Caillois places the tension between play and game on a spectrum with paidia at one end of the axis, reflecting unstructured, spontaneous play, and ludus at the other, reflecting rule-based, explicit games. The ancient Romans understood the spectrum between play and game. The Latin word ludus meant both play and sport, but also training, as the word was used to describe primary schools for boys and girls. And, reflecting the seriousness with which some games were taken, ludus also described gladiatorial schools. Generally, humanity tends to formalize play into games, at both the individual level as children become adults, and at the cultural level as cultures become increasingly complex and economically developed. As seen in the differences between children kicking stones on a playground and professionals earning a living on the soccer pitch, this spectrum reflects instantiations of cultural formation. Indeed, the tendency to translate the paidic into the ludic, from the organic to the planned and structured, may reflect the very essence of cultural development.
Third, the definitional spectrum between play and game is mirrored in the playfulness with which people participate in games. Players can “game” a system by adapting, bending, or breaking the rules, resulting in a completely satisfying gaming experience for them that readily thwarts the intentions of the designer or instructor. With respect to education, this playfulness means, in part, that the prescribed educational message may be completely ignored or subverted by the student game-player. The medium may not effectively impart the desired message. A parallel to television may help. Some of the earliest critics of television, for example, saw it as a tool of cultural and industrial domination as the viewers passively absorbed the privileged message of capitalistic giants. Television, like games, however, is a heavily mediated environment with complex modes and messages that are actively constructed by an active audience. It is a demanding, ephemeral medium that requires conscious construction of meaning, but it does so through a series of images and conventions that are deeply familiar—close to, but not quite, reality. Games are similar. What is learned from playing a game may not reflect the desired outcome of the game designer.
This chapter surveys the history of games and how they have been used in teaching, especially teaching the liberal arts. While there is a long history of games and research into the history of gaming, there is less research into how serious games can enhance learning. We are at an experimental stage where games are being designed, often without much educational theory behind them. We propose that one promising approach, especially in history, is to teach through game design: students do not just play games but design them, and through designing them learn about the subject matter being simulated.
Although recent trends in educational philosophy have highlighted the importance of creating play spaces for creative development, these efforts are not new. Miniaturized domestic settings have been found in Egyptian tombs of children and adults dating back four thousand years. By the seventeenth century, doll houses became common play spaces for little girls and young (and older) women. These miniature settings implied “a space specifically designated for play, often by adults who intend that children play nowhere else.” Often large and heavy, doll houses created spaces relatively free of interference where complex games could be set up and played out over a long period of time. To the designers and the purchasers, these spaces provided training for moral instruction, a point made clear in early modern literary references to tidiness, order, and domestic roles. Certainly much of the play that took place within the minds of the children reflected common domestic routines, even if adults did not structure the play along these lines, although some extant narratives may have encouraged such activity. The affordances offered by these ludic spaces, however, permitted significant interpretive play outside intended moral lessons: “It seems quite clear that most girls were able to regard doll houses as their own ludic spaces, places dedicated to their own play, rather than as sites for training in compliance.” Unsupervised, children often engaged in transgressive play, giving the dolls more interesting lives than their roles intended, moving them into spaces they should not have occupied, and exploring anxieties experienced during the daily domestic routine.
In eighteenth-century Europe a rage for card play developed throughout all levels of society, even though most historical academic attention has been placed on aristocratic play. Popular card games such as Whist, Faro, and Pope Joan promoted not only a common framework for understanding gameplay mechanics, but also a common set of social norms associated with hosting and attending a night of cards. These card games created a common framework underpinning not only the mechanics of play, but also gentility and hospitality, which evolved from a learned habit to a seemingly natural state. This change was particularly important for merchants, most of whom maintained financial dealings with the aristocrats. Social commentators remarked on “the increasingly genteel manners of the middling sort, especially those in the hospitality, retail and commercial sectors, and credited their frequent contacts with aristocratic customers with the change.” An understanding of polite society and commercial affability paved the way for better financial relationships and allowed those in the middle classes to move more self-assuredly among the social circles of their customers. Card games helped solidify a growing set of social rules that defined the emerging middle class. Carding was a part of this learning to fit in.
Such lessons were not restricted to adults; children were encouraged to play as well. Games such as “commerce,” which involved small pots of money, introduced children to accepted norms of social interaction at first with family members, then later with guests and friends. As children matured and expanded their social networks, “they joined more advanced adult players at more involved games, absorbing lessons in risk management as they dropped their pocket money into the pool.” The games framed social conventions that reinforced a comfortable system of expected behaviors and developing cultural norms for the middle class, essentially a blend of gentility with moderation and restraint.
Games in military training are perhaps the most studied aspect of games as teaching tools. The visualization of hunting and battlefield situations is an effective form of tactical communication and has served humanity in one form or another for millennia. Some scholars assert that military leaders in Asia used icons (colored stones, etc.) more than five thousand years ago. Certainly, convincing evidence exists that generals of the Roman Republic abstracted the chaotic nature of battlefield movements with sand tables and figures. This military tool allowed competing strategies to be played out in advance of battle, and later, to provide training exercises for generals and their staff. Games, as such, appear to have gone hand in hand with such developments. Three games in particular appear to be either descendants of, or antecedents to, battlefield visualizations.
Wei Hai, meaning “encirclement,” is dated to approximately 2500 b.c.e. and, in some sources, is attributed to Sun Tzu, the author of The Art of War. It features players’ use of colored stones to represent large army units. The game appears to have been an early predecessor to Go, and the goal of encircling one’s opponent has obvious military and hunting parallels. Petteia, meaning “pebbles,” is an ancient Greek game that may have had an older Egyptian origin. It is played with black and white stones, and the goal is to capture an opponent’s piece by trapping it between two of your own. Pots and vases, which appear to be contemporary with the Trojan War, depict soldiers and heroes playing the game. Polybius, commenting on the Carthaginian general Hamilcar’s battlefield prowess, compared his considerable tactical talent to that of a skilled Petteia player. And Chaturanga, probably meaning “army,” was developed in India in the sixth century and is often considered a precursor to chess. Here, game pieces represented specific military formations and resources, such as elephants and chariots.
Although different in rules and form, all three games share the same abstractions of landscape and pieces, which permit the development and refinement of strategic thinking. These lessons included military parallels beyond the flanking and encirclement mentioned above: removing pieces from play, controlling resources, slowing battles of attrition, and controlling space. Furthermore, depending on skill level, players and observers may deduce the “game state,” determining what has recently come to pass and what is likely to happen, simply by looking at the current position of the pieces on the board.
Such advances led to the development of more realistic warfare games, the first of which, most scholars agree, was Christopher Weikhmann’s King’s Game (Koenigspiel in German). The game was more realistic in the sense that the board was larger and included more playing pieces representing a broader array of military figures with more diverse movement options; these included a “king, his marshal, a pair of chaplains, chancellors, heralds, couriers, lieutenants, adjutants, bodyguards, halberdiers, and a set of eight private soldiers, which were given sixteen different powers of movement on the board.” Koenigspiel was more visually realistic than its predecessors, and certainly contained more complicated gameplay elements. The game functioned more like an enhanced version of chess, however, and did not possess realistic technical details about unit strength and ability. It lacked, in essence, procedural realism: a paradigm for simulating real-world processes through gameplay mechanics.
The inclusion of such elements in war games appeared rather quickly, with new games and their various iterations appearing between the late eighteenth and early nineteenth centuries. These games introduced a number of realistic game innovations, including real topographical and terrain maps with an overlying grid as a game board, realistic movement limits that were affected by the terrain, the representation of multiple units with one figure, supply and support logistics (bridges, bakeries, and wagon convoys), and the inclusion of an umpire to mediate disputes over game rules. In 1811, all these features appeared in Baron von Reisswitz’s Kriegsspiel (War Game), which was presented to the Prussian king, Friedrich Wilhelm III. The king was soon “contesting his friend the Czarevich Nicholas in their diplomatic trips between Moscow and Berlin, the two young royals acting out little conflicts just as their elders had ordered men of flesh and blood into battle.” Reisswitz’s son published an updated version of the game that came with a sixty-page manual entitled Rules for a New Wargame for the Use of Military Schools. The most significant aspect of this update was that the game attempted to “codify actual military experience and introduced the details of real-life military operations lacking in his father’s game. In particular, he quantified the effects of combat so that results of engagements were calculated rather than discussed.” Later versions even included dice to mimic the random, often chaotic occurrences that can tip a battle.
The increasingly realistic nature of war games, while suitable for battlefield planning, training, and re-enactments, had lost its “playful” nature amid the complexity. As such, the later nineteenth century saw the split of war games along two equally popular tracks: rigid Kriegspiel, which focused on formal rules and realism, and free Kriegspiel, which focused on playability and symbolic play. Both versions worked their way into training academies in Britain and the United States, and then into the hands of enthusiasts and hobbyists the world over, as pointed out by Milton Weiner in 1959:
The free play game has received support because of its versatility in dealing with complex problems of tactics and strategy and because of the ease with which it can be adapted to various training, planning and evaluation ends. The rigid play game has received support because of the consistency and detail of its rule structure and its computational rigor.
These two streams codified the various game elements and mechanics that would influence game design over the next century and a half. The inclusion of computing technologies would add several others.
As early as 1960, computers were introduced to enhance the procedural realism of tabletop war games. While initial efforts focused on speeding up game mechanics, computers began to enhance the realism and utility of the games in a number of significant ways: the concurrent evaluation of hypothetical game decisions prior to action, the modeling of the complex interactions of multiple players, the simulation of multiple views of the same game state, and the ability to play against the computer rather than another human. As computers became more and more powerful, these games and simulations found a home not only in military academies around the world, but also in the homes of civilians. That the U.S. military developed America’s Army as both a training and recruitment tool reflects this ready transition.
Games and Education Theory
The manner in which instructors use computer games in the classroom, particularly at the university level, necessitates an examination of educational theory because, in this case, theory drives practice. On the whole, efforts to include gaming in the classroom rely on intuitive leaps by faculty attempting to bridge the gap between dissemination and uptake, often without due consideration or even awareness of the efforts by educational theorists to assess the efficacy of using games in the classroom. These often-inspired efforts may remain isolated from similar efforts elsewhere, falling by the wayside when the professor teaches a different course or takes a research leave.
When considered from a broader theoretical perspective, the motivation to use games (technologically enhanced games in particular) as teaching tools falls into two broad pedagogical paradigms. The first relates to student engagement, often invoking some aspect of active or experiential learning as a pedagogical approach, even if that term is more intuitively understood than precisely defined. This is particularly true with respect to learning hierarchies, such as Benjamin Bloom’s taxonomy, where instructors instinctively prompt students to move from passive recipients of knowledge to active participants in the synthesis and evaluation of information and argument. Theoretical frameworks, however, do exist. Within the larger frame of Jean Piaget’s constructivism, which argues that education is not a transfer process but a process in which students construct their own knowledge through observation of the surrounding reality, Seymour Papert takes the leap from the contemplative to the action driven. He argues that learning occurs especially when students are required to construct the tools of their own learning experience. His constructionism is not the only pragmatic view on learning, but it is one of the most radical. David Kolb’s experiential learning cycle, for example, posits two elements to effective learning: a prehension element, where students take hold of an event through concrete experience; and a transformation element, where internal reflection and active manipulation reconsider and apply the event. The key here is that experiential learning occurs “only after experiences or events have been transformed by either reflection or action, or preferably both.”
The second incorporates variations of Fred Davis’s technology acceptance model, which evaluates the likelihood of individuals and groups adopting a particular technology. This well-validated model has technologically focused variables (specifically, perceived usefulness and perceived ease of use) as well as more common metrics used to evaluate the likelihood of acceptance of information technology. Its effective use can correct or at least mitigate assumptions that students generally familiar with technology (so-called digital natives) will prefer and benefit from digital game-based learning. Even a brief consideration of this assumption should raise flags in the minds of researchers. Students need to learn the affordances of video games in the same way that traditional classroom mechanics, such as note taking during lectures, are learned. To ensure the effective adoption of gaming technologies, educators need to assess not only the perceived effectiveness of the game as a pedagogical tool, but also the video game literacy of the students (essentially, the perceived ease of use by students with disparate gaming experience) and the learning opportunities that follow from its use.
In a study that implicitly reflects these two theoretical perspectives, Henry Jenkins, a leading light in the design and study of computer games, and Kurt Squire conducted important preliminary work on the use of video games in the classroom. They tested five different games (ranging from commercially available software to games developed at the MIT Media Lab) as teaching tools at various education levels. Under certain circumstances, they argued, games can model complex scientific, social, and economic processes, thus increasing students’ understanding of such complex subject matter.
- Civilization III—a turn-based strategy game employed to teach disadvantaged high school students about large-scale, long-term historical change and the ways various aspects of a civilization are interconnected.
- Revolution—a multiplayer historical role-playing game developed at MIT, used to teach the impact of short-term events, and the potential for and limitations of individual activity within these constraints.
- Prospero’s Island—a single-player game based in the complex world of Shakespeare’s Tempest, aimed to increase the players’ understanding of the play; the story is not retold, but reinvented in this environment and the player is given freedom of choice.
- Environmental Detectives—an augmented reality game (ARG) with an ecological theme, played in teams with personal digital assistants (PDAs); the game emphasized win-loss strategies employed during imagined contamination scenarios.
- Biohazard: Hot Zone—a training simulation game designed by MIT, which helped students learn introductory biology and environmental science.
The experiments described show that game-based learning is often a holistic, immersive experience that encourages a type of critical learning beneficial to the intellectual development of the students. Such efforts appear, at least on the surface, to improve cognitive learning outcomes among students. In a large meta-analysis of published studies of game-enhanced teaching, Jennifer Vogel et al. synthesized the conclusions of 32 studies (from a list of 248 potential studies) that compared traditional teaching methods to teaching that included games and simulations. The authors concluded the following: “significantly higher cognitive gains were observed in subjects utilizing interactive simulations or games versus traditional teaching methods.”
These authors, and other critics, nonetheless argue that such conclusions are tentative at best. The Vogel study, for example, contains a number of secondary conclusions that speak to the topic’s complexity. First, there appears to be a significant gender difference, with male students preferring traditional teaching approaches and female students preferring games and simulations—a perhaps counterintuitive assertion given common, albeit incorrect, perceptions of the average gamer. Second, they suggest that user control over the environment is an important indicator of cognitive gain. The more freedom the student has to navigate the environment, the better the result. Third, factors often considered important to engagement, such as graphic realism, do not seem to have a significant impact on cognitive learning. Perhaps most significantly, most of the studies included in the meta-analysis focused on teaching engineering, science, or the health sciences.
Proponents of the inclusion of video games in the science curriculum have explicitly championed it as a form of active learning—exploring problems within the constraints and affordances of software. These participatory simulations and experiences “immerse players in complex systems, allowing them to learn the points of view of those systems and perhaps even develop identities within the systems.” In addition, the very nature of computer games allows students to learn at their own pace, receive immediate and often continuous feedback, and review through replay elements that were misunderstood. These features have been shown to improve learning outcomes over traditional lecture approaches for students in science, technology, engineering, and mathematics. University students who played the game Virtual Cell as part of the biology curriculum, for example, obtained a 40 percent increase in learning outcomes over students who attended lectures instead. Other studies report similar improvements in the quality of learning outcomes in computing science education. Here, the potential for video games seems enormous.
Take, for instance, the demand for educational reform in the medical profession, where the lack of appropriate skill acquisition has dramatically increased the use of simulation and role-playing environments. Human patient simulators, virtual emergency rooms and intensive care units, and role-playing environments employ many of the gameplay mechanics established over the past century.
The application of video games in the liberal arts seems, on the face of it, a riskier proposition. The paucity of good “serious games” at the university level in the humanities and social sciences speaks to this difficulty. In addition, despite popular perception, university-level history courses are not litanies of facts and dates. Good history courses evaluate and synthesize the interpretations of historians about why something happened, not just what happened. This sort of scholarly debate does not readily lend itself to a gripping game mechanic. Moreover, when such games are attempted, they frequently focus on either entertainment, which oversimplifies the content, or on education, which detracts from the gameplay. As games may only adhere to the “broader strokes of history,” as one game commenter claims, they are not suitable as a digital textbook. Too often designers sacrifice the educational content of the game to improve game mechanics, graphic detail, or production values. This dumbing down or “sweetening” of the content is clearly a poor pedagogical choice. Such games make poor substitutes for traditional teaching techniques. There are exceptions, such as Power Politics III, which places the player in the role of a campaign manager for current and historic presidential candidates. Released in 2005 by Kellog Creek Software, the game has been used with some success in political science classes at American universities.
Combining university-level learning outcomes with entertainment is the principal challenge facing postsecondary serious games. Overcoming this challenge requires attention to a number of factors: active involvement and stimulation of all players, sufficient realism to convey the essential truths of the simulation, clarity of consequences and their causes in both rules and gameplay, and the repeatability of the entire process. Educational and domain experts must, therefore, be included at all levels of the game design process, and not simply viewed as content creators. In particular, agreement on and iterative assessment of three elements of the game design process will reduce the likelihood of the educational content being lost: the purpose of the game (acquiring skills or knowledge), the affordances of the gameplay (improved social interaction, for example), and the effects of gameplay (learning outcomes, enjoyment, etc.). Without proper consideration of these elements throughout the design process, it is unlikely that specific learning outcomes will be achieved. This is a significant challenge considering that there is little empirical evidence that games are even capable of teaching what educators think they can. This challenge is due in part to the paidia-ludus tension inherent in gameplay (the game may increase cognitive output, but may not in any way affect a teacher’s specific education outcomes). There is also reason to doubt that a competitive game, once assigned in a class and thus made mandatory, remains an effective teaching tool; as Charles Bailey states, mandatory games do not necessarily “build character.”
One popular approach to overcoming this difficulty is to create learning environments that improve students’ campus experience. Given the popularity of massively multiplayer online role-playing games (MMORPGs), educators have sought to leverage the open-ended nature of these environments for learning purposes. Virtual worlds are not necessarily games; however, they do mimic many game-like elements. Second Life, perhaps the most well-known manifestation of this technology, extended previous technologies such as multiuser dungeons (MUDs) and the somewhat recursive MUDs, object oriented (MOOs). In Second Life many universities have created models of their campuses (often for promotional purposes). There is also a university-focused space called Campus, which adds additional tools restricted to postsecondary institutions. Campus serves as an interesting middle ground between MMORPGs and virtual worlds, essentially adding curriculum creation tools to a large, digitally populated campus environment. Players may “game” the system, however, subverting the designer’s instructional intent in unanticipated ways. Like many technologies that once seemed cutting edge, Second Life may already have seen its glory days. It now seems a research environment where academics use other academics (rather than students) as subjects in experiments on teaching effectiveness and engagement.
Still, significant research has been published on the potential of virtual worlds. Andrea De Lucia et al., for example, describe the establishment of a virtual campus for e-learning courses. The virtual campus consists of four virtual spaces—a common student campus, collaborative zones, lecture rooms, and recreational areas—bound together with a Moodle plug-in to allow the integration of multimedia content. Similarly, Marcus Childress and Ray Braswell describe in detail the effectiveness of Campus at a small Midwest university. Their project sought to increase student participation within the university community and curriculum, particularly for those uncomfortable with the lack of visual feedback associated with chat rooms and email. The authors of both studies conclude that, when compared to less immersive environments, MMORPGs create a stronger sensation of presence; this arises from an increased awareness of others within the setting and from enhanced communication resulting from avatar gestures and expressions. On the downside, users describe particular difficulties with navigation and the use of the 3D interface. On the whole, the authors concluded that virtual environments support synchronous communication and social interaction, and increase the participants’ level of motivation, although discipline-specific efforts remain understudied. Similarly, Yolanda Rankin et al. found that by facilitating interactions with native speakers in MMORPGs (Everquest II), English-as-a-second-language (ESL) students improved significantly more in second-language acquisition than students learning through more traditional methods.
Using these games as objects of study for the depiction of particular instantiations of historic events is another matter altogether. José Lopez and Myriam Caceres, for example, theorized that many popular commercial games can be classified not by their genre or technical features, but by their subject matter as defined by the liberal arts: war and conflict, urbanism and territorial management, democracy and citizenship, economy and trade, and the environment. As objects of study thematically defined, games become a sociocultural resource readily mined by humanists and social scientists in terms with which they are more familiar.
Learning through Game Design
A constructionist, rather than an instructionist, approach to video games provides students with the means to build their own games, rather than simply play someone else’s. In order to design a game, not only do students need to develop and consider the content of the game (synthesizing and evaluating the most pertinent elements of the topic), they must also consider how to convey that information in a meaningful manner that makes sense to someone with less domain expertise.
Teaching meaningful communication through game design is a double-edged sword. On the one hand, video games specialize in the development of knowledge transfer and skill acquisition, which may provide important pedagogical lessons:
- Good games make information available to the player at the moment and place where said information is needed, seamlessly integrating this information into the game world.
- Good games push the player’s competence by being both doable and challenging, a pleasant frustration with the task at hand.
- Good games are customizable, placing the player in the role of co-creator of the game world.
- Good games introduce skills gradually, usually through a tutorial section that is integrated into the game’s story, building on “a cycle of expertise,” in which the player integrates old skills with newly acquired ones.
- Good multiplayer games are highly collaborative, allowing the players to pool and share both knowledge and skills.
On the other hand, the skills passed on to the player may be suitable only for playing video games better. Neil Postman’s caution regarding educational television seems an obvious parallel: the skills acquired watching Sesame Street, for example, only better prepare children to absorb and decode the signs and symbols associated with television. According to Postman, the skills are not transferable. It could be that teaching through game design teaches primarily about game design, leaving little time for the student to learn the target subject matter. The complexity of game-authoring environments could distract from what the course is supposed to be about, even if there is some learning in game design.
That said, a constructionist approach in the liberal arts could also ameliorate the disconnect between what a teacher thinks a game is teaching and what the students are actually learning. As students must develop sufficient domain expertise prior to (or concomitant with) the creation of the game, cognitive learning outcomes desired by the instructor are more likely, particularly if the game is embedded in an authentic context. An added benefit of creating such games themselves is that students gain additional skills not normally associated with traditional liberal arts courses. Technical fluency acquired using game toolsets, such as Aurora for BioWare’s Neverwinter Nights, will introduce students to computer scripting, databases, flow control, variables, and basic logic structures. Positive results in this area have been documented at multiple education levels. More ambitiously, educators have created game design engines to create specific games for specific pedagogical purposes. Pablo Moreno-Ger et al. designed and described a toolset for the creation of adventure games that can readily be adapted for use by students, particularly those working in interdisciplinary teams with some facility in document markup. At the University of Alberta we have developed an alternative–augmented reality gaming platform with which students may design games rather than just play them.
Although there are historical precedents and many experimental projects to examine, the application of gaming technologies to teaching in the humanities and social sciences remains an understudied area. Games may promote discovery and exploration in a manner that traditional teaching techniques do not—skills which when acquired may, through proper reflection and mentorship, be transferred to disparate situations. What remains sorely lacking is comprehensive testing of the efficacy of such games in improving learning outcomes at the university level in the liberal arts. In the Humanities Computing program at the University of Alberta, we caution students about rose-colored views of technology. The application of computing technologies to the complicated, nuanced arguments made by liberal arts scholars is full of potential and risk. It will always cost more money than expected. It will always take longer than expected. But, if done carefully, with considered, measured steps, it will almost be as good as the way you were doing it before.
1. Clearly, the title served as a play on the Pleistocene species Homo habilis (the “handy man”), as well as Homo faber (“Man the Maker”), described by Henri-Louis Bergson in L’Évolution créatrice (1907). As Huizinga places his anthropological work within an evolutionary structure, his title seems quite cleverly chosen.
4. See, for example, J. Barton Bowyer, Cheating: Deception in War & Magic, Games & Sport, Sex & Religion, Business & Con Games, Politics & Espionage, Art & Science (New York: St. Martin’s, 1992), and Mia Consalvo, Cheating: Gaining Advantage in Video Games (Cambridge Mass.: MIT Press, 2007).
5. See, for example, the strident views of the Frankfurt School, such as Theodor Adorno and Max Horkheimer, “The Culture Industry: Enlightenment as Mass Deception,” in Dialectic of Enlightenment: Philosophical Fragments, ed. Gunzelin Schmid Noerr and trans. Edmund Jephcott (Stanford: Stanford University Press, 2002), 94–136; and Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction, 1936, transcribed by Andy Blunden 1998, accessed July 31, 2012, http://www.marxists.org/reference/subject/philosophy/works/ge/benjamin.htm.
6. See, for example, John Fiske and John Hartley’s Reading Television, 2nd ed. (New York: Routledge, 2003), and Ien Ang, Living Room Wars: Rethinking Media Audiences for a Postmodern World (London: Routledge, 1996).
11. Janet E. Mullin, “ ‘We Had Carding’: Hospitable Card Play and Polite Domestic Sociability Among the Middling Sort in Eighteenth-Century England,” Journal of Social History 42, no. 4 (Summer 2009): 991.
14. See Milton G. Weiner, “An Introduction to Wargames,” P-1773, The RAND Corporation, August 17, 1959; Peter Perla, The Art of Wargaming: A Guide for Professionals and Hobbyists (Annapolis: Naval Institute Press, 1990); and Roger Smith, “The Long History of Gaming in Military Training,” Simulation & Gaming 41, no. 1 (February 2010): 6–19.
16. In English translation: “Like a good draught-player, by isolating and surrounding them, he [Hamilcar] destroyed large numbers in detail without coming to a general engagement at all”; Polybius, Histories 1, no. 84, in Perseus Digital Library, edited by Gregory Crane, accessed July 31, 2012, http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0234%3Abook=1%3Achapter=84. The game in the original version of the text is Petteia.
27. Although Carolin Kreber focuses on case studies instead of games, her criticisms of why active and experiential learning approaches fail is worth considering. See Carolin Kreber, “Learning Experientially Through Case Studies? A Conceptual Analysis,” Teaching in Higher Education 6, no. 2 (2001): 217–28.
31. Fred D. Davis, “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology,” MIS Quarterly 13, no. 3 (1989): 319–39; Fred D. Davis, Richard P. Bagozzi, and Paul R. Warshaw, “User Acceptance of Computer Technology: A Comparison of Two Theoretical Models,” Management Science 35, no. 8 (1989): 982–1003. This model has been extended and enhanced by a number of researchers. See, for example, Yogesh Malhotra and Dennis F. Galletta, “Extending the Technology Acceptance Model to Account for Social Influence: Theoretical Bases and Empirical Validation,” Proceedings of the 32nd Hawaii International Conference on System Sciences (1999), 1–14.
32. Jeroen Bourgonjon et al., “Students’ Perceptions about the Use of Video Games in the Classroom,” Computers & Education 54, no. 4 (May 2010): 1145–56. The authors came to their conclusions based on descriptive statistics applied to a survey questionnaire given to 858 Flemish secondary school students.
33. Kurt Squire and Henry Jenkins, “Harnessing the Power of Games in Education,” InSight 3, no. 1 (2003): 8. For a similar view, see David Williamson Shaffer, Kurt R. Squire, Richard Halverston, and James P. Gee, “Video Games and the Future of Learning,” The Phi Delta Kappan 87, no. 2 (October 2005): 104–11.
39. Jennifer J. Vogel et al., “Computer Gaming and Interactive Simulations for Learning: A Meta-analysis,” Journal of Educational Computing Research 34, no. 3 (2006): 229–43. To qualify for the meta-analysis, the study needed to include statistical assessments of teaching and had to specifically focus on cognitive gains or attitudinal changes. See also Harold O’Neil, Richard Wainess, and Eva L. Baker, “Classification of Learning Outcomes: Evidence from the Computer Games Literature,” Curriculum Journal 16, no. 4 (2005): 455–74.
44. Kurt Squire, “From Content to Context: Videogames as Designed Experience,” Educational Researcher 35, no. 8 (2006): 27. See also Kurt D. Squire, “Video Games in Education,” International Journal of Intelligent Simulations and Gaming 2, no.1 (2003): 49–62; and Kurt D. Squire, “Replaying History: Learning World History through Playing Civilization III” (PhD dissertation, University of Indiana, 2004).
47. See, for example, Marina Papastergiou, “Digital Game-Based Learning in High School Computer Science Education: Impact on Educational Effectiveness and Student Motivation,” Computers & Education 52, no. 1 (2009): 1–12. For a particularly positive view of the potential of games, see Rosemary Garris, Robert Ahlers, and James E. Driskell, “Games, Motivation, and Learning: A Research and Practice Model,” Simulation Gaming 33 (2002): 441–67.
49. These efforts are distinct from treating patients with video game technologies, such as the improvements documented in visual coordination in patients with several forms of visual impairment or the improvement in cognitive processing in patients with mental impairments. See, for example, M. Nieto, Ambliopía: Introducción de videojuegos en su tratamiento (Madrid: Centro de Optometría Internacional, 2008); C. Shawn Green and Daphne Bavelier, “Action Video Game Modifies Visual Selective Attention,” Nature 423 (2003): 534–37; P. J. Standen, Francesca Rees, and David J. Brown, “Effect of Playing Computer Games on Decision Making in People with Intellectual Disabilities,” Journal of Assistive Technologies 3, no. 2 (2009): 4–12.Page 137
51. “The Ten Commandments of Assassin’s Creed: Brotherhood,” http://xbox360.ign.com/articles/112/1125500p1.html. For example, the visual re-creation of Renaissance Italy in Assassin’s Creed: Brotherhood is remarkable. The historic accuracy of people and events (outside of the fantasy elements) is less so. The game designers approached historical accuracy quite practically. According to the mission director Gaelec Simard: “if you can find the information within 30 seconds on the net, then it should be accurate in our game.”
53. Power Politics III was released in 2005 by Kellog Creek Software. The complexity of the game can be varied depending on the skill level of the player, and deals with topics like dirty tricks, electoral crises, and disputed results.
56. Yolanda A. Rankin, McKenzie McNeal, Marcus W. Shute, and Bruce Gooch, “User Centered Game Design: Evaluating Massive Multiplayer Online Role Playing Games for Second Language Acquisition,” in Sandbox ’08: Proceedings of the 2008 ACM SIGGRAPH Symposium on Video Games (2008), 43–49.
59. Lowell Cremorne, “Why Second Life is Already Second-Best for Education,” accessed August 1, 2012, [formerly http://www.metaversejournal.com/2010/10/06/why-second-life-is-already-second-best-for-education/].
60. Andrea De Lucia, Rita Francese, Ignazio Passero, and Genoveffa Tortora, “Development and Evaluation of a Virtual Campus on Second Life: The Case of SecondDMI,” Computers & Education 52, no. 1 (2009): 222.
66. Carl Bereiter and Marlene Scardamalia, “Intentional Learning as a Goal of Instruction,” in Knowing, Learning, and Instruction: Essays in Honor of Robert Glaser, ed. Robert Glaser and Lauren B. Resnick (Hillsdale: Lawrence Erlbaum Associates, 1989), 361–92.
67. James Paul Gee, What Video Games Have to Teach Us about Learning and Literacy (New York: Palgrave Macmillan, 2003); and James Paul Gee, “What Video Games Page 138have to Teach Us about Learning and Literacy,” ACM Computers in Entertainment 1, no. 1 (October 2003): 1–3.
69. Kevin Kee and John Bachynski, for example, suggest this difficulty in “Outbreak: Lessons Learned from Developing a ‘History Game,’ ” Loading . . . (The Canadian Game Studies Association) 3, no. 4 (2009).
70. David H. Jonassen, “Toward a Design Theory of Problem Solving,” Educational Technology Research and Development 48, no. 4 (2000): 64. An additional, though dated piece addressing the same topic can be found in Lloyd P. Reiber, “Seriously Considering Play: Designing Interactive Learning Environments Based on the Blending of Microworlds, Simulations, and Games,” Educational Technology Research and Development 44, no. 2 (1996): 43–58.
71. For university-level education, see Nathan Sturtevant, Sean Gouglas, H. James Hoover, Jonathan Schaeffer, and Michael Bowling, “Multidisciplinary Students and Instructors: A Second-Year Games Course,” SIGCSE 2008: Technical Symposium on Computer Science Education (Portland, Ore., 2008); for elementary education, see Judy Robertson and Cathrin Howells, “Computer Game Design: Opportunities for Successful Learning,” Computers & Education 50, no. 2 (February 2008): 559–78.
72. Pablo Moreno-Ger, José Luis Sierra, Iván Martínez-Ortiz, and Baltasar Fernández-Manjón, “A Documental Approach to Adventure Game Development,” Science of Computer Programming 67, no. 1 (2007): 31.
7. Ludic Algorithms
Llull’s Great Art
Jonathan Swift’s Gulliver, on the aerial leg of his Travels, finds himself in the lofty scholastic community of Laputa. There he encounters a professor with a strange device. The mechanism consists of a series of rotating blocks on which are inscribed words in the Laputian language and which, in use, resemble nothing so much as a mystical foosball table (figure 7.1). A few vigorous turns of the crank (for which the professor employs a team of undergraduates) produce what Robert de Beaugrande might call a “combinatoric explosion” of information: words combine randomly to produce sense and nonsense, the finest fragments of which are diligently recorded as the “wisdom” of Laputa. In this manner, Swift tells us, “the most ignorant person at a reasonable charge, and with a little bodily labour, may write books in philosophy, poetry, politics, law, mathematics, and theology, without the least assistance from genius or study.”
The Laputian device, a “Project for improving speculative Knowledge by practical and mechanical means,” and Swift’s unflattering description of the professor who invented it, are sometimes thought to satirize Gottfried Wilhelm Leibniz, whose 1666 Dissertatio de Arte Combinatoria made far-reaching claims for the ability of mathematical and mechanical languages to generate wisdom and solve conflict. Leibniz went so far as to suggest that, in the future, every misunderstanding or disagreement “should be nothing more than a miscalculation . . . easily corrected.” Disputing philosophers could take up their abaci and settle even delicate theological arguments mechanically, saying “Calculemus!”—“Let us compute!” (Leibniz).
In fact, a better-supported candidate for Swift’s vitriol is Leibniz’s acknowledged predecessor in the combinatoric arts, a colorful medieval polymath and sometime poet, rake, and martyr named Raimundus Lullus, or Ramon Llull (ca. 1232–1316). Llull’s chief invention was a so-called Ars Magna of inscribed, inter-rotating wheels developed in the latter decades of the thirteenth century and articulated in a treatise titled Ars Generalis Ultima. Its purpose was at once generative, analytical, and interpretive, and while its primary subject matter was theological, Llull was careful to demonstrate the applicability of the Ars Magna to broader philosophical and practical problems of the day. In other words, Llull’s wheels constituted a user-extensible mechanical aid to hermeneutics and interpretive problem solving (figure 7.2). Properly understood, Llull and his Great Art can take their place, not in the soaring halls of Laputian “speculators” and pseudoscientists, but among a cadre of humanists with fresh ideas about the relation of mechanism to interpretation.
A review and description of Llull’s tool, with attention to its structure and function and to past misunderstandings as to its purpose, will help situate instrumental issues that many digital humanities projects must address today. Among these are problems involved in establishing scholarly primitives and developing the rules or algorithms by which they can be manipulated in creative and revelatory ways. Llull also provides a framework in which to examine the relationship between algorithmic and combinatorial methods and subjective hermeneutic practices, and to demonstrate the utility of performative instruments or environments that share in his design model. This is a model for mechanisms that are generative, emergent, and oriented toward what we would now call humanities interpretation.
Llull’s intriguing device is widely recognized as a precursor both to computer science—in its emphasis on a mechanical calculus—and to the philosophy of language, in its use of symbols and semantic fields. After early popularity in the universities of Renaissance Europe, however, it met with sharp and lasting criticism. François Rabelais’s Gargantua warns Pantagruel against “Lullianism” in the same breath as “divinatory astrology”; it is “nothing else but plain abuses and vanity.” And Francis Bacon describes the Ars Magna as “a method of imposture . . . being nothing but a mass and heap of the terms of all arts, to the end that they who are ready with the terms may be thought to understand the arts themselves.” Such collections, Bacon observes, “are like a fripper’s or broker’s shop, that has the ends of everything, but nothing of worth.”
Modern critics also deride Llull. Even Martin Gardner, whose 1958 Logic Machines and Diagrams views the Ars Magna as foundational to the history of visual and mechanical thinking—Llull is Chapter One!—suggests that the best uses for his once-influential combinatoric system are (in Gardner’s words) “frivolous”: for example, to generate possible names for a baby, to work anagram puzzles, or to compare and combine colors for application in design and interior decorating.
Gardner holds that any more sophisticated or scholarly use of Llull’s device—particularly in fields like history and poetics—is wholly inappropriate. The spinning wheels, when applied to humanistic subject matter lacking in native “analytic structure” and for which there is “not even agreement on what to regard as the most primitive, ‘self-evident’ principles,” generate only circular proofs. “It was Lull’s particular distinction,” Gardner writes, “to base this type of reasoning on such an artificial, mechanical technique that it amounted virtually to a satire of scholasticism, a sort of hilarious caricature of medieval argumentation.” We may not wish to go so far (like his great proponents Peter Bexte and Werner Künzel) as to claim Llull as “der erste Hacker in den himmlischen Datenbanken” (the first hacker of the heavenly databases!), but it seems clear that the most scathing criticisms of the Ars Magna stem from a fundamental misunderstanding of the uses to which Llull meant his device to be put.
Künzel is right, in The Birth of the Machine, to describe Llull’s system of interlocking, inter-rotating wheels as an ancestor of the Turing machine, a logic device, “producing results, statements—output of data in general—by a clearly defined mechanical algorithm.” However, we would be wrong to assume, as Bacon and Gardner did, that we are to interpret as truth the statements generated through this algorithm (that is, by Llull’s prescribed procedure of marking and spinning wheels and diagramming their results). In fact, the linguistic combinations that Llull’s wheels produce are only meant to be interpreted. That is, Llull invented a device for putting new ideas into the world out of the fragments of old ideas and constraining rule sets, but left the (inherently subjective) evaluation and explication of these emergent concepts up to a human user—a person explicitly figured in his writing as an artista. Llull’s machine generates “truthful” formulations equally with falsehood, and makes no claim about or evaluation of its own output: “naturally, only the artist using the machine is able to decide which statement is true and which is false. The machine independently produces both: the universe of truth and the universe of the false, step by step.”
“Right Round, Baby, Right Round”
In building the Ars Magna, Llull began by establishing a manipulable alphabet of discrete, primary concepts or primitives on which his algorithmic and mechanical procedures could operate. The most commonly accepted (and least complex) version of this art associates nine letters of the Latin alphabet, B through K, with fundamental aspects of divinity: goodness, greatness, eternity or duration, power, wisdom, will, virtue, truth, and glory. The letter A stands in for the Divine, and is placed at the center of a circular diagram (figure 7.3), which in itself becomes a hypothetical definition of God. When lines are drawn to connect each of the nine letter-coded aspects (showing in binaries, for example, that God’s goodness is great [BC], God’s virtue lies in truth [HI], etc.), Llull expresses the basic relational character not only of divinity, but also of his very notion of an ars combinatoria. Combinatoric elements are not simply reordered, as with Swift’s Laputian machine; here they are placed for careful consideration in conjunction.
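The half matrix of binary relations can be sketched combinatorially. The letter-to-aspect mapping below follows the B-through-K scheme just described; the code itself is only an illustrative sketch of the enumeration, not anything Llull specifies:

```python
from itertools import combinations

# Llull's nine letter-coded aspects of divinity, B through K
# (medieval Latin alphabets did not distinguish J from I).
aspects = {
    "B": "goodness", "C": "greatness", "D": "eternity",
    "E": "power",    "F": "wisdom",    "G": "will",
    "H": "virtue",   "I": "truth",     "K": "glory",
}

# Every unordered pair of distinct aspects: the binaries Llull connects
# with lines in his first circular figure (BC, HI, and so on).
pairs = [(a + b, aspects[a], aspects[b])
         for a, b in combinations(aspects, 2)]

print(len(pairs))   # 9 choose 2 = 36 binary relations
print(pairs[0])     # ('BC', 'goodness', 'greatness')
```

The pair BC corresponds to the relation glossed above as “God’s goodness is great,” and HI to “God’s virtue lies in truth.”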
Resultant graphs—which, as we will later see, Llull considered to be dynamic rather than static—form the simplest interpretive tool of the Ars Generalis Ultima. The art is properly thought of as interpretive rather than explicatory, because the conjoined components of the definition of God that it expressed were not meant to be accepted flatly by its audience, but rather contemplated, analyzed, and above all contrasted against the opposites implied by the structural workings of the diagram—the qualities of fallen mankind. Rich rhetorical expression in these combinations comes into focus through the user’s own faculties of comparison and analogy as generated structures suggest, for example, that the power of human rulers (letter E)—unlike that of the defined divinity—is not always commensurate with their wisdom (letter F).
As a next step, Llull’s binary relationships are complicated by the application of a separate assemblage of meanings attached to his established alphabet, and a further series of diagrams. The concept of “an ending” in these elaborations, for example, may be interpreted as it relates geometrically to labeled notions of privation, termination, or perfection. Therefore, even the graphic organization of Llullian concepts participates in an expression of the enabling constraints under which his concepts are meant to function and through which they are enlivened.
Llull’s embodied relations permit the generation—for further analysis—of a phrase like “goodness has great difference and concordance.” An elevated pronouncement, indeed, but steps are taken to constrain output that could otherwise provoke an overly general discussion, through a generative process involving the insertion (via separate diagrams, figures 7.4 and 7.5) of a set of specific sense-perceptive and intellectual relations. A statement like “goodness has great difference and concordance,” then, is presented by Llull’s circles not as an eternal truth, but rather in order that it be interpreted within a specified context—that of sensual and intellectual differences—and in all the embedded relations among those fundamental domains.
For all its complexity and utility in generating relational assertions, thus far the Great Art limits itself to binary structures, and to interpretations based on fundamentally invariable graphs and matrices. With the introduction of a novel fourth figure, however, Llull expands his system from binary into ternary relationships, and moves from abstract algorithm and diagrammatic reasoning into the realm of mechanically aided hermeneutic practice (figure 7.6). He does this first by adding to the semantic weight of the primary alphabet a set of interrogatives (who, what, why, etc.) or—as he puts it—interpretive prompts. The prompts become part of a functioning rule set for procedure and elucidation when they are inscribed, along with Llull’s other encoded alphabets, on volvelles—exquisite, manipulable, inter-rotating wheels.
While versions of Llull’s wheels have been fashioned from a variety of media (including, most interestingly, the copper “covers” of a portable Italian Renaissance sundial masquerading as a book), they typically took the form of paper circles secured within incunabula and manuscripts by small lengths of string (John Dalton). The compartments, or camerae, of an outer circle would be inscribed on the page, while two inner circles were fastened on top of it in such a way as to permit them to rotate independently, mechanically generating interpretive problems based on ternary combinations of the alphabetic ciphers inscribed on them.
Llull’s wheels appear deceptively simple, but for the basic combination of two letters alone, they are capable of posing thirty-six issues to their human interpreters: twelve propositions (such as “goodness is great”) and twenty-four questions or philosophical problems (like “what is great goodness?” and “whether goodness is great”) multiplied down the succession of associations between, for example, goodness and difference, goodness and concordance, and so on. When three rather than two primary elements are combined with their associated questions or interpretive rules, as is enabled by the embedded, rotating wheels, even more complex problems can present themselves: for example, “whether goodness contains within itself difference and contrariety.”
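The mechanical generation the volvelles perform can be sketched schematically. Everything here is an illustrative reduction: three rings of three compartments each, with invented contents standing in for Llull’s much richer inscriptions. The point is only that independently rotating wheels mechanically bring every ternary alignment under the reader’s eye:

```python
# A volvelle reduced to three concentric rings of three camerae
# (compartments) each. The outer ring is inscribed on the page; the
# two inner wheels rotate independently, one compartment at a time.
outer = ["B", "C", "D"]                        # fixed ring on the page
middle = ["goodness", "greatness", "eternity"] # first rotating wheel
inner = ["whether", "what", "why"]             # interpretive prompts

alignments = []
for turn_mid in range(len(middle)):        # each setting of wheel one
    for turn_in in range(len(inner)):      # each setting of wheel two
        for i, letter in enumerate(outer): # read off each compartment
            alignments.append((
                letter,
                middle[(i + turn_mid) % len(middle)],
                inner[(i + turn_in) % len(inner)],
            ))

print(len(alignments))   # 3 x 3 settings x 3 compartments = 27 alignments
```

Because the wheels rotate independently, the twenty-seven alignments are all distinct: the mechanism exhausts the ternary combinations without the user having to enumerate them.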
Llull works out the results of his generative machine in tables similar to the half matrix used to express the simple relations of his first circular figure. In the Ars Brevis of 1308, a simplified version of his Great Art, the corresponding table has seven columns—but Llull’s Ars Generalis Ultima presents the relations that emerge from expanded iterations of the rotating wheel concept in a table with no less than eighty-four long columns. Each alphabetic expression in these tables has been algorithmically, logically, and mechanically generated for rhetorical and hermeneutic purposes, in service to what Stephen Ramsay has called “humane computation.” The cumulative effect is of an “extraordinary network of systems systematizing systems,” and yet the Llullian apparatus exists in service of interpretive subjectivity.
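The count of eighty-four columns falls out of simple combinatorics, on the common reading that each column of the Ars Generalis Ultima table corresponds to an unordered triple drawn from the nine letters B through K:

```python
from math import comb

# Number of ways to choose an unordered triple from nine letters.
print(comb(9, 3))   # 84
```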
Llull is thought to represent the “earliest attempt in the history of formal logic to employ geometrical diagrams for the purpose of discovering nonmathematical truths, and the first attempt to use a mechanical device—a kind of primitive logic machine—to facilitate the operation of a logic system.” Llull’s wheels can be thought of as the “hardware” of this system, with the interpretive method he advocates for their use serving as software, expressed, along with output from the devices, in user manuals like the Ars Generalis Ultima.
It is important to remember, however, that most of the diagrammatic figures generated by Llull’s wheels do not explore “truths” at all, but instead pose interesting queries and hypothetical situations for their users: for example, “when it might be prudent to become angry” or “when lust is the result of slothfulness.” Llull also uses the wheels to help puzzle out such “typical medieval problems” as “If a child is slain in the womb of a martyred mother, will it be saved by a baptism of blood? . . . Can God make matter without form? Can He damn Peter and save Judas?” Llull’s Book of the Ascent and Descent of the Intellect moves beyond the theological sphere to apply his method to eight categories of natural philosophy, in order to pose and suggest possible answers to scientific problems like “Where does the flame go when a candle is put out?” or “Why does rue strengthen the eyes [while] onions weaken them?”
In the books accompanying his charts and diagrams, Llull sometimes offers full arguments and commentaries on such questions, sometimes outlines the combinatorial processes by which the questions could be addressed using his wheels, and sometimes simply demonstrates diagrammatically that such sophisticated questioning can be generated by means of the Ars Magna. At no point does Llull imply that his machine can produce “truth” independently from its human user, no matter how scientific his alphabetic abstractions appear. Instead, he himself tells us that the system employs “an alphabet in this art so that it can be used to make figures as well as to mix principles and rules for the purpose of investigating the truth.” That is, the mechanism enables interpretation through visualization, by making the core elements it operates on and the rules by which it plays explicit. The flat generation of combinations is not the point of his Great Art: that is not hard to do. In addition to the requisite hardware, Llull provides his users with a clearly specified method for analyzing both process and output outside of the generative system—and more importantly, for refining that system iteratively, based on subjective human assessment of its mechanical output. Interpretation is the real activity of the Ars Magna, not the spinning of wheels.
Despite their hermeneutic teleology, Llull’s devices participate closely in two traditions that exhibit a vexed relationship with humanistic interpretation. Any “step-by-step” production of what Künzel terms interpretive “universes” is by nature an algorithmic production, and the mixing of principles and rules on which Llull’s work depends is a nice elaboration of the notion of an ars combinatoria. An appreciation of both of these traditions and the methods that support them is critical to our understanding, not only of Llull and his interpretive devices, but also of the promise of digital tools and environments—that they might augment our methodologies and offer greater latitude to humanities scholarship.
Performance and Interpretation
Fitting Four Elephants in a Volkswagen
Llull is often listed among the first philosophers “compelled to delineate clearly a general method” for deriving conclusions. Frances Yates goes so far as to assert that the “European search for method . . . began with Llull.” We now commonly accept that “logical reasoning is, in a sense, computation” and that it “can be formalized and validated by controllable means,” but Llull’s clear and materially embodied articulation of this concept has been seen as an advance in Western philosophy, constituting the first major formal extension of traditional mnemonics, a “now-forgotten integral part of medieval education: the complex set of elaborated techniques for reminding and structuring things in human memory in a printless age.” Perhaps more important, Llull’s devices also implemented, for the first time in Western Europe, the newly translated rule-based work of the Arabian mathematician al-Khwarizmi, from whose name the word “algorithm” stems.
The relationship between algorithmic operation (as both a concrete and an abstract methodology) and the design and use of interpretive toolsets like the Ars Magna is underappreciated and perhaps easily misconstrued by humanities and arts scholars outside of the tight community involved in building, making accessible, and computationally manipulating the modern digital archive. Algorithms, when thought of as remote, inflexible mathematical structures underlying computer programming and the more deterministic branches of science and engineering, can seem irrelevant or even antithetical to the work of scholarship. Practitioners of the digital humanities face the skepticism of colleagues: by building algorithmic text analysis tools, do we unthinkingly imply that the craft of scholarship can be mechanized? Are we tacitly putting constraint-based process forth as a substitute for contemplation and insight? Or (a far more insidious assumption) are scripts and software, as the quiet servants delivering us the “content” of an archive, simply beneath our notice? In fact, algorithms—like various hermeneutic methods and historical schools of thought accepted by humanities scholars—can be understood as problem solving and (with a slight methodological recasting I will suggest in a discussion of the “ludic algorithm”) as open, participatory, explorative devices.
The algorithm is formally defined as a finite sequence of instructions, rules, or linear steps which, if followed, guarantees that its practitioner—whether a human or machine agent—will reach some particular, predefined goal or establish incontrovertibly that the goal is unreachable. The “guarantee” part of this description is important, as it differentiates algorithms from heuristics, or what are generally called “rules of thumb.” Like algorithms, heuristics can function iteratively to solve a problem and can be responsive to human input. Computer programs that modify themselves in response to their users, such as word processing spell-checkers, are sometimes—despite their algorithmic basis—termed heuristic. The heuristic process, however, is fundamentally one of informal trial and error rather than constrained activity according to a set of predefined rules.
Almost any everyday problem can be solved heuristically or algorithmically. For example: I have lost my car keys. Ordinarily, a harried new mother faced with this situation will proceed by heuristics: “I look in my purse. I look in my purse again. I brave the cluttered diaper bag. I check the front door because I have developed a bad habit of leaving them dangling there. I go to the last place I remember holding them in my hand. I ask my partner to help me find them. I wish the baby could talk.” In formal, graph-based problem solving, heuristics are sometimes used to guide the search for solutions by identifying the most promising branches of a search tree for further exploration, or even by cutting out unpromising branches altogether. The weak point of the heuristic method becomes evident when its user needs to shift gears. I am not finding my keys in the usual places. Should I retrace my steps next? Is it worth considering that I may have locked them inside the car? The basic “problem with heuristics”—in some cases a crippling problem, which could lead to the inadvertent elimination of the entire desired-outcome branch from the search tree—“is how to decide half-way what would be an appropriate next action, i.e. how to design heuristic rules that lead to good solutions instead of bad ones” (Krista Lagus). Tellingly, we often attribute decisions in successful heuristic processes to intuition and those that result in undesirable outcomes to confusion and bad luck.
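The pruning risk can be made concrete with a toy search tree; the tree and its node names are invented for illustration:

```python
# A toy search tree for the lost-keys problem: each node maps to its
# children, and the keys sit at exactly one leaf.
tree = {
    "house": ["purse", "diaper bag", "car"],
    "purse": [],
    "diaper bag": [],
    "car": ["ignition"],
    "ignition": ["car keys"],
    "car keys": [],
}

def search(node, pruned=()):
    """Depth-first search; `pruned` models a heuristic that cuts
    'unpromising' branches from consideration entirely."""
    if node == "car keys":
        return True
    return any(search(child, pruned)
               for child in tree.get(node, []) if child not in pruned)

print(search("house"))                   # True: exhaustive search succeeds
print(search("house", pruned=("car",)))  # False: the heuristic has pruned
                                         # the very branch holding the keys
```

The second call shows the failure mode described above: a badly designed heuristic rule does not merely slow the search but can remove the desired outcome from the space of reachable solutions.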
If the heuristic process fails or seems too unsystematic for comfort, a desperate searcher can always resort to a true algorithm:
- Step 1: Go to a room of the house not yet searched.
- Step 2: Pick up and examine every object in that room. If any object is the car keys, stop: the keys are found.
- Step 3: If any room remains unsearched, return to Step 1.
- Step 4: Conclude that the keys are not in the house.
Eventually, if this little program is executed perfectly, I will either find my keys or determine conclusively that they are not in the house. There’s a kind of predestination or special providence about an algorithm, formally defined. That is to say, I know to expect one of two prescribed outcomes before even undertaking the search process. And—as its strict definition requires—the algorithm is almost wholly generalizable. If I suspect I have left my keys at your house, I can run the process there. If the misplaced object is a watch, or a hat, the algorithm is equally applicable. (Of course, it is not a very efficient algorithm because it requires me, for example, to pick up and examine the house-cat—and to do so every time it saunters into a new room—but we can easily imagine more elegant versions of this basic method.)
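The exhaustive search can be sketched as a short program. The rooms, objects, and function name below are invented for illustration, but the two guaranteed outcomes match the formal definition above:

```python
def find_keys(house):
    """Exhaustive search: guaranteed either to find the keys or to
    establish conclusively that they are not in the house."""
    for room, objects in house.items():
        for obj in objects:           # pick up and examine every object,
            if obj == "car keys":     # house-cat included
                return f"found in the {room}"
    return "not in the house"         # the second prescribed outcome

house = {
    "hallway": ["umbrella", "house-cat"],
    "kitchen": ["kettle", "car keys"],
}
print(find_keys(house))   # found in the kitchen
```

The same generalizability noted above holds here: pass in a different house, or search for a watch or a hat instead of keys, and the procedure still terminates in one of its two prescribed outcomes.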
Some common refinements to the concept of the algorithm are particularly relevant to interpretive or hermeneutic activity, which, by virtue of its realm of application, is generally predicated on ambiguity and flux. Algorithms are expected to be both perfectly precise and entirely implementable. An old bubblegum wrapper joke helps to make this point: how do you fit four elephants into a Volkswagen? The algorithmic answer is that you simply put two in the front seat and two in the back. Although those steps are clearly unambiguous, they are impossible to implement. In contrast is a commonplace algorithm for finishing one’s dissertation:
- Step 1: Write the next paragraph.
- Step 2: Repeat Step 1 until dissertation is complete.
This procedure is clearly implementable—graduate students perform it with great fortitude all the time—but it is far too ambiguous to be a “textbook,” or even a useful, algorithm. How exactly does one write a paragraph? What criteria indicate that the thing is “complete”? What is a “paragraph,” anyway? How does the algorithm know that you are writing a dissertation and not a thesis, or a novel, or a comic book? (How do you know? That is to say, how determinable from the point of view of the algorithm’s designer are the elements in this—in any—interpretive field?) And so the algorithm, originally applied to mathematical operations and associated almost inextricably in the contemporary mind with computer science, emerges as a step-by-step, linear, precise, finite, and generalizable process that produces definitive, anticipated results by constraining the actions of the agent who performs the process.
Almost as quickly as the application of algorithmic methodology to modern mechanical and computational apparatus became a fundamental aspect of design (with Charles Babbage’s 1837 Analytical Engine), algorithms themselves fell under fire as analytical or investigative devices. Babbage’s colleague, Augusta Ada Byron King, Countess of Lovelace—the daughter of Lord Byron who is celebrated as the first computer programmer for her elaborations of the Jacquard loom-like cards on which the engine operated—famously critiqued the algorithm:
The Analytical Engine [and, by extension, the algorithmic method on which it is based] has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
Lovelace’s objection hinges on the reasonable idea that an algorithm can yield nothing more than its designer knew to ask it for in the first place. Algorithms are not fundamentally creative or revelatory. They merely perform predefined transformations and produce requested—and therefore anticipated or even presumed and therefore potentially flawed—results. We could see this quality, by way of example, in a purely mechanical performance of our car-key algorithm. The procedure’s outcome (confirmation or disconfirmation of the presence of car keys) could be in no way unexpected; it is in fact built inextricably into the process. Algorithms are certainly applicable to problem solving, but Lovelace suggests that they only (perversely) solve problems whose answers are projected, which is to say pre-known.
The Lovelace objection and its descendant Turing machine critiques bear a striking resemblance to Martin Gardner’s derisive description of Llull’s Ars Magna as a means built toward inappropriate ends, and for the manipulation of intractable objects. In such a case, any application of the algorithmic process to subjects for which, in Jerome McGann’s formulation, “imagining what you don’t know” is a desirable outcome, seems misguided at best. At worst, the use of algorithmic process in an interpretive or humanistic context could be seen as self-delusion justified through pseudoscientific formalism. (Critiques of “frivolous” combinatorial and deformative text manipulations and dire warnings against AI optimism in our ability to apply computational methods to text analysis participate in this limited acceptance of the uses to which algorithms might be put.)
Algorithms admittedly define and constrain a field of activity, even as they enable certain preordained interactions and solutions. Still, this is not to say that the results of algorithms—and even more, algorithmic methodology as subjective (most likely human) agents could actively and iteratively employ it—cannot paradoxically expand our thinking rather than atomize it, or limit it to presumptive outcomes. The precision a true algorithm requires of its elements and processes assumes a certain determinability and fixity of identity that is difficult if not impossible to maintain in interpretive fields. But to attempt, in data modeling or in performative criticism, an algorithmically enforced specificity is to experience and exploit a productive brand of what William Morris might have called “resistance in the materials” of humanities scholarship. Real challenges and opportunities arise for expanding our understanding of interpretive fields (including, at the most deceptively basic level, graphic and textual book artifacts) in the rigorous and thoughtful application of algorithmic method to our analysis and manipulation of indeterminate objects and ideas.
Lovelace gets at these consequences of algorithmic method in a neglected passage immediately following her well-known “objection.” She explains that the Analytical Engine’s facility in following rules and orders, producing expected results, and “making available what we are already acquainted with” is effected
primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulae of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention.
Here Lovelace takes up, in the context of combinatorial mathematics, that product of algorithmic, diagrammatic, deformative, and mechanical method I will cite under the broad rubric of “aesthetic provocation.”
The Gift of Screws
After-the-fact (after, that is, data-marking or -modeling) applications of aesthetic provocation are the principal manner in which information visualization enters the broader picture of humanities computing. This is in part because the digital humanities have long orbited the double stars of corpus linguistics and database construction and mining. An intense emphasis on the encoding and analysis of primarily textual human artifacts—coupled with institutional and disciplinary devaluation of methodological training and a sore lack of publication venues for image-intensive work—has contrived to make visualization, from the end-user’s perspective, generally a product to be received rather than a process in which to participate. Nonetheless, algorithmically or combinatorially generated aesthetic provocation, generally thought of as information visualization, has both rhetorical and revelatory power.
Visionary computer scientist Alan Turing, in a noted critique of the Lovelace objection, examines these revelations—the tendency of algorithmic mechanisms to provoke or surprise their users—and ultimately offers us a socialized, humanized view of algorithmic methodology. He begins the discussion with an attempt to reframe Lovelace:
A variant of Lady Lovelace’s objection states that a machine can “never do anything really new.” This may be parried for a moment with the saw, “There is nothing new under the sun.” Who can be certain that “original work” that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles?
These “well-known general principles” are perhaps commonly thought of by humanists as the informal, heuristic methods transferred to us over the course of a rich and varied education. (One would generally rather take this stance than that; when writing on this subject, one must avoid that quagmire; etc.) But what if Turing means us to understand our day-to-day practices in “following” these principles as little more than the playing-out of socially acquired algorithmic procedures, the output of which in a human context feels like originality, invention? In other words, might we not follow formal, specific (and wholly ingrained) rules even—or perhaps most of all—when we engage in our most creative and supposedly inventive work? What is it, precisely, that inspires us?
There is no question that algorithmic method as performed by humans or machines can produce unexpected (even if, as Lovelace points out, fundamentally predictable) and illuminative results. The religious traditions of gematria and Kabbalah, the conceptual art of Sol LeWitt, John Cage’s aleatory musical compositions, OuLiPian literary production, and the procedural experiments of Ron Silliman, Jackson Mac Low, and others (for example, Lisa Samuels’s poetic deformations) are primary examples of the inventive application of algorithmic method in the “analog” world. The inspirational power of constraining systems and algorithmic methodology is everywhere evident; it is the reason we have highly articulated poetic forms like the sestina. In a practical, humanities computing context, computational algorithmic processes have been employed to perform revealing and sometimes startling graphical and statistical transformations under the rubric of text analysis. Jerome McGann’s Photoshop deformations of Rossetti paintings in the 1990s participated in this tradition. And digital information artists like Ben Fry work through strict systems of constraint in works that fruitfully blur the boundaries between creative and critical production.
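The sestina mentioned above is itself a small algorithm: each stanza reuses the previous stanza’s six end-words in the fixed order 6-1-5-2-4-3 (the so-called retrogradatio cruciata), and six applications of that rule return the words to their starting order. As a purely illustrative sketch (the function name and word list are my own, not drawn from the chapter):

```python
# Sketch of the sestina's end-word rotation (retrogradatio cruciata):
# each stanza reorders the previous stanza's six end-words as 6-1-5-2-4-3.

def next_stanza(end_words):
    """Given one stanza's six end-words, return the next stanza's order."""
    order = [5, 0, 4, 1, 3, 2]  # zero-based indices for the 6-1-5-2-4-3 rule
    return [end_words[i] for i in order]

stanza = ["one", "two", "three", "four", "five", "six"]
for n in range(6):
    print(n + 1, stanza)
    stanza = next_stanza(stanza)

# After six stanzas the permutation cycles back to the original order --
# the form's closure is a property of the constraint itself.
```

The poet does not choose the order; the constraint does, and (as the chapter argues) that surrender of choice is generative rather than merely restrictive.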
The contributions of cognitive science to the humanities over the past few decades have (for better or worse) participated in what Colin Symes terms a “progressive demystification” of fundamental assumptions, long held in some quarters of the academy, about interpretive and artistic creativity. A Romantic vision of the artist unbound, as liberated in thought (a vision perhaps too easily countered with reference to the empowering constraints that drive even Romantic poetic practice), has given way among cognitive scientists to a growing “emphasis on the importance of a structured imagination.” On this understanding, which adopts a top-down model of cognition built on Marvin Minsky’s notion that mental framing devices both structure and filter our thought processes, creativity functions almost wholly through elaborate systems of constraint. The idea that, as Jon Elster posits, “artists tend to maximize their options through minimizing their choices” may strike some as counterintuitive, but creative work in any number of disciplines bears this theory out, and it remains useful despite more contemporary critique.
Perhaps equally peculiar is the suggestion that Minsky’s framing system, which is structured hierarchically, could foster the subjective, nonhierarchical, out-of-the-box thinking we associate with interpretive and artistic production. According to this model of cognition, information filters progressively through top-level framing structures into lower-level “terminals.” Minsky’s primary interest is in the mechanisms of simple perception, but his concept of cognitive frames is equally applicable to more complex linguistic and creative processes. Uppermost frames in this case constitute a “range of primordial scripts” and “default grammars that control the structures of language.” There are, however, secondary constraining grammars. Margaret Boden terms these mental constraining systems, which structure critical and artistic thought and production within specific genres, forms, or disciplines, “computational spaces.” According to this theory, nonhierarchical cognition is fostered through supporting structures “whose computational spaces or frameworks are derived from particular epistemological and aesthetic domains.” These specialized spaces function both within and beyond the primary framing system that hosts them, generating, for instance, “forms of linguistic organization which transgress and even transcend those governing natural language.”
Poetic composition provides a clear example of the use of meta-grammars both to organize and to provoke subjective response. This distinction between organization and provocation is an important one because cognitive systems of constraint act simultaneously as matrices in which the fundamental units of language are placed, and as generative processes or algorithms. That is to say, a poet perceives the sophisticated metrical and rhythmic constraints of a sestina not simply as structures, but as a performative or procedural imperative. The linguistic patterns such constraints make impossible are as crucial to the composition of a poem as those they privilege and enforce. In this understanding of subjective response to algorithmic imperatives, poetry is shaped by what it cannot be, and poets by what their chosen forms will not let them do.
Some evidence exists that such genre- and form-specific shaping may become a physical or neurological condition of the performer. Cognitive scientist K. I. Foster has identified in the brain, with repeated linguistic use, a restructuring of the neural circuits or “connectionist pathways that excite mutually consistent arrays of language.” Interestingly, these pathways “at the same time inhibit those that are inconsistent with the exigencies of the constraint.” For the poet, the development of self-organizing mental systems results in a greater facility, over time, within his most familiar computational spaces and in the production of his chosen forms. And for this reason, writers exercise their faculties by engaging in rhetorical and metrical exercises and linguistic games, such as acrostics, bouts-rimés, or complex forms like hendecasyllabics. (Gerard Manley Hopkins, who constructed poetic matrices of ever-increasing complexity, maintained in his journals—or perhaps sought to reassure himself—that “freedom is compatible with necessity.” Likewise, Emily Dickinson’s “Attar from the rose” is “not expressed by Suns—alone— / It is the Gift of Screws.”) In fact, scientific investigation of the processes underlying poiesis suggests that artistic freedom may only be embodied—artifactually and physiologically—through the necessities of constraining, algorithmic systems.
Experimental and synthetic work in analyzing literary expertise also tends to support a constraints-based reading of the poetic and interpretive process. Cognitive research by Marlene Scardamalia and Carl Bereiter indicates that the presence of strict constraining systems promotes greater linguistic fluency in writers, by lending “form and direction to the more localized decision-making” involved in word choice within a particular genre or format. In effect, as Jon Elster demonstrates, this concentrates creative energies by economizing on the number of aesthetic and subjective choices available to the artist at any one time. Robert De Beaugrande explains the futility of any attempt at artistic composition unfettered by localized systems of constraint in terms of the “combinatoric explosion” that would occur should the range of choices become “unmanageable.”
Regardless of our acceptance of the theoretical assertions of cognitive science, the dual operation of computational spaces as structured matrices and generative algorithms functioning both within and beyond Minsky’s top-down, framing filters becomes usefully, provocatively evident in our attempts at modeling and encoding the artworks these spaces engender. Poetic conventions generate linguistic artifacts that, despite the regularity their constraining patterns enforce, are essentially nonhierarchical. This fact is attested to by the infelicity of common text markup systems at capturing poetic (as opposed to informational) organization hierarchically. We should also note that constraint does not operate at the same, uniform scale throughout a creative or interpretive procedure, but rather shifts in specificity depending on choices made and exigencies encountered. And all these notions are complicated by a necessarily performative slant to any algorithmic or constraints-based methodology.
The Ludic Algorithm
What may look inaccessibly, mechanistically algorithmic in (for instance) the OuLiPian project might be better understood as a ludic algorithm, which I posit as a constrained, generative design situation, opening itself up—through performance by a subjective, interpretive agent—to participation, dialogue, inquiry, and play within its prescribed and proscriptive “computational spaces.” This work may embed within itself a proposed method, but does not see its ultimate product as simply the output of a specified calculation or chance operation. In fact, the desired outcome of a ludic algorithm is the sheer, performative, and constructive enactment of the hermeneutic circle, the iterative “designerly” process we go through in triumphing over interpretive or creative problems we pose ourselves. In undertaking such activity, we are more than Jacques Bens’s “rats qui ont à construire le labyrinthe dont ils se proposent de sortir” (rats who must build the labyrinth from which they propose to escape).
Turing touches on this brand of dialogue in his contemplation of the relationship between a machine (the very embodiment of algorithmic process) and its fallible, creative human interlocutor:
A better variant of the [Lovelace] objection says that a machine can never “take us by surprise.” This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. Perhaps I say to myself, “I suppose the Voltage here ought to be the same as there: anyway let’s assume it is.” Naturally I am often wrong, and the result is a surprise for me for by the time the experiment is done these assumptions have been forgotten. These admissions lay me open to lectures on the subject of my vicious ways, but do not throw any doubt on my credibility when I testify to the surprises I experience.
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.
If its performative and cooperative components are not appreciated, Turing’s notion of algorithmic surprise could lead to justification of a grossly limited vision of the interpretive activity possible in digital environments, an idea of algorithm that restricts its application to after-the-fact “aesthetic provocation.” In fact, the real “surprise” involved here is less a matter of the algorithm working to its inevitable result on a set of data (as in a conventional information visualization) than of what that action, under observation, reveals about human thought processes. Turing is not a passive recipient of algorithmic output, but rather a predictive, constructive participant in its fashioning and reception. He makes assumptions, holds expectations, and awaits algorithmic response as just another part of a feedback loop. He is, in this, a reader of algorithms and their output, just as we are all readers of the machine of the book. Still, despite the cumulative (socializing and humanizing) effect of Turing’s assessment, as Ramsay reminds us, “to speak of an algorithm is usually to speak of unerring processes and irrefragable answers”—not of the participatory and iterative work of humanities interpretation.
Turing’s vision of the imperfect, risk-taking, intuitive human in conversation with a precise, calculating, fundamentally surprising machine partner is now familiar to us not only from science fiction and technological speculation but from our daily lives. We experience this brand of surprise perhaps most often as frustration in our interaction with algorithmic mechanisms (like telephone voice-response systems and the purgatory of the Department of Motor Vehicles)—interaction that can make us feel more like passive victims than active participants. We must realize, however, that Turing is documenting a fresh brand of dialectic, and by casting their facility in the “mere working out of consequences from data and general principles” as an anthropomorphized virtue machines can model for and perhaps teach us, he effectively rehabilitates computer-mediated algorithmic method as a creative and critical mode of performance. Recognition of the value of “working out . . . consequences” is as tangible a benefit, and perhaps as great a “surprise,” as the mechanically generated results of any imaginable algorithm. Performance (including human performance of algorithmic action) is valued here over passive reception. Turing’s surprises are provocations to further action, not those unpragmatic, theory-ridden “answers to enigmas in which we can rest” decried by William James. That is, we are sure from his description and subsequent proposals (indeed from the whole character of his project) that Turing means to take these dialogues further.
My own desire for an enhancement of the typical aesthetic provocation paradigm hinges—like Turing’s observation and like OuLiPian practice generally—on the methodological uses of algorithmic constraint and calls for a new, more ludic and performative application of the notion of “aesthetic provocation.” The problem with a visualization (or any other last-step provocation to interpretation) generated algorithmically from previously encoded data is that pre-encoded data is pre-interpreted data. And programmed algorithms that are flatly, “automagically” applied to a data set, not opening themselves up to examination and modification by a user, filter the object of interpretation even further. The user of such a system is not properly figured as a user at all, but rather becomes an audience to statements being made by the designers of the system’s content model and visualization or other representational algorithms.
While these statements can constitute—in all fairness—remarkable critical moves on their own part, the culminant effect of an unbalanced use of this approach is to reinforce a mistaken notion that digitization (and the concomitant application of algorithmic process of any sort) is a pre-critical activity, the work of a service industry providing so-called content to scholars. As an interpreter of algorithmic statements, a scholar (the end-user) is of course enfranchised to respond critically or creatively in conventional ways: by writing, speaking, teaching, or even by answering visualizations in kind, responding with new images. All of these responses, however, typically take place outside the system that provokes them, and to date (despite the early promise of projects like NINES and the Ivanhoe Game), few scholarly systems have created meaningful opportunities for critical engagement on the part of users. Sadly, the scholar’s interpretive act plays a distant second to the primary interpretation or algorithmic performance encoded by the creators of most allegedly “interactive” digital environments.
A more fruitful interest in algorithms and algorithmic processes—as first embodied in Llull’s combinatoric wheels—lies in their design and our subjective experience in using them, rather than in their (oddly, at once) objective and Delphic output. A suggestion that digital humanists move beyond the conventional application of “aesthetic provocation” is by no means a denigration of the measured use of traditional information visualization—of the algorithmic “product.” My own work, however, is much more invested in digitally or mechanically assisted algorithmic methodology as an interpretive strategy. How are such provocative statements as those made by Fry’s Valence produced? Can we insinuate ourselves (our subjective responses, interpretations, participatory acts) more deeply into their production? We may find that the greater understanding of algorithmic process we gain in dialogue and co-creation with our Turing machines leads to a deeper appreciation of the self-replicant, recombinant documentary world in which humanities scholars live and move and have their being. For even the most pedestrian algorithmic construct opens itself up as an interpretive field in productive ways. Our simple car-key algorithm, for example, could easily, in performance, become a synthetic, interpretive, and creative ludic exercise—a game.
Even at its most basic level—setting aside the intimate manipulations of a designer or programmer—algorithmic performance by subjective agents is revelatory. Imagine actually going through the prescribed physical process of picking up every item in your house, individually, and examining it for car-key-ness or not-car-key-ness. You might well find your keys by the end of the algorithm—but, by that time, the “success” of the operation would certainly seem beside the point. Undertaking this structured, constraints-based activity as a thinking human being, either practically or imaginatively, means more than performing it mechanically with one end in sight (confirmation or disconfirmation of the presence of car keys). Instead, you would be prompted continually to interpret and reinterpret your environment, your goal, your scope of activity, and your very actions, simply because a constraining system was compelling you to think algorithmically. You would, in performance, act on and reinterpret the objects of your rule set and the rule set alike.
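The car-key procedure can be written down in a few lines, and the brevity is instructive: everything a machine can capture fits in the code, while everything the chapter cares about hides inside the predicate, which a human performer interprets and revises mid-search. (A minimal sketch; the function names and household inventory are my own illustrative choices, not the chapter’s.)

```python
# A minimal sketch of the exhaustive car-key search described above.
# The predicate is where interpretation hides: deciding what counts as
# "car-key-ness" is the human performer's job, and may change mid-search.

def exhaustive_search(items, predicate):
    """Examine every item in turn; return all that satisfy the predicate."""
    found = []
    for item in items:
        if predicate(item):  # each test is also a chance to rethink the rule
            found.append(item)
    return found

household = ["umbrella", "car key", "novel", "spare car key", "teacup"]
keys = exhaustive_search(household, lambda item: "car key" in item)
print(keys)  # → ['car key', 'spare car key']
```

Note that the code must terminate with a verdict; the human performance of the same rule set, as the paragraph above argues, need not, and its value lies largely in what happens along the way.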
Repositioning closed, mechanical, or computational operations as participatory, ludic algorithms requires acknowledgment of a primary definition, derived from the studies of the game theorist Martin Shubik, a figure sadly neglected in literary or new media game studies. He concludes a powerful survey of “the scope of gaming” with the simple statement that “all games call for an explicit consideration of the role of the rules.” Shubik means us to understand this “consideration” not only as adherence by players to a set of constraints, but also as appreciation of the impact of rules on the whole scope of play. The rule set or constraining algorithm in any ludic system becomes another player in the process and, as expert gamers often testify, can seem to open itself to interpretation and subjective response—in some cases, to real, iterative (which in this case is to say, turn-based) modification. In our “consideration of the role of the rules” we must follow C. S. Peirce, and understand algorithmic rule sets “in the sense in which we speak of the ‘rules’ of algebra; that is, as a permission under strictly defined conditions.” The permission granted here is not only to perform but also to reimagine and reconfigure.
Llull in Application
“The Farmer and the Cowman Should Be Friends”
Algorithmic and ludic operations, however fundamental to artistic and scholarly activity, remain exotic concepts to most humanities researchers. Ramon Llull, our benchmark designer of the participatory, ludic algorithm, is more generally appreciated by academics in the historical context of ars combinatoria, a practice described by the installation artist Janet Zweig and others as rooted in mysticism and divination and leading up to the aleatory experimentation of the modern conceptual artists, musical composers, and mathematically inspired writers. Ars combinatoria have been called “the software of the baroque,” with an output as rich as Bach’s fugues, at once mechanical and occult.
Anthony Bonner, in tracing the evolution of Llull’s mechanical design from early forms more dependent on prose description, reference tables, and static figures, draws attention to the shift to ars combinatoria proper brought about with the introduction of the inter-rotating wheel:
Initially it appears as a device to compensate for the loss of basic principles that formerly constituted the building blocks of the Art; but soon one sees that it is in fact the replacement of a vast sprawling structure, whose parts are loosely and often only implicitly (or analogically) interrelated, by a far more compact structure, whose parts are tightly and much more explicitly and mechanically interrelated.
Not only does the device, first embodied as the Fourth Figure of the Ars Brevis, serve that work’s aim of making plain the complexities of Llull’s Ars Magna, it also demonstrates that the essence of a “vast sprawling” and analogical structure can be usefully condensed into a set of combinatorial relations—so long as the concretization and precision implied by the new form can be matched by flexibility in an open, interpretive rule set.
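The combinatorial core of that Fourth Figure is easy to state in computational terms. On standard accounts of the Ars Brevis (the letters, and the count of eighty-four chambers, follow those accounts rather than anything asserted in this chapter), three concentric wheels each bear the nine letters B through K, with J omitted, and their rotation yields three-letter “chambers.” A hedged sketch:

```python
# Sketch of the combinatorial core of Llull's Fourth Figure: three
# inter-rotating wheels, each bearing the nine letters B..K (J omitted),
# generate three-letter "chambers." Ignoring order and repetition, the
# nine letters yield 84 distinct triples -- the chambers Llull tabulated.

from itertools import combinations

LETTERS = "BCDEFGHIK"  # Llull's nine principia; J is traditionally omitted

chambers = ["".join(c) for c in combinations(LETTERS, 3)]
print(len(chambers))   # 84
print(chambers[:5])    # ['BCD', 'BCE', 'BCF', 'BCG', 'BCH']
```

The point of the sketch is Bonner’s: a “vast sprawling” analogical structure condenses to a compact, explicitly mechanical set of relations, and the whole interpretive burden then shifts to what the artista does with each chamber.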
Unfortunately, the association of Llull’s Great Art with ars combinatoria implies for some a focus that is either mystical (almost alchemical) or inextricably linked to an allegedly uncritical or precritical artistic value on “pure process and play.” What relevance can such flights of fancy have to serious scholarly issues of interpretation and analysis? We can begin to answer this question by contextualizing Llull’s own design (though it is an answer best embodied in the design and production of new tools rather than simply explicated historically).
Llull’s algorithmic and combinatorial device emerged not from mysticism or playful experimentation, but rather from a crisis in communication and interpretation. The Ars Magna was meant to serve as an aid to hermeneutic thought and cross-cultural understanding in light of seemingly insurmountable (and unassailably rigorous) problems of textual criticism and recension. That they seem playful in use is a mere fringe benefit of the serious interpretive burden Llull meant his spinning wheels to bear.
Llull was born on Majorca, only a few years after the king of Aragon and Catalonia had retaken the island from its Islamic conquerors. In Llull’s time, Majorca was a melting pot: at least one-third of the population was Muslim, there was a strong and influential Jewish minority in the economic and political center, and the rest of the island’s inhabitants were Catholic. Künzel calls the Mediterranean of Llull’s day “a kind of interface for three expanded cultural streams.” Llull recognized many elementary commonalities among the three combative monotheistic religions represented on Majorca, but despite the sharing of basic concepts and notions of divinity, cultural tensions grew and Llull became deeply committed to the cause of resolution and appeasement. We find it therefore “necessary to regard his invention as embedded within a special situation, i.e. embedded in a deep crisis of communication.” Admittedly, Llull saw himself as a Christian missionary and his tools as enabling devices for the conversion of the infidels—not by the sword, as the failed Crusades had attempted, but by logical reasoning facilitated through the innovative combination of familiar, shared ideas.
Earlier attempts at peacefully convincing unbelievers, Llull recognized, had failed because of problems of bibliographical analysis and textual criticism: theologians from the various camps had “based their arguments on sacred texts” (trying to point out errors in the Koran, the Talmud, or the Bible)—a practice that “invariably became bogged down in arguments as to which texts were acceptable to whom and how to interpret them.” A passage from Llull’s Book of the Gentile and the Three Wise Men—written ca. 1275 as a popular companion to the Ars Magna, in which the complex operands of that method are softened through presentation as the flowers and leaves of a tree—demonstrates the author’s consciousness of the text-critical nature of religious problems of his day:
“I am quite satisfied,” said the Gentile to the Jew, “with what you have told me; but please tell me the truth: do Christians and Saracens both believe in the Law you mention?” The Jew replied: “We and the Christians agree on the text of the Law, but we disagree in interpretation and commentaries, where we reach contrary conclusions. Therefore we cannot reach agreement based on authorities and must seek necessary arguments by which we can agree. The Saracens agree with us partly over the text, and partly not; this is why they say we have changed the text of the Law, and we say they use a text contrary to ours.”
The innovation of the Ars Magna was to abstract philosophical concepts in play from their textual condition, by identifying notions common to the documentary sources of all three major religions and offering a combinatorial method for fusing them together and analyzing their relations. Llull’s hope was that Christian arguments inspired by the Ars Magna would be satisfactory to Muslims and Jews, stemming as they did from logical combinations of their own basic beliefs. There is, however, no quality or assumption inherent in the Llullian method to enforce a certain interpretive slant. It is just as easy to use Llull’s wheels to formulate arguments that favor Judaism or Islam. All the interpretive impetus is placed on the artista, the human user of the Ars Magna.
Llull’s method was not only notable for being clearly delineated; it was also self-testing, in the sense that the execution of iterative combinatorial motions was only carried out until contradictions or obvious untruths emerged. These untruths, naturally, would not appear as a parsing error or blue-screen breakdown in any material system at hand (the wheels, the diagrams), but rather in the conceptual system taking shape over the course of interaction with the Ars Magna in the mind of its user. At that point, the wheels themselves (and therefore all the marked primitives and practiced algorithms in play) could be examined and reconfigured. In this way, Llull’s Great Art was both a generative and autopoietic mechanism, through which new posited truths and refined combinatorial and analytic methods could emerge.
Emergence, rather than derivation, is in fact the hallmark of Llullian method. The diagrams generated by Llull’s wheels operate on principles of equivalency, not cause and effect, generating statements “per aequiparantium, or by means of equivalent relations,” in which ideas are not chained causally (the primary method for deriving logical and predictive relations), but are instead traced “back to a common origin.” In the same way, Llull’s idea of an ars combinatoria is not flatly combinatoric, but also fundamentally relational in structure and scope, in the manner of proof-theoretical semantic tableaux. Even better, for Llull’s uses, the inherent value placed on human associations and the interpretive interplay of concepts ensures that Laputian “wisdom” or random nonsense can be rejected. We must, in looking at Llull’s diagrams, appreciate his attitude toward their primary elements, the “constants” represented by an alphabetic notation. In Llull’s estimation, nothing in the world is inactive. Nothing simply is; rather, everything performs whatever its nature dictates. So Llull’s emergent definitions (for example, the wheels may be spun to generate the simple statement “Goodness is great”), which “to some commentators have seemed simply tautological, in fact imply a dynamic reality articulated in a large web of interactions.” Llull’s definitions for alphabetic ciphers are “purely functional,” after the style of “modern mathematicians, who do not say what a thing is, but only what it does.” This dynamism provokes computer scientists like Ton Sales to argue that Llull invented the graph.
It is clear that “concept-structuring or taxonomic” graphical designs—such as tree structures—predate Llull. Llull’s typical graph was not built on a static, taxonomic model, however, but “conceived rather as a present-day’s ‘semantic network’ and intended to be ‘followed,’ i.e. dynamically executed as though it were truly a fact-finding ‘program’ or a ‘decision tree’ as used in AI decision procedures.” Such an image was not a chart or illustration, but instead an “actual net of links that allowed the user to explore in a combinatorial fashion the relations that existed among the currently manipulated concepts.” In this way, Llull’s designs resembled or prefigured modern conceptual graphs and semantic networks, as they “presupposed a dynamic interpretation” in which to know the concepts at hand meant to follow and explore their consequences and associations, to participate actively in the manufacture and representation of knowledge.
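Read this way, a Llullian figure is less a picture than a small program: a net of concept relations meant to be traversed, so that “knowing” a concept means following its links. A toy sketch of such a dynamically executed graph (the concepts and relations below are my own placeholders, not Llull’s, and the breadth-first walk is one conventional way to “follow” a semantic network):

```python
# A toy "dynamic" concept graph in the spirit described above: the graph is
# not a static chart but something to be executed, and a breadth-first walk
# enumerates every concept reachable from a starting point. (Concepts and
# links here are illustrative placeholders, not Llull's own.)

from collections import deque

graph = {
    "goodness":  ["greatness", "eternity"],
    "greatness": ["goodness", "power"],
    "eternity":  ["goodness"],
    "power":     ["wisdom"],
    "wisdom":    [],
}

def explore(start):
    """Return all concepts reachable from `start`, in order of visitation."""
    seen, queue = [start], deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.append(neighbor)
                queue.append(neighbor)
    return seen

print(explore("goodness"))
# → ['goodness', 'greatness', 'eternity', 'power', 'wisdom']
```

The equivalence relations here run both ways (goodness links to greatness and back), which loosely echoes the chapter’s point that Llullian statements proceed per aequiparantium rather than along one-directional causal chains.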
Dark, Satanic Millstones?
Perhaps the finest quality of Llull’s now-neglected system is that it assumes activity at all its levels. It works at once mechanically and graphically, and it offers a method by which its users may respond interpretively, interactively, and iteratively to its combinatoric output. Here, we are not asked to feed data into a closed system (the algorithms of which were perhaps fashioned by others, necessarily for other purposes and materials than our own) and wait passively for a visualization or tabular report. We are instead meant to create, mark, and manipulate a wheel; to record its statements diagrammatically; and to follow and explore those resultant diagrams as a means of formulating, testing, and refining both ideas and rules, or algorithmic and combinatorial systems of interpretive constraint. No satanic mill, Llull’s open-ended mechanical model instead follows William Blake’s imperative: “I must create my own System, or be enslaved by another Man’s.” For no matter how benign and even profitable the typical enslavement to after-the-fact “aesthetic provocation” in humanities computing tools may be, algorithmic instruments that do not work on Llull’s principle can only deliver us “answers” that are either pre-known or inaccessibly random—that is, either derivative from algorithms and content models that express deep-seated, framing preconceptions about our field of study (as in typical, last-stage “aesthetic provocation”), or derivative of deformative and aleatory automations that too often do not open themselves adequately to the participation of a subjective agent during their operation.
Janet Zweig, in her overview of ancient and modern ars combinatoria, asks a fundamental question, relevant to appreciating Ramon Llull and his Great Art in the context of digital scholarship and computer-assisted hermeneutics: “What is the qualitative difference between permutational systems that are intentionally driven and those systems that are manipulated with chance operations?” It is important to understand—as Llull’s critics and the slow forces that have driven him into obscurity did not—that the Ars Magna is not a game of highfalutin, theological Twister: a governing, user-manipulating system of chance operations and random (or worse—insidiously circular) output.
Zweig’s question about the qualitative difference between aleatory and intentionally driven mechanisms implies its own answer: the differences are qualitative, embedded in, and emergent from our familiar world of humanistic interpretation. We are not meant merely to get output from Llull’s wheels. They are designed to generate insight into their own semi-mechanical processes and into our rhetorical and hermeneutic methodologies of use. Like so many (often misunderstood) humanities computing projects, Llull’s wheels assert that interpretation is merely aided by mechanism, not produced mathematically or mechanically. That this assertion is sometimes lost on the general academic community is not simply a failure of the devices scholar-technologists produce (although, as this chapter has sought to suggest, we can do a better job of anticipating and incorporating patently interpretive forms of interaction on the part of our users into the systems we create for them). Instead, it displays our failure to articulate the humanistic and hermeneutic basis of our algorithmic work to a lay audience. Further, it reveals the rampant underappreciation among scholars of the algorithmic nature of an overfamiliar machine on which all our work is predicated: the book.
When I began to examine Ramon Llull, I anticipated closing a description of the Ars Magna with some examples of how computing humanists or digital historians and literary scholars might use his wheels to analyze and reconfigure combinatorially the hidden rules and assumptions that drive our own practice. Instead, I am inclined to argue that the best new use for Llull’s old machines might be as defamiliarizing devices, modeling—for a larger and often skeptical or indifferent academic community—the application of mechanical or algorithmic systems to problems of interpretation with which scholars engage on a day-to-day basis. A dearth of clear and compelling demonstrations of this applicability to the interests of the academy is the central problem facing the digital humanities today. It is the reason our work, like the allegedly “precritical” activity of bibliographers and textual critics before us, remains insular.
Llull tells us that he chose a graphical and mechanical art partly through inspiration (the Ars Magna was revealed in fiery letters on the manipulable and discrete leaves of the lentiscus plants on Majorca’s highest peak)—and partly out of a recognition that the elements of interpretation should be finite in number, explicit in definition and methodological use, and visually memorable. Seen in this (divine?) light, interpretation lends itself easily to algorithm and apparatus. Why should any of us feel fettered? Let us build enabling devices for scholars—digital environments that marry methodological openness and mechanical clarity to the practice of humanities interpretation.
3. See John Unsworth, “ ‘Scholarly Primitives’: What Methods Do Humanities Researchers Have in Common, and How Might Our Tools Reflect This?” paper presented at “Humanities Computing: Formal Methods, Experimental Practice,” King’s College, London, 2000, accessed July 31, 2012, http://bit.ly/p8O0i.
4. Ton Sales, “Llull as Computer Scientist, or Why Llull Was One of Us,” Instituto Brasileiro de Filosofia e Ciência Raimundo Lúlio 2.3, ARTS ’97 Proceedings of the 4th International AMAST Workshop on Real-Time Systems and Concurrent and Distributed Software: Transformation-Based Reactive Systems Development.
5. Some seventy medieval manuscripts of the Ars Brevis alone (a shortened expression of Llull’s tools and methods) survive, and Anthony Bonner records twenty-four Renaissance editions of this popular work. For a succinct reception history, see Bonner’s introductions to “Llull’s Thought” and “Llull’s Influence,” in Selected Works of Ramon Llull (1232–1316), trans. Anthony Bonner (Princeton, N.J.: Princeton University Press, 1985), 577.
11. Werner Künzel, The Birth of the Machine: Raymundus Lullus and His Invention, accessed July 31, 2012, http://www.c3.hu/scca/butterfly/Kunzel/synopsis.html.
14. Anthony Bonner, “What Was Llull Up To?” Instituto Brasileiro de Filosofia e Ciência Raimundo Lúlio, accessed July 31, 2012, http://bit.ly/i4wocZ.
24. Countess of Lovelace, “Sketch of the Analytical Engine Invented by Charles Babbage, Esq. By L. F. MENABREA, of Turin, Officer of the Military Engineers,” trans. Augusta Ada Byron King, Scientific Memoirs 3 (1843): 666–731. See “Note G.”
27. Johanna Drucker and Bethany Nowviskie, “Speculative Computing: Aesthetic Provocations in Humanities Computing,” in A Companion to Digital Humanities, ed. John Unsworth, Ray Siemens, and Susan Schreibman (Oxford: Blackwell, 2004).
29. Algorithmic text analysis tools such as those designed by Stéfan Sinclair in an OuLiPian mode have been aggregated (among less consciously ludic applications) at TAPoR, the Canadian Text Analysis Portal for Research, directed by Geoffrey Rockwell. See TAPoR, accessed July 31, 2012, http://portal.tapor.ca/. Ben Fry’s work at the MIT Media Lab and elsewhere is available, accessed July 31, 2012, at http://benfry.com/. See especially his genomic cartography, “Favoured Traces,” and organic information design projects, all of which have been applied to text analysis (but only in art installation contexts unhappily ignored by textual scholars).
30. See Colin Symes, “Writing by Numbers: Oulipo and the Creativity of Constraints,” Mosaic 32, no. 3 (1999): 87. Interestingly, Florian Cramer points out that Friedrich Schlegel, in the 1790s, defined Romanticism in terms of recursion and formal self-reflexivity—the same terms under which contemporary algorithmic and combinatorial digital art (of which Cramer himself is a Lullian practitioner) takes shape. See Cramer’s Tate Modern talk, “On Literature and Systems Theory,” of April 2001, versions of which remain at the Internet Archive, accessed July 31, 2012, [formerly http://userpage.fu-berlin.de/~cantsin/homepage/ - theory].
31. Jon Elster, “Conventions, Creativity, and Originality,” in Rules and Conventions: Literature, Philosophy, Social Theory, ed. Mette Hjort (Baltimore: Johns Hopkins University Press, 1992). See discussion in chapter 1 of Alan M. MacEachren’s 1994 How Maps Work: Representation, Visualization, and Design. And it is no new notion; see the discussion of constraint in the work of Dante Alighieri in Jerome McGann and Lisa Samuels’s seminal essay, “Deformance and Interpretation,” accessed July 31, 2012, http://www2.iath.virginia.edu/jjm2f/old/deform.html.
35. This idea is closely tied to the biology of autopoiesis as articulated by Francisco Varela and Humberto Maturana. See their Autopoiesis and Cognition: The Realization of the Living (Dordrecht: Reidel, 1980).
39. The difficulties involved in rigorous analysis of this quality of poetic production have been framed as a “problem of overlapping hierarchies” by the humanities and linguistic computing communities. Trace the discussion to Michael Sperberg-McQueen’s comments at the Extreme Markup Conference 2002 (“What Matters?” accessed July 31, 2012, http://www.w3.org/People/cmsmcq/2002/whatmatters.html) and to the debate between Allen Renear and Jerome McGann at the 1999 joint conference of the Association for Computers and the Humanities and Association for Literary and Linguistic Computing (“What is Text?”).
40. On the relation between hermeneutics and design, a developing interest in the architectural community, see especially D. Schön, “Designing as a Reflective Conversation with the Materials of a Design Situation,” Research in Engineering Design 3 (1992): 131–47; Adrian Snodgrass and Richard Coyne, “Is Designing Hermeneutical?” Architectural Theory Review 1, no. 1 (1997): 65–97; Richard Coyne, Designing Information Technology in the Postmodern Age: From Method to Metaphor (Cambridge, Mass.: MIT Press, 1995). See also Nigel Cross’s “Designerly Ways of Knowing,” Design Studies 3, no. 4 (1984): 221–27; and Terry Winograd and Carlos F. Flores, Understanding Computers and Cognition: A New Foundation for Design. Language and Being (Norwood, N.J.: Ablex, 1986), 12: “We also consider design in relation to systematic domains of human activity, where the objects of concern are formal structures and the rules for manipulating them. The challenge posed here for design is not simply to create tools that accurately reflect existing domains, but to provide for the creation of new domains. Design serves simultaneously to bring forth and to transform the objects, relations, and regularities of the world of our concerns.”
43. This is an enhancement I embodied in the design of the Temporal Modelling PlaySpace environment, ca. 2001–2, and described, with Johanna Drucker, in Blackwell’s Companion to Digital Humanities, 2004.
44. See, most recently, Neatline, a National Endowment for the Humanities Start-Up and Library of Congress–funded project for “geospatial and temporal interpretation of archival collections,” undertaken by the Scholars’ Lab at the University of Virginia Library, accessed July 31, 2012, http://neatline.org/.
45. Chris Crawford, author of the first major handbook for computer game design, contends that all great designers must think algorithmically, concentrating on process over fact and on trend over instance. The antithesis of “algorithmic thinking,” he writes, is “instantial thinking,” which always leads to poor interactive designs. The instantial thinker “comes up with great cut scenes,” the passive, movie-like animations that close chapters or levels in many digital action games, “but lousy interactions,” which are the heart of gameplay, and “when he designs an adventure game, [the instantial thinker] loves to cook up strange dilemmas in which to place the player, but the idea of a dilemma-generating algorithm is lost on him.” See Crawford, The Art of Computer Game Design (New York: Macmillan-McGraw-Hill, 1984).
47. See, for instance, Peter Suber’s “Nomic,” and Imaginary Solution #1: Dr. Kremlin’s Disc, a game described in my unpublished dissertation, executed as a hands-on activity at the 2010 “Playing with Technology in History” symposium, accessed July 31, 2012, http://www.playingwithhistory.com/wp-content/uploads/2010/02/nowviskie-game.pdf.
56. Bonner points out that the word “art” was the “usual scholastic translation” for the Greek τεχνη (technē). Llull’s work is best understood as a “technique; it was not a body of doctrine, but a system. Or to put it in contemporary terms, it was a structure.” Selected Works, 62.
59. Anthony Bonner suggests that Llullian alphabetic ciphers are constants rather than variables (“What Was Llull Up To?”). Clearly, the wheels and their primitives open themselves to adjustment by a human user, or artista. I therefore take this assertion to mean that the letters, once placed in the practical matrix of Llull’s wheels and charts, are best understood as having a one-to-one relationship with the objects or ideas they represent, the better to enable the sort of dynamic, performative interaction of an artista with a diagram Llull favored.
66. It is for this reason that I prefer the terms “environment,” “instrument,” and “mechanism” to “tool” when designing mechanical or algorithmic aids to humanities interpretation. An “environment” is by definition an inhabitable space. An “instrument” is played as well as used, and a “mechanism” is a system that can be opened up for analysis and adjustment. “Tool,” on the other hand, implies self-containment and inviolability.
68. A notable exception to this older trend is the work of maverick analytical bibliographer Randall McLeod, who comfortably straddles empirical and interpretive genres in the same way that writers like Susan Howe blend poetic practice and criticism.