Author: David J. Staley
Title: Digital Historiography: Subjunctivity
Publication info: Ann Arbor, MI: MPublishing, University of Michigan Library, April 2000
Rights/Permissions: This work is protected by copyright and may be linked to without seeking permission. Permission must be received for subsequent distribution in print or electronically. Please contact [email protected] for more information.
Source: Digital Historiography: Subjunctivity, David J. Staley, vol. 3, no. 1, April 2000
Article Type: Book Review
URL: http://hdl.handle.net/2027/spo.3310410.0003.115
Digital Historiography: Subjunctivity
Freeman Dyson, The Sun, the Genome, and the Internet: Tools of Scientific Revolutions (New York: The New York Public Library and Oxford University Press, 1999).
Niall Ferguson, ed., Virtual History: Alternatives and Counterfactuals (New York: Basic Books, 1999).
Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999).
I define subjunctivity as the intellectual state whereby one can imagine, think reasonably about, and communicate "what is not." Because humans are a symbol-using species, we can fashion virtual worlds out of words. We have developed the means to invent and describe realities that do not exist, yet we can think about them as if they were in existence. I wrote that last phrase in the subjunctive mood, to demonstrate the ways in which we employ subjunctivity all the time. The fact that we have a grammatical mood that describes this realm of the possible—this virtual shadow world—demonstrates the degree to which subjunctivity is common to humans. The cognitive scientist Douglas Hofstadter, who wishes to understand the human mind, finds subjunctives to "represent some of the richest potential sources of insight into how human beings organize and categorize their perceptions of the world." [1] Being human means not only being able to understand what is, but also being able to describe what is not or what is yet to be.
Subjunctive prediction is evident in the wealth of books dealing with the future of computers and electronic technology. The rhetoric used to describe technology typically features references to the future and to the ways in which all the institutions of society—from education to commerce to interpersonal relationships—will be altered by the computer. The verb "will" appears frequently, suggesting that the possibilities imagined by scenario writers are as real, tangible, and observable as any part of "factual reality." As evidence of this, read how confidently Bill Gates believes he understands the "road ahead."
Futurists have been making predictions since the first augurs examined the flights of birds to divine the future. Since the advent of the Industrial Revolution, futurists have been attempting to predict the effects of technological change, and more often than not they have missed the mark completely. For every Jules Verne who correctly imagines journeys to the moon, there are dozens of futurists whose predictions fail to materialize. Futurism is not some empty pastime, however, whimsically peopled by science fiction writers and dreamers. William Sherden notes that thinking about and predicting the future is big business for stock market analysts, economists, demographers, meteorologists, strategic planners, and their clients. Yet despite their sophisticated techniques, Sherden observes, no credible method has yet been developed to predict the future accurately with any regularity in any discipline. [2]
The problem is not a lack of imagination, but perhaps too much imagination. Scenarios and other subjunctive statements often suffer from a failure to limit the imagination. Carefully constructed subjunctives must be imaginative enough to envision possibilities, but also disciplined enough to recognize limits. [3] A useful subjunctive rigorously addresses what is not possible; otherwise its creator may simply dream up whatever his fancy suggests. For a subjunctive to be useful, it must be created via methodologies that free the imagination yet simultaneously restrain it, like a sculptor whose creativity is both enabled and limited by her material. One means of striking a balance between possibility and limit is to seek plausibility rather than prediction.
Scenarios of the future and counterfactuals about the past strike me as two intellectual activities with a great deal in common. Both are examples of subjunctivity, one looking forward in time, the other backward. Creators of both scenarios and counterfactuals are trying to gain insight into events and contexts that have not happened. While futurism and counterfactual history have their own methodologies, they nevertheless share common traits. Unbeknownst to many of us, then, historians have much to contribute to an understanding of such subjunctive questions as "What will be the future effects of increased computer use by our culture?"
Unfortunately, most historians tend to distrust subjunctives, concentrating instead on what "actually happened" in the past. Historians, notes Niall Ferguson, often look upon counterfactuals as a mere parlor game, not as serious historical inquiry. When historians have used counterfactual arguments, they have been either highly imaginative without a sound empirical basis or overly empirical with little imagination. A sound subjunctive in historical thought, it seems, requires imagination and evidence together.
Ferguson contends that the historian's distrust of subjunctive counterfactual arguments is rooted in the very philosophy of history to which most historians subscribe. Whether one believes history is guided along by Providence, Fortuna, the Invisible Hand, class conflict, Progress, Reason, or simple linear cause and effect, determinism runs throughout Western historiography. Several generations of historians have struggled with the apparent contradiction between an overarching plan to the procession of events and a contingent role played by "free will." While historians have certainly championed the notion of free will, if forced to choose between determinism and chance, they tend to distrust contingency and accident as explanations for historical events. Indeed, the very act of writing about the past imposes a narrative order on the chaos of events, further evidence that historians favor order and purposeful pattern. Determinism and linear order are central to our notions of cause and effect. Therefore, Ferguson believes, historians are not predisposed to examine subjunctive alternatives to actual events, since any reasonable alternative would call into question the determinism they so value.
Clearly, Ferguson does not believe events to be so predetermined. He draws insights from twentieth-century science, especially the ideas of uncertainty, chaos theory, and nonlinear science. In fact, Ferguson includes history among those disciplines which study "stochastic behavior," that is, patterned randomness. Ferguson invokes the concept of uncertainty as it pertains to quantum mechanics, which holds that the physicist "can only predict a number of possible outcomes for a particular observation and suggest which is more likely." (73) Because any physical system is exquisitely sensitive to initial conditions, and because of the overwhelming number of variables which might determine the behavior of that system, it is not possible to predict the exact state of that system at any point in the future; one can only imagine a number of reasonable scenarios. If prediction is not possible even for a physicist, how can historians claim to uncover general patterns or laws that govern history, unless one is willing to admit to such patterns only in hindsight?
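Ferguson's appeal to sensitivity to initial conditions is the standard lesson of chaos theory, and a toy computation makes the point concrete. The sketch below is my own illustration, not an example from Ferguson's book: it iterates the logistic map, a textbook chaotic system, from two starting values that differ by only one part in a billion.

```python
# Two runs of the logistic map x -> r * x * (1 - x), a textbook chaotic
# system, started one part in a billion apart.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)  # differs from a by one part in a billion

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```

Though the rule is perfectly deterministic, by the fiftieth step the two trajectories bear no resemblance to one another; this gap between determinism and practical predictability is precisely the one Ferguson exploits.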
Thus, Ferguson announces his Historian's Uncertainty Principle: "that any observation of historical evidence inevitably distorts its significance by the very fact of its selection through the prism of hindsight." (74) It is here that Ferguson's concern for counterfactuals becomes apparent: one should engage in such subjunctive thinking in order to gain a clearer understanding of causation in history. Only when one understands that the procession of events is not governed by law-like determinism and that events occur because of complex processes sensitive to initial conditions can the historian truly understand causation. "The historian," warns Ferguson, "who allows his knowledge [from hindsight] as to which of these outcomes subsequently happened to obliterate the other outcomes people regarded as plausible cannot hope to capture the past 'as it actually was.'" (87) Counterfactuals remind historians of the indeterminate nature of causation in the past, and presumably in the future.
Counterfactual thinking is not a license, however, to dream up any alternative. If it were, the number of possibilities would be so vast as to make any serious inquiry impractical. Clearly, some alternatives are more possible than others, and the historian must seek out limits on those possibilities. One limit would be to consider only those alternatives articulated by contemporaries, and only those preserved as primary sources. Another would be to disregard possibilities based on anachronistic assumptions. In any event, it is important for the historian to make a distinction "between what did happen, what could have happened and what could not have happened." (83) In short, Ferguson argues, the best way to accomplish this is to look for plausibilities rather than mere possibilities.
This method of thinking, it seems to me, raises important questions for anyone wishing to think about the future. If causation in the past is nondeterministic, how can one claim to have divined the single path to the future? If the historian can identify any number of plausible scenarios in the past, how can a futurist see only one "road ahead"? If physicists are unable to predict the behavior of physical systems, how can the futurist ever hope to predict the behavior of human systems, which are intrinsically more complex? To think rigorously about the future, the scenario writer should employ habits of thought similar to those of counterfactual historians.
Freeman Dyson's approach to the future is grounded in a realistic understanding of limitation and plausibility. Dyson acknowledges that accurately predicting the future is not possible, and he concedes that his vision for the future of technology will surely be fraught with errors. However, Dyson does not claim to be offering a prediction but rather a "model of the future." "As a scientist," he observes, "I make a sharp distinction between models and theories." Theories claim to describe the universe as it actually is, whereas "a model is a construction that describes a much simpler universe, including some features of the actual universe and neglecting others." (xiv) In this sense, a model is like a map, with the caveat that the map is not the actual territory. Though a map is only an imperfect replica of the territory, travelers and navigators continue to use maps because they are useful. Theories are useful for describing domains that are well known through observation and empirical testing, whereas models are better suited to domains of inquiry that are less traveled. As empirical evidence becomes available, theories replace models. Thus, a model is a provisional map that helps to organize one's understanding of an unfamiliar terrain.
A study of the future would therefore seem to be an ideal domain for model-building rather than predictive theories. "The history of technology in the twenty-first century," the subject of Dyson's book, "is a prime example of territory beyond the reach of observation." (xv) Thus, Dyson's subjunctive statements are not theoretical predictions, for "the purpose of model-building in the realm of the future is not to predict, but only to draw a rough sketch of the territory into which we might be moving." (xv) Like the counterfactual historian who constructs reasonable plausibilities for what might have been, the futurist can only construct plausible models of what might be.
The Sun, the Genome, and the Internet is a set of three lectures concerning the future of, respectively, solar power, biotechnology, and electronic communications. In the second of these lectures, Dyson attempts to imagine a future scenario in which these three technologies work in concert to promote global "social justice." This lecture is, by Dyson's own admission, a dream of the future, and it perhaps strays from his more rigorous notion of a model of the future. It is, nevertheless, an interesting statement of subjunctive plausibility.
Dyson believes that once the technical problems of cheap solar energy, genetic engineering, and universal access to the Internet are solved, the world will enter a new period resembling a fourth industrial revolution. (This is my term, not his.) The technology of solar energy will be decentralized, spread out into the villages and rural areas of the developing world, while genetic engineering will create a demand for "industrial crop plants": plants engineered not to be consumed as fuel but to manufacture energy themselves. A forest of trees could be engineered to "convert sunlight to liquid fuel directly through their roots to a network of underground pipelines." (69) When combined with solar energy, the countryside would become the location for "cheap, abundant and environmentally benign" sources of energy. The Internet, made available to everyone—rural or urban, rich or poor—would maintain the information flow of this system, aiding in the coordination of the economic activities tied to energy production. Industry would therefore transfer from overcrowded cities back to the countryside, reversing the pattern begun during the first industrial revolution over two hundred years earlier.
This model, this dream of the future, is not without its problems. The repopulation of the countryside might amount to little more than gentrification, suggesting a continued gulf between rich and poor. Converting large areas of the developing world into energy farms might appear little different from the colonial exploitation of those same areas for sugar, rubber, oil, and other raw materials necessary for industrial consumption. In other words, Dyson's vision of "social justice" appears about as "just" as the system already in place. Dyson is aware of some of these limitations; moreover, since he does not present his dream as a prediction, only as a plausible model, it is difficult to dismiss his observations out of hand. Instead we should read them and wrestle with them for what they are: a sketch with which we may wish to organize our actions in the future. Readers of Dyson's book—or of any book which attempts to imagine the future of technology—must understand that his observations are plausibilities to be contemplated.
Ray Kurzweil's observations are not plausibilities, for he is convinced he has predicted the only path into the future. Kurzweil is an inventor and entrepreneur. He has founded several technology firms and has invented an electronic musical instrument, a machine that reads text aloud for the blind, translation programs, and a program that writes poetry. He successfully predicted the victory of computers over humans in chess and the replacement of human musicians by machines (which is a bit like "predicting" that it will be cold in January). Thus, he is a man of action, a visionary who defines possibilities and turns them into realities. He writes with the confidence of an Ayn Rand hero; there are no alternatives or plausibilities in this account, only certainties.
Buoyed by his successes, Kurzweil is certain that in the future the boundary between human and machine will be erased. As computers become increasingly intelligent, and as medical science invents more devices that can replace organic body parts, intelligent machines and cybernetic bodies will eventually become indistinguishable. By the year 2099—Kurzweil is very certain as to this date—consciousness will no longer be exclusively embodied in carbon-based material, but also in electronic equivalents. Information will feed directly into brains via neural implants, eliminating the need for "education" as we currently understand the term. Long before this time, basic life needs such as food will be manufactured and available to all. Eventually, by 2099, "life expectancy is no longer a viable term in relation to intelligent beings." (280) Since intelligence will no longer be embodied in carbon-based form, humans (or at least their electronic descendants) will have cheated death through technology.
This argument sounds like a restatement of the cyberneticist's fantasy of "transcending the body to achieve immortality." [4] It would seem easy, then, to dismiss Kurzweil as unrealistic and overly—even hubristically—imaginative. Yet imagination is central to any subjunctive, so it would be inconsistent to fault him for enthusiasm alone. We can, however, scrutinize the methods he employs to produce his subjunctive.
Kurzweil does not admit to any uncertainty about or limitation of his vision, nor does he propose the plausible alternatives that any meaningful subjunctive requires. His vision rests on a deterministic reading of the past, a linear development of events projected into the future. As a computer scientist, he is influenced by Moore's Law, the observation by Gordon Moore, the cofounder of Intel, that computing power doubles roughly every eighteen months. If one held strictly to Moore's Law, computers would not be able to provide the computing power needed to sustain Kurzweil's vision of the future. While Kurzweil believes that Moore's Law covers far too narrow a domain, he nevertheless wishes to demonstrate its validity by expanding it, by placing it within the widest possible context: the entire history of the universe itself.
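The arithmetic behind such an extrapolation is worth seeing plainly. The sketch below is my own illustration, not a calculation from Kurzweil's book; it simply compounds the popular eighteen-month doubling period forward, in the spirit of the straight-line projection at issue here.

```python
# Naive Moore's Law extrapolation: capability doubles every 18 months
# (the popular formulation), projected forward without limit.

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months

def growth_factor(years):
    """Multiplicative growth after `years` of uninterrupted doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 25, 50, 100):
    print(f"after {years:3d} years: growth factor {growth_factor(years):,.0f}")
```

After a century the factor exceeds 10^20. The output looks precise, but its precision is spurious: every digit depends on the single assumption, hidden in one constant, that the historical trend never bends.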
In Kurzweil's telling, increasing computational power is the single most important evolutionary process in the history of the universe, dating to the Big Bang. His argument goes like this: according to the "law of time and chaos," the interval between important events is determined by the degree of order or chaos in the system. According to the "law of accelerating returns," as order increases, the interval between important events decreases; thus time "speeds up" as order increases. Evolution is a process of increasing order, and in evolution order increases exponentially. Therefore time is accelerating, which means that the "returns"—the "valuable products of the process"—are increasing. Such returns include technologies; thus as order increases, technologies and the value they impart also increase. Since computation is the most important value in any technology, computation is increasing exponentially. Thus, Moore's Law, while correct in form, is far too narrow in conception. Once increasing computation is understood as the driving force of evolution, one need only draw the line into the future to understand computing power in the upcoming century and all the valuable, death-defying products that will inevitably follow.
Kurzweil thus bases his narrative of the future on a deterministic vision of the past. For all the complexities of his argument and for all his imaginative images of the future, his method rests on a rather simple approach to subjunctive thinking: identify a line of events and trace that line into the future. Missing are Ferguson's notion of contingency and indeterminate causation, and Dyson's understanding of the heuristic value of modeling. In Kurzweil's book, there is an imbalance between imagination and limitation.
Kurzweil's argument is based on an appeal to the past. He is striding into territory that historians typically claim as their own but that is often appropriated by augurs like him. Historians are specialists in issues of time, periodization, and causation. We understand that complex variables and sensitivity to initial conditions influence the procession of events. Historians, it seems to me, are just as qualified as scientists, technologists, and inventors to contribute meaningful "models of the future" based on our understanding of the complexities of the past.
Our models could be just as imaginative and as empirically disciplined as any other. Historians correctly claim that we cannot predict the future, for we understand that one cannot simply draw a line from the past into the future, despite the fact that visionaries like Kurzweil do so all the time. Yet historians do have something meaningful to say about the future, as long as we think in terms of a plurality of models, like Dyson, and as long as our methodologies are rooted in counterfactual habits of thought, like Ferguson's. Experts in describing the past "as it actually was," we could devote some of our expertise to imagining the future "as it might be." We could write about the future as we write about the past, as "simulations based on calculation about the relative probability of plausible outcomes in a chaotic world." Our subjunctives could be "plausible narratives of the future," intended as provisional but useful maps of that future.
Notes
1. Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (Vintage Books, 1979): 642.
2. See William A. Sherden, The Fortune Sellers: The Big Business of Buying and Selling Predictions (John Wiley & Sons, 1998).
3. See my "Realistic and Responsible Imagination: Ordering the Past to Envision the Future of Technology," Futures Research Quarterly 14 (Fall 1998): 29-39.
4. N. Katherine Hayles, "Embodied Virtuality: Or How to Put Bodies Back into the Picture," in Mary Anne Moser, ed., Immersed in Technology: Art and Virtual Worlds (MIT Press, 1996): 2.