Author: | David J. Staley |
Title: | Review Essay: Digital Historiography: Algorithms |
Publication info: | Ann Arbor, MI: MPublishing, University of Michigan Library April 2001 |
Rights/Permissions: |
This work is protected by copyright and may be linked to without seeking permission. Permission must be received for subsequent distribution in print or electronically. Please contact [email protected] for more information. |
Source: | Review Essay: Digital Historiography: Algorithms David J. Staley vol. 4, no. 1, April 2001 |
Article Type: | Book Review |
URL: | http://hdl.handle.net/2027/spo.3310410.0004.118 |
Review Essay: Digital Historiography: Algorithms
- David Berlinski, The Advent of the Algorithm: The Idea That Rules the World. (Harcourt, Inc., 2000)
- Keith Devlin, Goodbye, Descartes: The End of Logic and the Search for a New Cosmology of Mind. (John Wiley and Sons, 1997)
- Steven R. Holtzman, Digital Mantras: The Languages of Abstract and Virtual Worlds. (The MIT Press, 1994)
The protagonist in Neal Stephenson's cyberpunk novel Snow Crash develops an interest in Sumerian culture, and descends into a virtual reality library in order to retrieve data on that ancient civilization. He engages in a conversation with a holographic librarian about the Sumerian concept of me, "the 'key words' and 'patterns' that rule the universe." The librarian quotes the noted historian of Sumerian culture Samuel Noah Kramer, who observed that the Sumerians believed that their gods had handed down rules and laws governing behavior and human activity. These rules were not simply related to religion or otherworldly ethical and moral concerns but often dealt with everyday activities such as food preparation, craft work, building and cultivation. Each me was a set of instructions for how to properly carry out each task, a recipe for activity. Upon hearing this information, the protagonist of the novel quips, "Like executing a computer program?" "Yes," the librarian replies. "Apparently, they are like algorithms for carrying out certain activities essential to the society." "The operating system of society," replies the protagonist. "When you first turn on a computer, it is an inert collection of circuits that can't really do anything. To start up the machine, you have to infuse those circuits with a collection of rules that tell it how to function. How to be a computer. It sounds as though these me served as the operating system of the society, organizing an inert collection of people into a functioning system." While the algorithm is a twentieth-century invention, Stephenson's hero suggests that the concept of the algorithm is an ancient one, dating to the beginnings of civilization. Indeed, Stephenson suggests that algorithms—formal rules for manipulating symbols—lie at the very core of human civilization.
Stephenson's character suggests that the concept of the algorithm long predates its actual invention. The idea of an algorithm existed in an imprecise, nebulous form; once the concept was formalized and precisely refined, "the algorithm" could emerge fully formed from this conceptual primordial soup. If one accepts this line of reasoning, one task for the historian of the algorithm might be to trace the process by which the algorithm was so refined from its original source. The invention of the algorithm in the early twentieth century reflects the culmination of a centuries-long process of formalizing "the patterns that rule the universe."
That the idea of the algorithm predates its creation is the premise of David Berlinski's book. "More than sixty years ago," he begins, "mathematical logicians, by defining precisely the concept of an algorithm, gave content to the ancient human idea of an effective calculation."(xi) While such procedures have not always carried the religious significance the Sumerians gave them, humans have long constructed civilizations from legal systems, bureaucratic procedures, and classification schemes. Berlinski also includes in this list "recipes, love letters, Books of Prayer, war manuals, tax tables—all the varied instruments which coordinate the flow of action from moment to moment, and so to spread the net of human consciousness beyond the episodic." While placing algorithms within this larger context, however, Berlinski separates the algorithm from these other types of procedure. The algorithm, whose midwives were mathematical logicians, is a formal and precise type of procedure, superior because of its rigorous formalization.
The roots of the algorithm lie in logic, and specifically in mathematical logic. Berlinski defines logic as "correct reasoning," where "correct" means right and proper and "reasoning" means moving from premise to conclusion. He begins his book with Gottfried von Leibniz. Leibniz and the other mathematicians lionized in this book, such as Giuseppe Peano and Gottlob Frege, were interested in discerning the formal properties of correct reasoning. Their method was to overlook the semantic meaning of statements in order to examine more closely the underlying structure that coordinates the flow of reason in those statements. Leibniz, for example, in addition to co-inventing the calculus, attempted to create a language based on unambiguous symbols organized according to precise rules. Peano and Frege attempted to discern rules and procedures for reducing mathematical operations to a series of discrete steps. Each logician sought to reduce thought to a series of operations that ignored the semantic meaning of the symbols, discerning instead the structures and procedures that act upon those symbols.
In so beginning his narrative with these mathematical logicians, Berlinski treats them as contestants in a larger game: seeking the Holy Grail of the algorithm, even if that was never the stated intellectual purpose of each "contestant." "The idea of the algorithm had been resident in the consciousness of the world's mathematicians at least since the seventeenth century," writes Berlinski, "and now, in the third decade of the twentieth century, an idea lacking precise explication was endowed with four different definitions, rather as if an attractive but odd woman were to receive four different proposals of marriage where previously she had received none." (205) Berlinski's account, as this example illustrates, is oppressively teleological, in that he describes each successive mathematician as moving inexorably, if unsuccessfully, toward the same goal.
The above quote also suggests something about the writing style of this book. Authors writing books on technical subjects for wider audiences tend to follow one of three narrative strategies: they write about the technical material in an inaccessibly technical manner; they water down the technical material in an overly simplistic and condescending manner; or they clearly and carefully explain the technical material to an educated but unspecialized audience. This book falls somewhere between the first two categories. When addressing the technical issues of mathematical logic, Berlinski's prose remains technical, jargon-ridden and unhelpful. These technical passages are then "dressed up" in poorly chosen metaphors—like the one above—or silly literary devices. Berlinski will, for instance, have the mathematicians engage in fictitious conversations with each other, or Berlinski himself will appear as a character in a fictional encounter. When describing Leibniz, Berlinski's tone is so conversational as to trivialize his subject: "Gottfried von Leibniz, formerly Gottfried Leibniz, and his father before that Leibnuetz, the von derived from god-knows-where ..."(1-2) I imagine that Berlinski felt that such clever literary tricks would make the technical material easier for non-technical readers to understand; there is even a brief dialogue between Berlinski and his editor over the tone and word choice he should use in the book. The effect of these distractions is to make the book annoyingly user-unfriendly.
Once we move beyond this rather lengthy "prehistory" of the algorithm, Berlinski finally identifies the four "inventors" of the algorithm: Kurt Gödel, Alonzo Church, Alan Turing and Emil Post. Each was pursuing a different intellectual program, and only Turing and Post were thinking in terms of algorithms as effective procedures for computers. Berlinski might have been better served by beginning with these four figures and then working backwards toward Leibniz, if only to make his account seem less teleological. As written, however, these earlier thinkers appear to have fallen short of the mark, failures to achieve the prize of inventing the algorithm, even if this was never their original intention.
Working backward in this way might also have given the reader a greater sense of Berlinski's real intention, which is to show how the four inventors retrieved the idea of the algorithm from this larger mix of ideas:
It happens and no one knows why. The intimation of an idea floats in the atmosphere for days and years and hours, and then at one time and at one place, the scattered parts of that intimation draw together into a thunderhead and drop their dense load of moisture onto the waiting earth.
The scattered parts of the thunderhead that broke with a clap in the 1930's can in retrospect seem destined to cohere—a concern for symbols and symbolism, the inferential rules of logic, the axioms for arithmetic, the idea of a universal language and so a universal calculating machine, the intuitive concept of effective calculability; these were things in the ambient air, but it was never quite certain whether they would form a coherent whole, or whether they would remain obstinately unproductive, like clouds that promise rain but then dissolve into nacreous wisps. (181)
Berlinski maintains that, like scientists who stand on the shoulders of giants, these four pulled together several ideas that were "floating in the air" of the early twentieth century and so invented the algorithm.
To understand Kurt Gödel's place in this pantheon, one needs a background in mathematical logic and the program of the German mathematician David Hilbert. In the late nineteenth century, Hilbert challenged mathematicians to ground their discipline upon a solid logical foundation. Hilbert wished to subject the procedures by which mathematicians established the truth of their statements to the same logical rigor that mathematicians used to create those statements. In other words, Hilbert wished to use the logic of mathematics in order to test the logical foundations of mathematics itself. (This branch of mathematics is called metamathematics.) Mathematical systems, he argued, would have to be proven consistent, complete and decidable. Hilbert was one of several mathematicians of the late nineteenth and early twentieth centuries who began to view mathematics as a system of logic, not as the science of numbers. For Hilbert and others, mathematics was a formal, abstract game whose object was to manipulate symbols.
Gödel's importance to the history of mathematics was in proving the impossibility of Hilbert's program. In his famous 1931 paper, Gödel demonstrated that arithmetic is incomplete. I will not pretend to have the expertise to summarize the argument here, only to point out that Gödel employed a type of mathematical procedure that used mathematical symbols to describe the properties of other mathematical symbols. His reasoning involved a procedure called "primitive recursion," a special type of function that Berlinski claims is an algorithm, since it acts as a kind of mechanical expression of thought: a procedure for manipulating a finite set of symbols to produce statements. In devising a mathematical procedure that demonstrated the incompleteness of arithmetic, argues Berlinski, Gödel served as midwife to the idea of an algorithm. That is, Gödel's procedure was itself a type of algorithm.
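For readers who want the flavor of primitive recursion without the formalism, a minimal sketch in Python may help. The functions below are my illustration, not Gödel's notation or Berlinski's: arithmetic is built up mechanically, one discrete step at a time, from nothing but a successor rule.

```python
# A sketch of primitive recursion: complex operations built
# mechanically from a successor step, with no appeal to "meaning."

def successor(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m; add(m, n+1) = successor(add(m, n))
    if n == 0:
        return m
    return successor(add(m, n - 1))

def multiply(m, n):
    # multiply(m, 0) = 0; multiply(m, n+1) = add(m, multiply(m, n))
    if n == 0:
        return 0
    return add(m, multiply(m, n - 1))

print(add(2, 3))       # 5
print(multiply(4, 3))  # 12
```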
Like Gödel, Alonzo Church devised a mathematical procedure, in his case one called lambda conversion. Again, I cannot summarize the specifics of this complex mathematical procedure, only note that Church's lambda conversion involves a finite set of symbols (three in total) arranged according to a set of specific rules. As in the case of Gödel's recursion, Church's lambda conversion is a mechanical procedure that acts upon symbols. To Berlinski, what mathematical logicians such as Gödel and Church had accomplished was to reduce "thought" to a series of discrete steps that, when applied to symbols, produced statements.
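Something of the spirit of lambda conversion survives in any programming language with first-class functions. The sketch below is again my own illustration, using Python's lambda rather than Church's notation: numbers are encoded as "Church numerals," pure functions transformed entirely by rule.

```python
# A sketch of Church-style encoding: a number n is the function
# that applies f to x exactly n times. Arithmetic is pure symbol play.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of f.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```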
Turing is widely credited as one of the inventors of the digital computer, or at least of the idea of a digital computer. Unlike Babbage, Turing did not construct an actual calculating machine but rather developed the conceptual apparatus that would allow for the construction of electronic calculating machines. It is equally well known that prior to Turing, a "computer" was a human being: one whose task was to compute. Turing, who would later serve as a codebreaker during the Second World War, was well acquainted with the myriad human "computers" who calculated the numbers necessary to break German encrypted messages. Turing mused about the possibility of mechanizing that human process, that is, of substituting for the human computers a machine that could perform the same task. The result of these musings was the Turing Machine, an imaginary device that would serve as a mechanical computer.
Turing imagined both the hardware and software of such a machine. The "hardware" of the Turing Machine consisted of an infinitely long tape upon which a finite set of symbols would be recorded, to be deciphered by a reading head that could occupy any of a finite set of states. (185-86) To make the Turing Machine work, however, required a set of rules and instructions. Those instructions, the algorithm that tells the hardware how to proceed, were Turing's lasting contribution, argues Berlinski. Turing's machine belongs not only to the history of technology, he contends, but also to intellectual history, for Turing "[provided] a simple, vivid, and altogether compelling mathematical model of the ancient idea of an algorithm." (191) While Gödel and Church were not thinking in terms of effective procedures for computers, Berlinski nevertheless ties Turing's Machine to their abstract mathematical logic, since each was thinking in terms of procedures which act upon symbols.
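The whole arrangement is compact enough to sketch. The simulator below is my own minimal illustration, not Turing's formalism: a real Turing machine's tape is unbounded, and the hypothetical rule table here, which increments a binary number, is only one "software" among infinitely many.

```python
# A minimal Turing machine simulator: a tape of symbols, a reading
# head, and a rule table that is the machine's entire "software."

def run(tape, rules, state="start"):
    tape = dict(enumerate(tape))  # sparse tape; blank cells are "_"
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")
        # Each rule: (state, symbol) -> (symbol to write, move, new state)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table: increment a binary number, head starting at its
# leftmost digit. Scan right to the end, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run("1011", rules))  # 1100
```

Change the rule table and the same "hardware" computes something else entirely; that separation of machine from instructions is precisely the point.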
If Turing thought in terms of "hardware" and "software," although not employing these terms explicitly, then Emil Post thought strictly in terms of "software." Post developed a "machine" similar to Turing's, only Post's computer was completely symbolic. Post's thought experiment employed a human worker rather than a tape and a mechanical reading head. This worker carried out tasks according to a set of instructions. Hardware diminishes in significance in Post's computer. It is pure software, pure instruction, pure algorithm.
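Post's "tag systems" give the flavor of such a purely symbolic computer: a word is rewritten, again and again, by fixed productions. The sketch below is my own illustration; the particular rules are arbitrary, not Post's.

```python
# A sketch of a Post-style tag system: no hardware at all, just a
# word and a rule that repeatedly deletes the front of the word and
# appends a string determined by the symbol that was read.

def tag_system(word, rules, deletion=2, steps=20):
    history = [word]
    for _ in range(steps):
        if len(word) < deletion:
            break  # halt when the word is too short to process
        append = rules[word[0]]          # production keyed on first symbol
        word = word[deletion:] + append  # delete front, append at back
        history.append(word)
    return history

rules = {"a": "bc", "b": "a", "c": "aaa"}
for w in tag_system("aaa", rules, steps=6):
    print(w)
```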
Berlinski classifies computers as pure thought given material form. Computers certainly appear to be like other mechanical devices, built from silicon, plastic and metal. However, "The essence of [these] machines is elsewhere, in a universe in which symbols are driven by symbols according to rules that are themselves expressed in symbols. The place in which these machines reside," concludes Berlinski, "is the human mind." (203)
It seems that the algorithm may in fact represent the triumph of mind over matter. In unlocking the secret of the algorithm, these pioneers offered humanity a glimpse into the secrets of the universe. Berlinski clearly sees thought as superior to matter; he then applies this idea to the universe itself. What is intelligence, he maintains, but the imposition of design upon matter? What is life but the imposition of rules and instructions over matter? DNA—the algorithm of life—is a set of instructions which act upon matter; adenine, thymine, guanine and cytosine are not simply represented by the symbols A, T, G and C; they are real symbols, acted upon by algorithms. To Berlinski, "they function as symbols, the instruments by which a genetic message is conveyed." (288) He hints, but never explicitly states, that the algorithms of life encoded in DNA might be evidence of an intelligence that designs the universe.
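The analogy is easy to make concrete. In the sketch below, a DNA string is treated as a sequence of symbols transformed by a fixed rule table; the codon assignments shown are part of the standard genetic code, but the program itself is my illustration, not Berlinski's.

```python
# A sketch of the DNA analogy: symbols (A, T, G, C) read three at a
# time, with a fixed rule table mapping each codon to an amino acid.
# Only a few of the 64 standard codon rules are included here.

CODON_TABLE = {
    "ATG": "Met",  # start codon
    "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```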
Berlinski's teleological approach treats "the advent of the algorithm" as the end of the story. Armed with the algorithm, humanity's search for the meaning of life ends, and mind trumps matter. If the subtext of Berlinski's book is the triumph of logic, then the subtext of Keith Devlin's well-written book is the limitations of logic. Specifically, Devlin shows that a rigorously formalized set of instructions and rules is insufficient to explain thought and intelligence. Where Berlinski's book begins with Leibniz and concludes with the triumph of the algorithm, Devlin—who, like Berlinski, is trained as a mathematician—begins his book with the ancient Greeks and concludes with the limitations of algorithms. Or more correctly, Devlin's concern is with the limitations of logic as a means to comprehend human thought. Although not a history of the algorithm per se, Goodbye, Descartes does provide a larger and deeper historical context for the "advent of the algorithm."
For Devlin, the history of rule-based formal languages dates to the ancient Greeks, who believed that one could achieve correct reasoning by stripping down language, revealing the underlying structure of language. In so beginning his account with the Greeks, Devlin demonstrates that mathematical logic is an historically recent phenomenon, and one which is not teleologically triumphant. He describes the period 1890-1930 as "the golden age of predicate logic." Where logic was once the study of the formal structures of natural language, logic in this period was transformed into a subfield of mathematics. This "golden age" represents the historical terrain of Berlinski's account; in Devlin's book, this golden age is situated within a much larger historical and theoretical context.
The critical shift in the history of logic from the Greek idea of a science of all human reasoning to a more narrowly defined science of mathematical reasoning began with George Boole, whose contributions are all but absent from Berlinski's account (save two brief passages). Boole's Laws of Thought promised to be a mathematical analysis of the structure of human reasoning, but in Devlin's estimation "in the end what his work led to was a theory of mathematical reasoning. Indeed in the eyes of his successors, Boole's great success was to lock onto certain crucial aspects of formal, mathematical reasoning without becoming embroiled in the messy psychological aspects of human thought processes in general."(84) Boole's method bloomed into a new branch of mathematics called propositional logic, which Devlin believes is the real root of the digital computer. Gödel, Church and Post are nowhere to be found in this account, although Turing does merit a section. Indeed, Devlin suggests that it was Turing—wrestling with a PhD dissertation in mathematical logic—who defined an algorithm as an effective procedure. However, Devlin maintains that Boole is the critical midwife here: "Though he could hardly have known it at the time, when Boole worked out his algebraic theory of logic, he was laying down the basic theory of the modern-day electronic computer. Today's ubiquitous digital computer consists of an assembly of very simple electronic devices to manipulate expressions of propositional logic." (92)
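Devlin's claim is easy to illustrate. In the sketch below (my example, not Devlin's), a one-bit binary adder is assembled from nothing but the propositional connectives AND, XOR and OR; chain enough of these together and you have the arithmetic unit of a digital computer.

```python
# A sketch of Devlin's point: a digital circuit is propositional
# logic in hardware. A one-bit adder built from Boolean connectives.

def half_adder(a, b):
    # sum bit = a XOR b; carry bit = a AND b
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2  # carry out if either stage carried

# Add two bits plus a carry: 1 + 1 + 1 = 11 in binary.
print(full_adder(1, 1, 1))  # (1, 1)
```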
The advent of the algorithm and the digital computer spurred two separate research agendas aimed at closing the "logic circle."(95) By logic circle, Devlin refers to the Möbius-like idea that one could take a device created as a product of the logical study of human thought and program it to think like a human. Although carried out in many locales, these two agendas clustered around MIT in the 1950's. On the one hand, mathematicians and computer scientists sought to build thinking computers—such computers themselves the product of the laws of thought. Devlin pictures this closing of the logic circle this way:
thinking
to
patterns of thinking
to
logic
to
computers
to
computers that think.
At the same time, the MIT linguist Noam Chomsky was similarly trying to close a logic circle, from:
language
to
patterns of language
to
logic
to
mathematical theories of language (95)
Chomsky examined the structures of syntax in language, arguing that humans possess a syntactical deep structure: a finite set of rules that generates the infinite number of possible statements in a language. Although careful not to draw an explicit connection to computer algorithms, Chomsky argued that language itself was the result of a rule-based procedure, not unlike an algorithm. Chomsky was working within the traditions of mathematical logic and linguistics, not the burgeoning AI tradition; therefore, he was not explicitly seeking an "algorithm of human thought." AI and Chomskian linguistics are, instead, parallel movements born from the same source.
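A toy generative grammar makes the idea concrete. The rules below are my own invention, not Chomsky's: a finite rule set that can nonetheless generate an unbounded number of sentences.

```python
# A sketch of a generative grammar: a finite set of rewrite rules
# expanded recursively until only terminal words remain.

import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["librarian"], ["algorithm"], ["machine"]],
    "V":  [["describes"], ["executes"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:  # terminal word: emit it as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the librarian executes the machine"
```

Because NP can contain PP, which in turn contains NP, the grammar is recursive: finitely many rules, infinitely many possible sentences.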
Devlin maintains that, despite the advances both AI and Chomskian linguistics have made, neither effort has been fully successful, for both research agendas failed to appreciate the role of context in human thought and language. Although computers can be instructed to carry out many complex tasks such as playing chess, their range of cognitive actions is severely limited. Devlin notes Terry Winograd's experiments with the SHRDLU program. Given a microworld made up of different colored blocks, a computer could respond to commands such as "Pick up the red block" and "What color is the block on the red brick?" "Scaling up" to more complex worlds, however, typically results in failure, the computer failing to understand even relatively simple commands.
Winograd's microworld is not at all like the "macroworld" in which humans must communicate. Humans get around in this world by sharing common understandings; we know that when we say "Picasso went through his blue period," we do not mean the color of his skin. When we say "Augustus died in 14 AD," we know that he remains dead and that death means the end of life. Such common-sense understanding is a fundamental part of all human communication, but has to be meticulously spelled out for computers. Thus, when trying to scale up a computer microworld, this common-sense understanding has to be written into the algorithms. Unlike computer thinking, then, most human thought is not strictly rule-based.
Similarly, Chomsky's ideas failed to include the role of context in human thought and language. Indeed, Chomsky's entire approach was based on the elimination of context from the study of language. As Devlin observes, "Chomsky called his approach to linguistic investigation Cartesian linguistics in order to emphasize its scientific nature as the study of language in a rational manner, free of any messy context, like culture." (185) As AI researchers have learned, human language is not entirely rule-based. Indeed, in treating language as a formal rule-based system, Chomsky's version of linguistics failed to account for meaning, context, shared cultural knowledge and the complexities generated in conversation. Devlin devotes the rest of his book to the work of linguists interested in semantics, sociolinguistics and, especially, ethnomethodology, which involves the formal study of conversation. Devlin also champions situation theory, a branch of mathematics that attempts to formalize conversations and the situations in which they occur. Devlin believes that this research brings us closer to a formal understanding of human thought and language than does the study of the context-free, rule-based procedure that has long been the dream of logicians.
Devlin advocates a new branch of mathematics he terms "soft mathematics." Soft mathematics does not exist as a recognized subfield yet, although Devlin places the situation theory described above within this new domain. Soft mathematics would apply the tools of formal reasoning to the study of context and meaning in language; in effect, soft mathematics would reintegrate the formal study of language with the messy complexities of context that logic has heretofore ignored. Soft mathematics, in Devlin's estimation, would more closely model the structure of human thinking than would the model of the mind as an algorithm.
Like Devlin, Steven Holtzman shows that other thinkers outside of mathematical logic—and even outside the Western intellectual tradition—have similarly captured the idea of the algorithm, formalizing the concept using other symbols. Holtzman was not specifically writing about the algorithm; his book is an attempt to explore the use of computers as creative tools for artists. He defines the computer as "the ultimate symbol manipulator," and explores how computers might be designed to write music, create visual compositions and understand natural languages. In so exploring these possibilities, Holtzman offers a view of the history of the algorithm that, like Devlin's, is contextually broad.
Holtzman held degrees in Eastern and Western philosophy and a PhD in computer science. His education also included musical composition; indeed, music is the dominant mode of knowledge in this book, followed closely by linguistics and the visual arts. His main argument is that at the beginning of the last century these disciplines began to move in the same epistemological direction: toward increasing structuralism. In linguistics, for example, Ferdinand de Saussure transformed that discipline from one concerned with the historical changes in language to one that examined the structures of language, those relations between signs in a formal system. Noam Chomsky—whose ideas are featured prominently in this book—took the idea of structuralism in new directions. Unlike Saussure, who looked at language as a social system, Chomsky sought to uncover the structures in the human brain that produce language. Interestingly, Holtzman notes that Chomsky's ideas resonate with those of the fourth-century B.C. Indian scholar Panini. Panini formalized the Sanskrit language and, in Holtzman's estimation, had created a generative grammar. "The core of Panini's grammar," observes Holtzman,
is an exhaustive statement of the rules of word formation of Sanskrit. The rules are impressive in their completeness as well as their economy...Panini's grammar is not simply a descriptive grammar but a generative grammar. That is, it is a set of rules that can be used to generate Sanskrit words. (12)
Thus, for Holtzman, "A formal view of language dates back to the ancient Aryans," (13) and not to the ancient Greeks. Perhaps we need to broaden our search for the origins and evolution of the idea of the algorithm outside of the Western tradition.
Holtzman explores structuralism in other disciplines. In music, serialist composers such as Arnold Schoenberg altered the composition process by treating music as a formal system. Formal rules dictated pitch, duration, and scale. Postwar serialists such as Pierre Boulez and Karlheinz Stockhausen composed works that used electronic tools to create new sounds, these sounds themselves constructed from formal, rule-based procedures. Holtzman notes that musical composition has always been formal and rule-based; serialists were only more transparent and mathematical in their procedures. But the point is well taken: as music theory and history demonstrate, there have been many instances of formal systems of thought outside of logic. In fact, the musical score might well be a candidate for the first algorithm: a formal, rule-based system of thought that arranges symbols. To emphasize this point, Holtzman titles one section of this book "The Score as Program." (163)
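The serialists' procedures are algorithmic in the most literal sense. The sketch below applies the standard twelve-tone operations to a row; the row itself is arbitrary, and the code is my illustration, not a passage from Holtzman.

```python
# A sketch of serialist rule-following: the standard twelve-tone
# operations (retrograde, inversion, transposition) applied to a row
# of pitch classes 0-11.

ROW = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

def retrograde(row):
    return row[::-1]

def inversion(row):
    # Mirror each interval around the first pitch, mod 12.
    first = row[0]
    return [(first - p) % 12 for p in row]

def transpose(row, n):
    return [(p + n) % 12 for p in row]

print(retrograde(ROW))
print(inversion(ROW))
print(transpose(ROW, 5))
```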
In the visual arts, Wassily Kandinsky defined points, lines, planes, and colors as the structural elements of visual composition. As with the serialist composers, Kandinsky attempted to formalize painting as a rule-governed procedure which manipulates these elemental symbols. Holtzman chronicles the work of Ray Lauzzana and Lynn Pocock-Williams, who treated Kandinsky's visual formalisms as a type of effective procedure for computers. They created a rule-based procedure that instructed a computer to create Kandinsky-like abstract drawings. Holtzman concludes that the significance of this work lies in the demonstration that "expressions in visual languages can also be generated by a computer given explicitly and formally defined rules in the form of a grammar."(173)
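What such a generative drawing procedure might look like can be sketched in miniature. The rules below are entirely my invention, and far cruder than Lauzzana and Pocock-Williams's actual system: elemental symbols placed by simple formal rules, emitted as an SVG drawing.

```python
# A sketch of rule-based visual composition: elemental symbols
# (point, line, circle) placed by simple formal rules, written as SVG.

import random

def compose(elements=12, size=400, seed=7):
    random.seed(seed)  # a fixed seed makes the "composition" repeatable
    shapes = []
    for _ in range(elements):
        x, y = random.randint(20, size - 20), random.randint(20, size - 20)
        kind = random.choice(["point", "line", "circle"])
        if kind == "point":
            shapes.append(f'<circle cx="{x}" cy="{y}" r="3" fill="black"/>')
        elif kind == "line":
            x2, y2 = random.randint(0, size), random.randint(0, size)
            shapes.append(f'<line x1="{x}" y1="{y}" x2="{x2}" y2="{y2}" stroke="black"/>')
        else:
            r = random.randint(10, 60)
            shapes.append(f'<circle cx="{x}" cy="{y}" r="{r}" fill="none" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            + "".join(shapes) + "</svg>")

open("composition.svg", "w").write(compose())
```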
Holtzman also notes the work of artist Harold Cohen, an abstract painter who has worked with computer composition. In a manner similar to Lauzzana and Pocock-Williams, Cohen devised a computer program called AARON that produced images that art-museum audiences believed were created by a human artist (thus passing the Turing Test?). Holtzman identifies Cohen as a "metaartist"; the computer is the artist, the entity that fashions the visual composition. "The metaartist's contribution," notes Holtzman, "is the software: the instructions, the rules, the grammar." (220) In a previous column, I noted that some programmers of computer chess have argued that since computers are far better at calculating variations, human competitors will soon be no match. The human contribution to chess—or to art or to music—may well be as the programmer, the composer of the algorithm, the rules and formalisms by which the computer performs the activity. Virtuosity in chess, that previous argument went, may no longer be measured by artfully moving chess pieces but in devising the rules by which a computer moves those pieces. The quality of computer-generated art or music, to conclude the analogy, may also lie in the elegance of the human-designed algorithms which instruct the computer artist or musician.
This is clearly a problematic conclusion. Imagine judging the quality of a fine meal not by the taste of the finished product but by the elegance and complexity of the chef's recipe. Instead of eating the meal, we would marvel at the "metameal," the rules and procedures that the chef used to act upon the ingredients. If this scenario seems counterintuitive, however, consider our experience of classical music. While listeners at a concert clearly appreciate the virtuosity of the performers, our appreciation of the music derives in part from the skill of the composer. Mozart is not present today at the performance of his work, yet he lives on because we have the record of his instructions to the musicians. Hence Holtzman's observation above that the musical score is a type of algorithm, a formal system of rules which manipulates symbols to communicate thought.
Therefore, one can read Holtzman's book and easily conclude that logic is not the only formal abstract system of thought. Indeed, one of the subtexts of this account is that formal language is only one system by which humans express ideas. Holtzman clearly appreciates the contributions of the mathematical logicians to the idea of the algorithm. However, this history is only part of a larger story. The narrative Holtzman creates emphasizes that mathematical logic was part of a general movement in many domains of knowledge toward formalization and rule-based procedure. These movements reached a point of convergence in the 1950's. On the one hand:
Leibniz's vision diverged into separate avenues of study. His interest in symbolic logic was developed by George Boole, who in turn influenced Claude Shannon. [Leibniz's] experiments in calculating machines led to Charles Babbage and eventually on to Konrad Zuse and the first electronic calculators. Shannon's work and the first electronic calculators led directly to the invention of the first computers. The Logic Theorist [an early program for solving logical proofs], Turing, and the premise of artificial intelligence represent the full reintegration of Leibniz's work in logic, calculators, and the idea of a mechanized calculus ratiocinator.
At the same time, Chomsky wrote Syntactic Structures and invented generative grammars, establishing a view of language as rule-governed behavior. Boulez composed Structures, a composition represented by a grammar consisting of a set of explicit instructions that could be executed by an automated process. Kandinsky had aimed to discover a universal visual grammar and established abstraction as a part of visual languages. By the 1950's, abstract expressionism was the most significant influence on the visual arts. All of these innovators, in effect, shared a structuralist view, where the definition of the symbols and the rules that govern how they relate and can be manipulated determine meaning within the system. A structuralist view of languages was developing in the study of every form of communication and every form of art. (138)
Holtzman's history of the algorithm, if we may identify it as such, is not teleological and triumphant. His account is, rather, associative and analogical. More broadly, Holtzman's book reminds us that the algorithm, like the Sumerian me, is an ancient concept that has assumed many forms. The algorithm—whether program, effective procedure, recipe or musical score—is not the intellectual property of one domain of knowledge. Rather, the algorithm derives from many disciplines, and has evolved from many sources. If the algorithm can be said to have been "born" in the early twentieth century, it is because of the convergence of ideas from several formal systems of thought.