Creating Music in a Machine Age: the Relationships between Computer Tools, Composers, and Music Making Machines

Tang-Chun Li
Faculty of Music, McGill University
Montreal, QC H3A 1E3 Canada
litc@music.mcgill.ca

ABSTRACT

This paper explores the relationships among computer tools, composers, and music-making machines by examining their roles during the music-creating process. Two working models, one of human composition and one of machine composition, are proposed. We also examine the functional roles of computer tools in music-making, and the relationships among the tools, the tool builder, and the composer. Finally, music creation is briefly discussed.

1. INTRODUCTION

To compose music today, a composer has many choices among different types of computer tools, particularly if he or she is involved in computer music, electroacoustic music, or pop music. In most cases, these tools are used to create new sounds or to facilitate the compositional process. In some extreme cases, composers let their tools automatically create both the macro- and the micro-structures of the composition. For example, Robert Rowe's Cypher program (Rowe, 1993) can respond to musical input according to the response features and patterns set up by the user; David Cope's EMI (Cope, 1996) can generate music in the style of various composers by itself. In these extreme cases: does the machine take over the act of music-making? If not, who or what is doing the act of composing? What are the relationships among the composer, the tools, and the music created? What is music-creating? We examine these questions in this paper.

2. THE ROLES OF COMPUTER TOOLS IN COMPOSITION

In a machine age like today's, there are many methods and tools for creating interesting works. Usually, composers use these tools for two purposes:

A. Tools for the making of the music content itself:

(1) Tools to work on elementary composition materials and units, e.g., sound synthesis and sampling.
(2) Tools to generate operators and their operation functions, e.g., combinatorial operations for serialism, digital signal processing, and sound spatialization.

(3) Tools to edit scores, e.g., a score editor.

(4) Tools to construct structures and music decision-making routines of a composition, e.g., the SEE (Kunze and Taube, 1996) for Common Music, musical grammars, and high-level composition languages.

(5) Tools to build the generating processes of music-making, e.g., Cope's EMI and Rowe's Cypher.

Tools in categories 4 and 5 are in fact implementations of abstractions of musical knowledge. Barry Truax says it well: "automated, interactive, and process-oriented performance systems are all examples of how procedural knowledge... can be integrated within a computer music system. Each extends or even redefines the compositional process, and each has the potential to create new musical languages." (Truax 1986)

B. Tools to expand the composer's working memory and storage space:

While composing, a composer needs recording devices for the ideas, sounds, or scores generated. Today, many tools are used for recording the processes of music-making, including the sonic materials. There are also tools for the computer representation of generated materials. Some examples include MIDI, sequencers, audio mixers, and notation tools.

3. DEFINING COMPOSITION

Knowing the functionality of compositional tools, we can now ask a fundamental question: what is composition? The concept of composition is the key to answering all the questions posed in Section 1. According to the New Harvard Dictionary of Music, composition is "the activity of creating a musical work; the work thus created." Larry Austin and Thomas Clark (1989) offer a quasi-operational, semi-abstract definition: "[composition] connotes putting music together, integrating the materials with skill, planning, and artful originality to satisfy the requirements of a particular musical genre."
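As a concrete reminder of how mechanical some of the category-A tools of Section 2 can be, the "combinatorial operations for serialism" named in item A(2) reduce to a few lines of code. The sketch below is mine, not taken from any particular composition package; pitch classes are represented as integers mod 12, and the row used in the usage example is purely illustrative.

```python
# Hedged sketch of the combinatorial operations of serialism
# (category A(2)): transposition, inversion, and retrograde of a
# tone row, with pitch classes as integers mod 12.

def transpose(row, n):
    """Transpose a tone row by n semitones (mod 12)."""
    return [(p + n) % 12 for p in row]

def invert(row):
    """Invert a row around its first pitch class."""
    first = row[0]
    return [(first - p) % 12 for p in row]

def retrograde(row):
    """Reverse the order of the row."""
    return list(reversed(row))

if __name__ == "__main__":
    # An illustrative (invented) twelve-tone row.
    row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
    print(transpose(row, 2))
    print(invert(row))
    print(retrograde(row))
```

Because each operation is a bijection on pitch classes, any combination of them applied to a twelve-tone row yields another twelve-tone row, which is exactly what makes such tools usable as blind "operators" in the sense of this section.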
" To make it more operational, we define the act of composing as to externalize the ideas and constructs of the mind, or mental maps, by performing some operations on some type(s) of sonic medium and/or system to implement an instance of realization maps. The mental maps refer to key constructs and/or planning of the architecture of the musical work at various levels. They characterize the music from a high-level view. The medium/system can be the harmonic series, the chromatic scale, the twelve tone series, or a screaking door sound, etc. The realization maps refer to the transformed view of the original constructs and planning through a mapping function of the medium/system. These detailed constructs and planning are the result of the constraints and properties of the medium/system. A composition, then, is an instance of the implementation of the realization maps. An implementation involves selecting operators and -204 - ICMC Proceedings 1999

Page  00000205 Maps composition Fig. 1 Maps mental ideas & constructs I~ 1~1 m: medium. s: system Some operations are ntrtinsic proe&tisot^th.> medium and system and some arsr 66ifid di constraints for a verification process of accepting or between a carbon rejecting the current instance: improviser. In the All music must rely on some medium or system. In the composer is al fact, "[ajll music promotes a world view in an implicit way between the corn since the choice of a particular system or language obliges the cybernetic mirror composer to adopt the vision mediated by it" (Teipi 1995). another tool creat< Furthermore, there must exist some intrinsic properties digital duplication and formal systems within the medium/system that the reacting syster regulate the composer's mental trajectory in music space. other hand, if the Finally, there are some abstract properties associated with only contai un the process of music-making and music itself. These knowledge', then I properties describe the judgment of the "values" of the also the tool build work, few of which are named by Austin and Clark: reflect universal, c integration, artistic, aid originality, all measured by the we view the comp4 observer. So the compositional tools can used to specify unrelated even if tl the operators and their functions, the constraint rules, and 5. WHO OR WHAI construction/representations of the maps. The Fig. 1 shows Now we can com' the working model of composition processes. what is making the 4. RELATIOINSHIPS AMONG TOOLS, TOOL BUILDER relationships amor (Lansky 1990), AND COMPOSER easy to see that oi Today's compositional tools, especially computer software, unrelated to the to embody personal views of what music is or how it should then there are somn be. By using the tool, a composer then interacts with the making the music. musical space of the tool builder. 
There are at least two is making the mus types of relationships among the tools, the tool builder, The situation and the composer. One, is the composer the tools builder? observer's view. 1 Two, how the computer tools interact with the composer: the composer's m is it controlled by the composer or the composer interact Only in the last c with automated systems? What or who dominates the imagery from the process of music-making? The first relationship raises the question of whose ideas are involved. If a tool is built by the composer, then it serves as an aid for composition. On the other hand, when the tools are made by other people, they are instances of the creation of someone else's concept of compositional elements and organizations. The composer creates music by implementing his or her musical ideas on top of the framework offered by the tool designer. Herbert Brun (1969), Paul Lansky, and Berry Truax all share a similar observation. Lansky writes: "Instrument design and construction now become a form of musical composition.... Playing someone else's instruments becomes a form of playing someone else's composition....[Tlhere was probably little distinction in [Harry Partch's] mind between building an instrument and composing the music for it.... [UJsing Csound, MusicS, Cmix, M, Performer, Ovaltune, Vision, Texture. CMU Toolkit, is, to varying extents, I The universal mi adopting the musical vision of the designer. "(Lansky 1990) of knowledge th The second relationship addresses the controlling of types of style music-making process during the interaction: it can be system (1991) is either a leader vs. a follower or two equal partners theory. rea ation constraint1 (m,s, rnapi) constraint2 (ms, map2) constraint3-(m,s, map3) -based improviser and a silicon-based case of an equal partner relationship, if so the creator, then this is an interaction poser's musical mental maps and the of these maps. 
If the system is built by or, then two parties are interacting: the of the tool creator's mental maps and n in the mind of the composer. On the composer is merely a user and the tools iversal (non-builder-specific) musical there is no difference if the composer is er since the tools react indifferently and ommon musical knowledge. In this case, oser and the tool builder as interactively hey are the same person. r IS MAKING THE MUSIC, THEN? e back to our original question: who or Smusic? By examinipg the four possible ig composer, tool builder, and tools, it is ily the case when both the composer is >ol builder and the tools are autonomic, ie possibilities that the machine itself is For the other three cases, the composer ic. This can be explained by the Fig. 2. i should be examined from a third party The third party observer only observes usical imagery in the first three cases. ase can the observer observes the joint composer and the program. Depending usical knowledge here refers to the type at is universally accepted within certain practice. For example, Cope's SPEAC consistent with the Western tonal music ICMC Proceedings 1999 - 205 -

Page  00000206 Case I (=, c) Case 3 (<>, C) Case 2 (=, a) Case 4 (<>, a) Fig. 2 lH ) Observed by listener C: composer's invention P: program c: controlled by composer a: automated composer <>: is the tools builder is not on who is in charge, the observer might hear something entirely different each time. When the program takes control of the processes of music-making, the role of the composer becomes more like an improviser or a game player. Under the circumstances, one would agree that the generated music is mainly the result of the program. Examples of this are the Harmonic Driving and Melody Easel in Tod Macover's Brain Opera (Paradiso, 1998), and Mozart's music dice game. However, can we say it is the machine that is making the music? In one view, making music refers to the abstraction of procedural generating of notes and materials. Whoever or whatever generates notes and materials is the music-maker. Therefore, the machine is the music-maker. In another view, it can be argued that the creator of the machine is the real music maker. The machine is simply a digital duplication of the realization maps in the mind of the creator. 6. MACHINE MUSIC-MAKING IS NOT MACHINECOMPOSING OR -CREATING Another example that poses interesting questions is the Experiments in Musical Intelligence by David Cope (1991, 1996). Cope presented three pieces of music in Bach's style in the AAAI 98 Conference: one by Bach, one by his program, EMI, and one by composer Steve Larson. These three pieces are very similar. Can we say that the EMI machine composed the Bach style piece in this case? Before I answer this question, first I would like to modify my working model of composition. The adaptation of the traditional definition of composition would not work when we deal with machines. As shown in the Fig. 1, the first processing unit is mind. A machine can never have a mind by definition (see any dictionary). If a mind is a must, a machine can never compose music. 
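The music dice game mentioned in Section 5 is the clearest case of procedural generation without a mind: once the measure table is fixed, "making the music" is a purely mechanical lookup driven by dice throws. A hedged sketch follows; the lookup table below is invented for illustration and is not Mozart's actual chart.

```python
import random

# Hypothetical stand-in for a dice-game measure table:
# TABLE[bar][column] gives the index of a precomposed measure.
# Historical charts map the sums 2..12 of two dice for each of
# 16 bars of a minuet; the index values here are invented.
TABLE = [[bar * 11 + s for s in range(11)] for bar in range(16)]

def roll_two_dice(rng):
    """Sum of two six-sided dice: an integer in 2..12."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose_minuet(rng):
    """Pick one precomposed measure per bar by dice throw."""
    piece = []
    for bar in range(16):
        s = roll_two_dice(rng)           # 2..12
        piece.append(TABLE[bar][s - 2])  # shift to 0-based column
    return piece

if __name__ == "__main__":
    rng = random.Random(1999)
    print(compose_minuet(rng))
```

Every run yields a syntactically valid piece, yet all the musical knowledge resides in the precomposed measures and the table, i.e., in the mind of the table's author, which is precisely the distinction this section draws.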
So, does the composition process really need a mind, or a mind-like process? Consider the following situation. Imagine we have a Mozart machine: a machine designed to compose in the style of Mozart through learning. For a human composer, by definition, learning to compose means that a student is given some knowledge (data) of music theory and musical practice, along with many examples, aided by his or her innate capability for auditory scene analysis. By this definition, the composing capability of a machine should come from its general learning faculty, assuming it has been provided with an auditory scene analysis capability. If a machine is given both the training data for its learning process (by exposure to many positive examples) and the expert's advice, as in the human student situation, and if the machine then succeeds in creating some "musical works" approved by the teacher, then we declare that this machine knows how to compose and that the pieces generated by this machine are its compositions.

By the definition above, most algorithmic composition systems use ad hoc knowledge taken directly from the builder; therefore they do not qualify as being able to "compose" music. In a real composition machine, excluding the knowledge of music and auditory scene analysis, the learning scheme should be a general one that does not contain goal-oriented ad hoc knowledge. Fig. 3 shows our modified model for machine composition. As shown in the graph, the system records the history of its state generation and evaluation. Parallel maps are learned and used for various styles.

In the case of the EMI, we know the following as facts (Cope, 1991; Cope, 1996):

(1) The music schema part of the EMI, the ATN grammar, is part of the well-known knowledge of the common practice of Western tonal music. It is (considered to be) fed directly by the teacher.
(2) The signature and texture part of the EMI, derived through pattern matching and statistical processing, can be considered as learned through many positive examples.

(3) The style dictionary is fed by the expert.

(4) For a composer or user, the EMI controls the entirety of the generating process and reacts indifferently to all users, including Cope.

(5) The architecture of Cope's EMI is neither learned nor common musical knowledge. Specifically, the concepts of signature and texture and their usage for this problem are supplied directly by the expert.

(6) The parameter tuning for the pattern recognition and statistical analysis is manually specified by the user (in this case, Cope) (Cope, 1996, p. 90) through experiments and manual verification.

Fact 5 can be interpreted either way, depending on how ad hoc the knowledge supplied by the expert is. However, Fact 6 is truly ad hoc knowledge directly from the user. Therefore, the EMI is not a composing machine: the success of the system depends on the intervention and verification of the user. Still, it can be called a music-making machine from some points of view.

Two other theories attribute the ownership of composing, creating, or other musical activities by intentionality (Searle, 1980; Cross, 1993) or by the causal history of an automatic, plastic generation-evaluation cycle (Elton, 1995). The EMI system exhibits neither feature. Any machine exhibiting either feature would make music by itself; in that case, a composer would serve no role. Could this artificial creativity happen in the near future? Although there are many

interesting viewpoints (Stefik and Smoliar, eds., 1995) in the research community, in my view creativity involves much more complex issues than claiming that a machine is composing. The word creativity itself needs to be specified in detail. Furthermore, creativity such as the P-creativity or H-creativity defined in Boden (1991) requires clarification of its operational domains (who, whom, what, why, and how) within a musical social context and musical consensus. Until these issues are addressed, artificial creativity is only theory on paper.

Fig. 3. The modified model for machine composition: learned parallel maps drive state generation & monitoring; maps = mapping(medium, system); m: medium, s: system; ->: some type of relation. Some operations are intrinsic properties of the medium and system; some are user specified.

7. CONCLUSION

In this paper, we examined music-making by exploring the relationship between tools and composer in today's computer-based composition environment. Models of both human composition and machine composition were proposed. Tools are used either for the making of the music itself or to extend the composer's working space and memory. They are extensions of the builder's musical ideas. Depending on what types of tools are used, the composer interacts with the musical space of the builder to varying degrees. In some cases, the tools dominate the process of generating music and the composer merely serves as a user or game player; in this situation, we claim it is the machine that is making the music. We also draw a distinction between music-making and composing music. Machine composing requires a knowledge database of both music theory and auditory scene analysis, a general learning unit, a set of positive examples, an advising expert, and an evaluation unit. The requirement for machine music-making is more relaxed: a program can use ad hoc strategies to make music, but it does not thereby compose music.
ACKNOWLEDGMENTS

I thank Bruce Pennycook for his insightful comments and for his support in letting me do something outside of my main thesis work.

REFERENCES

Austin, L., and T. Clark. 1989. Learning to Compose. Wm. C. Brown Publishers, Dubuque, Iowa.

Boden, M. A. 1991. The Creative Mind: Myths and Mechanisms. Basic Books, New York.

Brun, H. 1969. Infraudibles. In Music by Computers, H. Von Foerster and J. Beauchamp, eds. New York: John Wiley & Sons.

Cope, D. 1991. Computers and Musical Style. A-R Editions, Madison, Wisconsin.

Cope, D. 1996. Experiments in Musical Intelligence. A-R Editions, Madison, Wisconsin.

Cross, I. 1993. The Chinese music box. Interface, vol. 22 (1993), pp. 165-172.

Elton, M. 1995. Artificial creativity: enculturing computers. Leonardo, vol. 28, no. 3, pp. 207-213.

Kunze, T., and H. Taube. 1996. SEE -- a structured event editor: visualizing compositional data in Common Music. In Proceedings: ICMC 1996.

Lansky, P. 1990. A view from the bus: when machines make music. Perspectives of New Music, vol. 28, Summer 1990, pp. 102-109.

Laske, O. 1992. The humanities as sciences of the artificial. Interface, vol. 21, pp. 239-255.

Paradiso, J. A. 1998. New instruments and gestural sensors for musical interaction and performance. http://physics.www.media.mit.edu/publications/papers/98.3.JNMR_Brain_Opera.pdf.

Rowe, R. 1993. Interactive Music Systems. The MIT Press, Cambridge, Mass.

Searle, J. R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3 (1980), pp. 473-497.

Stefik, M., and S. Smoliar, eds. 1995. "The Creative Mind: Myths and Mechanisms": six reviews and a response. Artificial Intelligence, 79 (1995), pp. 65-182.

Tipei, S. 1995. For an intelligent use of computers in music composition. In Proceedings: ICMC 1995.

Truax, B. 1986. Computer music language design and the composing process. In The Language of Electroacoustic Music, S. Emmerson, ed. Macmillan Press Ltd.

ICMC Proceedings 1999