A Sense of Style
Brad Garton
Music Department -- Dodge Hall, Columbia University
New York, NY 10027 USA
garton@columbia.edu

Matthew Suttor
Music Department -- Dodge Hall, Columbia University
New York, NY 10027 USA
matthew@music.columbia.edu

ABSTRACT

This paper explores a compositional environment developed by the authors for simulating diverse musical styles. It describes the methodology used for linking real-time musical-instrument physical models with algorithms that mimic the performance practices of Western and non-Western musics, as well as the ethical and aesthetic issues that necessarily result from this work. The compositional environment consists of several components: a set of real-time synthesis models implemented in RTcmix (real-time cmix), a Lisp-based kernel in which "performance rules" are encoded, and a variety of high-level interfaces allowing for the creation and investigation of synthetic musical styles. These interfaces range from a basic "style-space" graphical system to an a-life model of musical/cultural evolution, as well as a fundamental paradigm of interaction which has the interesting feature of exploring the pedagogy of learning to "play" a given computer instrument within a simulated musical tradition. All of this work is rooted in an expanded concept of musical representation, one which proceeds from the notion that music is generally best represented through performance rather than through abstract symbolic notation. In this environment, the locus of musical style is found in the actual realization of music instead of within a reductive, generated set of notes (or notes+rules). This captures the essence of musical style associated with many improvisatory traditions, both Western and non-Western. Musical cultures in which notation is an alien intrusion can be rendered directly using this expanded representation scheme.
Modelling of cultural styles raises important questions about the ethical and aesthetic relationships between the synthetic style and the real music that inspired it -- concerns that go beyond the standard copyright/ownership issues involved in direct sampling or quotation of extant music. Under the rubric of computer music, what we do is not really "real". However, the implications of what we do are rapidly becoming very real. What is appropriate appropriation? What compromises must be reached when musical cultures collide? The authors have been in contact with various organizations involved in the promotion of indigenous musics in order to address these complex issues.

1. Introduction

During the past several years, we have been employing the symbolic representation and sound synthesis capabilities of computers to model aspects of musical style. We have used these models directly in our music composition activities, and have also involved ourselves in musical research arising from our
interaction with the models. In this paper, we will discuss how our models work and why we find them musically compelling. We will also touch upon some of the philosophical and aesthetic issues we have encountered while developing our style models.

First, we need to clarify exactly what we mean by a "musical style model". There have been a number of recent efforts aimed at creating models of compositional style. Many of these explorations have been intended to emulate the compositional practice of a particular period in Western Classical music (see [Cope, 1992] for example) or even the creative style of a specific composer [Ebcioglu, 1986]. Others have investigated the performance practice of Western Classical music, the goal being the production by computer of a more "musical" realization of a pre-existing composition [Friberg, et al., 1991; Widmer, 1995]. It is perhaps artificial, however, to make such a clean separation between "compositional practice" and "performance practice" in music. For much improvised music, for example, this distinction collapses into a unitary musical act. A few researchers have investigated the codification of stylistic characteristics in improvised music, but the stylistic identifiers are rooted firmly within a particular improvised performance tradition [Dannenberg, et al., 1997]. Our sense of musical style is aligned more with the features that locate a music within a musical culture; style being construed as a pan-cultural identifier. What makes Irish folk music sound "Irish"? What makes Greek music sound "Greek"? Our definition of musical style thus encompasses the compositional and performance factors that create these categories. One way to visualize where we believe our work resides is to imagine "compositional factors" and "performance factors" as two orthogonal vectors (see figure 1).
Most of the extant research into musical style within the Western Classical tradition lies clustered close to one or the other of these axes. We have found that the characteristics that identify a music as being from a specific cultural tradition are situated in the space between the axes, with more or less contribution from each depending upon how a musical tradition has evolved.

[Figure 1 -- depiction of selected style-model research: Ebcioglu, Cope, Dannenberg et al., Widmer, and Friberg et al. plotted against the orthogonal axes "compositional factors" and "performance factors".]
2. The Modelling Framework

The basic approach we use to produce an imitation of a particular musical style has been described elsewhere [Garton, 1992], but a brief summary in the context of this paper is probably useful. The models work by stochastically invoking a set of Lisp-coded rules designed ultimately to assign values to parameters for sound synthesis algorithms. The rules can be conceptually divided into hierarchical categories (see figure 2), but they actually operate in a rather tangled fashion.

[Figure 2 -- conceptual style-rule hierarchy: harmonic layer, shape layer, riff layer, gestural layer, inflection layer, physical layer.]

At the root of the hierarchy (the physical layer) are rules for checking the physical possibility or impossibility of a given action. For example, when strumming a chord on a multi-stringed instrument, small and slightly randomized time delays must be inserted between the notes sounding on successive strings because of the time required for the plectrum to travel from one string to the next. Timbral variations resulting from different note articulations (up-pick vs. down-pick in certain guitar musics, or percussive/legato breath attacks in various flute performance traditions) can also be considered rules existing at this layer. The next layer codes information about performance inflections appropriate to a given musical style. The manner in which pitch bends occur, the types of vibrato used, grace notes and quick rhythmic figurations are all examples of these inflection rules. The inflections are then grouped and merged with rhythmic and pitch data into small units called gestures. Gestures typically consist of 2-8 notes, with rules at the gestural level controlling their internal unfolding. Finally, rules at the shape and harmonic layers of the hierarchy govern the long-term arrangement of gestures, placing each type at stylistically appropriate points in a musical phrase.
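A minimal sketch of such a physical-layer rule may clarify the idea. The fragment below is in Python rather than the authors' Lisp, and the function name and timing values are illustrative assumptions, not measured plectrum data:

```python
import random

# Hypothetical physical-layer rule: compute onset times for the notes
# of a strummed chord, inserting small, slightly randomized delays
# between successive strings to mimic plectrum travel time.
def strum_onsets(n_strings, base_time, mean_delay=0.012, jitter=0.004):
    """Return a list of onset times (in seconds), one per string."""
    onsets = []
    t = base_time
    for _ in range(n_strings):
        onsets.append(t)
        # travel time to the next string, randomized around the mean
        t += mean_delay + random.uniform(-jitter, jitter)
    return onsets
```

Because the jitter is smaller than the mean delay, the onsets always remain in string order -- the randomness stays within physically plausible bounds, which is exactly the role of rules at this layer.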
When assigning parameter values to a synthesis algorithm, the rules do not function in a hierarchical manner. Instead, rules are accorded precedence in a context-dependent, probabilistic manner. Finding the exact locus of 'compositional decisions' or 'performance decisions' in one of our models is nearly impossible because the rules all function interdependently in making musical choices. Two observations should be made about our models. The first is that the parameter-generating code requires a good digital synthesis algorithm for the production of sound. By 'good' we mean an algorithm
that faithfully mimics the set of timbral and sonic controls presented to a real musician by a real musical instrument. Musical instruments and performance technique evolved simultaneously, with performance technique reflecting the interface of the instrument. The ability to virtually re-create the subtle nuances of a performance style necessitates a virtual instrument exhibiting "real-life" behavior. Towards this end, we have been using Charles Sullivan's extended version of the Karplus-Strong algorithm [Sullivan, 1990] and many of Perry Cook's physical models (see [Cook, 1992] for examples) as our digital instruments. The second observation is to reinforce the assertion that nearly every musical decision in our style-model programs is made probabilistically. At the lowest level of choice, this randomness operates as skewed constraints that capture the directed imprecision of a real human performer. At higher decision levels, this approach allows us to imitate the non-notated and often improvisatory nature of many of the musical traditions we seek to emulate.

3. Model Implementation

We build a style model by hand-coding a set of Lisp rules for musical gestures, inflections, phrasing, etc. to reproduce the salient characteristics we hear in a particular musical tradition. We then adjust the various rule and parameter probabilities until we start hearing what sounds like a more focused performance within that tradition. This stage of the modelling process seems remarkably similar to coaching a neophyte musician to play with a certain technique, or coaxing a studio session professional to produce a certain "sound", only the language used for communication is Lisp rather than a natural human language. We also often discover that the development process leads us into new musical areas -- it is impossible to predict exactly how the output of the model will sound.
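The context-dependent, probabilistic rule precedence described above might be sketched as follows. This is an illustrative Python fragment (the real system is coded in Lisp), and the dictionary-based rule representation is entirely an assumption made for the sketch:

```python
import random

# Illustrative only: pick one rule from a set, with probability
# proportional to each rule's context-dependent weight. Rules are
# assumed here to be dicts carrying a "weight" function of context.
def choose_rule(rules, context):
    """Weighted random selection of a single rule."""
    weights = [rule["weight"](context) for rule in rules]
    total = sum(weights)
    pick = random.uniform(0, total)
    acc = 0.0
    for rule, w in zip(rules, weights):
        acc += w
        if pick < acc:
            return rule
    # pick landed exactly on the upper boundary
    return rules[-1]
```

Because each weight is a function of the current musical context, the same rule set can yield very different precedence orderings from moment to moment -- which is why the locus of "compositional" versus "performance" decisions is so hard to pin down.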
This approach has worked quite well for us in the past, allowing us to produce a number of compositions with an acceptable level of "stylishness". We can also negotiate the area between the "composition" and "performance" axes of figure 1 by adjusting the tightness of the constraints built into our probabilistic system. Recently we have dramatically increased the speed of the model-implementation process by writing a set of Lisp functions which communicate directly with the digital synthesis algorithms coded in RTcmix (a real-time software synthesis/signal-processing language; see [Garton and Topper, 1997] for a description). The real-time interactivity of the Lisp/RTcmix system opened a wide range of new avenues for us to explore, including the creation of new, interactive interfaces into the style-model rulesets. In the next few sections, we will give a brief overview of several recent projects arising from this newfound interactivity. The source code for all of these projects (as well as RTcmix and the GCL-RTcmix interface functions) is available at http://www.music.columbia.edu/~cmc/software.

4. Style Morphing

After constructing a variety of different style models, we imagined the possibility of dynamically combining different styles. Using the interactive capabilities of the RTcmix/Lisp system, we designed several interfaces (written in Java) that present a two-dimensional map to the user. Spatial regions on the map are
Page 00000005 affiliated with the specific style models; a mouse click in one area may cause 'irish' rules to be employed for a synthetic flute performance, while a click in a different region may invoke a "japanese" or a "greek" ruleset. The real interest in the interface occurs when traveling between style regions. By selectively choosing which ruleset to use, it is possible to effect a change from one style of music to another. Initially we used a simplistic morphing technique by choosing one ruleset or another for a given riff or phrase, depending on probabilities associated with distances from discrete style regions on the map. This approach had the effect of alternating rapidly between one style and another -- not the smooth morphing effect we were seeking. We have since been experimenting with locating the style ruleset choice at lower levels in our conceptual hierarchy (figure 2). It seems that most of information we use to identify a cultural performance style exists between the inflection and gestural layers, reinforcing our intuitive sense that the performance of music is at least as powerful as the notes being performed for marking a particular musical style. 5. Style Evolution A problem with our style models is that they are basically static. Even in the morphing system, each independent style model exists as an unchanging and closed entity. Real musical traditions are constantly changing in subtle and occasionally dramatic ways. We have begun to model this dynamism by drawing loosely on techniques borrowed from a-life researchers (see [Levy, 1992] for a good overview of these techniques) and incorporating them into our style-model framework. The idea is to start with a population of virtual performer agents all sharing a rudimentary knowledge of how to play a synthetic instrument. We use a very 'stripped-down' set of style rules to represent this. 
We then probabilistically apply a set of mutation operations designed to extend and modify the basic style ruleset. If one of the agents adopts a modified rule (perhaps an altered inflection, or a pitch or rhythm extension/change to an intrinsic riff, etc.), then all the other agents vote (again probabilistically) to decide whether they think the stylistic change is "cool" or not. A successful poll results in the addition of the new rule to each agent's style model. The "fitness function" guiding the evolution of the performer population is thus determined from within the population itself. This rather crude approximation of the socio-musical aspects of style evolution has yielded quite interesting results, but the biggest difficulty has been achieving a radical shift to a unique performance style. To a large extent, the musical-style evolutionary pathways followed by our populations are easily predicted by the mutation operations we create. Our goal is to build a set of incremental mutation operations that might combine non-linearly to produce a truly new and original musical style.

6. Aesthetic Comments

Leonard Meyer has defined style as a "replication of patterning, whether in human behavior or in the artifacts produced by human behavior, that results from a series of choices made within some set of constraints" [Meyer, 1987]. Our projects do indeed attempt to model style by locating patterning on various levels within the music we are modelling. Our decisions are principally informed by recordings, but also by transcriptions, ethnographic descriptions and players of the musics. Although recordings and transcriptions are consulted, such objects, which are subject to ownership, are not used directly in the modelling process or
form part of the product. Nothing is sampled or directly quoted. But while patterning informs our understanding of the music, we have actively resisted any attempt to neatly "systematize" that patterning into algorithmic models that produce generic products. From Meyer's definition one might imagine that the locus of style is hard to pinpoint. The complex interplay between the physical model, the restrictions of physical performance logistics, and the stochastic hierarchical parameterization of, for example, gesture/inflection produces a synthetic musical performance that is layered, rich and often surprising.

At this stage, a focus of our work has been an exploration of the concept of musical representation. There are many aspects to selecting this line of approach. As mentioned, a fundamental aim of our project is simulating diverse musical styles, both Western and non-Western. It is important to our approach that we tackle the broader phenomenon of style itself and not become entangled in trying to represent in code rulesets that describe a particular musical practice, such as functional harmony. By using general templates we can coax an instrumental physical model into the gray area between "compositional practice" and "performance practice". This leads to another objective: to avoid mapping essentially Western concepts of abstract symbolic notation onto all our simulations without further consideration. One of our currently active projects -- mbira style modelling -- is a good case in point. At present Suttor is collaborating with Martin Scherzinger, a music theorist, composer and mbira player from South Africa who is also a graduate student at Columbia University, on a style-modelling project that endeavors to produce mbira melodies. The mbira is an African instrument played by plucking iron tongues attached side by side to a wooden base. The type of mbira music that we are concerned with is the Mbira dza Vadzimu from Zimbabwe.
The project at this stage attempts to produce melodies that are "culturally correct", but not actually heard before. Modelling mbira music presents many challenges and potential pitfalls. At a glance, transcribed mbira music bears a notable resemblance to Western diatonicism. The scale is heptatonic and the music is usually based on a 12-chord progression. There are even theories suggesting that this music is based on a fundamental progression [Tracey, 1989]. Indeed, part of the mbira project is to see if we can create a ruleset that produces mbira melodies based on a Lisp interpretation of this fundamental-progression theory. However, we should be suspicious of the urge to rush into encapsulating such theoretical propositions -- which essentially revolve around pitch -- in some neat and tidy algorithm and calling it a style model. Although we have found for ourselves that mbira music does indeed seem to be based on a fundamental harmonic progression, the many ways in which this is realized, and the ambiguity built into the system, mean that grounding an understanding on the transcriptions alone would be far too limited. It is also more than just a matter of playing experience informing the programming. To separate the "music", in a Western sense of the word, from the object that makes the sound is to create an ultimately unbreachable rift. The South African composer Kevin Volans has made several pieces based on such transcriptions of mbira music, and he argues that there is no such thing as "translatability" when the "music" from one instrumental tradition is mapped onto another instrument: two Africans playing mbira music (mbira music is usually played in two interlocking parts) and a transcription of such music arranged for two harpsichords represent two different pieces of music [Volans, 1998]. But what of this issue of "translatability" from the real world to the virtual?
The implications of this for computer music are somewhat overwhelming (to say the least), and there is no avoiding this gap between real and virtual worlds for those investigating music modelling. Our paradigm by its very definition is one where various types of translations separate our models from the original acoustic source and its socio-musical setting: by the fact that we are working in the "unreality" of the digital domain, by the way we interpret these musics through the "cultural filter" of our ears and eyes, and by the decisions we make when we translate our interpretation into a computer model. The computer music paradigm provides a unique opportunity for composers, but it also contains hidden snares. We can sample, we can try to imitate other music, we can set about trying to code what we hear, and we can attempt to generalize about the idea of composition itself. However, we agree with Paul Lansky that "the essence of this development lies not so much in our increasing ability to model and invent, but in the ways in which we'll relate to one another in this new domain" [Lansky, 1990].

At the basis of our endeavor is an examination of the desire itself to compose in the style of music outside our cultural experience. There are many well-known examples of Western composers, from Debussy to Bartok and from Lou Harrison to Kevin Volans, incorporating stylistic elements from the music of other cultures. What justification can we give for wanting to involve ourselves in the music of other cultures? What prevents our project from becoming entangled in the sticky issues surrounding cultural appropriation? We may argue that we are not directly quoting other musics but instead are trying to inform our own compositional activities by examining what we find attractive in the music of others. As Feld says: "Music appropriation sings a double line with one voice. It is a melody of admiration, even homage and respect, a fundamental source of connectedness, creativity, and innovation... Yet this voice is harmonized by a countermelody of power, even control and domination, a fundamental source of asymmetry in ownership and commodification of musical works."
[Feld, 1994] We have attempted to address this question: "What folk music does give me is a sense of 'belonging', a feeling of membership in a human endeavor. In an increasingly fragmented and disconnected world, where the threads of tradition and the standard pathways of continuity are being fundamentally eroded, this ability of music to satisfy a nostalgic desire for community is becoming vital to our societal health. When we actively listen to music, we are vicariously participating in the community defined through that music." [Garton, 1995]

BIBLIOGRAPHY

Cook, P. 1992. "Physical Models for Music Synthesis, and a Meta-Controller for Real Time Performance." Proceedings of the 1992 International Computer Music Conference and Festival at Delphi, Greece. University of Thessaloniki, IPSA.

Cope, D. 1992. "Computer Modelling of Musical Intelligence in EMI." Computer Music Journal 16(2): 69-83.

Dannenberg, R., Thom, B., and Watson, D. 1997. "A Machine Learning Approach to Musical Style Recognition." Proceedings of the 1997 International Computer Music Conference. San Francisco: International Computer Music Association.

Ebcioglu, K. 1986. "An Expert System for Harmonizing Four-Part Chorales." Proceedings of the 1986 International Computer Music Conference. San Francisco: International Computer Music Association.

Feld, S., and Keil, C. 1994. Music Grooves. Chicago: University of Chicago Press.
Friberg, A., Fryden, L., Bodin, L., and Sundberg, J. 1991. "Performance Rules for Computer-Controlled Contemporary Keyboard Music." Computer Music Journal 15(2): 49-55.

Garton, B. 1992. "Virtual Performance Modelling." Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association.

Garton, B. 1995. "Computer Modelling of Musical Performance and Style." Proceedings of the 1995 Greek Symposium on Physical Models and Applications in Psychoacoustics. University of Thessaloniki, IPSA.

Garton, B., and Topper, D. 1997. "RTcmix -- Using CMIX in Real Time." Proceedings of the 1997 International Computer Music Conference. San Francisco: International Computer Music Association.

Lansky, P. 1990. "A View from the Bus: When Machines Make Music." Perspectives of New Music 28(2): 105.

Levy, S. 1992. Artificial Life. New York: Random House.

Meyer, L. B. 1987. "Toward a Theory of Style." In The Concept of Style, ed. Berel Lang. Ithaca, NY: Cornell University Press.

Sullivan, C. 1990. "Extending the Karplus-Strong Algorithm to Synthesize Electric Guitar Timbres with Distortion and Feedback." Computer Music Journal 14(3): 26-37.

Tracey, A. 1989. "The System of the Mbira." Proceedings of the 7th Symposium on Ethnomusicology. International Library of African Music.

Volans, K. 1998. Private conversation. New York, 12 June 1998.

Widmer, G. 1995. "Modeling the Rational Basis of Musical Expression." Computer Music Journal 19(2): 76-96.