A Look at Performer/Machine Interaction Using Real-Time Systems

Cort Lippe
University at Buffalo-SUNY, Hiller Computer Music Studios

Abstract

This paper examines the use of the computer as a musical tool. In particular, it explores the relationship between performers and interactive computer music systems involving real-time digital signal processing. The question is posed: as a musical tool, does the computer have a unique functionality?

1 Sound Processing and Time Delay

Disregarding questions of latency based on hardware limitations, most real-time processing and analysis necessitates the storage of sound samples, either for later playback or as analysis data, thereby producing a time delay between the original signal and its transformations. Reverberation, flanging, chorusing, filtering, etc., are obvious examples of signal processing algorithms which make use of delayed copies of a signal. Pitch tracking and FFT analysis-resynthesis make use of a delayed (stored) copy of the signal. Envelope following makes use of a low-pass filter (calculated with past samples), and harmonizer algorithms store small windows of sound. (Ring modulation appears to be one of the few processing techniques that does not necessitate delay.) Samples recorded on the fly for later playback must be stored, as must pre-recorded samples held either on disk or in memory. Thus, almost anything outside the realm of sound synthesis (FM, AM, etc.) that takes place in a live, interactive computer music environment involves some sort of time delay [Jaffe, 1995]. (And performer input used to control pure synthesis introduces a time delay in the control structure of the synthesis.) Does this seemingly inherent delay define a particular relationship between performers and computers?

2 Performer Input/Machine Response

Metaphorically speaking, a performer often leaves audible traces "in the air" during a real-time computer performance as the music moves forward in time. A common observation, and sometimes a complaint, is that real-time computer music has a static performer/machine relationship, one in which the computer always reacts to or transforms material generated by a performer. (The same critics note that the performer/machine relationship often seems less static in the tape/instrument repertory.) Real-time transformations of an instrument, using the instrument's sound as input to the transformational process, cannot easily avoid a kind of "call and response" relationship.

2.1 Transformational Processes

Is this call and response relationship a musical trap? A great deal of music employs some kind of repetition of materials: echo is the simplest form of delay, and call and response is common in many kinds of music. In addition, as a temporal art form, a large body of music is organized around techniques such as imitation, diminution, augmentation, transposition, inversion, and retrograde (albeit usually applied to the parameters of pitch and/or rhythm). This incomplete list of techniques describes transformational processes which are based on repetition and which, in practice, quite often produce audible delayed responses or reactions to some original material [Rowe, 1993]. (Contrary to this, the majority of story-telling, poetry, and theater utilizes vastly different methods of organizing materials. Perhaps, if music were a "language" in any sense like natural languages, techniques like diminution or transposition would not be as commonplace in music.)
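To make the argument of Section 1 concrete, and to show echo as the simplest delay-based transformation, the following sketch (not from the paper; the function names and parameter values are illustrative assumptions) implements a simple echo and an envelope follower. Both can only be computed from stored past samples of the input, so the transformed output necessarily trails the original material.

```python
# Minimal sketch (not from the paper): two delay-dependent processes.
# Function names and parameter values are illustrative assumptions.
import numpy as np

def echo(x, sr, delay_s=0.25, feedback=0.4, mix=0.5):
    """Delay-line echo: each output sample mixes the dry input with a copy
    read from a buffer of past samples, so the response trails the call."""
    d = max(1, int(delay_s * sr))   # delay length in samples
    buf = np.zeros(d)               # circular buffer holding past samples
    y = np.zeros_like(x)
    for n in range(len(x)):
        delayed = buf[n % d]        # value written d samples ago
        y[n] = (1.0 - mix) * x[n] + mix * delayed
        buf[n % d] = x[n] + feedback * delayed
    return y

def envelope_follower(x, coeff=0.995):
    """One-pole low-pass filter on the rectified input: the estimate is a
    weighted history of past samples and therefore always lags the signal."""
    env = np.zeros_like(x)
    e = 0.0
    for n in range(len(x)):
        e = coeff * e + (1.0 - coeff) * abs(x[n])
        env[n] = e
    return env

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr                                    # one second
    tone = np.sin(2 * np.pi * 440.0 * t) * np.exp(-3.0 * t)   # decaying A440
    wet = echo(tone, sr)
    env = envelope_follower(tone)
    # Both outputs depend only on current and past input samples; nothing in
    # either function can anticipate what the performer plays next.
    print(wet.shape, env.shape)
```

The same structure underlies the other processes named above: reverberation, flanging, chorusing, and harmonizing differ mainly in how the stored past samples are read back and mixed with the incoming signal.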
2.2 Micro to Macro

The simple concept of delay can be used to describe music and music-making on a much broader scale. At one end of a continuum, the very idea of resonance implies some sort of delay. Natural resonators, a violin body for instance, delay an instrument's sound. Concert spaces have important natural resonating qualities. (Try listening to music in an anechoic chamber for any length of time.) At the other end of this artificial continuum, the term "theme and variations" could be used to loosely describe a huge body of music, if we consider variations as delayed transformations of some original material.

3 Ensemble Paradigms

Before returning to instrument/computer relationships, a look at ensemble paradigms is useful. Models of music-making include duos, chamber ensembles made up of equal partners, soloist and ensemble partnerships, etc. All of these models can employ strategies which allow for flexibility in performer/performer relationships. Primary and secondary (accompaniment-like) roles are often in evidence, as are hierarchical relationships among performers. These relationships can also be fixed or variable. Performer/performer relationships may or may not include the notion of call and response. One performer (or musical part) may have the role of both generating and transforming material (i.e., a soloist), or calling and responding may be shared between different performers. Reactive activities among performers create temporal relationships based on time delay. Usually, changes in ensemble relationships, or changes in the reactive role (who is "calling" and who is "responding"), are musically significant on a structural and/or expressive level.

3.1 Computer Functionality

The computer appears to be a useful tool for creating new sounds, transforming pre-existing sounds, and controlling algorithmic compositional structures. As a compositional, sound design, or studio and concert tool, its power and flexibility are rather impressive. But its functionality does not appear to be entirely unique since, for the most part, it has only taken over the roles of pre-existing tools [Zicarelli, 1992]. Before computers existed, musicians edited sounds, made algorithmic music, and transformed sounds both electronically and acoustically. This question of functionality would seem to be an important one. Can the computer have a unique function in a real-time concert situation?

3.2 Composer/Performer/Machine Relationships

Aside from a small minority of computer musicians who consider the computer as more than just a means to an end, and who question, for example, the relationship of form to materials in computer music [Di Scipio, 1995], the computer's basic music-making functionality does not appear to be unique. An artistic "paradigm shift" similar to the scientific paradigm shifts described in Thomas Kuhn's well-known book The Structure of Scientific Revolutions does not appear to have taken place in composer/performer/machine relationships. Why does the computer's functionality in a real-time concert situation adhere to composer, performer, instrument, and instrumental accompanist models? And why is the computer reactive in so much real-time computer music [Puckette and Settel, 1993]?

4 Conclusion

Perhaps considering the computer as a tool with clear functionality, based on well-known models, is the best we can do. Perhaps that is the only way we can conceptualize its uses. Basic anthropomorphic roles such as composer, performer, soloist, and accompanist, or musical roles such as instrument, instrumental extension, generator of materials, transformer of materials, or even "computer as bagpipes," may define the limits of a conceptual framework. If we are not capable of creating roles for the computer which go beyond our own image, perhaps we can at least attempt a synthesis of all these anthropomorphic and musical roles. This hybridization would at least offer us a glimpse of other functionalities and other conceptual frameworks, and can only serve to enrich the musical discourse and interactivity of real-time computer music.
In the last few paragraphs of Technics and Human Development, the first volume of The Myth of the Machine, Lewis Mumford writes: "...progressively replacing the recalcitrant and uncertain human components with the specialized mechanisms of precision made of metal, glass, or plastics, designed as no human organism had ever been designed, to perform their specialized functions with unswerving fidelity and accuracy...The machine, 'advanced' thinkers began to hold, not merely served as the ideal model for explaining and eventually controlling all organic activities, but its wholesale fabrication and its continued improvement were what alone could give meaning to human existence...Power, speed, motion, standardization, mass production, quantification, regimentation, precision, uniformity, astronomical regularity, control, above all control - these became the passwords of modern society..."

Acknowledgments

Thanks to Barry Moon for the concept of the "computer as bagpipes."

References

[Di Scipio, 1995] Agostino Di Scipio. Centrality of Techne for an Aesthetic Approach on Electroacoustic Music. Journal of New Music Research 24(4): pp. 369-383, 1995.

[Jaffe, 1995] David A. Jaffe. Ten Criteria for Evaluating Synthesis Techniques. Computer Music Journal 19(1): pp. 76-87, 1995.

[Puckette and Settel, 1993] Miller S. Puckette and Zack J. E. Settel. Nonobvious Roles for Electronics in Performer Enhancement. Proceedings of the International Computer Music Conference, Tokyo, pp. 69-72, 1993.

[Rowe, 1993] Robert Rowe. Interactive Music Systems: Machine Listening and Composing. The MIT Press, Cambridge, 1993.

[Zicarelli, 1992] David D. Zicarelli. Music Technology as a Form of Parasite. Proceedings of the International Computer Music Conference, San Jose, pp. 69-72, 1992.