ImprovisationBuilder: Improvisation as Conversation
William Walker, Kurt Hebel, Salvatore Martirano, Carla Scaletti
CERL Sound Group & School of Music / University of Illinois
252 Engineering Research Laboratory / 103 S. Mathews / Urbana IL 61801-2977 / USA
Telephone: (217) 333-0766 Email: firstname.lastname@example.org

ABSTRACT

To participate in musical improvisations, an interacting system must both generate appropriate musical materials and express those materials appropriately in collaboration with other performers. We have used results from the study of conversation and discourse to identify the major components for a theory of improvisation: listening, composing, and realizing. Our goal is a framework based on a model of musical improvisation as conversation that incorporates signal-level as well as event-level control of sound.

2. BACKGROUND

2.1. Sal-Mar Construction and SAL

The Sal-Mar Construction (Franco 1974, Martirano 1971) was an interdisciplinary project involving Salvatore Martirano, computer science graduate student Sergio Franco, and ILLIAC III designers Rich Borovec and James Divilbiss. It was based on the idea of "zoomable" control: being able to apply the same controls at any level, from the micro-structure of individual timbres to the macro-structure of an entire musical composition. Weighing in at 1500 pounds, the Sal-Mar Construction provided digital control over analog synthesis modules through a unique touch panel consisting of banks of switches assignable to any level of control.

The YahaSALmaMAC orchestra features MIDI synthesizers under the control of the Sound and Logic (SAL) program, which was implemented in LeLisp on the Apple Macintosh by Salvatore Martirano and David Tcheng. Sound and Logic participates in performances by transforming gestures played by the human performers into new gestures.
Using the Macintosh keyboard, the human performer can cause the computer to perform looping or change orchestration, intervening in otherwise automated processes (as with the Sal-Mar Construction). A major impetus for the current project was the desire to include timbral control in an improvisation system: a combination of the best features of SAL and the Sal-Mar Construction.

2.2. Kyma System

Kyma is a sound specification language that does not distinguish between signal-level and event-level processing or between the concepts of orchestra and score; these models are supplanted by arbitrary hierarchical structures constructed by the composer out of uniform Sound objects (Scaletti 1987, 1991). Kyma Sounds are generated in real time by the Capybara, a digital signal multiprocessor. A Macintosh driver for controlling the Capybara enables any program to play a Kyma Sound and to control its parameters in real time.

2.3. Improvisation as Conversation

While computers that converse are still years away, systems that interact in musical performances exist today (see, for example, Rowe 1992). Many have used linguistic techniques in the representation of musical knowledge; the logical next step is to consider musical interaction as being analogous to language interaction, or conversation. Hartmann describes one manifestation of this analogy, "trading fours," in jazz performance (Hartmann 1991): "A four measure phrase leaves the player enough room (say, three to six seconds) to develop one idea, to make one statement; yet there is no mistaking the dialogue within which each statement takes its place, and often the musicians answer each other directly. The resemblance to conversation is uncanny." Conversation and musical improvisation are at once similar and different. For example, they seem to differ in their degree of simultaneity; improvisation entails that the participants "talk" at the same time, while most models of conversation assume strictly alternating speakers.
However, overlap often occurs in real conversations. Much of the simultaneous activity during improvisation is akin to conversational overlap; a pianist accompanying a soloist is like a listener acknowledging and
encouraging a speaker. Conversation seems to lack the strict temporal structure of harmonic rhythm that often supports group jazz improvisation. However, the timing of conversational overlap is crucial; a conversational participant who never (or always) interrupts other speakers will not fully participate. In addition to timing issues, realizing speech materials in a conversation requires control over timbre as well as timing. For example, a sarcastic tone can turn the phrase "Yeah, yeah" from an affirmation into a rejection. The improvisor relies on timbral control and variation as much as rhythmic and melodic development.

By analogy with conversation, an improvising system should have four properties. First, it should listen to the musical environment in which it finds itself. Second, it should generate musical material which relates to that environment. Third, it should realize these musical materials with timing that displays awareness of the other performers. Fourth, it should employ timbral control as a coherent part of realizing musical materials.

3. THE IMPROVISATIONBUILDER PROGRAM

3.1. Plan

One constraint in this project is that at every stage of development, ImprovisationBuilder must remain usable by a composer in actual performances and concerts. In order to achieve this, we have devised a four-step plan:

(i) Translate SAL into Smalltalk-80 and add to it some compositional features of the Sal-Mar Construction
(ii) Expand the Smalltalk-80 version to include control of a real-time signal processor
(iii) Generalize and expand upon the specifics of SAL, culminating in a full conversation-based framework for improvisation
(iv) Expand support for other input and output devices

3.2. Progress Report

As of this writing we are largely finished with (i) and making considerable progress on (ii). The framework supports four basic kinds of activity, corresponding to the four requirements for improvising systems set forth above.
Listeners process the incoming music, parsing it into phrases and focusing the system's attention. Players create new phrases, either by transforming phrases supplied by the listener or via some compositional algorithm. Players could also supply previously composed phrases. Realizers attempt to express these phrases appropriately, both through timely presentation and timbral control.

ImprovisationBuilder is written in ParcPlace Smalltalk-80 on the Apple Macintosh. Smalltalk primitives connect ImprovisationBuilder with input and output devices. Connections to MIDI devices are handled by the Apple MIDI Manager, while connections to the Symbolic Sound Capybara are handled by a driver that controls its operation (see Figure 1). A MusicScheduler dispatches all MIDI and Capybara events, decoupling the Players and Listeners from playback timing.

Music encompasses many different time scales. In the short term, onset times must be governed to within a few milliseconds, while longer-term structures like blues choruses may last tens of seconds. These processes require radically different amounts of computation, all to be satisfied simultaneously. Careful use of round-robin and priority scheduling allows the graphical user interface, ImprovisationBuilder, and Smalltalk-80's garbage collector to share the same computer.

Figure 1: connections between Smalltalk-80 and music hardware
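The division of labor among Listeners, Players, and the MusicScheduler might be sketched as follows. This is an illustrative Python rendering of ideas the paper implements in Smalltalk-80; the class and method names here (hear, compose, due) are hypothetical, not the originals.

```python
import heapq

class Listener:
    """Segments incoming note events into a buffer of recent notes (a simplified sketch)."""
    def __init__(self, gap=0.5):
        self.buffer, self.gap, self.last_onset = [], gap, None

    def hear(self, onset, pitch):
        # A silence longer than `gap` seconds starts a new phrase.
        if self.last_onset is not None and onset - self.last_onset > self.gap:
            self.buffer = []
        self.buffer.append((onset, pitch))
        self.last_onset = onset

class Player:
    """Creates a new phrase by transforming what the Listener heard."""
    def __init__(self, listener, transform):
        self.listener, self.transform = listener, transform

    def compose(self, start, spacing=0.25):
        pitches = [p for _, p in self.listener.buffer]
        return [(start + i * spacing, p)
                for i, p in enumerate(self.transform(pitches))]

class MusicScheduler:
    """Dispatches all events in onset order, decoupling Players from playback timing."""
    def __init__(self):
        self.pending = []

    def add(self, phrase):
        for event in phrase:
            heapq.heappush(self.pending, event)

    def due(self, now):
        ready = []
        while self.pending and self.pending[0][0] <= now:
            ready.append(heapq.heappop(self.pending))
        return ready

listener = Listener()
for onset, pitch in [(0.0, 60), (0.3, 62), (0.6, 64)]:
    listener.hear(onset, pitch)

player = Player(listener, lambda ps: [p + 7 for p in ps])  # transpose up a fifth
scheduler = MusicScheduler()
scheduler.add(player.compose(start=1.0))
due_now = scheduler.due(1.3)
print(due_now)  # → [(1.0, 67), (1.25, 69)]
```

Because the Player hands complete phrases to the scheduler, it can compute well ahead of real time while the scheduler alone worries about millisecond-level dispatch.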
We achieved our first goal of translating SAL by building Smalltalk-80 classes that model each component. A Listener filters incoming MIDI events and maintains a buffer containing the most recent notes played by the human performer. A FancyPlayer periodically copies this buffer and selects two to four excerpts from it. These excerpts are subjected to a set of transformations and then sent to the MusicScheduler for playback. At present, the set of available transformations includes transposition, contour inversion, retrograde, and decimation. More sophisticated transformations include a chord voicer that provides a continuum of voicings from closed to open and a Markov chain that generates new material based on the distribution of existing transitions. Transformed excerpts from the FancyPlayer are also placed in an output queue, where they are consumed by a SimplePlayer that further transforms them before playing them. Both Players perform playback via their own Orchestration, which chooses a new, random subset of Timbres every few notes. A MusicControlPanel allows the human performer to exercise control over which transformations are used and what their parameters are. By connecting these objects (see Figure 2) we can either simulate the existing Sound and Logic architecture or explore new configurations.

Object-oriented programming techniques have simplified development of these components. FancyPlayer and SimplePlayer are subclasses of Player, an abstract class that captures similarities between the two. Only the differences between them (the different sets of transformations, for example) need be programmed. The transformations are all subclasses of Transformation and observe the same protocol, so they can be used interchangeably. New transformations are easily developed and tested by substituting them for the default ones.
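The shared Transformation protocol lends itself to a short sketch. The following Python rendering is illustrative only (the actual classes are Smalltalk-80, and the method name apply is an assumption); it shows how transposition, contour inversion, retrograde, and decimation can observe one protocol and so be chained or substituted interchangeably.

```python
class Transformation:
    """Common protocol: every transformation maps a pitch list to a pitch list."""
    def apply(self, pitches):
        raise NotImplementedError

class Transpose(Transformation):
    def __init__(self, interval):
        self.interval = interval
    def apply(self, pitches):
        return [p + self.interval for p in pitches]

class Invert(Transformation):
    """Contour inversion, reflected about the first pitch of the excerpt."""
    def apply(self, pitches):
        if not pitches:
            return []
        axis = pitches[0]
        return [2 * axis - p for p in pitches]

class Retrograde(Transformation):
    def apply(self, pitches):
        return list(reversed(pitches))

class Decimate(Transformation):
    """Keep every n-th note, thinning the texture."""
    def __init__(self, n):
        self.n = n
    def apply(self, pitches):
        return pitches[::self.n]

# Because all transformations observe the same protocol, a Player can
# chain any sequence of them without knowing their concrete classes.
excerpt = [60, 62, 64, 67]
for t in [Transpose(5), Retrograde()]:
    excerpt = t.apply(excerpt)
print(excerpt)  # → [72, 69, 67, 65]
```

A new transformation is added by writing one more subclass; nothing else in the system changes, which is the substitution property the text describes.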
We have made significant progress on our second goal of control of timbral micro-structure through real-time software synthesis. There is a direct, high-speed connection between the host computer and a digital signal processor, the Capybara. The composer designs timbres using the Kyma System. ImprovisationBuilder can control any parameter of a Kyma Sound during performance. Our starting point for connecting Kyma to ImprovisationBuilder was a phase modulation primitive written by K. Hebel (Hebel 1991). This primitive allows the Capybara to synthesize timbres similar to the ones Martirano used with the YahaSALmaMAC orchestra. Having done this, we can proceed to design new kinds of timbres for improvisation. In addition to facilitating timbral control, Kyma can also process other instruments under ImprovisationBuilder control through use of the Capybara's analog-to-digital converter.

Figure 2: one possible ImprovisationBuilder configuration

The MusicScheduler has been expanded to handle data intended for both MIDI synthesizers and the Capybara. Since MIDIMessage and KymaMessage are both subclasses of MusicMessage, the two data streams can be merged together. The Players compile phrases into linked lists of messages. These lists are merged into the scheduler's pending events queue in chronological order. Since the Players are working several seconds into the future, merging a newly created linked list is often simply a matter of appending it to the end of the pending events queue, a significant computational savings.

3.3. Future Work

Work is proceeding on the latter half of the four-step plan outlined in Section 3.1. Step (iii) is to turn ImprovisationBuilder into a full, conversation-based framework for improvisation. At present, the SAL implementation in Smalltalk-80 is a largely one-sided conversation.
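The chronological merge into the scheduler's pending events queue described in Section 3.2, with its cheap-append common case, can be sketched as follows (an illustrative Python version using lists; the original uses Smalltalk-80 linked lists, and merge_phrase is a hypothetical name):

```python
def merge_phrase(pending, phrase):
    """Merge a time-ordered phrase into the time-ordered pending events queue.

    Because Players compose several seconds ahead of real time, the new
    phrase usually starts after every pending event, so the merge
    degenerates into a cheap append rather than a full interleave.
    """
    if not phrase:
        return pending
    if not pending or phrase[0][0] >= pending[-1][0]:
        pending.extend(phrase)          # common case: O(len(phrase))
        return pending
    # General case: interleave the two sorted lists by onset time.
    merged, i, j = [], 0, 0
    while i < len(pending) and j < len(phrase):
        if pending[i][0] <= phrase[j][0]:
            merged.append(pending[i]); i += 1
        else:
            merged.append(phrase[j]); j += 1
    merged.extend(pending[i:])
    merged.extend(phrase[j:])
    return merged

# Events tagged by destination, as with MIDIMessage and KymaMessage.
queue = [(10.0, "midi", 60), (10.5, "kyma", "freq=440")]
queue = merge_phrase(queue, [(12.0, "midi", 64), (12.5, "midi", 67)])
print([t for t, *_ in queue])  # → [10.0, 10.5, 12.0, 12.5]
```

Since both message kinds carry an onset time, one queue serves the MIDI synthesizers and the Capybara alike.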
Musicality emerges when the human performer learns to adapt to an improvisation partner who interrupts rather often and sometimes focuses on insignificant parts of the human's performance. An important step will be the construction of a Realizer that avoids interrupting other performers. We will also investigate controlling timbral parameters and how they can be incorporated into improvisation.
Step (iv) is to expand support for other input and output devices. A benefit of proper object-oriented programming is that our framework can manipulate any kind of input or output event that observes a fairly small set of protocols. Thus, other kinds of devices can be incorporated into the performance environment and used within an improvisation. This would include, for example, videotape players, lighting equipment, and pressure-sensitive controllers like the Continuum Music Keyboard (Haken et al. 1992).

4. CONCLUSION

Our goal is to create a conversation-based framework for musical improvisation. We have translated the Sound and Logic program into Smalltalk-80 (see Figure 3) and added some of the compositional features of the Sal-Mar Construction. As work continues, we will explore further the analogy between conversation and improvisation and its ramifications for ImprovisationBuilder.

Figure 3: ImprovisationBuilder in action

5. ACKNOWLEDGMENTS

This research is supported in part by a grant from the University of Illinois Research Board.

6. REFERENCES

S. Franco. Hardware Design of a Real-Time Musical System. Ph.D. dissertation, University of Illinois, 1974.
L. Haken, R. Abdullah, M. Smart. "The Continuum: A Continuous Music Keyboard." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association, 1992.
C. O. Hartmann.
Jazz Text: Voice and Improvisation in Poetry, Jazz and Song. Princeton University Press, 1991.
K. Hebel. "A Framework for Developing Signal Processing and Synthesis Algorithms for the Motorola 56001." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association, 1991.
S. Martirano. An Electronic Music Instrument which Combines the Composing Process with Performance in Real-Time. Unpublished, 1971.
R. Rowe. "Machine Listening and Composing with Cypher." Computer Music Journal, Vol. 16, No. 1, 1992.
C. Scaletti. "Kyma: An Object-oriented Language for Music Composition." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association, 1987.
C. Scaletti. "The Kyma/Platypus Computer Music Workstation." In The Well-Tempered Object: Musical Applications of Object-Oriented Programming, Stephen Pope, editor, MIT Press, 1991.