Page 202

Realizing the spatialization processing of Dialogue de l'ombre double by Pierre Boulez

Howard Sandroff
The University of Chicago, Department of Music
5845 S. Ellis Ave., Chicago, Illinois 60637

1. ABSTRACT

Realizing the spatialization processing of Dialogue de l'ombre double (Boulez 1989) by Pierre Boulez can be accomplished using a personal computer and commercially available audio and MIDI hardware and software. It is my hope that this description will help performers overcome the seemingly insurmountable technical requirements of Dialogue de l'ombre double, thereby allowing this work to reach the large audience it deserves.

2. INTRODUCTION

Dialogue is scored for clarinet and pre-recorded clarinet. The live clarinet is amplified and mixed with sympathetic vibrations from an amplified piano. The pre-recorded clarinet part is played back through a six-channel sound system which surrounds the audience and is under the control of a second musician. During the performance, each pre-recorded section (Sigle initial, Transitions I-II, II-III, III-IV, IV-V, V-VI, Sigle final) alternates with a live clarinet section (I, II, III, IV, V, VI). The live and recorded sections overlap at specific cue points in the score. Extensive instructions (Gerzso 1989), which accompany the score, describe the recording technique, audio processes, and location modulations (spatialization) required for preparation and playback of the pre-recorded clarinet. This paper will describe my approach to realizing the audio processing and spatialization of the pre-recorded clarinet.

3. SPATIALIZATION SYSTEM

3.1 Hardware

The first realization used a double-system hardware/software configuration consisting of a Macintosh computer running MIDI sequencing software slaved via SMPTE Longitudinal Time Code (LTC) to an analog two-track tape recorder (ATR) for playback of the pre-recorded clarinet. The spatialization
was performed using automated mixer/processors which received instructions from the MIDI control sequence. Due to the limited accessibility of digital recording, editing and playback systems with LTC synchronization, I decided that analog two-track playback would be commonly available at the many upcoming performance venues. The hardware for performing the multi-channel spatialization is:

1. Three Yamaha DMP11 Digital Mixing Processors
2. One Opcode Studio Three SMPTE/MIDI serial interface/converter for the Macintosh
3. One Macintosh computer (minimum configuration: SE, hard drive, 2 MB RAM)
4. One stereo two-track analog tape recorder, 15 ips, with dbx Type I noise reduction
5. Six loudspeakers and channels of power amplification

Boulez's instructions (Gerzso 1989) call for automated signal routing and multi-channel panning to the six loudspeakers, and dynamic digital signal processing including real-time reverberation/delay processing, mixing and equalization. The DMP11 was selected because (at that time) it was the only MIDI-controlled mixer/processor which allowed the user to program all of the effects and mixing parameters to respond to incoming MIDI Continuous Controller messages ($Bn, XX, YY). Most processes of the DMP11 (channel and master gain, dynamic EQ, effects send/return, reverberation/delay processing) were programmed to receive and respond to a specific controller. Processes common to all six inputs were programmed to respond to the same controller. The left output of the tape recorder (clarinet track) was connected through a distribution amplifier to the first two channels of each of the three DMP11s. The six channels were panned to either the left or right mix buss of their respective DMP11. Each of the DMP11 outputs was connected to one of the amplifier/speakers that surrounded the audience.
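The Continuous Controller messages referred to above are three bytes: a status byte $Bn (where n is the MIDI channel), a controller number XX, and a value YY. A minimal sketch of packing such a message follows; the controller number 7 used in the example is illustrative only, not the DMP11's actual parameter assignment.

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Pack a three-byte MIDI Continuous Controller message ($Bn, XX, YY)."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("channel must be 0-15; controller and value 0-127")
    # Status byte: high nibble $B (control change), low nibble = channel
    return bytes([0xB0 | channel, controller, value])

# A hypothetical channel-gain update on MIDI channel 1 (n = 0),
# controller 7, at the full programmed gain value of 100:
msg = control_change(0, 7, 100)
```

Any sequencer track of controller data ultimately reduces to streams of such messages, one per fader or processor step.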
The right output (LTC) of the two-track tape recorder was connected to the Opcode Studio Three SMPTE/MIDI interface/converter, thereby slaving the playback of the control sequence to the recorded LTC.

3.2 Software

Opcode Vision software was selected to create the control sequence because of its advanced graphic and line-editing features, which facilitated the creation of complex controller data. Vision is also able to synchronize its playback to incoming LTC (converted to MIDI Time Code, MTC). An additional consideration was Opcode's commitment to give Vision the future capability of playing digitized audio, using the Digidesign Sound Designer II format, directly from a hard disk within a Vision sequence.

4. CREATING THE PRE-RECORDED CLARINET TRACK

The pre-recorded clarinet track was recorded, using the published instructions (Gerzso 1989) for room acoustics and microphone placement, on an R-DAT digital recorder. Selections from the master recording were dubbed to track one of a two-track ATR at 15 ips, encoded with dbx Type I noise reduction. The analog tape was edited and assembled using razor-blade splicing techniques. In order to insure that the Macintosh was able to lock its playback to the ATR, despite the starting and stopping of the tape between sections, five (5) seconds of pre-roll (LTC without audio) was placed between each pre-recorded section. After assembly, the analog master (track two) was post-striped with continuous LTC. A DAT safety and an analog work copy with LTC were dubbed from the analog master. The safety DAT was placed in storage so that future performance tapes could be dubbed.

5. PROGRAMMING THE SPATIALIZATION

5.1 Conventions

Due to the variety of mix and signal processes required, it was necessary to establish a set of conventions for programming the control sequence and the DMP11. Identifying (spotting) the LTC address of each section of the composition was the first part of the programming process. Once identified, each LTC address, less five (5) seconds to accommodate each section's pre-roll, was line-edited into one continuous Vision sequence. A series of mix presets (scenes) was created in the DMP11, one scene for each pre-recorded section.
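The pre-roll subtraction described above is simple timecode arithmetic. A sketch follows, assuming HH:MM:SS:FF addresses at a 30 fps non-drop frame rate (the frame rate actually used is not stated in the source):

```python
FPS = 30  # assumed non-drop LTC frame rate

def to_frames(tc: str) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def to_tc(frames: int) -> str:
    """Convert an absolute frame count back to HH:MM:SS:FF."""
    s, f = divmod(frames, FPS)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def less_preroll(tc: str, seconds: int = 5) -> str:
    """Back a spotted section address up by the pre-roll."""
    return to_tc(to_frames(tc) - seconds * FPS)

# A section spotted at 00:04:10:15 is entered into the sequence
# at 00:04:05:15, so the pre-roll passes before the audio begins.
```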
These scenes set up an initial mix (including initial gain), equalization and processor programming. Conventions establishing channel gain, master gain, and effect send and return were necessary so that controller values would always correspond to a specific mix or processing characteristic. The scenes were stored in the DMP11 memory and assigned to receive a MIDI program change command. The next step in preparing the control sequence was the placement of a program change command (corresponding to the DMP11 mix preset) at the beginning of each section (Sigle initial, Transitions, Sigle final). Since the mixer was pre-set to the opening spatialization of each section, it was necessary to determine how subsequent controller values would affect all of the dynamic processes. Maximum mix and processing parameters were assigned a controller value of "100" (corresponding to the opening channel and master gain of the initial scene), and experiments were conducted to determine the resulting mix and/or processing parameter response to maximum, minimum and changing controller values. For example, the DMP11's response to gain changes (channel and master) lagged behind the incoming controller values. Accommodations were made by sending the gain change data approximately fifty (50) ms before required. Finally, each musical cue detailed in the score was spotted to find its LTC address. Since LTC can only be read in real time, it was necessary to repeatedly play back each section of the recording while carefully logging the address of each musical cue. The addresses were then line-edited into the control sequence. Once all conventions were established and the musical cues spotted and logged, programming of the spatialization began.

5.2 Programming the spatialization

Each process had its own peculiar problem. The most difficult programming task was automating the rapid movement of musical material from one loudspeaker to another.
For example, in Transition I to II the score calls for the appearance of each phrase in one or more loudspeakers. To insure a smooth transition of sound from loudspeaker to loudspeaker, it was necessary to cross-fade the exit of one phrase with the entrance of the next while maintaining residual reverberation in the former. It was determined that half-second (0.5) crossfades with one and a half (1.5) seconds of reverberant ring provided a smooth transition. The circular panning called for in Transition III to IV required the creation of data lists which would cross-fade between adjacent channels, using fader values, in a way that would simulate circular motion. Exponential curves of data were created using Vision's graphic functions. These data lists were then overlapped in time so that maximum gain between any two channels occurred at the point when the pan effect was centered. As the velocity of the circular pan was increased, the same exponential data lists were increasingly compressed in their overall duration.
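The exponential data lists can be sketched as follows. The exact curve shape, the 0-100 value range, and the step count are my assumptions; the source says only that exponential curves were drawn with Vision's graphic functions and compressed in duration as the pan velocity increased.

```python
def expo_curve(steps: int, peak: int = 100) -> list[int]:
    """Exponentially rising controller values from 0 up to `peak`."""
    return [round(peak * (2 ** (i / (steps - 1)) - 1)) for i in range(steps)]

def crossfade(steps: int) -> list[tuple[int, int]]:
    """Paired (outgoing, incoming) fader values for one channel-to-channel pan.

    The incoming channel rises along the exponential curve while the
    outgoing channel falls along its mirror image; a faster pan simply
    uses fewer steps, i.e. the same curve compressed in duration.
    """
    up = expo_curve(steps)
    down = list(reversed(up))
    return list(zip(down, up))
```

Chaining such crossfades around the six adjacent channel pairs, with each fade starting as the previous one ends, yields the simulated circular motion.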

Certain effects required the use of two or more DMP11 processes simultaneously. For example, in Transition II to III both the reverberation time and reverberation return increased at the same rate. First, a track of controller values corresponding to reverb return was created using a linear increase of gain over the duration of the reverb crescendo. Then, that data was copied to another track and assigned to the controller code corresponding to reverb time. Once the two processes were working in concert over time, a scalar was calculated to find the most musical amount of increasing reverberation (time and gain) to accomplish the effect.

5.3 Performance cues

Once the spatialization was complete, the next step was preparation of the performance cues. Finding the start/stop locations for the tape was difficult because the Macintosh required sufficient LTC to slave to the ATR playback. If, for instance, the Macintosh was unable to read the incoming LTC at the beginning of a section, the sequencer would not send the DMP11's program change command and the spatialization would not begin. The solution was placing the upcoming program change command at the end of the previous section. Therefore, the DMP11 would reset to the coming mix scene before the tape was stopped. Thus, sound would be present at the beginning of the next section even if the Macintosh failed to immediately lock to the LTC at the start of the next section. The score calls for exact synchronization between the pre-recorded clarinet and live clarinet where the two parts overlap. It was therefore necessary to back-time each tape start cue to a point in the live clarinet part to allow for five (5) seconds of pre-roll. Tape stop cues coincided with the upcoming DMP11 program change command. The first performance of this realization took place October 20, 1989. Although imperfect, this system functioned adequately for two years of performing Dialogue.
Venues included three performances in Chicago, the ICMC at Ohio State University, the ISCM in Boston, and the International Clarinet Society Annual Festival in Quebec. At each location (except Quebec) the host was responsible for providing tape playback, a sound reinforcement system and a Macintosh computer. The performers supplied the DMP11s, MIDI interface and software. In Quebec, the entire system was provided by Yamaha Corporation of Canada. Programming for the DMP11s was stored as a MIDI bulk dump in Macintosh/Opcode format and up-loaded on location.

6. CONVERSION TO HARD DISK PLAYBACK

In the summer of 1991, Digidesign Sound Tools and Opcode Studio Vision became available. It had always been my intent to use a single platform for audio and control track playback of the pre-recorded clarinet spatialization. Porting the clarinet track to a hard-disk digital audio system (Digidesign Sound Tools) with synchronized MIDI sequencing (Studio Vision) on a single platform would give the performers more precise control, superior sound quality and greater reliability by replacing the weakest link in the system, the ATR with LTC synchronization.

6.1 Hardware/software requirements

For porting the audio and MIDI data:
1. Macintosh IIci, 5 MB RAM, System 6.0.7 or higher
2. Internal or external hard drive, 80 MB
3. Digidesign Sound Accelerator, DAT I/O and Sound Designer II, ver. 2.0
4. TASCAM DA 30 DAT Recorder/Reproducer or equivalent
5. Opcode Studio Vision, ver. 1.32

For performance:
1. Macintosh IIci, 5 MB RAM, System 6.0.7 or higher
2. Internal or external hard drive, 80 MB
3. Digidesign Sound Accelerator
4. Opcode Studio Vision, ver. 1.32
5. Six loudspeakers and channels of power amplification

6.2 Dumping the audio data

The first step in this process was dumping the DAT master to a Macintosh hard disk.
Using the AES/EBU port on the Digidesign DAT I/O (a digital-to-digital converter/interface) and a TASCAM DA 30 DAT deck, the pre-recorded clarinet track was dubbed to the Macintosh hard disk using Digidesign Sound Tools. Next, the clarinet track was edited to define each of the seven sections (Sigle initial, Transitions and Sigle final) as individual "regions". These regions were listed as separate events in the Sound Designer II play-list and could be individually addressed in the Studio Vision sequence. An additional benefit of editing the recording into regions was the removal of all of the "dead air" between sections, thereby conserving hard disk space. The completed digital audio track required only 53 MB of disk space.
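As a rough consistency check on that figure: assuming a 44.1 kHz, 16-bit mono transfer (the source states only the final file size, so both sample rate and word length are my assumptions), 53 MB works out to roughly ten minutes of audio.

```python
SAMPLE_RATE = 44_100      # Hz (assumed)
BYTES_PER_SAMPLE = 2      # 16-bit samples (assumed), mono clarinet track

# One minute of mono audio at these settings:
bytes_per_minute = SAMPLE_RATE * BYTES_PER_SAMPLE * 60   # = 5,292,000 bytes

minutes = 53_000_000 / bytes_per_minute                  # ~10 minutes
```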

6.3 Importing the control sequence

The next step was importing the Sound Designer II (SD II) play-list and the original Vision control sequence into Opcode Studio Vision. Studio Vision is essentially the same sequencing software as Vision, with the added capability of playing back audio tracks from an SD II file. Because Studio Vision is backwardly compatible with Vision, opening the original file within Studio Vision was all that was required. Once the play-list, audio track and control sequence resided on the Macintosh, the task of adjusting the control sequence to the audio track was fairly simple. Since each of the seven audio sections (Sigles and Transitions) was individually defined as a region, it was desirable to break up the original control sequence into seven individual sequences. The Time Code addresses were recalculated to begin from 00:00:00 in each respective sequence. The audio region play command was entered into the sequence at the recalculated LTC address, and the DMP11 program change command was placed one (1) second earlier. In most cases, all that was required was a little adjustment of the audio play command to insure synchronization with the control sequence. Fine-tuning of the control sequence to the audio region was facilitated by Studio Vision's capability to display the audio waveform with Time Code addresses.

6.4 Programming refinements

The waveform display capability allowed me to refine the original programming. The Time Code address of each musical cue was fine-tuned; controller values for the location modulation and circular panning were adjusted; equalization and more complex signal processing were added to each section to improve the overall smoothness of the spatialization. Eventually, each sequence (Sigles and Transitions) was programmed to instantly play back from a keyboard command, thus eliminating the necessity of back-timing the live clarinet cues.
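The per-section recalculation described in 6.3 is an offset subtraction: every cue spotted on the continuous master becomes an address relative to its section's start. A sketch, again assuming 30 fps HH:MM:SS:FF timecode (the addresses in the example are hypothetical):

```python
FPS = 30  # assumed LTC frame rate

def frames(tc: str) -> int:
    """HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = map(int, tc.split(":"))
    return (h * 3600 + m * 60 + s) * FPS + f

def rebase(cues: list[str], section_start: str) -> list[int]:
    """Express cues spotted on the continuous master as frame offsets
    from the section's own start, which now begins at 00:00:00."""
    start = frames(section_start)
    return [frames(c) - start for c in cues]

# Hypothetical cues at 12:05 and 12:09:15 in a section starting at 12:00:
offsets = rebase(["00:12:05:00", "00:12:09:15"], "00:12:00:00")  # [150, 285]

# The DMP11 program change lands one second (FPS frames) before the
# audio region play command, here a hypothetical play address of 5 s:
program_change_at = frames("00:00:05:00") - FPS  # 120 frames
```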
The two greatest advantages in using the Sound Tools system and Studio Vision are the ease of fine-tuning the control sequence to the audio and the ability to play back the pre-recorded clarinet track from any point in the program without the concerns of pre-roll and/or losing synchronization. Had I had access to this system when the programming began, it is estimated that spotting and the subsequent creation of data lists would have taken half the time. In December 1991, Mr. Boulez visited my studio, listened to a rehearsal of Dialogue and made some suggestions for further refinements to the spatialization. These refinements were incorporated into the programming for the first performance using the Studio Vision playback, which took place on May 30, 1992 at the Ojai Festival in Ojai, California. At this venue, the producers were responsible for providing all of the audio, MIDI and computer hardware. The pre-recorded clarinet was up-loaded to the Macintosh hard disk using a Hewlett-Packard data DAT and Retrospect archiving software. DMP11 programming was up-loaded as System Exclusive data from within the Studio Vision control sequence. Once the specified equipment was obtained, the spatialization worked flawlessly.

7. PROGRAMMER AS PERFORMER

Through all of the experiences I have had programming and/or realizing the live and recorded performances of new works by my composer colleagues, I have come to the conclusion that the manipulation of electronic devices/computers in the context of a musical performance offers the "computer musician" a unique opportunity to elevate and give increased credibility to our chosen medium. In each and every venue where Dialogue was performed, I dressed, behaved and presented myself to the producers and audience as a performing musician and member of the ensemble.
Realizing Dialogue in collaboration with a clarinetist offers the "computer musician" a very satisfying interpretive role and the opportunity to make an important statement to performers, concert producers, critics and audiences about the increasingly important role of the technician/artist.

8. ACKNOWLEDGEMENTS

This project was supported by the Professional Audio and Musical Instrument Divisions of Yamaha Corporation of America and by Opcode Systems, Inc. I would like to thank John Gatts, Don Morris and Mike Bennett from Yamaha and Paul de Benedictis from Opcode for their generous support, Pierre Boulez for encouraging this paper, and clarinetist John Bruce Yeh for initiating the project.

9. REFERENCES

Boulez, Pierre. Dialogue de l'ombre double, for clarinet and pre-recorded clarinet. Universal Edition A.G., Vienna, 1989.

Gerzso, Andrew. "Technical/Performance Notes," Dialogue de l'ombre double. Universal Edition A.G., Vienna, 1989.