REAL-TIME PROCESSING ON THE ROAD: A GUIDED TOUR OF [IKS]'S ABSTR/CNCR SETUP

Pierre Alexandre Tremblay
CeReNeM, University of Huddersfield
Huddersfield, HD1 3DH, U.K.
+44 1 484 47 3608
p.a.tremblay@hud.ac.uk

Nicolas Boucher
D.B.Com Media
3691 St-Dominique
Montreal, QC, H2X 2X8, Canada
+1 514 861-8669
nicolas@dbcommedia.com

Sylvain Pohu
LIAM - Université de Montréal
c.p. 6128 succ. centre-ville
Montreal, QC, H3C 3J7, Canada
+1 514 343-5503
sylvain.pohu@umontreal.ca

ABSTRACT

This paper presents the strategies the authors have developed as performers and computer-music composers in the Montreal-based contemporary jazz ensemble [iks] in order to adapt an ideal studio setup for improvised mixed music into a sturdy, tour-ready setup. Questions of virtuosity on simple digital signal processing interfaces are raised, and foreseen touring problems and their hypothetical solutions are explained, grouped around the DSP devices, the audio routing, and the sound check strategies. Suggestions are made to address portability, adaptability and efficiency concerns without compromising the ensemble's own sound.

1. INTRODUCTION

The contemporary jazz ensemble [iks] began exploring real-time processing of acoustic instruments on a regular basis in 2000, when electroacoustic composer and free-jazz pianist Nicolas Boucher composed a suite for the ensemble and Max/MSP [1]. Boucher had already been working with real-time processing in an improvisation setup for four years, mainly with the electroacoustic improvisation ensemble Les Impromptistes [2]. Since then, real-time processing has become an important part of [iks]'s sound explorations in the studio, so much so that it was used throughout the recording of their fifth album, abstr/cncr. The music now relied on real-time processing to such an extent that the complete sound-transformation setup was needed on tour.

In this paper, we will briefly describe [iks]'s studio setup and its aesthetic bias. Then, we will discuss issues of portability, the proposed solutions, and an assessment of their actual road-worthiness. Finally, we will propose improvements for the next tour.

2. THE ABSTR/CNCR STUDIO SETUP

2.1. The Musical Approach Imposes the Setup

[iks]'s music is a peculiar blend of improvised and written music, exploring the grey zone between timbre-based and note-based music, with and without pulse. Moreover, this blend changes from piece to piece: some works rely on a fixed macro-structure in which the improvisers are almost totally free; other works inherit a typical theme-and-variations development from the jazz idiom. Furthermore, some pieces are completely written, some completely improvised [3].

If [iks]'s music has something original, it lies in its integration of all kinds of music, from acousmatic to punk, from free improvisation to dirty grooves. It is definitely postmodern, if we accept Lochhead's anti-definition [4], blurring the lines of styles and genres. Neither rejecting noise for the sake of a melody, nor a pulse for the sake of freedom, it is an open space in which the musical influences of each member can interact.

During the album preproduction, the idea of what sort of setup was needed became clearer with the time spent performing with electronic devices: since the music is highly improvised, and since the studio sessions would be freely organised, every source had to be available to every processing station at all times.
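To make this availability constraint concrete, the following minimal Python sketch models it as a send matrix: one level per (station, source) pair, adjustable at any time. This is our illustration only; the actual studio setup realised it through the console's routing system, and all names are hypothetical.

    # A minimal sketch of the availability constraint: every source can
    # reach every processing station, at an adjustable level, at all
    # times. All names are illustrative.
    SOURCES = ["sax", "piano", "snare", "guitar", "bass"]
    STATIONS = ["station_a", "station_b"]

    # One send level per (station, source) pair; 0.0 mutes, 1.0 is unity.
    sends = {st: {src: 0.0 for src in SOURCES} for st in STATIONS}

    def set_send(station, source, level):
        """Route 'source' to 'station' at 'level', at any moment."""
        sends[station][source] = max(0.0, min(1.0, level))

    def station_input(station, frames):
        """Blend the current audio frame of each source into one feed."""
        return sum(sends[station][src] * frames[src] for src in SOURCES)

    # e.g. Station B momentarily pulls only the snare and the piano:
    set_send("station_b", "snare", 1.0)
    set_send("station_b", "piano", 0.7)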
This constraint is quite easily solved in any professional recording studio, where the multiple routing systems can provide several mixes to the different real-time processing stations. The setup, illustrated in Figure 1, sends copies of each instrument sub-mix to the two very different real-time processing stations.

[Figure 1: The abstr/cncr studio setup]

2.2. Two Different Real-Time Processing Stations, One Concern

The first station, Station A, is computer-based, running Max/MSP. Its audio I/O is achieved through a Mark of the Unicorn 2408mkI PCI audio interface, and a PC1600 provides 16 faders and two pedals for MIDI control. It allows the use of custom-built sound-processing software, from multi-effect units to score-following devices. Processing source selection and routing are done in software, by means of the eight discrete inputs of the audio interface. This system, now easily accessible given the increased computational power of laptops, was quite 'on the edge' of desktop computer performance in 2002. Note that this setup is similar to the one developed by Lawrence Casserley [5], as used with Evan Parker's Electro-Acoustic Ensemble. Even if [iks]'s musical style is at times very far from Parker's, we relate to most of Casserley's concerns when dealing with real-time processing technology as an improvisation instrument.

The second station, Station B, is hardware-sampler-based. Developed by Nicolas Boucher and Eric Rocheleau through experimental work with Les Impromptistes since 1996, it consists of an Ensoniq ASR-10 sampler, which includes a versatile built-in effect unit. Another powerful feature of this sampler is the potential intermodulation of effect and sampling parameters, all of which remains possible whilst loading other sound files from an external SCSI hard-drive. To select the processing and/or sampling input, a hardware mixer was used to blend the different instrument sub-mixes provided by the routing of Figure 1.
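This intermodulation can be pictured as one gesture driving two coupled parameters. The short Python sketch below is purely illustrative: the parameter names and the coupling law are our assumptions, not the ASR-10's internal implementation.

    # Illustrative sketch of effect/sampling parameter intermodulation.
    # Parameter names and the coupling law are assumptions, not the
    # ASR-10's actual behaviour.
    class IntermodulatedVoice:
        def __init__(self, sample_rate=44100):
            self.sample_rate = sample_rate
            self.loop_length = 22050   # sampling parameter, in samples
            self.delay_time = 0.25     # effect parameter, in seconds

        def set_loop_length(self, samples):
            """One gesture, two coupled timbral consequences: changing
            the loop also retunes the delay (hypothetical coupling:
            delay locked to half a loop cycle)."""
            self.loop_length = max(1, samples)
            self.delay_time = 0.5 * self.loop_length / self.sample_rate

    voice = IntermodulatedVoice()
    voice.set_loop_length(11025)   # a shorter loop...
    print(voice.delay_time)        # ...drags the delay with it: 0.125 s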

This second setup could seem a strangely limited option compared to Station A, but paradoxically these limits provide its main strength. As [iks] tends to rely on improvisation as a real-time composition device, we believe that mastery, i.e. the sublimation of the interface/instrument, whether new or old, is the best way to allow the inner-heard musical idea to be produced. Simple means such as pedals, keyboards and faders may be less attractive and more limited than the latest multidimensional sensor, but their sublimation through years of experience, working with the same interface, within the same limits, allows a deeper and subtler expressivity, in the manner that a guitar player reaches a level of seamless musical fluidity.

This approach certainly differs from the contemporary tendency to reject the need for mastery, well described by Rebelo [6] and by other developers of new interfaces. While we totally agree with Wessel and Wright when they say that "[...] early stage ease-of-use should not stand in the way of the continued development of musical expressivity" [7], we go further by saying that the said musical expressivity will reach higher levels of subtlety through extensive practice on a given digital signal processing instrument, as defined by Casserley [8]. Far from pretending [iks] has reached this asymptotic ideal, we practice daily towards it.

If the real-time processing setup is an instrument in its own right, and the performer's intimacy with it is of the utmost importance, this leads to interesting challenges in touring conditions. Our particular setup induced its own set of challenges, which will now be discussed.

3. THE ABSTR/CNCR TOUR SETUP

3.1. Touring Conditions

When [iks] was chosen as the 2003 Rising Star of the International Jazz Festival Organisation, it was a great opportunity to present [iks]'s music to the wider audience of jazz festivals worldwide [9]. As [iks]'s setup was dependent on technology, and as we had decided, for reasons of cost, not to tour with our own sound engineer, solid preproduction was required to address portability and adaptability concerns [10]. Obviously, since [iks] is not a headliner, it also meant that the ensemble would either be the opening act for a well-known artist, or play secondary venues. In such conditions, the typical setup time is approximately 45 to 60 minutes, sound check included. Under such stressful conditions, priorities had to be set, and actions taken accordingly. In this case, the motto was: the show must go on! We therefore preferred quick and dirty methods to long and clean ones, and generic devices to specific ones.

3.2. Foreseen Problems and Their Hypothetical Solutions

3.2.1. The Processing Devices

Taking the full setup on tour could have been difficult, but one thing that eased its portability was the new generation of powerful laptops and soundcards. It was also possible to find newly released, cheap USB MIDI fader-boxes, easily replaceable in case of fault anywhere in the world.
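Such fader-boxes are interchangeable precisely because only a thin mapping layer needs to know about them. Below is a minimal Python sketch of that layer, assuming a generic box emitting standard MIDI control-change messages; the CC numbers and parameter names are illustrative, not the actual abstr/cncr assignments.

    # Map incoming MIDI control-change messages onto processing
    # parameters. Any 16-fader box can replace any other: only this
    # table changes. CC numbers and names are illustrative.
    CC_MAP = {
        1: "delay_feedback",
        2: "ring_mod_freq",
        3: "grain_size",
        64: "bypass_pedal",   # a hypothetical pedal input
    }

    params = {name: 0.0 for name in CC_MAP.values()}

    def on_control_change(cc_number, value):
        """MIDI CC values run 0-127; normalise to 0.0-1.0."""
        name = CC_MAP.get(cc_number)
        if name is not None:
            params[name] = value / 127.0

    on_control_change(1, 64)         # fader 1 half-way up
    print(params["delay_feedback"])  # -> 0.5039...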
International electricity discrepancies (60/50 Hz, 110/220 V) were no longer an issue for Station A, since both the audio interface and the computer had universal power supplies. This real-time processing station was therefore exactly what it was in the studio, only more compact.

But there was still a performance concern: since a concert is a flowing event, there is very little changeover time between pieces, compared to the studio, where time can be taken to explore different setups. A second computer-based setup was needed to add flexibility and fluidity to the set. Station A2 was therefore performed by Sylvain Pohu, in complement to Station A1, still played by Pierre Alexandre Tremblay.

For Station B, based on a hardware sampler, portability was more of an issue: neither the sampler, the mixing desk, nor the external hard-drive was 50 Hz/220 V compatible. However, as they were sturdily built and, as stated earlier, irreplaceable to Nicolas Boucher's fluent performance, we took the risk of asking each venue for a voltage transformer.

3.2.2. The Audio Routing

Herein lay the biggest challenge. Most sound-reinforcement consoles do not have the flexibility of studio consoles, and will certainly not have enough buses to return one sub-mix per instrument to the stage. Even if they did have enough buses, the setup time was too short to troubleshoot all potentially problematic connections. It was therefore decided to go the quick and dirty way, by providing an independent setup for real-time sound capture, as illustrated in Figure 2.

[Figure 2: The abstr/cncr tour setup signal flow]

For each instrument needing real-time processing, a passive splitter - one XLR female to two XLR males - was used directly at the microphone's output. One split output was sent as usual to the sound-reinforcement system, and the other was sent to the preamps of Station B's mixing desk. As the direct outputs of that desk are post-preamp but pre-fader, a TRS multi-cable was used as a bridge to feed Station A1's audio interface. Then, to feed Station A2, we used the latency-free monitoring feature of Station A1's audio card to relay the signal to specific outputs. This feature has the benefit of being crash-proof, the card being able to work in stand-alone mode.

Since we were taking the signal directly from the microphones, instruments captured by more than one microphone were problematic. After several preproduction experiments, we therefore concluded that the piano's higher-register microphone gave more convincing results than the lower-register one, and that the snare drum microphone was more efficient than its overhead counterpart. A possible explanation is that their transient content was greater, inducing more dynamic processing sources and therefore triggering effects and delays more efficiently.

Again because of these efficiency concerns, we decided to use a clip-on saxophone microphone for its good feedback rejection and its consistent signal regardless of performing gesticulations, even if the timbre was far from ideal. Finally, the last sonic sacrifice we made was to use the direct outputs of the guitar and bass amps instead of passive splitters. Since the sound is almost never reproduced without spectral transformation, it was a small sacrifice for the sake of simplicity.
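The triggering explanation above can be made concrete with an envelope follower, the kind of detector that typically gates or retriggers such effects. The Python sketch below is our illustration, with assumed threshold and smoothing values; the actual processes were built in Max/MSP.

    # Minimal envelope-follower / onset-trigger sketch. The threshold
    # and smoothing coefficients are illustrative values only.
    def onset_triggers(samples, threshold=0.3):
        """Return the indices at which an effect would be (re)triggered.

        A fast-attack, slow-release envelope follower fires whenever
        the envelope crosses 'threshold' from below: transient-rich
        sources (snare, high piano register) cross it often, while
        smoother sources rarely do.
        """
        env, armed, hits = 0.0, True, []
        for i, x in enumerate(samples):
            level = abs(x)
            coeff = 0.1 if level > env else 0.999  # fast rise, slow fall
            env = coeff * env + (1.0 - coeff) * level
            if armed and env > threshold:
                hits.append(i)       # fire the delay / effect here
                armed = False
            elif env < 0.5 * threshold:
                armed = True         # re-arm once the envelope falls
        return hits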

3.2.3. The Sound Check

As the concert schedule was often very tight, all aspects of the audio setup had to be executed in the very limited time allotted by the festivals. The setup and sound check procedures were therefore not left to chance.

First, the tour technical rider explicitly stated that there would be passive splitters on stage, in addition to the usual elements: instrument list, number of monitors needed, stage plot, etc. This made the venue's technical director aware of the complex setup in advance. Then, as soon as the ensemble got on stage, every cable was plugged in and checked. This usually took thirty minutes.

[iks] then performed a piece bluntly entitled 'The Ultimate Sound Check', in which every acoustic instrument played a solo over a cyclic groove provided by the rhythm section: four bars at the softest possible dynamic the ensemble can play, and four at the loudest. This was done for a simple reason: [iks]'s music is very dynamic, and its musicians control their dynamic range. Performing this test helped the engineer grasp at a glance the dynamics of the ensemble, and allowed the musicians to make sure that the monitoring system was well balanced on stage. Moreover, it allowed us to make sure the sound engineer did not put a noise gate on the snare drum, or over-compress the bass: [iks] claims the right to play its full dynamic range!

Once this was done, there were usually five to ten minutes left to test the electronics, in two steps. In the first step, a process was applied to the saxophone, to check the monitoring balance on stage between the instrument and its processing. A musician from the band then stood in the audience, to confirm that this balance was the same for the public. Finally, the sound engineers were asked not to touch the faders for the rest of the performance, the argument being that, if the balance is the same on stage as in the house, the performers should be able to create their own mix while performing as an ensemble.

The second step of the electronic balance was the adjustment of a software mix on Station A1, for a piece in which the ensemble is looped and then cross-faded with the live performers. For the illusion to work, the match between the loop and the live balance has to be close enough; this mix therefore had to be tweaked as part of the sound check.
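The core operation of that looped piece is a crossfade between the pre-recorded ensemble and the live inputs. Below is a minimal Python sketch of an equal-power crossfade, offered as our illustration of the principle rather than the actual Max/MSP patch.

    import math

    def equal_power_crossfade(loop_frame, live_frame, position):
        """Blend the looped ensemble with the live performers.

        'position' runs from 0.0 (loop only) to 1.0 (live only); the
        sine/cosine law keeps the perceived loudness roughly constant
        across the fade, which is what sustains the illusion.
        """
        gain_loop = math.cos(position * math.pi / 2.0)
        gain_live = math.sin(position * math.pi / 2.0)
        return gain_loop * loop_frame + gain_live * live_frame

    # halfway through, each layer sits at a gain of ~0.707 (-3 dB):
    print(equal_power_crossfade(1.0, 0.0, 0.5))  # -> 0.7071...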
4. POST-TOUR CONCLUSION

The preproduction work was fruitful: the concerts happened, even in these far-from-optimal touring conditions. Three main conclusions can be drawn for major improvements, should such conditions arise again.

First, we truly grasped the importance of a dedicated sound engineer who shares the ensemble's musical concerns. [iks] used each venue's engineer this time, with mixed results. Bringing a sound engineer on tour would also save the time spent explaining the technical setup and its peculiar signal routing at each venue.

Second, portability could be improved on Station B. Transferring the ASR-10 station to software has one major disadvantage: the performer would lose his fluency on the mastered interface. But this is nothing that several hours of rehearsal with the new instrument could not compensate for, and it has another great advantage in addition to a smaller and simpler setup: by using generic, and therefore replaceable, devices, [iks] could leave a backup version of the virtual instruments on a web server, to be used in case of luggage loss.

The third conclusion is the need for a more redundant control setup for Stations A1 and A2. Since they are performed by guitar players, a process is sometimes left active in the heat of the moment, while the hands are busy playing the string instrument just when an adjustment is needed. This happened often enough to make us consider redundant controls assignable on the fly, either by network communication, by foot controllers, or both.

Such considerations are obviously not of the realm of the ideal world that the studio provides. But by sharing real-world, hands-on experience, we hope to raise awareness that, with an adventurous soul and a little compromise, it is possible to bring music relying heavily on technology everywhere.

5. ACKNOWLEDGMENTS

The authors would like to thank: André Ménard, Laurent Saulnier, and Johanne Bougie, of the Festival international de jazz de Montréal, for making this tour possible; Sean Craig and Stefan Schneider, the two other [iks] members, who, by their talent, patience, and recommendations, helped to improve the touring experience; and Caroline Traube, for her countless comments on early versions of this paper.

6. REFERENCES

[1] Boucher, N. "La sincérité du geste" in [iks] - le fil. Ora, Montréal, 2000, track 9.
[2] Gauthier, A. Une technique d'écriture pour la musique acousmatique, une approche éclectique à l'aide d'outils d'analyse et de procédés algorithmiques. Master's thesis, Université de Montréal, 2005.
[3] [iks], abstr/cncr. Ora, Montréal, 2003, 2 CDs.
[4] Lochhead, J. and Auner, J. (eds), Postmodern Music, Postmodern Thought. Routledge, New York, 2002.
[5] Casserley, L. "A Digital Signal Processing Instrument for Improvised Music". Published on http://www.lcasserley.co.uk/ as of 30/04/2007.
[6] Rebelo, P. "Haptic Sensation and Instrumental Transgression" in Contemporary Music Review, Vol. 25, No. 1/2, Feb/April 2006, pp. 27-35.
[7] Wessel, D. and Wright, M. "Problems and Prospects for Intimate Musical Control of Computers" in Computer Music Journal, Vol. 26, No. 3, Fall 2002, pp. 11-22.
[8] Casserley, L. "Plus Ça change: Journeys, Instruments and Networks, 1966-2000" in Leonardo Music Journal, Vol. 11, p. 43, 2001.
[9] [iks]'s official webpage - previous concerts, http://www.iksperience.com/, as of 30/04/2007.
[10] Tremblay, P. A. "Pragmatic Considerations in Mixed Music: a Case Study of La Rage" in Proc. of the ICMC 2006, New Orleans, 2006, p. 527.