Enmeshed: live in 3D fog~

Michael Clarke
Department of Music and Drama, University of Huddersfield, UK
firstname.lastname@example.org

Abstract

This paper describes the compositional and technical approaches taken by the author in the creation of 'Enmeshed', a work for tenor saxophone and MSP. It focuses especially on my first use of the fog~ algorithm in a live situation, on the strategies evolved in writing for a unique 3D sound space, and on the approach taken to the interaction between the saxophone and the computer. Most importantly, it describes how such technical concerns informed the creative ideas of the work, in particular the concept of 'distance'.

1 Introduction

Enmeshed was largely composed while I was on sabbatical at SARC, Queen's University, Belfast, at the kind invitation of its Director, Professor Michael Alcorn, in November and December 2004. It was written for the duo 1 a u t, Franziska Schroeder (saxophone) and Pedro Rebelo (computer), and designed for the unique three-dimensional auditorium of the Sonic Laboratory at SARC, where it was first performed during the 2005 Sonorities Festival.

All the sounds in the piece are produced live; nothing is pre-recorded. The saxophone is processed by MSP and becomes enmeshed in a web of its own transformations (no other extra-musical connotations are intended). One of my main goals in this work was to find a way of generating a range of textures and timbres from this one source and to be able to move between complex, multi-layered textures and simpler sonorities. Musically the work is shaped by changes in 'distance' on various levels: for example, in terms of timbre, texture, time, space and pitch.

2 Live fog~ and timbral distance

I have worked for many years with FOF (Rodet 1984) and FOG synthesis, developing the algorithm in collaboration with Xavier Rodet and working with it creatively (Clarke 1996).
Recent developments of the algorithm in the context of MSP have been designed to facilitate real-time work (Clarke and Rodet 2003), and Enmeshed represents my first use of the fog~ object in MSP in a live situation. This presented new challenges but also new opportunities in the way the algorithm could be used, both timbrally and texturally and as a means of structuring the work.

FOG, like FOF, is a granular method of synthesis/processing. The precise envelopes and timings of grains mean that it can be controlled very precisely in terms of timbre (the PSOLA option, not employed here, takes this even further) but can also make smooth and continuous transformations between timbres and granular textures, rather as Stockhausen did using a comparable analogue technique in Kontakte (Stockhausen 1962; Clarke 1998). FOG is distinguished from FOF in that its grains are derived from a recorded buffer rather than a synthesised waveform, in this case from a buffer that is being recorded live. In the context of Enmeshed, fog~ enabled me to process the saxophone sounds and, by varying the parameters, move smoothly from timbral transformations close to the original right through to complex granular textures. In terms of sonority and texture, therefore, I could shape the music through varying timbral distance, moving between the original saxophone and varying degrees of transformation.

3 Circular buffer and temporal distance

In non-live use, fog~ usually reads data from a buffer that has been pre-loaded from a sound file. In the live context of Enmeshed I decided to use a 30" circular buffer which is continually updated by the live performance. The read pointer of the fog~ tracks the cycling write pointer of the buffer at a distance which can be varied from almost nothing to a maximum of 30". In effect, therefore, the buffer is also acting as a variable delay line whose output is subjected to FOG processing.
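The circular-buffer mechanism can be sketched in a few lines of code. This is a hypothetical illustration, not the author's patch: the buffer size, sample rate and interface are assumptions, but the core idea is the same, with a read pointer trailing the live write pointer by a variable number of samples.

```python
SR = 44100                      # sample rate (assumed)
BUFFER_SECONDS = 30             # the 30" buffer described above
SIZE = SR * BUFFER_SECONDS

class CircularDelay:
    """A circular buffer acting as a variable delay line."""

    def __init__(self, size):
        self.buf = [0.0] * size
        self.size = size
        self.write = 0          # cycling write pointer

    def process(self, sample, delay_samples):
        """Record the live input, then read from `delay_samples` behind
        the write pointer (wrapping around the buffer end)."""
        self.buf[self.write] = sample
        read = (self.write - delay_samples) % self.size
        out = self.buf[read]
        self.write = (self.write + 1) % self.size
        return out
```

In the piece, `delay_samples` would vary from almost nothing up to the full buffer length, and the fog~ grains would be drawn from the region around the read position rather than output directly.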
This is another level on which 'distance' can be varied, this time temporal distance. Structurally this also plays an important role in forming the large-scale shape of the work. At two key moments in the work the delays reduce rapidly to (almost) zero. Time collapses in on itself; everything is drawn together and concentrated at a single point in time before dispersing again. A technical necessity (the circular buffer
for live use of FOG) has therefore also become an important creative tool.

The longer delays (15" is typical) are also important in creating the multi-layered textures I was aiming to realise. In a work made entirely from transforming live input there is inevitably a sense in which everything is canonic: everything is in some way a delayed transformation of the original. In some works I have deliberately played with this feature, but in Enmeshed I wanted to be able to move at times from this to (an illusion of) multiple, simultaneously occurring, independent parts. This is achieved by layering different transformations, sometimes radically changed from the original source (e.g. through fog~ granularisation), each metamorphosis delayed by a different amount from the original source. In the central section of the work, the saxophone cycles through a number of different materials in an almost ritualistic fashion, and at any time it may be surrounded by several different transformations deriving from different passages of source material. But again this is variable: the textural 'distance', for example between the solo sax which opens the work unaccompanied and the more complex multi-layered textures, changes over time and is another key feature in the shaping of the music.

4 The Sonic Laboratory and 3D spatial distance

Composing a work for the Sonic Laboratory at SARC presented a unique opportunity to surround the audience with sound. There are 40 loudspeakers in the Lab, in five layers of eight. Two of these layers are beneath the audience, who sit on a metal grid that allows sound to permeate the floor. Much of my time in Belfast was spent working out how best to use this space, both technically and creatively. I was fortunate to have extensive access to the Lab and so was able to work for long periods in the space itself.
I experimented with many different algorithms and control strategies for using this resource. Some of my thinking developed from my earlier work with spatialisation (Clarke 1999), a key common factor being that I wanted the spatial element to be integrated with the creative structure, not simply a decorative afterthought.

In the end I used 24 loudspeakers, each an independent channel from the computer. These were arranged in three layers of eight: a very high circle of eight above the audience, eight at normal height and eight beneath the audience (see Figure 1). For software I eventually reverted to something I had used previously, the IRCAM SPAT software. This seemed to have various advantages for what I needed over the other approaches I investigated, including the quality of sound (including the reverb), the flexibility of the algorithm (once you learn how to adapt it), and the perceptually oriented user interface (which could also be adapted to be controlled remotely by parameters). However, the maximum format for SPAT at this time was eight channels at one level. I therefore added a height control so that the sound could be moved vertically between the three layers as well as horizontally.

In fact it was necessary to use multiple SPATs for Enmeshed. The idea of creating multi-layered textures in which spatial positioning and distance would be an independent feature of each layer meant that five SPATs were needed: one for the original sax, one for the fog~ output and three for the gizmo~ processing (see below). Each could be used independently in terms of horizontal and vertical position and in terms of distance and reverberation parameters (Fig. 1). In this way complex textures could be given depth, the different layers varying their spatial position independently just as they could adjust their distance in terms of pitch, timbre and texture.
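The height control added to SPAT can be sketched as follows. This is a minimal illustration under assumptions: a simple linear crossfade law and a normalised height parameter are my inventions; the actual panning patch is not documented here.

```python
def vertical_gains(height):
    """Crossfade one SPAT output between the three rings of eight
    speakers. height in [0, 1]: 0 = floor ring, 0.5 = ear level,
    1 = overhead ring. Returns (low, mid, high) gain factors."""
    height = max(0.0, min(1.0, height))
    if height <= 0.5:
        t = height / 0.5          # crossfade low -> mid
        return (1.0 - t, t, 0.0)
    t = (height - 0.5) / 0.5      # crossfade mid -> high
    return (0.0, 1.0 - t, t)
```

Because the vertical law lives in one small function (in the piece, a separate patch), swapping in a two-layer or single-layer version adapts the work to other venues, as described below.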
At times a sense of spatial movement is important, but for the most part it is the perception of different distances and positions for different parts of the texture that is most significant in this work. Movement is largely about changing these relationships rather than an end in itself.

The fog~ algorithm has its own inbuilt spatialisation in 2, 4 or 8 channels. This is significant because the spatial position is determined at the start of a grain, and that grain holds its position throughout its duration even if the parameters change and other new, overlapping grains take up different positions. This gives a richer granular texture than simply spatialising the fog~ output externally (when all the active grains would be in the same position at any one moment). Four-channel fog~ spatialisation was used in the original version (because more than one computer was used, see below, and the number of channels between machines was limited), and a SPAT was used to add reverberation.

Adaptability was another important factor in deciding how to approach the spatialisation. Although written for SARC, it was of course desirable to produce a work which could be performed elsewhere, in perhaps less generous conditions. The sophisticated design of the SPAT is such that the number of output channels (along with various other parameters) can be set using an argument to the main object, which is then automatically used in setting up the subcomponents. This makes it relatively easy to move from, for example, 8 to 6 to 4 to 2 output channels on a horizontal level. My vertical panning algorithm was also constructed as a separate patch, enabling this too to be adapted to different situations. By replacing the patch with a version adapted for two vertical layers of speakers, or indeed only one, it is possible, without great difficulty, to perform the piece with various configurations of speakers.
Ideally the minimum should be eight speakers in a cuboid arrangement, giving some sense of both horizontal and vertical position, even if not with the resolution of the original 24-channel set-up. (A stereo CD version of the studio recording has been made for reference purposes, but this clearly lacks much of the spatial definition and, as a result, some of the textural richness is lost.)
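The grain-level spatialisation described above, where each grain latches its output channel at onset and holds it, can be illustrated with a small simulation. This is a hypothetical sketch, not fog~'s implementation: the channel quantisation and one-grain-per-step scheduling are assumptions made for clarity.

```python
class Grain:
    def __init__(self, channel, duration):
        self.channel = channel    # fixed at onset, never re-panned
        self.remaining = duration # steps left to sound

def run(pan_per_step, grain_dur=3):
    """Start one grain per step at the current pan position (quantised
    to a channel 0-3) and report which channels sound at each step.
    Because old grains keep their onset channel while the pan moves on,
    several channels are active at once."""
    grains, active = [], []
    for pan in pan_per_step:
        grains.append(Grain(channel=int(pan * 4) % 4, duration=grain_dur))
        grains = [g for g in grains if g.remaining > 0]
        active.append(sorted({g.channel for g in grains}))
        for g in grains:
            g.remaining -= 1
    return active
```

Panning the summed output externally would instead move every sounding grain to the same position at each step, which is why the inbuilt per-grain spatialisation yields the richer texture.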
5 Gizmo~ and pitch distance

Gizmo~ is a sophisticated frequency-domain transposition object in MSP (Dudas 2002). Two gizmos were used in Enmeshed to transpose the saxophone as part of more complex processes. Variable delay options were also used, together with feedback loops, ring modulation and filtering. At one extreme, large transpositions combine with relatively short feedback times and high feedback levels, resulting in complex timbres in which frequencies are folded back from the Nyquist frequency. Less extreme transpositions and slightly longer delays result in cascading effects, and no transposition results in minimalist-like repeating patterns. Still longer delays, up to 30", result in layered textures as already discussed in relation to the fog~ circular buffer. The application of filtering and ring modulation to some of the four outputs (three direct outputs, a fourth feeding back into the fog~ input) further distances the processed sound from the saxophone. As with the fog~ processing, there is here again a concern with creating variable distances between the original performance and the processed sound: in terms of pitch, timbre and time, and even, it might be argued, in terms of stylistic variation.

6 Feedback and textural distance

In addition to the feedback loops built in to each process (with fog~, internal feedback of its output into the circular buffer was in fact not used in the end), there are options for each of the processes to feed the others. This feature was used sparingly, but at times it adds further to the complexity and depth of texture. A saxophone gesture may first be granulated using the fog~ and then pass through the gizmo~ algorithm, or vice versa. At each stage the long delays may be in operation, resulting in the original sound reappearing in various transformations over a period of time, alongside other material performed live or delayed by a different amount. Differing levels and spatial settings (position, reverb etc.)
for each of the transformations add further to the textural depth.

7 Interaction

Controlling the interaction between the saxophone and computer presented a number of challenges. The initial idea was that the computer processes would be shaped in performance using the computer itself and a MIDI fader box (Peavey PC1600). However, this raised a number of issues in relation to Enmeshed. The nature of the processes, especially the FOG processing, required very precise combinations of parameter values to achieve particular timbres or textures. At the same time the range of parameter values was wide. The resolution of MIDI data made this impossible to achieve in a straightforward way, and in the time available before the first performance it was not possible to work around these limitations. The compromise was to set up trajectories between key parameter settings and for these to be triggered at cues in the score. The trajectories had fixed durations so that, although there was flexibility in the timing of the start of each trajectory, once triggered, each ran its course within a fixed duration. In this way precision of multiple parameter values was achieved, but at the price of reduced interactivity, the role of the computer performer being diminished. (In fact, because the processing was split across two machines there were originally two sets of cues, one for the saxophone transformations, another for the spatialisation.) Another alternative is now being tried in order to give the second performer more ability to shape the sound whilst at the same time making the necessary precise settings possible. This is an extension of the trajectory concept. Each cue maps the precise settings of the parameters required for that trajectory onto the range of a slider (or another MIDI device). The performer then has a greater role in shaping the trajectory in interaction with the saxophonist.
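This extended trajectory scheme can be sketched as follows. The mapping logic follows the description above; the parameter names and values are invented for illustration and are not taken from the piece.

```python
def apply_cue(cue, slider_value):
    """Map a 7-bit MIDI slider (0-127) onto the per-parameter
    trajectories stored for one cue. The precise terminal settings sit
    exactly at the controller's extremes, while intermediate positions
    let the performer shape the motion between them."""
    t = max(0, min(127, slider_value)) / 127.0
    return {name: start + t * (end - start)
            for name, (start, end) in cue.items()}

# Invented example cue: each parameter gets a (start, end) pair.
cue_3 = {
    "grain_rate_hz": (20.0, 400.0),
    "delay_s": (15.0, 0.1),
    "transposition": (1.0, 2.5),
}
```

Note how this sidesteps the MIDI resolution problem: the 128 slider steps are spread over exactly the range each trajectory needs, rather than over each parameter's full range, so the critical values remain precise.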
The key values for the termini of the trajectories can therefore be found with ease at the extremes of the controller, but an interpretative element is possible in shaping the movement between these points. Different cues map different trajectories onto the controllers.

8 Further developments

This area of interaction is one element of the work where there is still room for further investigation and development. Although the work was essentially complete at the time of the first performance (May 2005), other elements have continued to evolve. For example, the spatial control and its adaptability to different situations have continued to be developed. Technically there have been developments too. The processes employed are very CPU-intensive. The fog~, gizmo~ and SPAT algorithms are all demanding of CPU, and their extensive use resulted in the need to use two G4 laptop computers running at their maximum capacity (plus a third computer simply controlling the digital desk used in the hall). It might be thought that this indulgence in CPU-intensive processes was excessive. But given the rapid development in computing power it was decided that, since the creative goals demanded it, it was worth continuing in this manner, in the likelihood that speeds would very shortly increase. This proved very quickly to be the case. Working on a studio recording in Autumn 2005 with a new dual-processor 2.5 GHz G5, it proved possible to combine all the processes on this single machine with CPU to spare (which of course I immediately used, for example by increasing granular densities at certain moments).
9 Summary

MSP and similar programs offer an enormous range of possibilities for sound generation and transformation. The author believes it is important that such resources be used not just as elaborate sound effects but as an integral part of the formation of the shape and structure of the music. This paper has endeavoured to show one way in which this can be done. Working purely with live transformations is only one of the options available, one that offers particular possibilities but also poses particular challenges. In Enmeshed an attempt has been made to create a variety of textures, from the simple to the complex counterpoint of different textural strands. More specifically, Enmeshed is a work shaped by changing 'distances' between the original saxophone and its multiple transformations: distance in terms of pitch, timbre, texture, space, etc. The musical structure derives from the contrapuntal interplay of these different qualities of distance, which themselves converge and diverge in the course of the work. Other areas of topical concern which this work has encountered are multi-channel spatialisation and interactive performance. In the end, however, for me as the composer, it is the creative potential enabled by these technological developments that is most important.

10 Acknowledgments

Thanks to all the staff at SARC for their help, advice and encouragement, especially Michael Alcorn and Chris Corrigan. Also to 1 a u t (Franziska Schroeder and Pedro Rebelo) for their commitment to this project.

References

Clarke, J. M. 1996. Composing at the Intersection of Time and Frequency. Organised Sound 1(2), 107-118.

Clarke, J. M. 1998. Extending Contacts: the Concept of Unity in Computer Music. Perspectives of New Music 36(1), 221-246.

Clarke, J. M. 1999. Composing with multi-channel spatialisation as an aspect of synthesis. In Proceedings of the International Computer Music Conference, pp. 17-19.
San Francisco: International Computer Music Association.

Clarke, J. M., and X. Rodet. 2003. Real-time FOF and FOG synthesis in MSP and its integration with PSOLA. In Proceedings of the International Computer Music Conference, pp. 287-290. San Francisco: International Computer Music Association.

Dudas, R. 2002. Spectral Envelope Correction for Real-Time Transposition: Proposal of a 'Floating-Formant' Method. In Proceedings of the International Computer Music Conference, pp. 126-129. San Francisco: International Computer Music Association.

Rodet, X. 1984. Time-domain formant-wave-function synthesis. Computer Music Journal 8(3): 15-31.

Stockhausen, K. 1962. The Concept of Unity in Electronic Music. Perspectives of New Music 1(1), 39-48.

[Figure 1. Spatial processing in Enmeshed]