FRACTURING THE ACOUSMATIC: MERGING IMPROVISATION WITH DISASSEMBLED ACOUSMATIC MUSIC

Dr. Adrian Moore
The University of Sheffield, Department of Music

ABSTRACT

This paper questions whether it is possible to integrate free (or controlled) improvisation using electroacoustic materials with pre-composed soundfiles in a 'live' performance. It contributes to the ongoing debate about the role of the performer and the advantages and disadvantages of working live. At the heart of this paper is a compositional drive that remains rooted within the acousmatic tradition. A working environment or instrument has been made in Max/MSP utilising two external controllers (a graphics tablet and a Behringer BCF2000 USB controller). This was developed over a year whilst working on three electroacoustic pieces. During this time, the instrument was used to afford a 'sense of performance' when experimenting with sounds. These sounds would then be recorded, heavily edited and mixed before being placed into a 'fixed' version. In a live setting, however, there would be little or no chance to refine, let alone rewind and go again. The clear advantage of working live is the experience of performing, yet there is a trade-off when setting live performance against fixed works. The resulting 'improvisations' no longer sound like fixed works made live, and it is the overwhelming perception that they should that perhaps remains a stumbling block in this research. However, these 'fracturing' processes have afforded an opportunity to re-assess the musical imperatives behind acousmatic music and to ask which rules may be bent and which may be broken.

1. CONTEXT

Emmerson [1] and Landy [2] have recently commented in depth upon the diversity of electroacoustic music that exists 'out there'. When I say 'out there' it is purely because I am 'in here'. The boundaries have never been more diffuse between those working commercially, those working at home and those working in academia. The internet has facilitated a free flow of music and ideas, with critical (and not so critical) comment and analysis obfuscating global understanding but affording individuality by allowing others to follow relatively independent paths. My point of reference is my departmental office at The University of Sheffield, and my background that of the European acousmatic 'tape music' tradition. Fortunately, within the latter, development has been slowly but carefully documented. Breaking free from this tradition whilst remaining true to its principles has allowed some rather searching questions to arise.

Emmerson comments upon the numerous simultaneous historical layers of live electronic music that do (or do not) successfully cohabit: 'At each juncture the previous "archaeological layer" has not peacefully given way to the next but has carried on, adapting and upgrading its technology, but not itself substantially changing aesthetic aims' (p. 116). He suggests that 'new generation' musicians move forward through a process of rejection or ignorance (and I read 'ignore' far more negatively than Emmerson, who, I am sure, assumes that that which is ignored has previously been acknowledged if not necessarily understood). What is most intriguing about this hybridized approach is its definite relocation to a 'chamber' venue and audience.
Performance contexts are now completely different: bars, clubs and galleries have replaced concert halls; two or four loudspeakers have replaced 24; I now see before me a group of people at the front of the stage, a group at the bar, a group talking to each other - the 'front-and-centre' stationary audience is no more. And I am on the stage. The rights and wrongs of my visual presence are perhaps secondary to the plain fact that without me activating the machine, nothing would happen.

2. AESTHETIC AND TECHNICAL DEVELOPMENT

2.1 Technical outline

My move towards the creation of a 'performance tool' arose out of a conscious need to make an intervention somewhere between the development of sounds using a diverse range of electroacoustic tools and the mixing process. This intervention eventually resulted in the creation of a Max/MSP patch which allowed soundfiles to be striped across a graphics tablet, facilitating a basic granulation across multiple soundfiles. Sounds developed using traditional techniques were grouped by type and sent to the graphics tablet. The pen, its XY position controlling sound playback and (at times) pitch transposition, its pressure and angle often controlling spatialisation across four channels, enabled a live sculpting of the sounds. The activity felt 'live', and physical gestures, whilst not actually imposing characteristics upon the sound itself, looked reactive to the sonic result. This is to say that landing upon something of extreme amplitude or dense spectra often resulted in a quick search for something more subdued, as if the energy of the sound flowed back through the pen.
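To make this mapping concrete, the following is a minimal sketch in Python (not the actual Max/MSP patch) of one plausible reading of the layout just described: the files lie as horizontal stripes selected by the pen's Y position, X scrubs through the chosen file, and pressure and pen angle drive a simple four-channel gain law. All names here (TabletEvent, pan4, map_event) are hypothetical.

```python
# Sketch: mapping tablet data to granular playback (hypothetical API,
# not the Max/MSP patch described above).
from dataclasses import dataclass

@dataclass
class TabletEvent:
    x: float         # 0..1, left to right
    y: float         # 0..1, bottom to top
    pressure: float  # 0..1, pen pressure
    tilt: float      # -1..1, pen angle

def pan4(tilt: float, pressure: float) -> list[float]:
    """Crude four-channel gain law; the four gains always sum to one."""
    front = 0.5 * (1.0 + tilt)   # tilt sweeps between the left/right pair
    rear = pressure              # pressure pushes energy towards the rear pair
    return [(1 - front) * (1 - rear), front * (1 - rear),
            (1 - front) * rear, front * rear]

def map_event(ev: TabletEvent, files: list[str]) -> dict:
    """Translate one pen event into granulation parameters."""
    stripe = min(int(ev.y * len(files)), len(files) - 1)  # Y picks the file stripe
    return {
        "file": files[stripe],
        "position": ev.x,        # X scrubs through the chosen file
        "gains": pan4(ev.tilt, ev.pressure),
    }
```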

Over the course of one year, the patch was developed and refined to include the following:

* A second controller, removing the need to use the mouse save for initialisation purposes
* Ring modulation and filtering across four channels
* Secondary granulation of soundfiles to create steady drones
* Simple recording of 'performed' material into buffers for looping
* Recording XY data of the pen position over time for future playback (see the first sketch below)
* Accessing preset fader settings
* Randomised automation of the pen movement
* Reverberation
* Playback of ready-made soundfiles

Though neither extensive nor necessarily innovative, there was sufficient depth in the patch to enable a significant amount of processing. I was presented with a device that required a certain degree of expertise and which, if used in a live context, might conceivably require a score. For my current musical project, entitled 'Transitory states', some 13 soundfiles are placed horizontally over the tablet. If one wants to work extensively with just one or two soundfiles, one can specify a range (see the second sketch below). Indeed, newly recorded buffers can be immediately transferred to the tablet and developed further if necessary. The development of soundfiles appropriate to 'live' manipulation became a major research question. What did these files consist of, and how does one work with them 'live'?

2.2 Soundfile preparation

Soundfiles for the graphics tablet were generally monophonic; there were no complicated mixes, with just a few of the textured sounds resulting from mixing. The majority of the soundfiles were montages (one sound appended to the end of another). The sounds ranged from the simple to the complex, with a very basic gesture/texture distinction used to classify types. Soundfiles for live playback were often developed from recorded improvisations on the 'tablet' files, thus enabling a closer marriage of timbre. However, their construction immediately highlighted the differences between ready-made files, sculpted and saved to disc, and sounds made 'on the fly'. The lack of precision inherent in the blank graphics tablet (the 'no frets' approach and no visual feedback) made attack-repeat gestures practically impossible. The graphics tablet is essentially monophonic; although grains of different lengths can overlap, creating larger textures, the direct synchronisation of two or more sounds is almost impossible as the pen can only be in one place at one time. This led to the primary research question of this project: can free or semi-structured improvisation with this material sit alongside ready-made acousmatic soundfiles in a musically coherent way when played live?

Soundfiles comprising ready-made mixes were developed along the lines of a piano accompaniment to a violin sonata: providing a foundation (subservient to the solo violin), developing materials ahead of the solo violin (leading), as well as acting in a solo capacity. I was particularly inspired by Neal Farwell's (University of Bristol) work for violin and triggered soundfiles entitled Chaconnes. The computer part of this piece comprised some 150 soundfiles and was essentially a deconstruction, both horizontal and vertical, of a fairly continuous computer accompaniment. The process succeeded in providing a more 'interactive' and 'commanding' performance.
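The first sketch, referenced in the feature list above, shows in outline how recording and replaying the pen's XY data might look. It is a hypothetical Python illustration, not the patch itself: timestamped positions are stored on the way in and replayed at the recorded times, driving the granulator exactly as live pen input would.

```python
# Sketch: recording timestamped pen positions for later playback
# (hypothetical; illustrates the 'Recording XY data' feature above).
import time

class GestureRecorder:
    def __init__(self):
        self.events = []      # list of (elapsed_time, x, y) tuples
        self.start = None

    def begin(self):
        self.start = time.monotonic()
        self.events.clear()

    def record(self, x: float, y: float):
        self.events.append((time.monotonic() - self.start, x, y))

    def playback(self, send):
        """Replay the captured gesture, calling send(x, y) at the stored times."""
        t0 = time.monotonic()
        for t, x, y in self.events:
            delay = t - (time.monotonic() - t0)
            if delay > 0:
                time.sleep(delay)
            send(x, y)
```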
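The second sketch illustrates the range restriction mentioned above for 'Transitory states'. It assumes, as before, that the files occupy horizontal stripes selected by normalised pen Y; stripe_for and the file names are hypothetical.

```python
# Sketch: restricting the tablet to a sub-range of the striped files
# (hypothetical; illustrates the 'specify a range' feature above).
def stripe_for(y, files, lo=0, hi=None):
    """Map normalised pen Y (0..1) onto files[lo:hi] only."""
    active = files[lo:hi if hi is not None else len(files)]
    index = min(int(y * len(active)), len(active) - 1)
    return active[index]

files = ["sound_%02d.wav" % n for n in range(13)]  # the 13 'Transitory states' files
print(stripe_for(0.9, files))        # full surface: a stripe near the top
print(stripe_for(0.9, files, 3, 5))  # whole surface re-mapped to files 3 and 4
```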
In my own work, the idea has not been to regulate strictly where soundfiles should enter and exit but to generate a series of generic files closely related to the 'tablet' files, affording processes such as 'gesture/hocket' and 'texture/environment'. These soundfiles, once started, may be stopped or may be allowed to run to completion. They may be played polyphonically. The performer may add to them, extending them horizontally or vertically, or allow them to run 'solo'.

2.3 Performance

Recent performances have followed a semi-structured plan which begins by defining the working area of the tablet with gestures from bottom-left to top-right, creating a particular sound object. Similar gestures, lingering at points along this trajectory, allow for quite audible but subtle development. At one point the performer is directed to move the pen continuously in a small area of the tablet, so creating an undulating texture. This is then sampled and looped, allowing the performer to explore other areas of the tablet (or to rest). Clearly, we must question whether sampling into a buffer during performance is worth the musical 'wait' or whether this too is something that should be precomposed. The ability to sample into a buffer is required if interesting improvisation is to be extended. However, if a compositional plan is to be adhered to, perhaps a greater palette of pre-composed materials is required (with different methods of selection and performance). At present, ready-mades are performed from button presses on the Behringer, with a relatively small selection of files from which to choose. It should be stressed that these files have been pre-composed and are not simply a 'pool'. Future development will investigate background sampling into buffers based upon current (and past) movements, so providing a number of real-time developments. The addition of automated learning and performance, whilst interesting in itself, would take this project into even more complicated territory, and would direct concerns more towards programming and less towards musical construction and improvisational freedom.
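To clarify the sample-and-loop step described above, here is a minimal, framework-agnostic Python sketch of recording blocks of the live output into a fixed buffer and then reading it back cyclically. The class and its interface are hypothetical; the real implementation lives inside the Max/MSP patch.

```python
# Sketch: sampling live output into a buffer and looping it
# (hypothetical; illustrates the sample-and-loop step above).
import numpy as np

class LoopBuffer:
    def __init__(self, seconds=8.0, sr=44100, channels=4):
        self.buf = np.zeros((int(seconds * sr), channels), dtype=np.float32)
        self.write_pos = 0
        self.read_pos = 0
        self.recording = False
        self.looping = False

    def start_recording(self):
        self.write_pos, self.recording, self.looping = 0, True, False

    def stop_recording(self):
        # Freeze the captured material and begin cycling over it.
        self.recording, self.looping = False, self.write_pos > 0

    def process(self, block: np.ndarray) -> np.ndarray:
        """Call once per audio block; returns the loop's contribution."""
        n = len(block)
        if self.recording:
            end = min(self.write_pos + n, len(self.buf))
            self.buf[self.write_pos:end] = block[:end - self.write_pos]
            self.write_pos = end
            if end == len(self.buf):      # buffer full: loop automatically
                self.stop_recording()
        if self.looping:
            idx = (self.read_pos + np.arange(n)) % self.write_pos
            self.read_pos = int((self.read_pos + n) % self.write_pos)
            return self.buf[idx]
        return np.zeros_like(block)
```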

The question remains, however, as to where one draws the line between live and composed.

3. THE LAPTOP PROBLEM

There has been considerable discussion of the problem of laptops (especially their screens, which are often seen as a barrier to communication). This problem will soon find a technological solution as laptops get smaller and our interfaces with them mutate. In this project, data on the laptop screen delivers numerous cues to the performer. Whilst something can be made visually of the performer working at the tablet/console, he may equally be seated within the audience as perched on stage. Composers increasingly appropriate third-party tools for musical purposes, and the use of the Wacom graphics tablet is no exception. Michael Alcorn's (Queen's University Belfast) work for string quartet and computers entitled Leave no Trace used a graphics tablet to select performing models which were then displayed to the quartet. In my research, the graphics tablet can be scaled vertically in real time, allowing any number of files to inhabit the surface. As soon as the pen nears the tablet, granulation begins; without this interaction there is no live sound. All outputs of the pen are translated to control data thanks to Olaf Matthes' external Max/MSP object (http://www.akustische-kunst.org/maxmsp/).

4. LINKS WITH DIFFUSION

This project does not probe the problem of sound diffusion. Sound is output in four channels and specific trajectories are rarely called for. Pen angle often dictates a field of motion, and four randomised ring modulators give the effect of automated spatialisation at lower frequencies, which can produce interesting effects. In previous work [3], I speculated upon the idea of Manageable Musical Units (MMUs): files containing pre-composed material which may be interrupted, adapted and extended. They may also be used as the basis for improvisation. My current working environment has eschewed the possibility of the ready-made playback soundfiles also existing on the tablet, and the ability to dictate diffusion trajectories more accurately. This latter area is most definitely where a second performer is required, and the idea of a real duo becomes a possibility.

5. WHY FRACTURE THE ACOUSMATIC?

Clearly one major question rests at the back of my mind. Why fracture the acousmatic and attempt to 're-create' certain historic acousmatic forms and structures in real time? The answer is perhaps more personal than musical and reflects a desire to reach new audiences working with sound in live situations, and to bring to these audiences some aspects of the acousmatic which has engaged my work for almost 20 years. Since first performing live with this system in December 2007, working with some 600 MB of 32-bit, 44.1 kHz four-channel pre-composed audio (for triggering) and 80 MB of 32-bit, 44.1 kHz stereo audio (for manipulation on the graphics tablet), the best results seem to arrive when improvisation becomes controlled and, given time, scored. The score dictates what files to lay over the tablet, roughly where to explore them on the tablet, and what improvisation might best lead to a number of choices of ready-made soundfile - whereupon adjustments can be made to the environment (selecting different files), fading in modulations or adding reverb. It may, in the future, be feasible to package the patch, soundfiles and score and ask others to use the environment.
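The notion of a score here is loose, but it can be made concrete as data. Below is a minimal Python sketch of how such a score might be represented: a sequence of cues naming the files to stripe over the tablet, a suggested region to explore, and the ready-made files the improvisation might lead to. Every name here (Cue, the fields, the file names) is hypothetical; the point is only that this score is closer to a configuration than to notation.

```python
# Sketch: a performance score as plain data (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class Cue:
    tablet_files: list                 # files striped over the tablet for this cue
    region: tuple                      # suggested area to explore: (x0, y0, x1, y1)
    ready_mades: list = field(default_factory=list)  # candidate triggered files
    note: str = ""                     # free-text performance direction

score = [
    Cue(["grain_a.wav", "grain_b.wav"], (0.0, 0.0, 1.0, 1.0),
        note="define the working area, bottom-left to top-right"),
    Cue(["texture_c.wav"], (0.2, 0.6, 0.4, 0.8),
        ready_mades=["env_mix_1.wav"],
        note="small undulating texture; sample and loop, then rest"),
]
```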
In any case, the finished 'product' is never the same, the elements of risk are exciting and quite often visible, and the process is more communicative visually. The listener should not expect an acousmatic work to be made on the fly. My misgivings concerning interpretations and timings when I play back a four-channel 'take' are the result of the acousmatic composer wishing to 'intervene' again. Do the obvious visual links made in live performance cover these timing slips, or do they instantiate a completely different performing and listening paradigm that leans towards newer traditions?

6. CONCLUSION

In an attempt to understand some of the significant changes in sound art practice, especially the live use of the laptop in (normally) smaller venues, I have created a performance environment in Max/MSP. The patch arose out of a need to develop sounds quickly for fixed-media works with a greater degree of human interaction (changing many parameters at once) and, in so doing, suggested the possibility of combining the playback of ready-made soundfiles with improvised passages: finding a middle ground between two extremes (the fixed and the free), fracturing the acousmatic in the positive sense of the metaphor to reveal hidden creative opportunities. There is, it is hoped, sufficient scope in the research questions presented to afford continued practical and theoretical research.

REFERENCES

[1] Emmerson, S. Living Electronic Music. Ashgate, Aldershot, 2007.

[2] Landy, L. Understanding the Art of Sound Organization. MIT Press, London, 2007.

[3] Moore, A. "Making choices in electroacoustic music: bringing a sense of play back into fixed media works." Online text, 2007. http://www.shef.ac.uk/content/1/c6/04/14/88/3piecestext.pdf [accessed 27.01.08]