THE WORLDSCAPE LAPTOP ORCHESTRA: CREATING LIVE, INTERACTIVE DIGITAL MUSIC FOR AN ENSEMBLE OF FIFTY PERFORMERS

Alex Harker, Research Student (Composition), Department of Music, University of York, YO10 5DD
Angie Atmadjaja, Research Student (Composition), Department of Music, University of York, YO10 5DD
Jethro Bagust, Electronica Artist, Machine Records
Dr Ambrose Field, Worldscape Project Director, Department of Music, University of York, YO10 5DD

ABSTRACT

This paper documents the musical works and technologies developed at the University of York for large-scale interaction between humans and computers within a live ensemble performance setting. Of particular concern to this study are the scalability and control of musical ideas and technologies within the Worldscape Laptop Orchestra (WLO).

1. INTRODUCTION

The Worldscape Laptop Orchestra is an ensemble of 50 performers, each using live laptop computing, which gave its first performance on 14 November 2007. The concert took place in the Sir Jack Lyons Concert Hall at the University of York, UK, within the cross-media event Worldscape [1], from which the ensemble takes its name. The event was generously supported by Apple Computer Inc. Specifically commissioned pieces from composers Alex Harker, Angie Atmadjaja, Jethro Bagust and Ambrose Field were presented in the performance. We wished to explore the following research questions in our creative work:

* When large numbers of human performers interact with digital systems, what new dynamics of control and interactivity emerge?
* What role does an individual performer have in a digital ensemble work of this scale?
* What are the design considerations for scalable performance technologies?
* Are there any technological barriers to implementing a large-ensemble performance completely wirelessly?
Custom performance software was developed by the team using C++, Max/MSP [2], Pure Data [3], Python [4] and Google Earth [5] interfaces to fit the specific musical needs of each work.

2. HISTORY, CONTEXT, AESTHETIC

Laptop orchestras are not a new concept. Trueman [6] documents ensembles using technology on a scale larger than that of the individual performer, and discusses the 15-player Princeton Laptop Orchestra (PLOrk). Wishing not to duplicate this work, we aimed specifically to create an opportunity to push the limits of larger-scale interactivity in an environment where digital performance need not necessarily subscribe to any notions of traditional instrumentality. Our goal was to perform powerful, auditorium-filling musical works over a multichannel PA system, with spatialisation techniques that could be accessed independently of the physical layout of the ensemble. For the WLO, instrumentally inspired one-laptop, one-sound-source models for digital performance would perhaps have been less appropriate. The WLO is an 'orchestra' only in that it consists of a body of musicians working together: it provides a canvas on which musical works of different aesthetics and approaches can co-exist. We use the ensemble cooperatively, at times giving each performer the ability to manipulate the total acoustic output. Like PLOrk [6], there are no constraints on the method of co-ordinating the musical output of each performer: our pieces use a mixture of computer-based inter-communication techniques and human conducting. The music described below does not aim to replicate the sound of acoustic instruments: the sound world we create is unashamedly electronic.

3. INFRASTRUCTURE CHALLENGES

3.1. Wi-Fi standards and protocol choice

Before any musical works could be developed, a workable technological infrastructure needed to be established. As the first performance of the Worldscape Laptop Orchestra was part of a larger, cross-media

show, performers were required to enter and leave the performance area quickly, carrying their technology with them. It was therefore essential that all communication between the laptops and the main PA be accomplished wirelessly. This posed some interesting challenges, as our initial tests indicated that commonly available Wi-Fi (802.11g) did not have sufficient bandwidth or robustness to cope with 50 sources in close proximity in our application. Interference from other 802.11g wireless networks outside of our control was found to be a problem at the performance venue, so the 5 GHz radio frequency band was adopted to avoid unwanted contention. The Wi-Fi standard finally selected for this project was the new 802.11n standard, offering the potential for 248 Mbit/s wireless data rates over a 70 m area. Although such speeds were not obtained in practice, this system proved reliable and adequate for our needs. It is worth noting that the choice of network protocol for a 50-performer wireless ensemble was vital to its success: TCP proved too inefficient, since the wireless medium itself drops a large number of packets [7], and TCP's retransmissions in response to these losses caused extensive bottlenecks inhibiting the real-time performance of the system. UDP was therefore adopted, and our communication algorithms were subsequently adjusted to cope with occasional packet loss by building a degree of redundancy into the data streams.

4. MUSIC

Having established a solid network infrastructure, new musical works could be designed for the ensemble. The composers presented here used the ensemble in distinctly different ways, with corresponding changes in the interpersonal musical relationships among the performers. The following pieces arise from the creative exploration of the research questions set out in the introduction. Here, their respective composers document the musical outcomes of their work.

4.1. Swarms, for 50 laptop performers and live trumpet, by Alex Harker

Swarms is an improvisation for solo trumpet and laptop orchestra that exploits the webcam as an intuitive and transparent interface for laptop performance. The musical starting point is the personal improvisational style of soloist Matthew Postle. Pre-recorded samples of his playing provide a rich sonic palette from which the laptop orchestra can construct their accompaniment.

Video tracking of the laptop performers' hand movements translates physical gestures into musical information. In an acoustic ensemble, each performer has direct and exclusive control over the sound they produce. In Swarms, networking provides a means of exerting macro-level global control over the orchestra from a central computer. This allows for the selection of the materials and possibilities available to the orchestra, as well as a less dictatorial type of control via a simple text-messaging system. The latter can be used to request certain types of playing and to make suggestions to the performers. The composer operates the main computer, 'steering' the improvisation and effecting large-scale structural changes, whilst the individual performers of the orchestra create micro-structures and gestural detail. This facilitates more sudden and differentiated sectional changes than might otherwise be possible, as well as reducing the complexity of the interface with which each performer is presented.

Swarms features dense, shifting textures of massed sounds, necessitating multiple participants producing similar sounds at any one time. However, it was also intended that textural and structural differentiation would be possible through the use of different sets of samples, registers or panning. The use of improvisation in an ensemble of this size allows complex textures to be created: one can use the musical decisions of each member of the orchestra to create musically meaningful gestures in relation to the soloist. The freedom given to the performers is also important in terms of thinking about what might differentiate a performance by a laptop orchestra from a pre-composed fixed-media piece.

As one computer may produce the same sonic results as one hundred computers, simply by playing back a single audio file, a large digital ensemble cannot be thought of as performing the same function as a large instrumental ensemble, which enables a certain level of textural density and complexity. The key feature of a large laptop orchestra is the high number of performers and interfaces, rather than the total number of CPUs. Giving the performers freedom to improvise in Swarms is a way to make use of their real-time musical decision-making abilities, rather than a way to use the size of the ensemble simply to perform a large amount of number-crunching.
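As a hypothetical sketch of the networking ideas described earlier, the fragment below shows, in Python, how a conductor instruction of the kind used by the text-messaging system might be broadcast over UDP with the redundancy approach of section 3.1: each message is sent several times with a sequence ID, and clients drop duplicate copies. The message format, repeat count and all names here are invented for illustration; the actual pieces were implemented in Max/MSP and Pure Data.

```python
import socket
import json

REPEATS = 3  # send every message several times: cheap redundancy over lossy UDP

def encode(msg_id, text):
    """Wrap a text instruction with a sequence ID so clients can drop duplicates."""
    return json.dumps({"id": msg_id, "text": text}).encode("utf-8")

def broadcast(sock, clients, msg_id, text):
    """Send one conductor instruction to every performer address, REPEATS times."""
    datagram = encode(msg_id, text)
    for _ in range(REPEATS):
        for addr in clients:
            sock.sendto(datagram, addr)

class Receiver:
    """Client side: deliver each message exactly once, ignoring redundant copies."""
    def __init__(self):
        self.seen = set()

    def deliver(self, datagram):
        msg = json.loads(datagram.decode("utf-8"))
        if msg["id"] in self.seen:
            return None  # duplicate copy of an already-delivered message
        self.seen.add(msg["id"])
        return msg["text"]
```

With this scheme, a message survives as long as any one of its copies arrives; ordering and guaranteed delivery are deliberately not provided, matching UDP's trade-off of latency over reliability.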

Swarms is written in Max/MSP/Jitter [2]. Three different patches perform the respective roles of client, server and synth, each computer in the setup being responsible for only one of these three functions. A fourth, trivial patch is used to display text messages for the solo trumpet player so that he is aware of the instructions sent to the orchestra. The client patch runs on each performer's laptop and performs the video tracking and analysis. This is based on the cv.jit package by Jean-Marc Pelletier of IAMAS [8]. Optical flow tracking is used to derive information about the direction and magnitude of movement, which is used either for continuous control of pitch/volume, or thresholded to produce triggers for sample playback. The patch also allows each player to select membership of one of five groups at any point during the piece: silent, sustained tones, loops, clouds, single gestures. Each group corresponds to a different set of parameters and samples selected on the server, and a different method of sample playback. Local audio synthesis is carried out by the client patch, providing each member of the orchestra with clear audio feedback of their individual contribution via headphones. A set of network troubleshooting tools for checking connectivity and unique ID assignment is included, as well as the text-message display for messages from the server computer. There is no internal link between the analysis and synthesis portions of the patch. The control data generated by the video tracking is sent directly over the network to the server computer, which returns instructions to the local audio synthesis patch. Thus, each client patch on its own does not represent a complete instrument, but rather can only function in the context of the network. Once connected to the server, each laptop becomes a dynamically shifting instrument, as the mapping of gestures to musical output does not remain static throughout any one performance.
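The two mapping modes described above, continuous control versus thresholded triggering of motion data, can be sketched as follows. This is a hypothetical Python illustration, not the actual cv.jit-based patch; the threshold and scaling values are invented.

```python
def continuous_control(flow_magnitude, max_magnitude=50.0):
    """Map an optical-flow magnitude onto a 0..1 control value (e.g. volume)."""
    return min(flow_magnitude / max_magnitude, 1.0)

class TriggerDetector:
    """Emit a single trigger when motion crosses a threshold (rising edge only),
    so one sweeping gesture fires one sample rather than one per video frame."""
    def __init__(self, threshold=20.0):
        self.threshold = threshold
        self.above = False

    def update(self, flow_magnitude):
        fire = flow_magnitude > self.threshold and not self.above
        self.above = flow_magnitude > self.threshold
        return fire  # True exactly once per gesture onset
```

The rising-edge test is the important detail: without it, a sustained hand movement would retrigger playback on every analysis frame.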
The server patch performs all mapping between the control data derived from video tracking and the instructions sent to the synthesis engines. Instructions generated from incoming data are sent both to the originating client patch and to the main synth patches. Global controls are provided for parameters such as sample choice, pitch, volume and panning, and to alter the type of mapping used between control data and musical parameters. The server patch can also be used to control each of the synth computers remotely. This minimises the need for the composer to operate multiple computers in performance. Once the synth patch has been started on each computer, all audio setup and sample loading can be carried out from the server. Network controls allow the composer to poll the network for any available synth or client patches and automatically connect them to the server, providing a unique ID for each one. The synth patch duplicates the synthesis engine found within the client patch, with the only addition being quadraphonic panning for the main sound projection system. Each synth computer is assigned ten of the client computers to synthesise, necessitating five synth computers in total to provide sufficient computing power. For each client, the playback engine comprises a straightforward variable-speed sample playback algorithm with multiple voices, together with a separate looped playback algorithm.

4.2. Hide and Seek, for 50 laptop performers, by Angie Atmadjaja and Jethro Bagust

Hide & Seek is a collaborative performance piece, created for a scalably large number of participants. All participants ('seekers') aim to collectively find a hidden target. Each seeker carries out this task using a graphical representation of a digital globe. The music for Hide & Seek is entirely synthesized from simple waveforms and filtered noise, in keeping with the traditions of early computer game music and chip-tune culture (see [9] for an example).
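The kind of raw material described above, simple waveforms and filtered noise, can be sketched in a few lines. This is a hypothetical illustration of the general chip-tune palette, not the piece's actual synthesis patches; the sample rate, filter coefficient and all function names are invented.

```python
import math
import random

SR = 44100  # sample rate in Hz

def square_wave(freq, n_samples, sr=SR):
    """Naive square wave: +1 for the first half of each cycle, -1 for the rest."""
    period = sr / freq
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(n_samples)]

def filtered_noise(n_samples, alpha=0.1, seed=0):
    """White noise through a one-pole low-pass: y[n] = y[n-1] + alpha*(x[n] - y[n-1]).
    Smaller alpha gives a duller, more 'wind-like' noise band."""
    rng = random.Random(seed)
    y, out = 0.0, []
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y += alpha * (x - y)
        out.append(y)
    return out
```

Because every sound is computed from a formula rather than read from disk, nothing beyond the patch itself needs to be distributed to the 50 machines, which is the portability point made in the system description below.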
Every seeker looks for their respective target by navigating across the globe and reducing the proximity between themselves and the target. A boundary is created around each target. Each time a seeker penetrates this boundary, the target jumps to a random new location. Each time the target changes its position, the boundary around it diminishes. As the game progresses, the act of seeking therefore requires increasing precision. The game ends at the point where the boundary precision reaches zero degrees and the target can no longer be reached. The game unfolds using a combination of two control methods: firstly, through a pre-composed sequence of events, and secondly, via a human conductor. With the first option, a counter keeps track of the number of jumps a target makes. When a predetermined count is reached, two independent targets are produced and a series of geographical boundary changes is triggered. These changes in turn trigger progressions within the musical states of the performance; a musical expression or element is introduced, creating contrast and increasing tension. Initially the seekers are chasing a single target. As the game nears the end of its time limit, a pre-composed final musical climax is triggered to notify everyone involved that the end is near. With the second option, the conductor controls both the timing and the sequence of events. He chooses when to split the target and which musical states are triggered. He can also choose to increase or decrease the difficulty of the game according to its progress, and to decide when the game should end.
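The game mechanics above (boundary penetration causing a random jump, the boundary shrinking on each jump, and a jump counter eventually splitting the target) can be sketched as state logic. This is a hypothetical Python model for illustration only; the piece's actual thresholds, shrink factor and coordinate handling are not specified in the text.

```python
import random

class Target:
    def __init__(self, boundary_deg=30.0, shrink=0.7, rng=None):
        self.rng = rng or random.Random(0)
        self.lat = self.rng.uniform(-90.0, 90.0)
        self.lon = self.rng.uniform(-180.0, 180.0)
        self.boundary = boundary_deg   # radius (degrees) a seeker must penetrate
        self.shrink = shrink           # boundary multiplier applied on each jump
        self.jumps = 0

    def seeker_probe(self, lat, lon):
        """If the seeker is inside the boundary, jump elsewhere and shrink it."""
        inside = (abs(lat - self.lat) < self.boundary
                  and abs(lon - self.lon) < self.boundary)
        if inside:
            self.lat = self.rng.uniform(-90.0, 90.0)
            self.lon = self.rng.uniform(-180.0, 180.0)
            self.boundary *= self.shrink  # seeking now needs more precision
            self.jumps += 1
        return inside

def maybe_split(target, jumps_to_split=5):
    """After a predetermined number of jumps, produce a second, independent target."""
    if target.jumps >= jumps_to_split:
        return Target(boundary_deg=target.boundary, shrink=target.shrink)
    return None
```

The geometric shrinking means the boundary approaches zero but the musical pacing is controlled by choosing the shrink factor and the jump count at which the split (and its associated musical state change) occurs.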

Throughout the performance, each seeker listens to their personal musical feedback. This contains auditory clues describing their distance from the objective, in a manner somewhat akin to military radar or sonar systems. Each seeker perceives a sound environment that correlates to their altitude, longitude and latitude positions. The distance between themselves and the target is determined through latitude and longitude values. As this distance decreases, two sine tones converge. Their altitude, on the other hand, is translated into various aural layers. As they zoom from outer space towards ground level, they travel through changing aural zones, just as one travels through the various layers of the atmosphere. This sound hint, consisting of two sine tones, is also only audible at certain heights; this limits the seekers' speed as they navigate. Visually and aurally, the targets themselves are perceptible only to the audience. They are denoted visually by cross-hairs. The game begins with one target, which in time splits into two. Periodically, each target emits an impulse, which propagates outwards in a radial fashion, interacting with seekers as it intersects them: in effect, a 360° sonar. When the widening sonar sphere touches a seeker, two events occur: a ping or beep is heard, positioned in relation to the target at the centre of the quadraphonic setup, and, at the same time, that seeker's coloured visual dot pulsates. The music generated for the audience is derived from a database of global statistics, and alters according to the country in which the hidden target is located. There is a preset number of target country positions. Each preset position carries statistical data; attributes such as surface area, population density and CO2 emissions are translated into clusters of frequencies composed of a myriad of sine tones. This acts as a bed of harmony supporting the aural spatial position of each seeker.
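The distance cue described above, two sine tones that converge as the seeker closes in, amounts to mapping a latitude/longitude distance onto a detuning amount. A hypothetical sketch follows; the base frequency, detuning range and distance metric are all invented, and the planar distance deliberately ignores spherical geometry for brevity.

```python
import math

BASE_HZ = 440.0        # frequency both tones meet at when distance reaches zero
MAX_DETUNE_HZ = 200.0  # detuning heard at the farthest possible distance
MAX_DIST_DEG = 180.0   # crude upper bound on angular distance

def angular_distance(lat1, lon1, lat2, lon2):
    """Naive planar distance in degrees (a sketch; ignores the sphere)."""
    return math.hypot(lat1 - lat2, lon1 - lon2)

def cue_frequencies(seeker, target):
    """Return the pair of sine frequencies one seeker hears: far apart when the
    target is distant, converging to a single unison tone at the target."""
    d = min(angular_distance(*seeker, *target), MAX_DIST_DEG)
    detune = MAX_DETUNE_HZ * (d / MAX_DIST_DEG)
    return BASE_HZ - detune / 2, BASE_HZ + detune / 2
```

Detuned pairs like this also beat audibly at a rate equal to their frequency difference, so proximity is heard both as pitch convergence and as a slowing beating pattern.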
Apart from the performers' laptops, three other systems were used for the performance, labelled 'Target', 'Quadraphonic' (ping) and 'Visuals' (see Figure 1 for details). The use of solely synthesised sound keeps the system portable, since no collection of samples needs to be distributed. The main laptop, 'Visuals', is the only system with access to every seeker. Each seeker sends their global position to 'Visuals' and receives the changing target positions from it. This is then relayed to the main laptop 'Quadraphonic', which in turn calculates the distances and informs the other systems when a boundary is penetrated. The new target positions are then relayed back to 'Quadraphonic' and 'Visuals' and distributed to all the seekers.

Figure 1. The relationship between computing and musical elements in Hide and Seek.

4.3. 1906, for 50 laptop performers with multi-screen surround video projection, by Ambrose Field

1906 is a live performance work with a newly realized five-screen surround video. This immersive video occupies the full field of frontal view, extending into the peripheral vision, and uses images sourced from Thomas Edison's earliest movies, footage of the San Francisco earthquake of 1906 [10], as initial material. The audio sound world is slow-moving, quiet and delicate. Manipulation of the computer performance interface requires a great degree of patience, practice and control from the performers, due to the extended timescales on which events are positioned. In 1906, human performers are deliberately required to perform actions that would be easy for a computer to undertake automatically: for example, to effect an extremely gradual and smoothly controlled filter change over 4 minutes (a task which would be trivial to control in software). The structural and technological design of 1906 is extremely simple.
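To make the contrast above concrete, the software version of the 4-minute filter sweep really is trivial: a hypothetical sketch of such automation follows, computing a linear cutoff ramp at an assumed control rate (all parameter values are invented).

```python
DURATION_S = 240.0       # 4 minutes
CONTROL_RATE_HZ = 100.0  # control-value updates per second

def cutoff_ramp(start_hz, end_hz, duration_s=DURATION_S, rate_hz=CONTROL_RATE_HZ):
    """Yield a smoothly interpolated filter cutoff for every control tick."""
    steps = int(duration_s * rate_hz)
    for i in range(steps + 1):
        t = i / steps  # normalised time, 0.0 .. 1.0
        yield start_hz + (end_hz - start_hz) * t
```

The compositional point of 1906 is precisely that this curve is instead performed by hand, so that the effort, concentration and slight imperfection of a human gesture become part of the work.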
Performers are given articulative choice (in the form of processing algorithms, dynamics and time positioning) over a database of 100 sounds each. Each performer contributes to the total sound of the ensemble by becoming a layer in a complex, human-controlled synthesizer. The aim of the piece is to create a continuous, simple and unified musical surface, underneath which detailed timbral articulations can take place. The video has been processed so that time appears to have been slowed down considerably (seconds become minutes) and the sense of physical space has been considerably magnified, adding a sense of hyper-reality

to minute details present in the film itself. The music is synchronized to the video by means of a human conductor. It is the specific job of the conductor in 1906 to make personal decisions as to when changes in the audio textures are required. Rather than simply executing commands from a computerized network scheduler, performers are required to interpret the instructions given in their own ways. The conductor does not engage in a one-way relationship with the music: instead, the conductor must listen to the results, articulate the performance, and also take an active part in designing the sound and structure of the final piece. This is very much like the live process of working with musical material in the studio, refining it, and being responsive to the possibilities generated. 1906 is realised solely in Pure Data [3]. Each performer's computer generates a local audio stream for musical feedback by means of the built-in laptop loudspeaker. Human user-interface data gathered from all performers is sent over Wi-Fi to an audio rendering server, which calculates the interactions between individual parts, renders the final result for the PA system, and provides performance feedback.

5. CONCLUSION

The Worldscape Laptop Orchestra performance provided an opportunity to creatively investigate the dynamics of large-scale digital performance within one physical venue. Three pieces were created, testing new interaction models and aesthetics (Hide and Seek), musical control systems for large-scale interaction (Swarms) and the role of human participants within a highly digital environment (1906). Important lessons were learned regarding the scalability of standard technologies (such as the limitations of 802.11g for ensemble performance). The music described here is a creative response to these challenges and opportunities.

The Worldscape Laptop Orchestra performing Hide and Seek by Atmadjaja and Bagust.

6. REFERENCES

[1] Field, A.
Worldscape Laptop Orchestra. Last checked Feb 2008.
[2] Cycling '74. Max/MSP/Jitter. Last checked Feb 2008.
[3] Puckette, M. Pure Data. Last checked Feb 2008.
[4] Python. Last checked Feb 2008.
[5] Google Earth. Last checked Feb 2008.
[6] Trueman, D. "Why a Laptop Orchestra?" Organised Sound 12(2): 171-179.
[7] Gaol and Sanghi. Improving TCP Performance over Wireless Links. Last checked Feb 2008.
[8] Pelletier, J.-M. cv.jit: Computer Vision for Max/MSP. Last checked Feb 2008.
[9] http://www.blipfestival.org/. Last checked Feb 2008.
[10] The Prelinger Archive. Last checked Feb 2008.