CONTROLLING AURAL AND VISUAL PARTICLE SYSTEMS THROUGH HUMAN MOVEMENT

Carlos Guedes
ESMAE-IPP, Porto / ESART-IPCB, Castelo Branco, Portugal
carlosguedes@mac.com

Kirk Woolford
LICA, Lancaster University, Lancaster, UK
k.woolford@lancaster.ac.uk

ABSTRACT

This paper describes the methods used to construct an interactive installation using human motion to animate both an aural and visual particle system. It outlines the rotoscoping, meta-motion processing, and visual particle system software, and then goes into a detailed explanation of the audio software developed for the project.

1. INTRODUCTION

Will.0.W1sp is an interactive dance installation using real-time particle systems to generate characters which move with human motion, but have no set form. The characters are composed of a combination of visual and aural particles moving smoothly around an installation environment containing a 6m wide curved screen and 3-channel audio. The installation uses a combination of video tracking and motion sensors to watch the visitors to its space. The particle dancers move to avoid visitors, or explode if approached too aggressively. This paper explains the development of the software driving the installation [1].

2. MOTION DATA

The installation uses a small database of motion sequences as the base animation data for the particle dancers. Initial test sequences were created with a Hypervision Reactor 3D motion capture system. However, the system used for the initial tests was not properly calibrated and required a great deal of "data cleaning", or manually correcting the data. Sometimes it was necessary to adjust 18 points for every frame of capture data. We decided this was just as time-consuming as manually rotoscoping the data, so we created an intelligent rotoscoping package called "Rotocope", shown in Fig. 1. Rotocope allows us to manually position markers over key frames in videotaped movement sequences, and uses curve-fitting and smoothing algorithms to position the markers between the key frames. A 30-second sequence can be roughly rotoscoped in 30 minutes and refined in 3-4 hours.

Just before saving each sequence, the Rotocope software scans the positions of all the points in each frame to calculate their 2-dimensional motion vectors and acceleration over 5 frames, or approximately 0.2 seconds. The software looks for changes in acceleration and enhances these. It then writes the motion vectors, together with the positions of each tracking point, into a motion data file which can be read by the installation software.

Figure 1. The "Rotocope" rotoscoping software, showing the reduction of videotaped motion sequences to 16 control points and motion vectors.
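The per-point pass that Rotocope performs before saving can be sketched roughly as follows. This is a minimal Python illustration rather than the actual Rotocope code: the array layout (frames x 16 points x 2 coordinates) and the acceleration-change threshold are assumptions made for the example.

import numpy as np

FPS = 25
WINDOW = 5  # 5 frames is approx. 0.2 s at 25 fps

def motion_vectors(points):
    """points: array of shape (n_frames, 16, 2) holding the XY positions
    of the 16 control points. Returns velocity and acceleration estimated
    over a 5-frame window."""
    dt = WINDOW / FPS
    # displacement over the 5-frame window, divided by its duration
    velocity = (points[WINDOW:] - points[:-WINDOW]) / dt
    # change of that velocity over the same window
    acceleration = (velocity[WINDOW:] - velocity[:-WINDOW]) / dt
    return velocity, acceleration

def flag_acceleration_changes(acceleration, threshold=1.5):
    """Mark frames where the summed acceleration magnitude jumps, so those
    frames can be enhanced before the motion data file is written.
    The threshold is an arbitrary placeholder."""
    mag = np.linalg.norm(acceleration, axis=2).sum(axis=1)
    return np.flatnonzero(np.abs(np.diff(mag)) > threshold)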
3. MOTION DRIVERS

The installation control software is given a list of all the motion sequences in its movement database. It reads in each file and creates a data storage object for each movement sequence. This program handles the control of the motion sequences, contains the clock routines to play back the captured data at 25 fps, and determines the branching from one sequence to another. The installation control software is multi-threaded and handles all the Open Sound Control (OSC) packets coming in from the tracking system and going out to the rendering programs.

Will.0.W1sp can use two forms of tracking. The original tracking system uses an overhead camera and video analysis. The newer system uses an array of passive infrared sensors and a micro-controller with a USB interface. A detailed description of these systems is beyond the scope of this paper; see [1] for a more detailed account. However, both systems calculate the position of the person or people moving closest to the screen and transmit this information via OSC to the main control program.

The particle dancer is able to move on its X axis over an area 8 times its width. Even though it uses 2D data, it can move in a single plane on the X axis, making it appear to grow and shrink as it moves closer to and further from the viewer.
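Neither tracking system is detailed here, but the shape of the OSC traffic can be illustrated with a short sketch. It uses the python-osc library, the /tracker/xy address and port 7400 purely as assumptions; the paper does not document the installation's actual OSC namespace or implementation language.

import time
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical address and port for the main control program.
client = SimpleUDPClient("127.0.0.1", 7400)

def send_tracked_position(x, y, fps=25):
    """Send the position of the visitor closest to the screen,
    normalized to 0.0-1.0, to the main control program."""
    client.send_message("/tracker/xy", [float(x), float(y)])
    time.sleep(1.0 / fps)  # crude 25 fps pacing, for the example only

# Example: a visitor walking slowly across the tracked area.
for i in range(100):
    send_tracked_position(i / 100.0, 0.5)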

The installation control software continually monitors information from the tracking system. It uses this data to set the position of the dancer on screen, as well as velocity and acceleration parameters for the particles. If a visitor to the installation moves too aggressively, the control software increases the acceleration of the particles so that the dancer appears to explode. If visitors move too close to the particle dancer, the control software increases the velocity of the particles, repositions the dancer away from the viewer, then slowly decreases the velocity back to normal. The result is that the dancer appears to scatter and reform in a different location. The position of all the tracking points and the current particle velocity and acceleration are sent 25 times per second to both the visual and aural particle renderers.

4. VISUAL PARTICLES

The first test for Will.0.W1sp used traditional particle emitters attached to each tracking point. This gave an uninteresting visual effect, so we experimented with forms of flocking algorithms. The system generated 20,000 particles and gave each one of them a tracking point to follow. Fig. 2 shows the effect of the particles flocking around individual points. As the project developed, more forms of targeted tracking were developed, whereby the individual particles would switch their targets once they reached a threshold distance. This gave the visual impression of particles flowing either from the center of the dancer's body out to its extremities, or from the ground up through the torso and out to the extremities.

Figure 2. Particle flocking: 20,000 particles following each of 16 movement targets.
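The paper does not give the exact steering and target-switching rules; the following NumPy sketch is one plausible reading of the behaviour described in this section. The steering gain, damping, switching threshold, and the rule of advancing to the next control point are all placeholders.

import numpy as np

N_PARTICLES = 20000
N_TARGETS = 16
SWITCH_DIST = 0.05  # threshold distance for switching targets (assumed)

rng = np.random.default_rng(0)
pos = rng.random((N_PARTICLES, 2))                     # particle positions
vel = np.zeros((N_PARTICLES, 2))                       # particle velocities
target_idx = rng.integers(0, N_TARGETS, N_PARTICLES)   # point each particle follows

def step(targets, accel_gain=4.0, damping=0.9, dt=1.0 / 25):
    """Advance the particle system one frame.
    targets: (16, 2) array of control-point positions for this frame."""
    global pos, vel, target_idx
    goal = targets[target_idx]                 # where each particle is headed
    offset = goal - pos
    dist = np.linalg.norm(offset, axis=1)
    # steer toward the assigned control point
    vel = damping * vel + accel_gain * offset * dt
    pos = pos + vel * dt
    # particles close enough to their target move on to the next point,
    # which produces the "flowing" effect described in the text
    arrived = dist < SWITCH_DIST
    target_idx[arrived] = (target_idx[arrived] + 1) % N_TARGETS

step(rng.random((N_TARGETS, 2)))  # one frame with random control points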
5. THE SOUND FOR WILL.0.W1SP

The sound for Will.0.W1sp consists of two distinct stereo sonic layers. One is a loop of a soundscape with crickets, owls, and other sounds which evoke a warm summer night. This is played over a minimalist melodic texture of harp-like sounds, giving a mysterious quality to the overall sonic environment. The other sonic layer is the sound of the particle dancer moving across the projection space. This second layer is generated directly from the particle flows explained in section 4.

6. AURAL PARTICLES

Finding appropriate mappings of the visual data into sound proved a distinct challenge. We wanted observers to immediately relate the movement of the particle dancer to the sound being produced. Moreover, we wanted the sound to be a materialization of the particles themselves, with swift quality changes corresponding to the continuous visual change produced by the particles' movements. The aural particles consist of a granular texture generated by a combination of the XY position of each of the 16 movement points, the overall quantity of motion of those points, the particle velocity, and the area of the bounding box of the particle dancer being rendered. This granular texture contains 32 granular streams of aural particles: the data collected from each motion target point generates two independent granular streams. The rendering of the aural particles is all done in Max/MSP using custom objects.

6.1. The data being transmitted and the mappings

Using Matt Wright's OSC externals [2], Max collects the packets containing the XY coordinates of each of the 16 movement targets, the speed of the visual particles, and the XY coordinates of the two extremes of the bounding box of the particle dancer. These three elements comprise the data that is subsequently mapped to sonic parameters in order to generate the aural particles.

6.2. Generating the aural particles

Inside the Max/MSP patch, the calculation of the overall quantity of motion of the 16 control points is performed by first calculating the change in position of each point and then summing all the changes (Fig. 3).

Figure 3. Calculation of the overall quantity of motion of the 16 movement target points (top: calculation of the movement of each point by subpatch QMo; bottom: sum of all of the movement measurements).

The value of the calculation is then sent to m.bandit (Max objects appear in bold typeface in the text), which calculates the rhythm of the movement of the 16 control points. m.bandit belongs to Carlos Guedes's m-object collection [3][4][5] and determines the fundamental frequency of a low-frequency time-varying signal. This can be used to generate musical rhythms in real time according to the frequency of the signal, and has been used to enable dancers to generate musical rhythmic structures in real time from their movement as captured by a video camera. The fundamental frequency value that is calculated is then converted to samples per second and multiplied by four quasi-proportional factors, which constitute the four different rhythmic layers (s1, s2, s3, s4) that generate the granular streams (Fig. 4).

Figure 4. Determining the rhythm of the movement target points and generating the four different rhythmic layers.
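At control rate, this stage can be sketched in Python as follows. This is only an approximation of the Max/MSP patch: the quantity-of-motion sum stands in for the QMo subpatch, the fundamental frequency is assumed to come from m.bandit, and the four "quasi-proportional" factors are placeholders rather than the values used in the installation.

import numpy as np

SR = 44100  # audio sample rate

def quantity_of_motion(prev_points, points):
    """Sum of the per-point displacement of the 16 control points
    between two successive frames."""
    return float(np.linalg.norm(points - prev_points, axis=1).sum())

def rhythmic_layers(fundamental_hz, factors=(1.0, 0.75, 0.5, 0.25)):
    """Given the fundamental frequency of the quantity-of-motion signal
    (what m.bandit estimates), return the burst period in samples for the
    four layers s1..s4, i.e. how many samples each noise value is held."""
    return [int(SR / (fundamental_hz * f)) for f in factors]

# Example: a 2 Hz movement rhythm gives periods of
# 22050, 29400, 44100 and 88200 samples for the four layers.
print(rhythmic_layers(2.0))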

The values are converted to samples per second so that each granular stream can be generated by a slownoise~ object connected to a resonant filter (reson~). slownoise~ is an MSP external created by Paul Berg which allows for downsampling white noise by repeating the same noise value (a random real number between -1.0 and 1.0). The number of occurrences (an integer value) of each noise value can be given as an argument to the object or sent to the object's inlet. Used this way, the object generates a more or less resonant tone burst (depending on the Q factor) at the frequency of the resonant filter, with random amplitude (Fig. 5).

Figure 5. Example of tone bursts articulated at the speed of 11025 samples per second (four bursts per second at 44100 Hz), with frequency of 743 Hz and random amplitude, using object slownoise~.

Each granular stream consists of tone bursts generated in proportion to the rhythm of the 16 movement target points. The frequency of the tone bursts is proportional to the Y-axis position of the point, and the stereo spatial position, or pan, of the stream is proportional to the X-axis position. The maximum velocity of the particles is mapped to the filter's Q factor in inverse relation: the higher the maximum velocity, the lower the Q factor. Figure 6 shows how the output of each granular stream is generated using this technique.

Figure 6. The generation of a granular stream of aural particles using data from the control point coordinates and the visual particle speed.

The output of the 32 streams is then used to modulate the amplitude of another granular texture, and this output is finally modulated by a band-limited random signal (rand~). The value corresponding to the variation of the area of the bounding box containing the particles is used to control the feedback of a delay connected to the output of the other granular texture (subpatch Grain), thus creating a "thickening" effect in the sonic texture when the area occupied by the visual particles is bigger (Fig. 7).

Figure 7. Final output of the aural particles, combining the output of the granular streams with the other granular texture.
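To clarify the signal flow of a single stream, here is a rough offline Python approximation (the installation itself runs this in Max/MSP): white noise is downsampled by holding each random value for a number of samples (the role of slownoise~), then passed through a resonant bandpass filter standing in for reson~, with the frequency taken from the point's Y position, the Q factor inversely related to the particle speed, and the pan derived from the X position. All scaling ranges are assumptions.

import numpy as np
from scipy.signal import lfilter

SR = 44100

def slow_noise(period_samples, n_samples, rng=np.random.default_rng()):
    """Hold each random value in [-1, 1] for period_samples samples,
    analogous to the slownoise~ external."""
    n_values = n_samples // period_samples + 1
    values = rng.uniform(-1.0, 1.0, n_values)
    return np.repeat(values, period_samples)[:n_samples]

def resonant_bandpass(signal, freq_hz, q):
    """Simple constant-peak-gain resonator used as a stand-in for reson~."""
    w0 = 2 * np.pi * freq_hz / SR
    r = 1 - w0 / (2 * q)                      # pole radius from Q (approximation)
    b = [(1 - r * r) / 2, 0.0, -(1 - r * r) / 2]
    a = [1.0, -2 * r * np.cos(w0), r * r]
    return lfilter(b, a, signal)

def granular_stream(point_x, point_y, particle_speed, period_samples, dur=1.0):
    """One of the 32 streams: Y -> filter frequency, speed -> lower Q,
    X -> stereo pan. The mapping ranges are assumed for the example."""
    freq = 200 + point_y * 1800                   # map Y (0-1) to 200-2000 Hz
    q = max(2.0, 40.0 / (1.0 + particle_speed))   # higher speed, lower Q
    mono = resonant_bandpass(slow_noise(period_samples, int(SR * dur)), freq, q)
    left, right = mono * (1 - point_x), mono * point_x   # simple linear pan from X
    return np.stack([left, right])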

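The area-to-feedback "thickening" mapping described above can likewise be sketched at control rate; the normalization constant and feedback ceiling below are guesses, not values taken from the installation.

def bbox_area(x_min, y_min, x_max, y_max):
    """Area of the particle dancer's bounding box."""
    return max(0.0, x_max - x_min) * max(0.0, y_max - y_min)

def delay_feedback(area_change, max_feedback=0.8, norm=0.5):
    """Map the variation of the bounding-box area to delay feedback:
    a larger spread of visual particles gives more feedback and a
    'thicker' texture. Clamped to keep the delay stable."""
    amount = min(abs(area_change) / norm, 1.0)
    return amount * max_feedback

# Example: the dancer's bounding box grows between two frames.
prev_area = bbox_area(0.40, 0.10, 0.60, 0.90)
curr_area = bbox_area(0.30, 0.05, 0.70, 0.95)
print(delay_feedback(curr_area - prev_area))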
This somewhat complex sonic network provides a rich and expressive texture of aural particles perfectly synchronized with the visual particles produced by the system. The combination of visual and aural particles gives a powerful audiovisual effect in the installation, enhanced by the fact that the particle dancer and its sound can move in a wide area (ca. 6 meters).

7. CONCLUSION

In this paper we described the methods used to construct an interactive installation using human motion to animate both an aural and a visual particle system in sync. During the presentation at the conference there will be a live demonstration of the installation, as well as a detailed explanation of the technique explained here for the generation of the aural particles. A short video clip of this installation can be viewed at http://www.bhaptic.net/Will0_proj.html.

8. ACKNOWLEDGMENTS

Will.0.W1sp was funded by a grant from the Amsterdams Fonds voor de Kunst and supported by the Lancaster Institute for the Contemporary Arts, Lancaster University.

9. REFERENCES

[1] Woolford, K. "Will.0.w1sp ('willo-wisp')", Performance Research 11(4), 2007, pp. 30-38.

[2] http://www.cnmat.berkeley.edu/OpenSoundControl/Max/

[3] Guedes, C. Mapping Movement to Musical Rhythm: A Study in Interactive Dance. Ph.D. Thesis, New York University, New York, NY, 2005.

[4] Guedes, C. "The m-objects: A small library for musical rhythm generation and musical tempo control from dance movement in real time", Proceedings of the International Computer Music Conference, International Computer Music Association, 2005, pp. 794-797.

[5] Guedes, C. "Extracting musically-relevant rhythmic information from dance movement by applying pitch-tracking techniques to a video signal", Proceedings of the Sound and Music Computing Conference SMC06, Marseille, France, 2006, pp. 25-33.