Movement-Activated Sound and Video Processing for Multimedia Dance/Theatre

Todd Winkler, Chair
Department of Music, Brown University
Todd_Winkler@Brown.edu
http://www.brown.edu/Departments/Music/faculty/winkler

Abstract

Motion-sensing technology enables dancers to control various computer processes that can generate or process sound while altering their own projected video images. In turn, the altered images and the sonic results influence choreographic decisions and kinesthetic response. This creates a dynamic three-way interaction that opens up new possibilities for exploring the body as an agent of technological transformation, where the physical and the virtual are merged.

1 Introduction

Recent technology makes audio and video processing accessible and malleable in live theatrical performance. Dancers, in particular, can take advantage of new forms of expression based on the relationship between performers and their video-altered image. These processes may be static, with dancers working within a fixed distorted world, or dynamic, with digital processing controlled by various types of sensors or by computer algorithms. The addition of interactive audio forces a dynamic confrontation between media, with movement, sound and image created interdependently during a performance.

This paper describes techniques and artistic concepts in two related but very different performances. Falling Up is an evening-length work incorporating dance and theatre with movement-controlled audio/video playback and processing. The solo show is a collaboration between Cindy Cummings (performance and choreography) and Todd Winkler (sound, video and programming), created for the 2001 Dublin Fringe Festival. This highly structured work combines tightly choreographed moments with improvisational sections. The second performance, discussed briefly, addresses some of the issues raised when a similar system is used in a freely improvisational setting.

2 Artistic Concepts

Falling Up explores themes of gravity, flying and many related metaphors. Inspired by inventors and pioneers, such as the first pilots, astronauts and digital explorers, we examine moments in the 20th century when technology enabled us to achieve something previously impossible and changed how we think forever. This overall concept is coordinated with our use of new technology, which, in its own small way, bursts a conceptual bubble about what is possible to control on stage with human movement. Video and audio processing are used to alter the performer's image, exploring the relationship between the dancer's physical movements and her digitally processed image. These techniques are also used to speculate on future "impossible" technologies, enabling the body to be transported, modified and projected. Ideas of invention and flying are integral aspects of both the technology and the choreography. Some of the themes explored in the work include the first

attempts at flight, travelling to the moon, space walks, stunt flying, time travel, black holes, and anti-gravity. Each theme shows a different type of interactive relationship between movement, video, and sound.

These concepts are illustrated in three ways. First, we use scientific texts to help explain principles of flight, time travel, and so on. Some of these texts are recent, and others date back to the 1890s, when engineers were working hard to invent new technologies that would enable flight. These texts appear in the performance spoken by a disembodied computer voice and via a theatrical character, a pilot, played by Cummings. Second, we employ archival footage of flying inventions and historical events, coupled with clips from science-fiction films that offer an explanation from popular culture, usually with humorous results. As it turns out, some of the early attempts at flight were just as imaginative--and unbelievable--as many of the far-fetched ideas found in science-fiction films. Finally, these ideas are illustrated through dance, using a new kinesthetic vocabulary refined and inspired by live video and sound processing. These techniques show the physical body displaced and distorted in time and space.

Cummings developed the choreography with the final projection in mind; the distorted and delayed video images are as much a suggestion for new types of movement as the movement is a means of creating abstracted images. A large video monitor, set on stage facing Cummings, serves as a mirror, allowing her to respond to her own processed image, a kind of mediated contact improvisation with herself, her movements altered based on the projection. Similarly, audio playback and audio processing provide an "aural mirror," with particular sounds suggesting specific types of movement. High-impact percussive sounds, for example, might elicit a more energetic and aggressive response than a delicate filter sweep on an ethereal sound.

The choreography is enhanced through the use of the Very Nervous System (VNS), a device designed by artist David Rokeby, which uses a video camera to report the on-stage location and speed of the performer to a computer. Movements are identified and mapped in software to play and process sounds (Max/MSP), or to alter a live video feed using real-time video processing software (NATO). The computer generates most of the material based on the performer's movements, with each performance being a unique realization of the program's many potential responses.

The live video processing requires a camera operator to follow the performer, with the video feed going to the computer. The camera operator plays an essential role as a member of the performing ensemble, controlling the size, panning and speed of the various shots that will be processed. The computer is configured with three video cards. One drives a monitor that the computer operator uses to control various aspects of the performance, and two drive separate video projections: one on a large black screen and the other on a smaller white screen. The black screen shows no edges and is used for the space walk and futuristic sections, where the body appears to be floating or travelling in space.

Sound processing and playback are controlled primarily by Max/MSP, which is used for movement-controlled sample playback and processing, as well as to play back short speeches and automated musical elements. Algorithmic processes generate infinite variations based on specific parameters controlled by movement data.
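The internal workings of VNS are not described in this paper; purely as a point of reference, the following sketch shows one common way a camera feed can be reduced to the two quantities the system is described as reporting, the performer's location and speed, by differencing successive frames. The frame format, threshold value, and function names are illustrative assumptions, not the actual VNS implementation.

# Illustrative frame-differencing sketch (an assumption, not the actual VNS code).
# Frames are assumed to arrive as 2-D numpy arrays of grayscale pixel values.
import numpy as np

def motion_report(prev_frame, frame, threshold=30):
    """Return a normalized (x, y) location of motion and an overall speed value."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moving = diff > threshold                # pixels that changed noticeably
    speed = moving.mean()                    # fraction of the image in motion
    if not moving.any():
        return None, 0.0                     # no detectable movement this frame
    ys, xs = np.nonzero(moving)
    location = (xs.mean() / frame.shape[1],  # 0..1 across the camera's view
                ys.mean() / frame.shape[0])
    return location, speed

Both values could then be smoothed and scaled before being passed on to the sound and video software.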
Moving into specific locations on stage can start or stop various musical functions, trigger specific sounds, or cue video events. Continuous data representing the dancer's overall speed is used in the audio realm for such things as timbre shaping via filters, sample playback speed, or delay. In the video realm, continuous data may be applied to image offset, color, luminance, or distortion. The spontaneity of the choreography and its complex relationship to the overall system promise that each performance will be unique.
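As a rough illustration of these mappings (a sketch only; in the piece itself the mappings live in Max/MSP patches), the fragment below routes a motion report into a discrete trigger when the dancer enters a named stage zone and scales her speed onto a filter cutoff and a playback rate. The zone boundaries, parameter ranges, and the send function are hypothetical.

# Hypothetical mapping layer: movement data in, control messages out.
# Zone boundaries and parameter ranges are invented for illustration only.
ZONES = {"stage_left": (0.0, 0.33), "center": (0.33, 0.66), "stage_right": (0.66, 1.0)}

def zone_of(x):
    """Map a normalized horizontal stage position (0..1) to a named zone."""
    for name, (lo, hi) in ZONES.items():
        if lo <= x < hi:
            return name
    return "stage_right"    # x == 1.0 falls through to the last zone

def map_movement(location, speed, send):
    """Translate one motion report into discrete triggers and continuous controls."""
    if location is not None:
        send("trigger_sample", zone_of(location[0]))   # location starts or cues events
    send("filter_cutoff_hz", 200 + speed * 8000)       # speed shapes timbre
    send("playback_rate", 0.5 + speed * 1.5)           # speed stretches or compresses samples

Here send simply stands in for whatever messaging the audio and video engines accept, such as MIDI or OSC.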

3 Relationships Between Movement, Sound and Video

Since the audience's attention is split between screen images and live performance, each section was planned to guide the viewer's gaze. In many sections, we worked hard to give the competing elements equal interest. In other sections, we purposefully favored the live performance or the resulting video image, using lighting, stage position, and the size of the video image. For example, in a section entitled "Pod," Cummings created the dance specifically for the resulting look of the projection. Her somewhat abstract form appears on screen as a large pupa hanging upside down and swinging from the top of a large circle. Low lighting on stage deemphasizes the live aspect. General speed and activity are used to trigger a collection of sampled insect sounds that are further transformed and processed, accompanied by the constant low rumbling sound of a spacecraft. In this sci-fi scene of humanoid incubation, a distorted figure finally emerges from the pupa and hangs suspended from the ceiling, only to get sucked down into the center of a "black hole," an effect caused by the flat video image being wrapped inside a three-dimensional cone. A similar wrapping of the image onto the inside of a three-dimensional cylinder results in a gravity-defying walk up the sides of a spinning tunnel.

In a section about time travel, the audience views the same performance simultaneously at ten different moments in time. The live soloist is projected as a composite grid made up of nine video panels, each delayed differently in time (a minimal sketch of such a frame-delay buffer appears below). The movement is designed as an ensemble piece, with the live soloist creating dynamic interactions between the nine projected versions of herself as they appear to move apart, come together, touch each other, or disappear out of the frame. Specific audio samples with similar time delays are triggered by location, while speed alters timbral characteristics via pitch shifting and flanging. A disembodied voice recites short phrases describing time travel taken from physics texts, such as, "The past, present and future are only an illusion, no matter how persistent."

The most prominent section focused on the dancer, "Stunt Flight," was inspired by a visual score of actual stunt-flight choreography. It opens with interactive video processing in which one parameter is controlled by speed, causing the performer's position on screen to rise with faster movements, while a loud, responsive engine-like sound dips and dives, with audio processing simulating the Doppler effect and engine thrust. The distorted voice of a control-tower operator, actually an announcer at a stunt-flight contest, narrates part of the dance. Later, using a different effect, Cummings fades away and finally disappears at fast speeds, only to rematerialize on a different part of the screen when she slows down (a simple effect caused by coupling speed to the amount of blurring).

Two sections examine early space flight. The show begins with a huge close-up of the performer's face. Grainy distortions and warping make her appear to be wearing a space helmet and give the image the look of old NASA transmissions. As she sits calmly in a chair, we listen to a speech, delivered by the computer, which describes in archaic scientific detail the impossibility of human flight (taken from an 1897 engineering text). Cummings' surprised response, with small facial movements registered via VNS, causes the text to stutter and repeat.
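The nine-panel grid referenced above can be thought of as reading from a ring buffer of recent frames, with each panel pulling an image from a different point in the buffer's past. The sketch below illustrates that idea only; the frame rate, buffer length, delay spacing, and tiling are assumptions and do not reflect the actual NATO patch used in the piece.

# Sketch of a time-delay video grid: nine copies of the live feed, each offset
# further into the past. Frame rate, delays, and layout are illustrative choices.
from collections import deque
import numpy as np

FPS = 30
MAX_DELAY_SEC = 9                               # longest panel delay in this sketch
frames = deque(maxlen=FPS * MAX_DELAY_SEC + 1)  # ring buffer of recent frames

def delayed_grid(new_frame, delays_sec=range(1, 10)):
    """Store the latest camera frame and return a 3x3 grid of delayed copies."""
    frames.append(new_frame)
    panels = []
    for d in delays_sec:                        # one delay per panel, 1..9 seconds here
        idx = max(0, len(frames) - 1 - d * FPS)
        panels.append(frames[idx])
    rows = [np.hstack(panels[i:i + 3]) for i in (0, 3, 6)]
    return np.vstack(rows)                      # composite image sent to the projector

A longer buffer trades memory for deeper delays; the nine one-second steps here are an arbitrary choice that matches the nine panels described above.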
In a humorous section called "Moon Tag," the image of the dancer is composited with historical footage of Neil Armstrong's first moonwalk, and the two end up dancing together on screen. The accompanying audio montage combines original NASA transmissions with sci-fi voiceovers from old films discussing travel to the moon.

Another section juxtaposes the most futuristic images with the most historic. A warped image of the dancer appears trapped inside a three-dimensional cube, spinning and floating freely in a blue sky. The cube's orientation and position in space are carefully choreographed. Suddenly, the cube shoots off into space, returning with a trapped video loop of the Wright brothers' first filmed flight. Low rumbling sounds and high ethereal sounds accompany these images, with filters continuously changing the sound in response to the dancer's speed and

proximity to the sensing camera. This section takes advantage of the 3-D features of OpenGL on the Macintosh, using NATO's OpenGL objects.

4 Free Improvisation

Some of the software used in Falling Up was rewritten as a system for free improvisation. The performance took place as part of the 2002 CalArts Festival of Electronic Music and Media, with dancer Francesca Penzani. The main additions to the software included programming a large number of pre-configured states for sound and video processing, a library of possible expressions that could serve instantaneously as starting points for improvisation. A new computer interface placed the computer operator on a more equal expressive footing with the dancer, requiring the operator to be engaged and decisive in each moment while avoiding the usual delays and non-spontaneous acts of programming. The new interface also added the ability to route the video feed to a large collection of processes, along with ways to alter various processing parameters for sound and video, either with movement data or by hand. This enabled a free exchange of responses between the dancer and the computer operator.

With only one day to rehearse before a public performance, the dancer quickly became intuitively connected to the system and its behavior. The most important feedback "mirroring" factor became the sound, rather than the processed image, since moving around on stage and trying to figure out the strange-looking results of that movement proved difficult and sometimes counter-intuitive. It takes time to adjust to physical movements producing abstract images. On the other hand, simple sound manipulation was very effective, giving the dancer the feeling of being physically involved with shaping and creating the soundtrack. For example, at one point during the performance, the speed of the dancer was associated with the cutoff frequency of a low-pass filter. With white noise as a source, faster gestures produced the immediately recognizable sound of blowing wind. Other sound associations gave the performer the feeling of "carving space" or producing percussive rhythms.

The overall result of the twenty-minute performance was surprisingly coherent. Although the structure and timing were awkward at times, these risks of free improvisation were offset by serendipitous moments of highly engaging human and media convergence.

5 Conclusion

These performances fuse aspects of the physical body with the extended possibilities of an electronic body rendered in video projections and sound. The fact that such systems can now run on a single computer suggests that this is just the beginning of what promises to be a fascinating future for media convergence. It points to a growing trend toward a type of digital virtuosity in which sound, image, and movement data are interpreted and manipulated freely in a dynamic performance.

6 References

Rokeby, David. Personal website, September 8, 2002. http://homepage.mac.com/davidrokeby/home.html

Winkler, Todd. "Making Motion Musical: Gesture Mapping Strategies for Interactive Computer Music." In Proceedings of the 1995 International Computer Music Conference. San Francisco: Computer Music Association, 1995.