Trans-Domain Mapping: A Real-time Interactive System for Motion Acquisition and Musical Mapping

Kia Ng 1,2, Sita Popat 3, Bee Ong 1, Ewan Stefani, Kris Popat 4, David Cooper 1

Interdisciplinary Centre for Scientific Research in Music (ICSRiM)
1 Department of Music, 2 School of Computer Studies, 3 Bretton Hall College, University of Leeds, Leeds LS2 9JT, UK
4 ULTRALAB, Anglia Polytechnic University, UK

Introduction

In this paper we present an ongoing research project on Trans-Domain Mapping (TDM): the mapping of one creative domain onto another. Within the TDM framework, meaningful activities in one domain are digitised, captured, tracked and mapped onto another domain. The system is designed to be a highly dynamic real-time performance tool; a full implementation of TDM would target the audio, visual, theatrical and interactive performance domains, providing users with an interactive and augmented audio-visual environment.

There has been considerable research into sensor-based gestural control for interactive performance, for example the AtoMIC sensor/MIDI interface from IRCAM (AtoMIC) and the DIEM Digital Dance system (Siegel and Jacobsen 1999: 276, 1998: 29; Siegel 1999: 56). In this project we attempt to minimise the constraints on movement introduced by body-mounted sensors by using small and non-intrusive devices. This allows freedom of movement for the participants, whether dancers, actors, musicians or the audience itself. Interest in this research area continues to grow; Camurri et al. (2000: 57) present a comprehensive survey of related projects, including STEIM's BigEye (BigEye) and Rokeby's Very Nervous System (Winkler 1998, 1997). This paper also discusses other applications of the system.

Music via Motion (MvM)

MvM is the first prototype under the TDM framework. The software takes input from a video camera and processes the video frames in real time.
MvM detects and tracks visual changes in the scene under inspection, and uses the recognised gestures to generate musical events through an extensible set of predefined mapping sub-modules. The prototype is portable, can be set up easily in a public environment, and is designed to be intuitive and user-friendly, minimising the time needed for familiarisation. MvM uses a differencing tracker to detect motion. The tracker remains sensitive under a range of lighting conditions and is convenient to use, since the user does not need to wear any sensors or markers.

Modules

MvM consists of five main modules:
1) A data acquisition module, responsible for communication with the imaging hardware.
2) A motion-tracking module, which detects visual changes. Currently MvM uses a differencing tracker, subtracting the current frame from the previous frame to detect changes between contiguous frames.
3) A music-mapping module, consisting of an extensible set of mapping sub-modules for translating detected visual changes into musical events.
4) A graphical user-interface module, which enables online configuration and control of the musical mapping sub-modules, and provides overall control of the scale type (tonality), note filters, and pitch and volume ranges.
5) Finally, an output module, responsible for the audio and graphical output.

The main window of the system offers a graphical user-interface for the configurable parameters, the choice of tracking algorithm, and other options. There is also a live video window, displaying the camera view, and a motion-tracker window, highlighting the areas with detected movement. The system is intended to be lightweight, portable and efficient. It is implemented in C++ with Microsoft Video for Windows (VFW) and has been successfully tested with various commercially available VFW-compatible frame-grabbers, including web cameras with parallel and USB interfaces.
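The frame-differencing step in module 2 can be sketched as follows. This is a minimal illustration, not the actual MvM implementation: a frame is reduced to a grey-level pixel grid, the current frame is subtracted from the previous one, pixels whose difference exceeds a threshold are marked as motion, and the centroid of the marked pixels gives a single position that a mapping stage can use.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A video frame as a row-major grid of 8-bit grey-level pixels
// (a simplified stand-in for a frame delivered by the capture hardware).
struct Frame {
    int width, height;
    std::vector<unsigned char> pixels;
};

// Differencing tracker: mark a pixel as "moving" when its grey level
// differs from the previous frame by more than `threshold`.
std::vector<bool> diffMask(const Frame& prev, const Frame& curr, int threshold) {
    std::vector<bool> mask(curr.pixels.size(), false);
    for (std::size_t i = 0; i < curr.pixels.size(); ++i)
        mask[i] = std::abs(int(curr.pixels[i]) - int(prev.pixels[i])) > threshold;
    return mask;
}

// Centroid (cx, cy) of the moving pixels; returns false when no motion
// was detected between the two frames.
bool motionCentroid(const std::vector<bool>& mask, int width,
                    double& cx, double& cy) {
    long sx = 0, sy = 0, n = 0;
    for (std::size_t i = 0; i < mask.size(); ++i)
        if (mask[i]) { sx += long(i) % width; sy += long(i) / width; ++n; }
    if (n == 0) return false;
    cx = double(sx) / n;
    cy = double(sy) / n;
    return true;
}
```

A production tracker would also suppress camera noise, for example by requiring a minimum number of changed pixels before treating a frame pair as containing motion.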
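A simplified version of the default musical mapping (module 3), described in the next section, can be sketched in the same spirit: horizontal position is quantised onto a scale to give pitch, vertical position gives loudness (image y grows downward, so it is inverted), and equal-sized vertical strips select the MIDI channel. The scale, base pitch and instrument numbers below are illustrative choices, not MvM's actual defaults.

```cpp
#include <algorithm>
#include <cassert>

// A note event in MIDI terms: channel, pitch (0-127) and velocity (0-127).
struct NoteEvent { int channel, pitch, velocity; };

// Map a motion centroid (cx, cy) in a width x height image to a note event.
// Illustrative defaults: one octave of C major starting at middle C.
NoteEvent mapCentroid(double cx, double cy, int width, int height,
                      int numRegions) {
    static const int major[7] = {0, 2, 4, 5, 7, 9, 11};  // scale degrees
    // Horizontal position -> scale degree ("virtual keyboard").
    int degree = std::min(6, int(cx * 7 / width));
    int pitch = 60 + major[degree];
    // Vertical position -> velocity; y = 0 is the top of the image,
    // so motion higher in the scene maps to louder output.
    int velocity = std::max(1, 127 - int(cy * 127 / height));
    // Equal-sized vertical strips -> one MIDI channel per active region.
    int channel = std::min(numRegions - 1, int(cx * numRegions / width));
    return NoteEvent{channel, pitch, velocity};
}

// Sketch of the distance-based sound selection mentioned under Future
// Development: a normalised distance from the camera (0 = near, 1 = far)
// picks a General MIDI program (0-based patch numbers).
int selectProgram(double distance) {
    return distance < 0.5 ? 68   // Oboe
                          : 47;  // Timpani
}
```

In the real system these parameters (scale type, pitch and volume ranges, region-to-channel assignment) are exposed through the graphical user-interface module rather than hard-coded.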
Default Musical Mapping

With this system, the user can be both audience and performer, controlling events in the visual and musical domains. We have developed several mapping functions, including a simple distance-to-MIDI-events mapping with many configurable parameters, such as scale type and pitch range. Parameters of motion such as proximity, trajectory, velocity and direction can also be tracked and mapped onto musical parameters such as pitch, velocity, timbre and duration.

By default, the mapping module translates horizontal movement into pitch. Imagine a virtual keyboard in front of a user: by waving a hand from left to right, the user plays a series of notes from a lower pitch to a higher pitch. The vertical axis controls volume, with the height at which activity is detected mapped onto loudness: motion at a higher position is translated into a louder sound, and motion at a lower position into a softer sound.

MvM also offers user-configurable active regions, where detected visual activity in certain areas can be mapped onto different MIDI channels. By default, the system divides the scene under inspection into a number of equal-sized regions and translates any detected visual change in each region onto a user-definable MIDI channel. Figure 1 illustrates a user controlling different sounds in two active regions (left and right hands in different regions).

Figure 1: Active regions

Currently, we are working on several visual feedback sub-systems for MvM to provide users with a graphical representation of what the system sees and detects, so that they can make any necessary adjustments when controlling and interacting with MvM. Future work includes background music generation using video data from surveillance cameras, and virtual instrument design and interfacing inside an augmented 3D virtual environment (Ng et al., 2000: 109; Sequeira et al., 1999: 1; Johnson et al., 1998: 866; Ng et al., 1998: 356).

Applications

There has been much interest in MvM as a tool to explore new directions, from a variety of disciplines. These include:
* Choreographers and dancers interested in exploring new choreographic possibilities inspired by MvM technology, and in real-time control of sound through their physical movement. Figure 2 shows some snapshots of dancers with the MvM system.
* Designers, employing interactivity to enhance design with added dimensions. In a later section, we briefly present an ongoing collaborative project called COIN (Coat of Invisible Notes), which uses colour and motion tracking of specially designed costumes to trigger sounds and musical phrases.
* Composers, who can explore new compositional frameworks offering real-time control with a collection of pre-composed short musical segments.
* Music therapists, who might use this motion-sensitive system to encourage movement and to provide interactivity and creative feedback.

Figure 2: Dance with MvM

Choreographing with MvM

From the viewpoint of the choreographer and dancer, MvM has a number of diverse advantages that affect different aspects of dance-making and performance. Traditionally, the main choreographic approaches to working with music or sound are either to begin with the music and allow it to dictate the movement, or to create the movement first and add the sound afterwards by way of accompaniment. Occasionally, but all too rarely, there is the opportunity to choreograph alongside a composer, so that dance and music develop together. By contrast, MvM permits the choreographer to work from the initial stages of creativity with the dancers and the sound together, and to have that relationship in place throughout

the rehearsal period. Ideally, a composer would be available for consultation, as the choreographer will not necessarily also be a musician, and assistance with the preparation of soundscapes and phrases is beneficial to the performance product.

However, MvM offers more than merely a different approach to the relationship between movement and sound. It also enables both choreographer and dancers to draw upon and indulge the subtlety of the movement itself. The most obvious characteristic for the audience is MvM's ability to stress any tiny movement that the dancer makes, by adding responsive sound to that movement. This gives the choreographer access to a subtlety that was previously only practical via video editing, where the camera guided the viewer's eye to the individual movement. MvM can also highlight moments of climax through the direct relationship between crescendos of movement and sound, and by tracking and stressing sudden changes in momentum with increases in volume.

An aspect that is not so obvious to the audience is the distillation of spatial and rhythmical awareness that MvM brings about for the choreographer and dancers. The choreographer must be acutely aware of the shapes and pathways that all the dancers' movements take, both visually and in terms of their effect upon the sound. The relationship between movement pathways is also paramount, as this can create multiple layering of the sound. For the dancers, it provides knowledge of where self and others are in relation to the group and the stage space, and of what movements are being performed, simply by listening. Working with dancers of varying experience and abilities, from students to professional performers, it has become apparent that MvM enhances their facility to work closely in unison or complementary movement, and to develop a cohesive performance to a high level regardless of their previous experience.
Dancers frequently comment on the enhancement of their awareness of others in the space, and of the shape of the performance as a whole, simply from the sounds created by the movement. In solo work, dancers also become acutely aware of their own movement in terms of stillness and dynamics. Many dancers realise that when they thought they were still, they were actually moving slightly; a fact that becomes suddenly obvious as MvM continues to track and respond to the smallest movement. Alongside this personal awareness, when performing alone with MvM the dancer discovers an unparalleled freedom to concentrate upon the movement itself. The dancer is no longer bound by predetermined musical or sound structures, but can indulge the movement to its full potential, with the confidence that the sound will follow him or her. The result is that the movement develops and is led by its own momentum and expressive qualities, which gives it a sense of authenticity for both performer and viewer. When the performer is working with others there is an added responsibility to them, but the same elements remain in effect, in response to the movement of the whole group.

Continuing to work with choreographers and dancers, we hope to find ways of using MvM that are both effective and subtle, so that the relationships between movement and sound do not become predictable. Possible solutions include more complex sound-mapping of the stage, and the triggering of complete sound phrases as well as individual notes. These will help to make the effects less directly discernible, whilst maintaining the fluidity of the relationship between movement and sound that provides the movement with its authenticity.

MvM as an Educational Tool

Music is a vast and complex subject to teach in any setting, and often becomes entrenched in the practical aspects of learning standard notation systems and the technical elements of playing a musical instrument.
There are, however, areas of music which, by the nature of the teaching situation and the tools available, are difficult to realise in educational terms. MvM may be used as a conduit into some of these areas. With MvM, there is the potential to give learners access to parameters of music that would not otherwise be so easily available to them. There is a clear place for a tool like MvM in the exploration of timbre and sound; the relationship between manipulation and sound; repetition and structuring; and finally the intricacies of control and expression for performance and creativity.

Coat of Invisible Notes (COIN)

This is an ongoing project exploring creative applications of the MvM technology with costume design. A particular feature of the costumes is that they will be reversible and can be split apart into sections, allowing users to re-assemble and re-configure them in order to design their own image and achieve different visual effects. These changes will in turn be detected by MvM and used to alter the character of the musical responses. Everyday objects will be used within the costumes for their visual, tactile and audio appeal, and items such as refillable sachets of pot-pourri in the pockets will give an aroma, extending the range of sensory stimulation. The design of the costumes will make them extremely tactile both inside and out, providing an additional sensory experience.

In tune with the costume design, which will make use of everyday objects, the composition of the music will feature sounds derived from these and other similar sources. The intention of the music is to bring familiar sounds into the performance, to encourage the audience

to perceive them differently in this artistic context. The relationship between music and sound will be explored, with the aim of expanding the audience's conception of music. The musical phrases will be composed and designed to be completely re-configurable (as with the costume), so that the performers and audience may re-arrange coherent musical structures from the musical materials that have been prepared. Phrases will contain melodic and rhythmic elements, sampled sounds, and electronically mutated versions of everyday sounds, allowing for humorous or ironic juxtapositions.

Future Development

The MvM system is currently being extended to track visual activities in more than one view, using multiple cameras. A distributed MvM system with a music server is under development, which would offer additional control and features to the interactive environment. With the default basic setting, a second camera, perpendicular to the main camera, could be used to acquire other feedback or responses. The music server collects the resultant streams from all the motion trackers and performs the musical mapping. With the main camera tracking and controlling pitch (horizontal axis) and volume (vertical axis), detected motion from other viewpoints could be used to control the selection of sound. For example, a user could play an oboe when near the main camera, but a set of timpani when located further away from it.

Conclusion

MvM brings together multiple creative domains to create an interactive and augmented environment, providing users with real-time control of musical sound through their physical movement. In front of the camera, users seem able to swim with the wave of musical sound and to pick invisible musical notes from the air.
With advances in science and technology, it is hoped that the TDM framework and systems like MvM can be fully realised, integrating arts and science to offer artistic and creative sensory experiences.

Acknowledgements

The authors would like to thank all the dancers from Bretton Hall College, and Eddie Copp (Head of Dance at Airedale Community Campus) and Claire Nicholson (freelance dancer/choreographer), for their time, support and enthusiasm in the project. This project receives sponsorship from Yorkshire Arts.

References

AtoMIC. IRCAM.
BigEye. STEIM.
Camurri, A., Hashimoto, S., Ricchetti, M., Ricci, A., Suzuki, K., Trocca, R., and Volpe, G. 2000. "EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems." Computer Music Journal, 24(1): 57-69.
Johnson, N., Galata, A., and Hogg, D. 1998. "The Acquisition and Use of Interaction Behaviour Models." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 866-871.
Ng, K.C., Sequeira, V., Bovisio, E., Johnson, N., Cooper, D., Gonçalves, J.G.M., and Hogg, D. 2000. "Playing on a Holo-stage: Towards the Interaction between Real and Virtual Performers." Digital Creativity, 11(2): 109-117.
Ng, K.C., Sequeira, V., Butterfield, S., Hogg, D.C., and Gonçalves, J.G.M. 1998. "An Integrated Multi-Sensory System for Photo-Realistic 3D Scene Reconstruction." Proceedings of the ISPRS International Symposium on Real-Time Imaging and Dynamic Analysis, Hakodate, Japan, pp. 356-363.
Sequeira, V., Ng, K.C., Wolfart, E., Gonçalves, J.G.M., and Hogg, D.C. 1999. "Automated Reconstruction of 3D Models from Real Environments." ISPRS Journal of Photogrammetry and Remote Sensing, 54(1): 1-22.
Siegel, W. 1999. "Two Compositions for Interactive Dance." Proceedings of the International Computer Music Conference, pp. 56-59.
Siegel, W., and Jacobsen, J. 1998. "The Challenges of Interactive Dance: An Overview and Case Study."
Computer Music Journal, 22(4): 29-43.
Siegel, W., and Jacobsen, J. 1999. "Composing for the Digital Dance Interface." Proceedings of the International Computer Music Conference, pp. 276-277.
Wanderley, M., and Battier, M., eds. 2000. Trends in Gestural Control of Music. Paris: IRCAM - Centre Pompidou.
Winkler, T. 1997. "Creating Interactive Dance with the Very Nervous System." Proceedings of the 1997 Connecticut College Symposium on Art and Technology. New London, Connecticut: Connecticut College.
Winkler, T. 1998. Composing Interactive Music: Techniques and Ideas Using Max. Cambridge, Massachusetts: MIT Press.