Dynamic Models for Musical Interaction in Virtual Reality
Peter Beyls
email@example.com

Abstract.
We suggest the expression of abstract musical ideas as configurations of, and relationships between, active objects living in 2D space. The proposed system consists of a behavioural component, a visual representation and an interaction channel. The approach allows for richer forms of interaction through deeper integration of controller (the Mattel Powerglove) and process. The user explores intricate emergent properties observed from simple, local interactions in a distributed model; an example of social computing in a virtual orchestra.

1. Introduction.
The present paper introduces behaviour and experience as an alternative to, respectively, knowledge and design in the context of interactive composing. It is inspired by the cognitive theory put forward in (Minsky, 1987) and the observation of interesting behaviour in complex dynamical systems (Beyls, 1991). We suggest a spatial model of simple, interacting agents which engage in social computation: agents try to adjust their position in space according to their attitude toward other agents, including the user. Agents express individual affinities towards the rest of society -- affinities that work as constraints. Conflicting affinities create rich and intriguing behaviour which serves as a motor for musical pattern generation. In addition, we explore virtual reality technology for interfacing with the simulated musical society. The system accommodates gestural activity from a human interactor and an environment with actors that react according to (1) their interpretation of particular external gestures and (2) the current internal context. This approach blurs the distinction between 'instrument' and 'algorithm'; instrumental, physical activity issued by the user and procedural musical evolution embedded in the system's rules become intimately intertwined.
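The affinity-driven position adjustment described above can be sketched in Python. This is a hypothetical reconstruction for illustration only: the class, the distance-based stress measure and all constants are assumptions, not the author's implementation.

```python
import math

# Hypothetical sketch of an affinity-driven actor; all names and constants
# are illustrative assumptions, not the paper's original implementation.
class Actor:
    def __init__(self, x, y, energy=1.0):
        self.x, self.y = x, y
        self.energy = energy          # dissipated while interacting
        self.active = True            # actors may be active or asleep
        self.affinity = {}            # other Actor -> preferred distance

    def stress_toward(self, other):
        # Local stress: deviation between actual and preferred distance.
        d = math.hypot(self.x - other.x, self.y - other.y)
        return abs(d - self.affinity.get(other, d))

    def step(self, society, rate=0.1):
        # Associate with the fellow actor contributing minimum stress and
        # move so as to reduce that stress; the changed position provides
        # a new context for the next round of interaction.
        if not self.active:
            return
        others = [a for a in society if a is not self]
        target = min(others, key=self.stress_toward)
        dx, dy = target.x - self.x, target.y - self.y
        d = math.hypot(dx, dy) or 1e-9
        desired = self.affinity.get(target, d)
        self.x += rate * (d - desired) * dx / d   # too far: approach
        self.y += rate * (d - desired) * dy / d   # too close: retreat
        self.energy -= 0.01
        if self.energy <= 0:
            self.active = False       # asleep until reactivated
```

With mutual, compatible affinities a pair of actors settles toward a point attractor; conflicting preferred distances produce the kind of oscillatory behaviour the model exploits for pattern generation.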
We design a virtual world which supports this delicate blend of instrumental gesture and musical intention. The key idea is that knowledge resides in the experience of the rich and responsive climate expressed by a collection of interacting agents, and in the appreciation of their advanced autonomy. We experience meaning in the actual conversational activity. This is in sharp contrast to the use of abstract symbolic representations, where no such link exists between syntax and semantic meaning. In addition, we aim at the discovery of emergent properties, i.e. coherent behaviour that grows spontaneously from (1) many interacting forces within a virtual world and (2) critical interference from the outside: physical disturbances that push the system out of equilibrium, forcing it to explore its degrees of freedom. Note that it would be tremendously difficult to implement emergent properties in a rule-based program. Other examples of distributed ways of musical thinking and auto-organization are seen in the work of the computer improvisation collective The Hub, in Polansky's "Simple Actions", where macro-structures emerge from many simple colliding software modules, and in Vaggione's loosely organized collections of interacting DSP sample manipulators.

2. The DNA of musical actors.
The current system is an extension of the spatial model documented in (Beyls, 1990). An actor is a software object with the following properties: an x,y position in 2D space, an energy level (actors dissipate energy while interacting), a flag indicating whether the actor is active or asleep, a list of affinities toward other agents and a
list of simple rules. Any actor computes local stress by building a priority list through evaluation of its current affinity toward all fellow agents. The actor under consideration then associates with the actor(s) contributing minimum stress, a process that translates to the actor moving to a new position in 2D space, which in turn provides a changed context for the interaction to start all over. The affinity evaluation is a two-way process: actors may express conflicting affinities towards each other, leading to impossible relationships and strange oscillatory behaviour. On the other hand, actors may agree, and the system will settle to a point attractor and install dynamic equilibrium. The objective of the society as a whole is to minimize internal stress, following the constraints specified by the agents' individual characters. Musical virtual reality is a platform for speculation and surprise, since any reality, inspired by reality itself or a product of the imagination, may be implemented.

3. Interfacing with a virtual orchestra.
We aim at the interactive exploration of points of relative stability by stepping inside this distributed model: physical interference takes place through manipulation of the graphic representation of the society of interacting agents, using the Mattel Powerglove as a gesture transducer. The glove provides three-dimensional x,y,z coordinate sensing, traces the rotation and panning of the hand and outputs 2-bit flex data from four fingers. However, it does not provide tactile feedback, in contrast to acoustic instrumental controllers. Three operation modes exist:

- The user is seen as yet another agent, albeit at the other side of the screen but linked to its own icon, like a participant engaged in social computation. The composer actually performs through tele-presence.
- The user may be viewed more like an explorer, like a biochemist pushing things under a microscope expecting something interesting and useful to happen. Gestures provide global or local energy, or change relationships between virtual musicians by pointing at and dragging specific actors.

- Integration of live musicians and virtual musicians: output of acoustic instruments (via real-time pitch-to-MIDI conversion) may give birth to new agents ready to interact with existing agents, including the ones created from previous input: the MIDI stream interacts with its own history. Pitch positions an agent in space; velocity determines its energy and influences its life cycle. In this mode, additional glove interaction is like behavioural conducting.

Mapping the animated agents to MIDI output may be of arbitrary complexity. Only when an agent interacts does it create MIDI output, on its private channel and according to its current stress, x/y position and internal energy. Agents reaching zero energy become inactive, causing a major change in context, until reactivated either by the user or by an internal clock. Seen from the instrumental control point of view, it seems extremely relevant to study force feedback in virtual musical instruments. Given a strong musical tradition of virtuoso tactile control, future research on linking physical gestures and musical intention/expression should prove useful for virtual reality as a whole.

Acknowledgements.
Thanks to Mark Trayle for strategic support, to AGE Inc., NY, for providing a serial interface for the Powerglove, and to Atari Corporation for hardware support.

References.
Minsky, Marvin: The Society of Mind. Basic Books, NY, 1987.
Beyls, Peter: Subsymbolic approaches to musical composition: a behavioural model. Proceedings of the ICMC, Glasgow, 1990.
Beyls, Peter: Chaos and creativity: the dynamic systems approach to musical composition. Leonardo Music Journal, Vol. 1, No. 1, 1991.
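As a closing illustration, one possible instance of the agent-to-MIDI mapping described in section 3 can be sketched as follows. This is a hypothetical mapping: the scaling constants, the pan controller and the stress-to-duration rule are assumptions made for illustration, not the mapping actually used in the system.

```python
# Hypothetical agent-to-MIDI mapping; all scaling choices are assumptions
# made for illustration, not the mapping actually used in the paper.
def agent_to_midi(channel, x, y, stress, energy, width=100.0, height=100.0):
    """Derive raw MIDI messages from the state of one interacting agent."""
    pitch = int(36 + (x / width) * 48) & 0x7F            # x position -> pitch
    velocity = int(max(1, min(127, energy * 127)))       # energy -> velocity
    pan = int((y / height) * 127) & 0x7F                 # y position -> stereo pan
    note_on = (0x90 | (channel & 0x0F), pitch, velocity) # note-on, private channel
    pan_cc = (0xB0 | (channel & 0x0F), 10, pan)          # controller 10 = pan
    duration = max(0.05, 1.0 - stress)                   # more stress, shorter notes
    return note_on, pan_cc, duration
```

Under these assumptions, an agent at the centre of a 100 x 100 world with half its energy left yields a middle-C note-on at moderate velocity; agents whose energy has reached zero would simply be skipped, matching the inactivity rule above.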