TOWARD A COMPLETE OPEN SOURCE SOFTWARE ENVIRONMENT FOR THE CREATION OF INTERACTIVE POLY-ARTISTIC WORKS

Stephane Donikian, IRISA, donikian@irisa.fr
Olivier Delerue, IRCAM, email@example.com
Tommaso Bianco, IRCAM, email@example.com
Gildas Clenet, IRISA/INRIA, Gildas.firstname.lastname@example.org

ABSTRACT

This paper builds on an analysis of recent creations in the field of interactive pluri-artistic pieces to bring out the difficulties encountered by artists and the gaps in the existing software components and technologies. To address this problem we present our project, ConceptMOVE, which considers these issues both at the low level (for instance, the communication protocols between software components) and at the high level of the overall functional description of the art piece. Our objective is to propose a unified paradigm for describing interactive art pieces, in order to simplify the work of the authors and to process part of the work automatically, so that not only can the authors concretise their ideas in their favourite software, but the communication between environments can also be handled automatically by the system.

1. INTRODUCTION

Contemporary artistic creation draws more and more on new technologies, and at the same time we are witnessing a decompartmentalization of the classic arts. Within the same performance, theatre can mix with dance and circus, live music with prerecorded or computer-generated soundtracks, and reality with virtuality. The main difference between traditional media and virtual reality is that in the first case the user is a spectator of the universe presented to him, while in the second case he is immersed inside this universe. All works known as electronic installations relate to this field.
In contemporary live performance, on the other hand, an additional dimension must be taken into account, because the spectator is not an interactor but the witness of an interaction between the real and virtual universes that are juxtaposed and enter into dialogue before his eyes. The capture of the user's actions and the multi-sensory restitution can be made genuinely rich by exploiting advanced technologies from the fields of virtual reality and video games. By contrast, the dialogue between the dedicated software packages is today very limited, due to the lack of rich, pluri-artistic and open languages to support the creation process. In addition, digital art is nourished by a dialogue of another kind, and not always a simple one, between artists and scientists on the design of the hardware and software tools that assist this type of artistic creation. After presenting related work, we draw conclusions on the gaps that exist today, in particular at the level of the dialogue between the various dedicated software packages, and on their implications for artistic creation. We then present our proposal to address some of these issues: the ConceptMOVE project.

2. RELATED WORK

In the field of dance, the capture and restitution of the dancers' movements have been used for a number of years to help the choreographer in the process of artistic creation. The software Life Forms, dedicated to the animation of human movements, was conceived in the laboratory of Tom Calvert [2], on bases inspired by Laban's movement notation [1]. Thecla Schiphorst [3] worked with the American choreographer Merce Cunningham on the first pieces created with the assistance of this tool: the choreographic sentences imagined by Cunningham were then proposed to the dancers, who interpreted sometimes completely unheard-of figures.
Today this tool is completely integrated into the artistic process of the choreographer, who extended its use in 1999 by coupling it with the creation of 3D virtual dancers projected on stage in a scenography by Paul Kaiser and Shelley Eshkar (Biped). Nicole and Norbert Corsino created an interactive 3D choreographic navigation, Topologies de l'instant n°7, containing clones of dancers in a scenography made up of five urban or desert worlds; the dance movements were acquired by motion capture and applied to the virtual dancers. PHASE is a multimodal system for the exploration of music, the result of a project carried out jointly by two research laboratories, IRCAM and CEA-LIST, and by two companies, ONDIM and HAPTION. The purpose of this project was to conceive and experiment with new multi-sensory forms of interaction for
the exploration and the control of sound and music. WAVES is an installation by Andrea Cera and Hervé Robbe that presents video sequences processed with EyesWeb and uses Max/MSP patches that generate sound materials. SCHLAG! is a creation of Roland Auzet, adapted from the book The Tin Drum by Günter Grass. Using various digital technologies, Roland Auzet stages a confrontation, in the form of an interactive dialogue, between modern technologies and the most traditional arts: circus, percussive music and puppets. The Virtual Museum of Contemporary Photography is a research piece proposed by Ahad Yari Rad. The spectator is immersed in a 3D virtual environment representing the interior of a museum, surrounded by suspended frames, each comprising a colour photograph in the foreground and a second, black-and-white photograph behind it. The user can move in this three-dimensional universe by making gestures, which are captured by a camera and then analysed by the EyesWeb software. Hand movements make it possible to erase the foreground photograph, gradually revealing the second one (see Figure 1).

A framework for virtual reality application design has to fulfil two different and contradictory design goals. First, it must offer a high-level point of view to the application designer; however, efficiency is also an important aspect of the implementation. Much work has been done in this area: the most notable efforts include MR Toolkit [4], VR Juggler, Maverick, CAVELib and CAVERNSoft G2 [5]. However, these toolkits and libraries fall short of providing developers with a standard software architecture or framework for building a virtual world. Such standard architectures have been researched, for instance in Aviary [6], VEOS [7] and VRML. They all focus on different problems, but all are concerned with a few fundamental questions: what kind of objects are the building blocks of the software, how does their state evolve, and how do they share data?
With GASP [8] and its successor OpenMASK [9], research focused on devising a framework where code reuse through modularity and parallel or distributed execution of the virtual world would be possible. OpenMASK is an open source middleware for virtual reality; one of its main objectives is to provide a common runtime and design framework for the creation of virtual reality applications.

Figure 1: Virtual Museum of Contemporary Photography

The environment of the museum is enriched with a sound creation adapted to the photographs and reacting to the actions of the user. This application combines EyesWeb for the user interaction, Max/MSP for the music composition and OpenMASK for the management of the immersive environment. Figure 2 shows the software architecture of the application: the gesture analysis of EyesWeb is sent to OpenMASK, which deduces from it the animation of the 3D scene and the adapted points of view. According to the behaviour of the user, OpenMASK sends to Max/MSP directives for the sound rendering.

Figure 2: Software architecture for the virtual museum: EyesWeb gesture analysis is sent over OSC to OpenMASK (stereoscopic visual rendering, spatialized sound rendering), which in turn drives Max/MSP sound processing over OSC

A common point of a number of hybrid works is the use of the OSC (Open Sound Control) protocol. This protocol, developed by Matt Wright, allows computers, synthesizers and other multimedia peripherals to communicate with one another. OSC functions as a client/server mechanism transmitting units of data, and it is integrated today in a number of software packages such as EyesWeb, Max/MSP, Pure Data and OpenMASK. Unfortunately, the source data that can be packed into an OSC message are very rudimentary: integer, real, string and time stamp. The application-level vocabulary therefore has to be redefined for each new artistic work.

3. THE CONCEPTMOVE PROJECT

An artist developing an interactive digital work is confronted with the status of the specta(c)tor in his work: what space of freedom does the work leave him, and what are the acceptable degrees of freedom in the structure of the work? In addition, such a work should no longer be thought of as an individual process of creation but as a collective one, and this dimension of collective creation is at the heart of our research project: how can an interactive pluri-cultural work be created collectively while allowing a dialogue between the individual creations? Whereas for traditional media such as cinema the process can be iterative (one composes the music on the images, or one films starting from the rhythm of a selected piece of music), an interactive digital work requires a contract between the various protagonists of the creation. That starts with the definition of the nature of the interactivity and of its repercussions on the various components of the work.

We have noticed the necessity of building a higher-level protocol between software components, as so far only integer or floating-point values are transmitted. This subject is the matter of the RIAM project entitled ConceptMOVE. The aim of this project is to model and develop a new meta-language allowing high-level communication between the different software packages involved in the creation and execution of interactive artistic installations.

4. THE META-LANGUAGE

Our proposal is to describe pluri-artistic and interactive art pieces in the most generic way, completely independently from the systems (software, programming languages...) that will be used to concretely realise the artists' ideas. An art piece is thus composed of a set of "elements" belonging to the various universes to which it is related. Those universes can correspond to common artistic fields such as dance, music, video and interaction, as well as to more abstract universes representing the artist's own system of representation. In this context, examples of such elements are an audio sample or a sound texture in the musical world, a filmed or generated sequence in the video world, and an event such as "the spectator entered a virtual zone" or a datum such as "position of the spectator along the x axis" in the interaction world.

As a basis for the language, the art piece starts to exist as soon as these elements are structured together by different kinds of relations. Such relations can correspond to statements like "sound A follows sound B" or "video 1 matches audio sequence 2". They are not limited to temporal relations but include spatial relations, interactivity and even more conceptual relations.
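By way of illustration, the element-and-relation structure described above can be sketched as plain data. The element names, universe labels and relation vocabulary below are hypothetical choices for this sketch, not ConceptMOVE's actual schema:

```python
# Hypothetical sketch of the element/relation model described above.
# "piece" denotes the work as a whole rather than one of its elements.

elements = {
    "audio_synthesis": {"universe": "music"},
    "video_sequence":  {"universe": "video"},
    "zone_entered":    {"universe": "interaction", "kind": "event"},
    "spectator_x":     {"universe": "interaction", "kind": "data"},
}

relations = [
    ("audio_synthesis", "starts",   "piece"),           # audio begins with the piece
    ("video_sequence",  "finishes", "piece"),           # video ends with the piece
    ("zone_entered",    "starts",   "video_sequence"),  # an event triggers the video
    ("spectator_x",     "controls", "video_sequence"),  # a continuous (non-temporal) mapping
]

def universes_used(elements):
    """Collect the distinct universes the piece draws on."""
    return sorted({e["universe"] for e in elements.values()})

print(universes_used(elements))  # ['interaction', 'music', 'video']
```

The point of such a description is precisely that nothing in it names Max/MSP, OpenMASK or any other concrete environment.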
Four elements are used in this composition: the audio synthesis belongs to the musical world, the video sequence comes from the visual world, and the "event" and the tracked "x position" are elements of the interactive world. Our example makes use of five temporal relations specifying that:

1. the audio synthesis starts when the piece starts;
2. the video sequence finishes at the end of the piece;
3. the interaction event starts the video sequence;
4. the video sequence finishes the audio synthesis;
5. the "x position of spectator" equals (in the sense of Allen's relations, i.e. same starting time and same ending time) the audio sequence.

An additional (non-temporal) relation establishes a mapping between the tracked x position of the spectator and a given parameter (brightness, density of rain,...) of the video sequence, according to a transformation function (scaling, rotation,...).

5. IMPLEMENTATION AND AUTOMATIC CODE GENERATION

Once the overall temporal and interactive description of the piece is accomplished, the authors (typically a composer, a video designer and an interaction expert) decide which favourite environment or programming language (Max/MSP, Pure Data, OpenMASK, a Java integrated development environment,...) each of them wishes to use to implement his or her own part of the work. It is our belief that each participant in the creation process should be able to use the most familiar or most adapted environment, despite the difficulties of making these environments communicate. We believe as well that handling this communication mechanism should not be the task of the authors, and that it can be implemented automatically as long as sufficient information is provided. We therefore propose that this implementation description, defining the software components used, the computers on which they will run and the communication between environments, be described along with the conceptual description of the piece in a structured file.
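The mapping between the spectator's tracked x position and a video parameter mentioned above can be pictured as a small transformation function. The ranges and parameter values below are assumptions made for the sketch, not values taken from the project:

```python
# Illustrative mapping relation: the spectator's tracked x position drives a
# video parameter (e.g. brightness) through a scaling transformation.

def scale(value, src_min, src_max, dst_min, dst_max):
    """Linearly rescale value from [src_min, src_max] to [dst_min, dst_max]."""
    t = (value - src_min) / (src_max - src_min)
    t = min(max(t, 0.0), 1.0)  # clamp positions outside the tracked zone
    return dst_min + t * (dst_max - dst_min)

# Assume the spectator is tracked between -2 m and +2 m and brightness runs 0..1:
brightness = scale(0.0, -2.0, 2.0, 0.0, 1.0)
print(brightness)  # 0.5
```

A rotation or any other transformation function would slot into the same place in the description; only the generated glue code changes.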
Each "element" then refers to a "service" defined in a given environment (software, IP address, receiving port number, communication protocol,...) and exposes a message format to communicate with the rest of the world. Knowing this information, it is possible to generate automatically the low-level procedures that handle communication in a distributed environment, as illustrated in Figure 4: each collaborating system has its own generated "CM" (ConceptMOVE) part, which implements the communication layer and frees the authors from dealing with OSC/UDP messages. Moreover, the description is detailed enough to generate the "engine" of the art piece, i.e. a simple application that schedules the starting and ending dates of events in real time, taking interactivity into account.

Figure 3: Example of a simple interactive piece defined on the conceptual level

The art piece is thus described, by means of elements and relations, on a completely conceptual level. Figure 3 shows an example of a simple interactive piece using such a conceptual description: an audio synthesis process is played until a given interaction event happens. This event stops the audio process and starts a video sequence, which is controlled by the x coordinate of the position of the spectator.
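The generated CM parts exchange OSC messages over UDP. As a rough sketch of what such a layer packs on the wire, the following encodes the integer, float and string atoms mentioned earlier into an OSC 1.0 binary message; the address used here is hypothetical:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Zero-pad to a multiple of 4 bytes; OSC strings get at least one NUL."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Minimal OSC 1.0 encoder for int ('i'), float ('f') and string ('s') atoms."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian integer
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode("ascii"))
        else:
            raise TypeError("unsupported OSC atom")
    return osc_pad(address.encode("ascii")) + osc_pad(tags.encode("ascii")) + payload

msg = osc_message("/spectator/x", 0.5)
```

In generated code the resulting bytes would simply be handed to a UDP socket (e.g. `socket.sendto`); the point of the CM layer is that the authors never write this packing code themselves.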
Figure 4: Illustration of the generated code in several environments to simplify the communication between software components: tracking, Max/MSP, OpenMASK and Java parts each keep the development that remains to the authors, while their CM parts are automatically generated

Each collaborating system should consider the rest of the environment as an extension of its own paradigm. For instance, a Java application should see the rest of the environment in terms of Java classes and methods: such an application can then communicate with a Max/MSP patch simply by considering the elements as objects and calling their appropriate methods. Reciprocally, a Max/MSP patch communicates directly with a Java application using its common send/receive objects. Thus, authors can work together, each in his or her own environment, and collaborate without any knowledge of the technology used to implement the rest of the art piece. To achieve this goal, our application operates as a general transformation of the description of the art piece for each targeted environment. Figure 5 illustrates the transformation process from the initial XML description to the resulting generated software. The currently addressed platforms are the Max/MSP and Pure Data environments, the OpenMASK application, ListenSpace and the Java IDE. Next developments will consider OpenMusic, Flash and EyesWeb.

Figure 5: Illustration of the transformation process from the art piece XML description to the automatic code generation: the ConceptMOVE application performs XML transformation, code generation and compilation, producing Max/MSP code, Java code and code for other environments, together with the engine

6. CONCLUSION AND PERSPECTIVES

This paper focused on the realisation of interactive and pluri-artistic art pieces.
We brought to light, through the examination of a number of recent creations in this domain, the gaps and bottlenecks in the existing technologies that hold back artistic creation. The ConceptMOVE project was then presented as a basis for addressing this problem. We admit that our proposition may not be universal and well suited to every interactive pluri-artistic situation; in particular, situations requiring fine-grained interactivity, such as score following, might be hard to specify in our formalism. However, we believe that our approach is well suited to a large number of interactivity situations, especially to virtual and augmented realities as well as to interactive narration. Future work opens different outlooks; for instance, the creation of a comprehensive XML editor with an intuitive graphical user interface is an important issue. The most important perspective for our project, however, remains convincing the rest of the computer/art community, in order to gather as many collaborators as possible for creating interactive pluri-artistic performances.

7. REFERENCES

[1] R. Laban. Language of Movement. Macdonald and Evans Ltd., 1966.
[2] J. Landis, J. Chapman, and T. Calvert. "Notation of Dance with Computer Assistance", in New Directions in Dance, pages 169-178. Pergamon Press, 1979.
[3] T. Schiphorst. "LifeForms: Design Tools for Choreography", in Dance and Technology I: Moving toward the Future, 1992.
[4] C. Shaw and M. Green. "The MR Toolkit peers package and experiment", in Proceedings of VRAIS'93, pages 463-469, 1993.
[5] K. Park, Y. Cho, N. Krishnaprasad, C. Scharver, M. Lewis, J. Leigh, and A. Johnson. "CAVERNsoft G2: a toolkit for high performance tele-immersive collaboration", in Proceedings of the ACM Symposium on Virtual Reality Software and Technology 2000, pages 8-15, October 22-25, 2000.
[6] D. Snowdon and A. West. "AVIARY: Design issues for future large-scale virtual environments", Presence, 3(4):288-308, 1994.
[7] W. Bricken and G. Coco. "The VEOS project."
[8] S. Donikian, A. Chauffaut, T. Duval, and R. Kulpa. "GASP: from modular programming to distributed execution", in Computer Animation '98, IEEE, Philadelphia, USA, pages 79-87, June 1998.
[9] D. Margery, B. Arnaldi, A. Chauffaut, S. Donikian, and T. Duval. "OpenMASK: Multi-Threaded or Modular Animation and Simulation Kernel or Kit: a general introduction", in S. Richir, P. Richard, and B. Taravel, editors, VRIC 2002, Laval, France, June 2002.