A Visual Programming Environment for Composing Interactive Performance Systems
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
Bernhard Wagner
MultiMedia Laboratory, University of Zurich
Inst. f. Informatik, Winterthurerstr. 190, CH-8057 Zurich, Switzerland
firstname.lastname@example.org, http://www.ifi.unizh.ch/staff/bwagner.html

Keywords: Performance, Visual Composition, Object-Oriented Framework, Real-time, Media-Patcher

Abstract

Based on a multimedia application framework, a visual programming environment was built that allows non-programmers to compose objects offered in the core framework without having to learn a programming language. Although it has similarities with Opcode/IRCAM Max, our visual programming environment differs in significant ways: its connections are bi-directional, and media other than MIDI, such as audio, 3D, and 3D animations, are supported.

1 Introduction

Based on the portable (UNIX, WinNT) object-oriented multimedia application framework MET++ [1,2], a visual programming environment was built that allows non-programmers to compose objects offered in the core framework without having to learn a programming language such as Java or C++. Although it has similarities with Opcode/IRCAM Max [3,4,5], our visual programming environment differs in significant ways: its connections are bi-directional, and media other than MIDI, such as audio, 2D, 3D, 2D and 3D animations, and video, are supported. Due to the bi-directionality of the connections and the support for various media, the visual programming environment can act as a Media Patcher. The Media Patcher can use static or time-dependent geometric aspects of 3D objects (e.g. spatial coordinates, scaling, material, texture information) to control musical parameters. To allow for a more sophisticated mapping than, say, some coordinate mapped linearly to note pitch, intermediary analyses of media inputs can be programmed visually. Due to its MIDI and TCP/IP connectivity it can also be applied as a real-time system.
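To make the simplest mapping mentioned above concrete, the following is a minimal sketch of mapping a spatial coordinate linearly to note pitch. This is our own illustrative code, not part of the MET++ API; the function name and note range are assumptions.

```cpp
// Hypothetical sketch (not MET++ API): a spatial coordinate in a known
// range controlling MIDI note pitch linearly.
#include <algorithm>
#include <cassert>

// Map a coordinate in [lo, hi] linearly onto a MIDI note number range.
// Out-of-range coordinates are clamped to the nearest end of the range.
int coordinateToPitch(double coord, double lo, double hi,
                      int lowNote = 36, int highNote = 96) {
    double t = (coord - lo) / (hi - lo);  // normalize to [0, 1]
    t = std::clamp(t, 0.0, 1.0);          // guard against out-of-range input
    return lowNote + static_cast<int>(t * (highNote - lowNote) + 0.5);
}
```

A more sophisticated mapping, as the text notes, would insert an analysis stage between the coordinate and the musical parameter instead of this direct linear map.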
It has been used as a real-time animation generator in a concert at the Conservatory of Zurich. The musicians improvise on MIDI as well as traditional instruments. The acoustic output is converted by a pitch-to-MIDI converter. After being filtered and analyzed, the MIDI data are applied as coefficients to parametric functions controlling several properties of an animated 3D mesh: the x-, y-, and z-coordinates, the r-, g-, and b-components, and the transparency, each controlled by an individual function. The controlling functions themselves can be exchanged at run-time. Due to a generic interpolation mechanism, exchanging functions does not cause an abrupt change in the animation.

As visual programs become more complex, the available screen space is quickly used up. To overcome this difficulty, a Tcl wrapper was built that allows more complex functions to be formulated as scripts.

2 The Object-Oriented Multimedia Application Framework MET++

The object-oriented multimedia application framework MET++ was presented at the ICMC 95. It is well suited for rapid application development and prototyping in the area of multimedia and graphical user interfaces with high semantic feedback. Like other frameworks, MET++ has reached a stage where it is mainly used in a black-box manner. This means that it consists mainly of components that can be instantiated and reused without the need for subclassing. The term black-box is used because the internals of the components do not have to be dealt with, only their interfaces. MET++ is not just a library or collection of isolated classes but a framework that pre-integrates the components and predefines their style of interaction. For example, all time-dependent media, regardless of their specific type, can be edited in a special grouping editor provided by MET++. This editor allows the redundancy-free, hierarchical grouping of time-dependent media. It therefore provides special grouping elements that
dependent media that are started only when specific conditions are met, e.g. a sound or MIDI file is only played when certain geometric conditions hold. This makes it possible to automatically create the appropriate sounds for the animation of a collision. The animator of a scene does not have to define the start of the sound at the right moment and readjust it whenever the animation changes, but only has to define the appropriate condition once.

4 "ParaScape": a Specialized Component

Though working with the visual programming environment consists mostly of composing objects by interconnecting them interactively, it can be necessary to develop specialized components in C++, mostly for efficiency reasons. The dichotomy between building components in a "system programming language" and gluing them together in a "scripting language" (or, in our case, a visual programming environment) is discussed in .

The idea for the performance at the Conservatory of Zurich was to let musicians improvise and to generate a score out of the music in real time. This score would in turn influence the musicians and thus create a cybernetic feedback loop. By prototyping we tried several visual artifacts that could be used for the score and quickly found that synthetic landscapes (meshes) have the strongest suggestive effect. Additionally, their multidimensionality accommodates mapping the multidimensional space of musical parameters. Three requirements emerged that the synthetic landscape should fulfill:

* Shape and colors should be controllable for the mesh as a whole, to reduce the number of degrees of freedom.
* Changes of the mesh should happen smoothly.
* The effect of the played music on the mesh should not be too obvious to the musicians. They should only feel that their playing has an effect on the animation, but not which effect.

The first requirement could be met by introducing parametric functions.
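The condition-triggered media elements described above (e.g. a collision sound started by a geometric predicate) could be sketched as follows. This is a minimal illustration under our own names and types, not the MET++ implementation.

```cpp
// Hypothetical sketch (not MET++ code) of a time-dependent media element
// that starts its sound only when a geometric condition first becomes true.
#include <cassert>
#include <functional>
#include <utility>

struct Sound {
    bool playing = false;
    void start() { playing = true; }  // stands in for real audio playback
};

class ConditionalTrigger {
public:
    // cond: a geometric predicate, e.g. "distance between two objects < r".
    ConditionalTrigger(std::function<bool()> cond, Sound& s)
        : condition_(std::move(cond)), sound_(s) {}

    // Called once per animation frame; fires the sound at most once,
    // the first time the condition holds.
    void tick() {
        if (!fired_ && condition_()) {
            sound_.start();
            fired_ = true;
        }
    }

private:
    std::function<bool()> condition_;
    Sound& sound_;
    bool fired_ = false;
};
```

The point made in the text follows directly: if the animation changes, the predicate is re-evaluated each frame, so the sound still starts at the right moment without the animator re-timing it by hand.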
The mesh can be understood as a rectangular set of vertices. Each vertex has the parameters: spatial x-, y-, z-coordinates, r-, g-, b-values defining the color, and the transparency a. For a mesh with the dimensions U and V this results in 7 x U x V degrees of freedom. To reduce this large number and to gain better control over the mesh as a whole, seven parametric functions were introduced, each controlling one aspect of all vertices of the mesh. These parametric functions yield the value of the specific attribute depending on the indices u and v of a vertex within the rectangular set, e.g.

x = f(u, v) = v * cos(u)    (1)

The parametric functions applied to the mesh are specifically designed to be exchangeable at run-time. Since the visual program only supports the basic types integer, float, and string (see ), the mesh contains a collection of about 30 functions. These are chosen by index, e.g. the function in (1) has the index 23. To fulfill the requirement of smooth changes of the mesh, an interpolation scheme is used:

(1 - X) * f(u, v) + X * g(u, v),    X in [0, 1]    (2)

where f is the first function, g is the second function, and X is incremented linearly from 0 to 1. Here is an example set of parametric functions (the color functions are omitted):

x = v * cos(u); y = v * sin(u); z = a * u    (3)

and a second set:

x = (b + a * cos(u)) * cos(v); y = (b + a * cos(u)) * sin(v); z = a * sin(u)    (4)

Fig. 2 shows the resulting shapes created by applying set (3), a helix, and set (4), a torus. The shape in between is one of the intermediate shapes.

Fig. 2: Interpolating between helix and torus.

In this example all three parametric functions for x, y, and z have been exchanged at once. In a real-time performance, however, they would be exchanged individually and independently of each other. Fig.
3 shows a sample setup of a visual program containing a mesh that receives a subset of its parameters from two different MIDI channels. The output of the first MIDI channel is transformed by a script and by two simple filters.

Fig. 3: A sample setup of a visual program.
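The function sets (3) and (4) and the blending scheme of equation (2) can be sketched directly in code. The following is our own illustration of the technique, not the paper's implementation; the struct and function names are assumptions.

```cpp
// Sketch of the parametric function sets (3) and (4) and the linear
// blend of equation (2): (1 - X) * f(u, v) + X * g(u, v), X in [0, 1].
#include <cassert>
#include <cmath>

struct Vertex {
    double x, y, z;
};

// Function set (3): the helix-like surface; a scales the z-axis.
Vertex helix(double u, double v, double a) {
    return {v * std::cos(u), v * std::sin(u), a * u};
}

// Function set (4): a torus with major radius b and minor radius a.
Vertex torus(double u, double v, double a, double b) {
    return {(b + a * std::cos(u)) * std::cos(v),
            (b + a * std::cos(u)) * std::sin(v),
            a * std::sin(u)};
}

// Equation (2), applied component-wise to the vertex positions.
// X = 0 yields the first shape, X = 1 the second; intermediate X
// values give the smooth in-between shapes shown in Fig. 2.
Vertex blend(const Vertex& f, const Vertex& g, double X) {
    return {(1 - X) * f.x + X * g.x,
            (1 - X) * f.y + X * g.y,
            (1 - X) * f.z + X * g.z};
}
```

Incrementing X linearly per frame and evaluating blend for every (u, v) of the rectangular vertex set morphs the whole mesh from one shape to the other without an abrupt change, which is the smoothness requirement stated above.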