An Environment for Interactive Art - Sensor Integration and Applications

Haruhiro Katayose, Tsutomu Kanamori and Seiji Inokuchi
L.I.S.T. Senri LC, 11F, Toyonaka, Osaka, 565, JAPAN
katayose@

Abstract

This paper gives a current overview of the Virtual Performer, which is designed for composing and performing interactive multimedia arts. The Virtual Performer consists of sensory facilities, presentation facilities, and authoring facilities. Regarding the sensory facilities, this paper focuses on a motion capture sensor based on image processing and on ATOM, which is designed as the smallest analog/MIDI converter. The paper also describes applications of the Virtual Performer to computer music and the stage arts.

1. Introduction

Computer science has developed greatly and now contributes to a wide area ranging from engineering to daily life. It is true that computer technology has led to the development of indispensable tools, but there remains room for improvement of user interfaces. Virtual reality, which projects the human being at life size into a virtual space, is one of the most advanced of these technologies [Zeltzer 1991]. At the same time, it requires consideration of its social impact and meaning. Answering this question, we have been developing the Virtual Performer, which aims at stimulating the creativity and sensitivity of users and audiences in the art field [Katayose 1993].

Most arts have a form in which the audience appreciates recorded or ready-made works. Virtual reality technology provides us with an interactive live art form. Live performance is the form of art in which tension is highest, because the same performance cannot be repeated. In live performances by human performers, each performer knows the scenario beforehand. He decides the timing of the events written in the scenario based on information communicated in real time. Furthermore, human performers who are familiar with each other can guess correctly what their partners will do next by observing the acoustic and motion gestures their partners give. This function is achieved by combining the complementary utilization of sensors, knowledge processing that responds correctly to recognized information, and a presentation facility. The purpose of the Virtual Performer we are proposing is to simulate such processing and to provide an interactive composing environment.

Similar studies regarding multimedia have been made by many researchers [Camurri 1995] [Matsuda 1995]. Camurri's HARP/MIS project is one of the successful projects; Camurri et al. also aim at motion capture and its application to the interactive arts.

Figure 1 Virtual Performer (infrared image sensor; acoustic sensor; posture sensor; angular sensor; MIDI 1: sound data; MIDI 2: rough motion; MIDI 3: detailed motion; TCP/IP and MIDI links)

2. Virtual Performer

Originally we had two goals for developing the Virtual Performer. One is a role as the composer's environment, and the other is as a partner system for general use. The former aim is to compose total media art. The media partner systems, whose aim differs from the former artistic interest, are intended to present human interfaces. An adaptive karaoke system and a music session system were developed as media partner systems. This paper focuses on the facilities for the composer's environment.

The Virtual Performer consists of sensory facilities, presentation facilities, and authoring facilities. The sensory facilities consist of various transducers, and their plug-in style offers users a free and optimal setup. Our approach to sensor development is to give sensors multi-modal functions. Multi-modal sensors are very good at supporting various arts. However, some problems arise for which we should prepare countermeasures: conversion of sensor data, down-sizing, data traffic control, standardization of the data format, and reduction of the performer's load [Kanamori 1995].

The presentation facilities are MIDI, digital sound processing, CG generation, and interactive video controls. Regarding the sound generator, MIDI instruments and digital sound synthesis are available. Sound and video effect processing with MIDI and SMPTE is also available.

The authoring facilities offer two ways to design scenarios. One is to write recognition-action rules. The other is sequential connection of signal transformations. The former is mainly used to model the world of each scene. Using these facilities, we can produce synchronization of media and media transformations such as sound visualization. This paper shows some ongoing activities to produce multimedia art with the Virtual Performer.

3. Sensory Facilities

The sensors used in the system are the 3D motion-capture sensor using CCD cameras and attached sensors which detect angle, angular velocity, acceleration, posture, and so on. The former is a group of image sensors. The outputs of the sensors are transformed into sequential digital values and transmitted over a wireless system. The system provides an adjustment facility for the output data to meet various sensing requirements: users can easily adjust the gain, offset, resolution, and sampling rate. The hardware configuration of the sensors is also regulated: the connectors joining each transducer to the integration module have a standard plug, so the user can easily set up an appropriate sensor configuration.

3.1 Motion Capture Sensor

The motion capture sensor consists of infrared CCD cameras and plural infrared photo-diodes or lights. It is not easy to identify the lights with pure (passive) image processing. The sensor identifies the lights and acquires their positions with active light control: a light is identified by finding the image regions which synchronize with the actively coded light in the time domain. The system obtains three-dimensional positional data by observing the size of the light area in each image.

Figure 2 Motion Capture Sensor

3.2 Cyber Shakuhachi

Cyber Shakuhachi is a hyper instrument specially equipped with the above sensors to acquire the characteristic techniques of the shakuhachi. The gyro sensor is used to detect the three-dimensional angular movement which can be seen in vibrato techniques. The finger-form data are detected by touch sensors. Four electrodes attached around each finger hole can detect a delicate fingering called "kazashi", the gradual covering of a hole. There are also special techniques realized by high-speed changes of finger form.
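As a rough illustration of how such touch data might be interpreted, the following is a minimal sketch, in C, of estimating fractional hole coverage from four electrode readings per hole and flagging a kazashi-like gesture when coverage rises gradually. This is our sketch, not the actual Cyber Shakuhachi firmware; all names, scan data, and thresholds are hypothetical.

```c
/* Hypothetical sketch: detecting gradual hole covering ("kazashi")
 * from four touch electrodes per finger hole. Names and thresholds
 * are invented for illustration. */
#include <stdio.h>

#define ELECTRODES 4

/* Fraction of a hole covered: each active electrode contributes 1/4. */
static double coverage(const int electrodes[ELECTRODES]) {
    int active = 0;
    for (int i = 0; i < ELECTRODES; i++)
        if (electrodes[i]) active++;
    return (double)active / ELECTRODES;
}

/* A kazashi candidate: coverage increased by one small step (not a
 * sudden jump) since the previous scan, i.e., gradual closing. */
static int is_kazashi_step(double prev, double now) {
    double d = now - prev;
    return d > 0.0 && d <= 0.25 + 1e-9;   /* one electrode at a time */
}

int main(void) {
    /* Simulated successive scans of one hole's four electrodes. */
    int scans[4][ELECTRODES] = {
        {0,0,0,0}, {1,0,0,0}, {1,1,0,0}, {1,1,1,0}
    };
    double prev = coverage(scans[0]);
    for (int t = 1; t < 4; t++) {
        double now = coverage(scans[t]);
        if (is_kazashi_step(prev, now))
            printf("scan %d: kazashi step, coverage %.2f -> %.2f\n",
                   t, prev, now);
        prev = now;
    }
    return 0;
}
```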
In shakuhachi performance, different pitches can be produced with the same finger form. An acoustic sensor is used to distinguish this pitch difference. In addition to specific pitches, the acoustic sensor detects continuous changes of pitch and loudness, as well as triggers such as vibrato and tremolo, which are extracted by an additional pattern-matching procedure.

3.3 ATOM

ATOM (analog-to-MIDI converter) is the smallest MIDI instrument, with several analog inputs for many kinds of sensors. The size of ATOM is only about one cubic inch. ATOM is able to control sensors and generate MIDI signals directly. It uses a microcontroller called the PIC16C71 (Microchip Co.). The high performance of the PIC16C71 can be attributed to a number of architectural features commonly found in RISC microprocessors. The PIC16C71 uses a Harvard architecture: program and data are accessed from separate memories. This improves bandwidth over a traditional von Neumann architecture. Separating program and data memory further allows instructions to be sized differently than the 8-bit-wide data word. In the PIC16C71, op-codes are 14 bits wide, making it possible to have all single-word instructions. A two-stage pipeline overlaps the fetch and execution of instructions. Consequently, every instruction executes in a single cycle except for program branches. The PIC16C71 typically achieves a 2:1 code compression and a 4:1 speed improvement over other 8-bit microcontrollers in its class.

We achieved the down-sizing of ATOM with the most suitable circuit and an effective utilization of the chip's internal functions. A performer can hold ATOM in one hand. ATOM operates from a single 5 V supply delivered through a special MIDI cable. Using full-scannable OP amplifiers, every analog signal is amplified to a standardized 0-5 V range. The A/D converter module has four multiplexed analog input channels. The PIC16C71 does not have a serial communication interface, so we developed MIDI output functions using bit translation and the internal timer: a timer interrupt generates the pseudo bit interval of MIDI (31.25 kbps), and the MIDI serial data are formed, and the data traffic controlled, at this interval.

Figure 3 ATOM Overview
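The following is a minimal sketch of this kind of analog-to-MIDI conversion: a software UART framing bytes at the MIDI rate of 31.25 kbps, fed by an 8-bit ADC reading scaled to a 7-bit MIDI value. It is a host-side simulation of the idea in portable C, not the actual PIC16C71 firmware; adc_read(), midi_bit_out(), and wait_bit_interval() are stubbed assumptions, and on the chip the bit timing comes from the timer interrupt described above.

```c
/* Hypothetical sketch of ATOM-style analog-to-MIDI conversion:
 * bit-banged MIDI output (31.25 kbps) from multiplexed ADC inputs.
 * All I/O hooks are stubs, not real PIC library calls. */
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 4                  /* four multiplexed analog inputs */

static uint8_t adc_read(uint8_t ch) {       /* stub: fake sensor data */
    static const uint8_t fake[CHANNELS] = { 0, 64, 128, 255 };
    return fake[ch % CHANNELS];
}

static void midi_bit_out(int level) {       /* stub: log instead of GPIO */
    putchar(level ? '1' : '0');
}

static void wait_bit_interval(void) {       /* stub: on hardware, 32 us */
}

/* MIDI serial frame: start bit (0), 8 data bits LSB first, stop bit (1). */
static void midi_send_byte(uint8_t b) {
    midi_bit_out(0);                        /* start bit */
    wait_bit_interval();
    for (int i = 0; i < 8; i++) {
        midi_bit_out((b >> i) & 1);         /* data bits, LSB first */
        wait_bit_interval();
    }
    midi_bit_out(1);                        /* stop bit */
    wait_bit_interval();
    putchar(' ');                           /* readability of the log */
}

int main(void) {
    /* One Control Change message per sensor input. */
    for (uint8_t ch = 0; ch < CHANNELS; ch++) {
        uint8_t value = adc_read(ch) >> 1;  /* 8-bit ADC -> 7-bit MIDI */
        midi_send_byte(0xB0);               /* Control Change, channel 1 */
        midi_send_byte(0x10 + ch);          /* hypothetical controller no. */
        midi_send_byte(value);
        putchar('\n');
    }
    return 0;
}
```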

4. Presentation Facilities and Artistic Project

4.1 Control

Artists and system designers design artistic pieces by giving a scene sequence, scene components, and the sound and visual design, as shown in Figure 4. First, the artist decides which sensors and timers are used for scene control. The scene manager selects the scene number based on the given scene sequence and the sensor status, and activates the current scene components, each of which describes how to control the presentation facilities. The authoring facilities are MAX and original software currently under development on SGI machines. The description schemata of the scene components are recognition-action (production) rules and sequential connections of signal transformations. Sequential signal transformation is supported on MAX. The original authorware is mainly used as a production system engine and as a controller of SGI machines for the generation of images and sound.

Figure 4 Control of Multimedia Piece
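As a minimal sketch of this recognition-action scheme, the following models a scene component as a table of production rules fired against the current sensor status. It is our illustration, not the actual authorware; the rule set, sensor fields, and actions are invented.

```c
/* Hypothetical sketch of a recognition-action scene component in the
 * style described above; rules, fields, and actions are invented. */
#include <stdio.h>

typedef struct {          /* snapshot of recognized sensor status */
    int trigger;          /* e.g., a recognized gesture trigger   */
    int motion_value;     /* e.g., continuous motion data, 0-127  */
} SensorStatus;

typedef struct {
    int (*condition)(const SensorStatus *);   /* recognition part */
    void (*action)(const SensorStatus *);     /* action part      */
} Rule;

/* --- example rules for one scene --------------------------------- */
static int gesture_trigger(const SensorStatus *s) { return s->trigger; }
static void start_video(const SensorStatus *s) {
    printf("action: start video, intensity %d\n", s->motion_value);
}

static int fast_motion(const SensorStatus *s) { return s->motion_value > 100; }
static void cue_next_scene(const SensorStatus *s) {
    (void)s;
    printf("action: cue next scene\n");
}

static Rule scene1_rules[] = {
    { gesture_trigger, start_video },
    { fast_motion,     cue_next_scene },
};

/* Fire every rule whose condition matches the current status. */
static void run_scene(Rule *rules, int n, const SensorStatus *s) {
    for (int i = 0; i < n; i++)
        if (rules[i].condition(s))
            rules[i].action(s);
}

int main(void) {
    SensorStatus s = { 1, 112 };   /* simulated recognition result */
    run_scene(scene1_rules, 2, &s);
    return 0;
}
```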

4.2 Presentation Facilities

The sound presentation facilities are commercial digital samplers, PCM synthesizers, digital sound effectors, digital audio mixers, a sound board of our own making, and an SGI Indy. These facilities are controlled by MIDI or TCP/IP. The sound board is made of 10 commercial PCM sound boards; it can process grains for granular synthesis in real time. We use the SGI Indy for real-time sound processing. These sound facilities are used in an adequate combination in each artistic piece.

4.3 Artistic Projects

We have been carrying on three artistic projects using the Virtual Performer environment: the Tikukan no utyû Project, the DMI Project, and the PEGASUS Project. This paper introduces the first two of these projects.

Tikukan no utyû Project

Since 1993, we have been producing "Tikukan no utyû" [Cosmology of Bamboo Pipes] for the shakuhachi and the Virtual Performer. The staff are Music/Shakuhachi: Satosi Simura; Video: Masal Ohashi; Engineers: Haruhiro Katayose and Tsutomu Kanamori. "Tikukan no utyû V" is performed at this ICMC. "Tikukan no utyû" has a style in which music and video are controlled by computer-recognized body actions and skills seen in playing the shakuhachi. The control sources are the triggers, which are recognized using a sensor-fusion technique, and values, which are obtained directly from the motion sensors and time transitions. In a religious sense, shakuhachi performance means the expression of a cosmology, or of something changing dynamically in the player's mind. Simplicity and complexity live together in shakuhachi performance. The expression of this paradox is the artistic theme of "Tikukan no utyû."

Figure 5 Tikukan no utyû

DMI Project

The Dance, Multimedia, Interaction (DMI) project is a project for an interactive multimedia stage featuring dance, which was given its first performance in November 1995. The various gestures of the dancers on stage, including triggers and value information, were detected and used to control sound, CGs, videos, and lights. The title of the piece is "Birth, Evolution, Re-generation and Calm Water." The artistic concept is harmony between the spiritual and technology. The artistic director is Mariko Takayasu, a choreographer.

Figure 6 Photos from the stage "Birth, Evolution, Re-generation and Calm Water"

5. Conclusion

This paper presented an overview of the Virtual Performer, which is designed for composing and performing interactive multimedia arts. Regarding sensory facilities, it introduced a motion capture system, the Cyber Shakuhachi, and ATOM, and it also introduced some of our artistic activities. This is an ongoing project, and we would like to improve the system as an environment for the interactive multimedia arts and to produce some experimental pieces.

References

[Camurri 1995] A. Camurri. Interactive Dance/Music Systems. Proc. ICMC, pp. 245-252, 1995.
[Chadabe 1983] J. Chadabe. Interactive Composing. Proc. ICMC, pp. 298-306, 1983.
[Kanamori 1995] T. Kanamori et al. Sensor Integration for Interactive Digital Art. Proc. ICMC, pp. 265-267, 1995.
[Katayose 1993] H. Katayose et al. Virtual Performer. Proc. ICMC, pp. 138-145, 1993.
[Matsuda 1995] S. Matsuda et al. A Visual-to-Sound Interactive Computer Performance System 'Edge'. Proc. ICMC, pp. 599-600, 1995.
[Nagashima 1995] Y. Nagashima et al. A Compositional Environment with Intersection and Interaction between Musical Model and Graphical Model. Proc. ICMC, pp. 369-370, 1995.
[Zeltzer 1991] D. Zeltzer. Autonomy, Interaction and Presence. MIT Media Lab, MA, 1991.