Demonstration of Gesture Sensors for the Shakuhachi

*Haruhiro Katayose, *Tsutomu Kanamori, **Satosi Simura, and *Seiji Inokuchi
*Laboratories of Image Information Science and Technology (L.I.S.T.)
Senri LC, 11F, Toyonaka, Osaka, 565, JAPAN
katayose@image-lab.or.jp
**Osaka University of Arts
Higashiyama, Kanan-cho, Minamikawachi, Osaka, 585, JAPAN

Abstract

We have been developing the Virtual Performer in order to study KANSEI (sensuous) information processing and non-verbal human communication. The Virtual Performer is composed of a sensor module, which acquires environmental information; a control module, which analyzes the acquired data and plans how to respond to the environment; and a performance module, which expresses the response using audio-visual equipment. We use the Virtual Performer in two ways: as a composing environment and as a media partner (an amusement system). This paper focuses on how "Tikukan no utyu II" was made, an example of the latter usage of the Virtual Performer.

1. Introduction

Live performance is a form of art in which tension is at its highest, because the same performance can never be repeated. In live performances, each performer knows the scenario beforehand. He decides the timing of the events written in the scenario on the basis of information communicated in real time. Furthermore, human performers who are familiar with each other can correctly guess what their partners will do next by observing the acoustic and motion gestures their partners give. This ability is achieved by the combination of a complementary use of sensors, knowledge processing that responds correctly to the recognized information, and a facility for presentation. The purpose of the Virtual Performer we propose here is to simulate such processing and to provide an interactive composing environment (Fig. 1) [Katayose 1993]. The Virtual Performer is composed of gesture sensors, a module that analyzes and responds to the obtained information, and a presentation facility. The first of these is the sensing module.

Figure 1. The Virtual Performer.
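Schematically, the three modules form a sense-decide-present loop. The following sketch is only our illustration of that flow, not the actual implementation; every name in it, and the 20 ms polling rate, are assumptions made for the example.

    # Schematic sketch of the Virtual Performer loop described above.
    # All names are hypothetical illustrations, not the actual system.
    import time

    def virtual_performer(read_sensors, respond, present,
                          duration_s=10.0, step_s=0.02):
        """Poll sensors, plan a response, and present it until the
        piece ends. The 20 ms polling rate is an assumed value."""
        t0 = time.time()
        while (now := time.time() - t0) < duration_s:
            observation = read_sensors()           # sensor module
            response = respond(observation, now)   # control module
            if response is not None:
                present(response)                  # performance module
            time.sleep(step_s)

    # Example: echo a fake gyro reading every 20 ms for one second.
    virtual_performer(lambda: {"gyro": 0.0},
                      lambda obs, t: ("event", obs["gyro"], round(t, 2)),
                      print, duration_s=1.0)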

The sensor fusion technique is used in the construction of the gesture sensors. The sensing module consists of sensors for motion gestures and for acoustic gestures. As for the motion sensors, different kinds of sensors are used complementarily in order to acquire data with high resolution over a wide range. As for the acoustic data, chords can be detected in real time.

The second part is a module that analyzes and responds to the obtained information, that is, the performance controller. This module decides the performance data by analyzing the recognized gestures. The uncertainty of the data acquired by the sensors is resolved in this module by considering the outputs of all the sensors together. In this module, higher musical primitives, such as chord names, phrase periods, and structural accents, are also analyzed in real time. These musical primitives, as well as the low-level signals from the sensors, are used as control parameters of performances.

The third is the CG and sound generator. In addition to artistic CG, a human physical model which generates animation synchronized to the performance has been constructed.

2. Tikukan no utyu II [Cosmology of bamboo pipes]

Since last year, we have been composing "Tikukan no utyu" [Cosmology of bamboo pipes] for the shakuhachi and the Virtual Performer. "Tikukan no utyu II" is the second piece of the series, composed for this ICMC. The staff are as follows:

Music/Shakuhachi: Simura Satosi
BGV: Takashi Ito
Engineers: Haruhiro Katayose, Tsutomu Kanamori

It is not rare for artists and engineers to cooperate with each other to produce computer music, but in Japan our project is a unique one. The series of pieces "Tikukan no utyu" has a style in which the music is controlled by computer-recognized body actions and skills seen in playing the shakuhachi. The control sources are triggers recognized using the sensor fusion technique, values obtained directly from the motion sensors, and time transitions defined as the piece is played. The piece is composed through iterations of connecting the control sources to the controlled targets and of auditory tests.

The shape and performing style of the shakuhachi are simple compared with those of Western music, as represented by the orchestra. Nevertheless, the player's well-controlled physical actions, such as head shaking and finger movements near the holes, make its sound fully expressive. In a religious sense, shakuhachi performance means the expression of a cosmology, or of something changing dynamically in the player's mind. Simplicity and complexity live together in shakuhachi performing. The expression of this paradox is the artistic theme of "Tikukan no utyu II." Fig. 2 shows a part of the score of Tikukan no utyu II.

3. Technical layout

Fig. 3 shows the technical layout for "Tikukan no utyu II." A synthesizer, effectors, and digital mixers are controlled dynamically by the player's actions. The main usage of the effectors is to add variation to the tone color, pitch, and reverberation of the shakuhachi solo. The sensors brought to this ICMC are a small tour model: a touch/gyro/bender sensor box and an acoustic sensor box.

Figure 2. Part of the score of Tikukan no utyu II.
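The composing method of Section 2, connecting control sources (recognized triggers, continuous motion-sensor values, and time transitions) to controlled targets and then auditioning the result, can be pictured as a small dispatch table. The sketch below is a minimal illustration of that reading; the class and binding names are hypothetical and do not reflect the MAX patches actually used.

    # Minimal sketch of the "control source -> controlled target" bindings
    # described in Section 2. All names here are illustrative assumptions.

    class ControlMap:
        def __init__(self):
            self.trigger_actions = {}   # recognized trigger -> one-shot action
            self.value_targets = {}     # sensor value -> continuous target
            self.timed_events = []      # (seconds into the piece, action)

        def on_trigger(self, name, action):
            self.trigger_actions[name] = action

        def bind_value(self, name, target):
            self.value_targets[name] = target

        def at_time(self, t, action):
            self.timed_events.append((t, action))

        def fire(self, name):           # called when a trigger is recognized
            if name in self.trigger_actions:
                self.trigger_actions[name]()

        def update(self, name, value):  # called on every sensor value
            if name in self.value_targets:
                self.value_targets[name](value)

    # Composing iterates over bindings like these plus auditory tests:
    cmap = ControlMap()
    cmap.on_trigger("fingering_form_A", lambda: print("scene change"))
    cmap.bind_value("gyro_x", lambda v: print("effect depth", v))
    cmap.at_time(30.0, lambda: print("enter scene 2"))
    cmap.fire("fingering_form_A")
    cmap.update("gyro_x", 0.42)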

3.1 Sequence of the music

"Tikukan no utyu II" requires changing the flow of the audio line several times within one performance. The effectors are used in both direct and parallel connections. The mixers are used to switch which effectors are connected, as well as for ordinary level and panpot control of the sound. The program of "Tikukan no utyu II" consists of ten scenes. The initial volume parameters of each scene, which are sent to the mixers, control the connection of the audio line. This configuration makes it possible to put a limited number of effectors to various uses. The scene-change triggers are particular fingering forms and a special switch attached to the shakuhachi.

The control program of "Tikukan no utyu II" was developed as MAX patches. When the volume of input sensor data is large and many MAX objects react to the input data, the system does not work well because of overload. The following countermeasures were taken to reduce the overload; a sketch of the resulting data flow is given after Section 4.

1) Integrated sensor-data control in the main patch. The main patch has ten subpatches, one corresponding to each scene. The main patch sends only the required sensor data to each subpatch, which reduces the number of MAX receive objects.

2) Input control of the sensor data at the MIDI merger. It is impossible to use all the sensors in one subpatch because the data traffic would be too heavy, but a piece still needs to use various kinds of sensors. Controlling the MIDI patcher (or MIDI merger) at each scene enables us to use various combinations of sensors.

3.2 Effector Control

The effectors are YAMAHA SPX990s. In addition to program changes, this effector accepts three external MIDI controls for each effect module: a pitch change and two other control parameters. The primary usage of the pitch-change module is, literally, to change the pitch of the input signal, but it has the side effect that the tone color may change compared with an analog processing device. We used this phenomenon as a tool for real-time tone color control.

4. Sensors

The sensors of the Virtual Performer are gyro sensors, supersonic sensors, touch sensors, benders, pressure sensors, an acoustic sensor, and an image sensor. The specifications of the sensors are shown in Figure 4. The gyro sensor for the shakuhachi is used to detect the three-dimensional angular movement that can be seen in vibrato techniques. The finger-form data are detected by the touch sensors. Four electrodes attached around each finger hole can detect a delicate fingering called "kazashi," the gradual covering of a hole. There are also special techniques realized by high-speed changes of the finger form. MACI (the sensor fusion computer) takes charge of such time-domain pattern recognition.

Figure 3. Technical layout of Tikukan no utyu II (the MIDI channel assignment of the sensors, S1000 samplers, mixers, and effectors; the sensor patcher is controlled by the touch sensor).
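The countermeasures of Section 3.1 amount to routing only the data that the current scene needs. The following sketch imitates that discipline outside of MAX: a dispatcher holds one handler per scene, forwards only the sensor streams that the active scene subscribes to, and sends the scene's initial mixer volumes as MIDI control changes when a scene-change trigger fires. The mido library is real, but the port name and the channel and controller numbers are placeholders for whatever the mixers actually expect.

    # Sketch of the Section 3.1 data-flow discipline, outside of MAX.
    # Channel and controller numbers below are illustrative placeholders.
    import mido

    class SceneDispatcher:
        def __init__(self, midi_port_name):
            self.out = mido.open_output(midi_port_name)
            self.scenes = {}    # scene id -> (wanted sensors, handler, volumes)
            self.current = None

        def add_scene(self, sid, wanted, handler, init_volumes):
            self.scenes[sid] = (set(wanted), handler, init_volumes)

        def change_scene(self, sid):
            """Scene-change trigger: the initial volumes reconfigure the
            audio line by opening and closing mixer channels."""
            self.current = sid
            _, _, volumes = self.scenes[sid]
            for (channel, controller), value in volumes.items():
                self.out.send(mido.Message('control_change', channel=channel,
                                           control=controller, value=value))

        def on_sensor(self, sensor, data):
            """Forward a sensor message only if the active scene wants it."""
            if self.current is None:
                return
            wanted, handler, _ = self.scenes[self.current]
            if sensor in wanted:
                handler(sensor, data)

    # Usage (the port name depends on the local MIDI setup):
    #   d = SceneDispatcher('MixerPort')
    #   d.add_scene(1, ['touch', 'gyro'], print, {(0, 7): 100, (1, 7): 0})
    #   d.change_scene(1)
    #   d.on_sensor('touch', (3, 127))  # delivered; 'bender' would be dropped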
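To make the kind of time-domain pattern recognition assigned to MACI concrete, the sketch below detects a yuri-like oscillation in a stream of gyro samples by counting zero crossings within a short window; the same scheme extends to the vibrato and tremolo triggers extracted from the acoustic sensor, described below. The window length, the thresholds, and the 20 ms sample period (cf. Figure 4) are our assumptions, not the tuned values of the actual system.

    # Hedged sketch of time-domain trigger detection on gyro data.
    # All thresholds and the 20 ms sample period are assumed values.
    import math
    from collections import deque

    class VibratoDetector:
        def __init__(self, period_s=0.02, window_s=1.0,
                     min_hz=3.0, min_amplitude=0.1):
            self.period = period_s
            self.window = deque(maxlen=int(window_s / period_s))
            self.min_hz = min_hz
            self.min_amp = min_amplitude

        def push(self, sample):
            """Feed one gyro sample; return True when the window holds a
            large enough oscillation at vibrato rate."""
            self.window.append(sample)
            if len(self.window) < self.window.maxlen:
                return False
            mean = sum(self.window) / len(self.window)
            dev = [v - mean for v in self.window]
            if max(abs(d) for d in dev) < self.min_amp:
                return False                 # movement too small
            crossings = sum(1 for a, b in zip(dev, dev[1:]) if a * b < 0)
            freq_hz = crossings / 2.0 / (len(dev) * self.period)
            return freq_hz >= self.min_hz    # oscillating fast enough

    # Example: a 5 Hz oscillation sampled at 50 Hz triggers the detector.
    det = VibratoDetector()
    hits = [det.push(math.sin(2 * math.pi * 5 * n * 0.02)) for n in range(100)]
    assert any(hits)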

Figure 4. Specification of the sensors in Fig. 3.

    Sensor               Time resolution       Measurable resolution   Obtained area          Techniques detected
    Acoustic sensor      40 ms                  -                       -                     portamento, octave jump, vibrato
    Image sensor         100 ms                 128 x 128               depends on lens type  shosa
    Supersonic sensors   30 ms x number         20 mm (40 kHz)          5 m cube              shosa
    Gyro sensor          20 ms                  0.mV/deg                narrow                yuri, yusuri
    Touch sensor         on 10 ms, off 20 ms    -                       -                     korokoro, uchi

In shakuhachi performing, different pitches can be produced with the same finger form. The acoustic sensor is used to distinguish these pitch differences. In addition to specific pitches, the acoustic sensor detects continuous changes of pitch and loudness, as well as triggers such as vibrato and tremolo, which are extracted by an additional pattern-matching procedure.

The sensors brought to this ICMC are a tour model, which detects movement only in the sitting performing style. The bender, in a sense a low-technology sensor, was specially prepared in response to the player's request for force feedback.

5. Conclusions

In this paper we have presented the gesture sensors for the shakuhachi, focusing on the technical layout of "Tikukan no utyu II." We have shown some of the techniques required when multiple sensors are used in an actual computer music piece: data traffic control and a part of the sensor fusion. More technical information on each sensor is given in [Kanamori 1993]. The configuration shown in this paper may be somewhat over-specified for the tour model, but we verified its efficiency through our current attempt, a piece that uses more sensors to detect the dynamic motion of the player over the whole stage. Our next goals are to lessen the constraints on composing and to use the sensor data to control video effectors.

6. Related work

Interactive composing has been actively discussed and performed by a number of artists and researchers [Chadabe 1983]. Gesture sensors have also been developed by many researchers [Machover et al. 1989]. Owing to the page limitation, the papers referred to here are only a part of the prior elaborated work.

References

[Chadabe 1983] J. Chadabe. Interactive Composing. Proc. ICMC, pp. 298-306, 1983.
[Kanamori 1993] T. Kanamori et al. Gesture Sensor in Virtual Performer. Proc. ICMC, pp. 127-129, 1993.
[Katayose 1993] H. Katayose et al. Virtual Performer. Proc. ICMC, pp. 138-145, 1993.
[Machover et al. 1989] T. Machover and J. Chung. Hyperinstruments: Musically Intelligent and Interactive Performance and Creativity Systems. Proc. ICMC, pp. 186-190, 1989.

Figure 5. The gyro and touch sensors and the performer.