Design of a Flute Interface to Control Synthesis Models

Ystad Sølvi, Voinier Thierry
ystad@lma.cnrs-mrs.fr, voinier@lma.cnrs-mrs.fr
Laboratoire de Mécanique et d'Acoustique
31, Chemin Joseph Aiguier, 13402 Marseille, France

ICMC Proceedings 1999

Abstract

The aim of this work is to construct a digital flute interface which allows the musician to preserve the playing techniques of the traditional instrument. The digital flute has been realized by connecting Hall effect sensors to the finger keys of a classical flute and by placing a microphone at the embouchure level. These additions do not modify the flute, which can still be played as a traditional instrument. The sensors, which give the state of the finger keys, are connected to an interface which generates MIDI codes (I-Cube). This information is then processed by a Macintosh running Max/MSP. We thus obtain a MIDI instrument which can operate as a controller for any MIDI synthesizer. Furthermore, if the computer is powerful enough, synthesis models can be implemented with the MSP program.

I - Introduction

Digital wind instruments generally differ a lot from traditional instruments, and most professional players have therefore shown little or no interest in them. So far, keyboard instruments have been the most successful digital instruments, since the playing techniques are the same as on a real piano (although the resistance of the keys and the sound are not the same as on a real piano). Thus, in order to propose a digital wind instrument which conserves traditional playing techniques, a real flute with sensors and a microphone has been used to pilot a synthesis model. The sensors give the state of the finger keys and are connected to an interface which generates MIDI codes (I-Cube). This information is then processed by a Macintosh running Max/MSP. We thus obtain a MIDI instrument which can operate as a controller for any MIDI synthesizer.
Furthermore, if the computer is powerful enough, synthesis models can be implemented with the MSP program. The sound model to be controlled by the interface is based on a combination of physical and signal models adapted to the simulation of wind instruments. This model makes it possible to resynthesize a sound from a given wind instrument and to manipulate it by acting on the parameters of the synthesis model. In this paper we start by giving a brief description of the construction of the sound model. The flute interface is then described, before we discuss the performance possibilities such an instrument offers.

II - Sound Models

Sound modeling is part of the analysis-synthesis concept, which consists in constructing a synthetic sound from a natural one using algorithmic synthesis models. For this purpose, powerful synthesis models which can be piloted in real time should be constructed. Such models can be divided into two groups: signal models, whose aim is to simulate a perceptive effect, and physical models, which take into account the most important physical characteristics of the sound source. By combining these two classes of synthesis models, we have defined so-called hybrid models.

II.1 - Physical Models

Even though the aim of sound models is to simulate a perceptive effect, it is important to take into account the most important physical characteristics of the sound production system. In this case we have used simplified one-dimensional models corresponding to vibrating systems such as vibrating strings or resonating tubes. We have used waveguide models, which simulate the behaviour of the waves by a looped system containing a delay line and a filter F that accounts for dispersive and dissipative effects as well as the boundary conditions. The impulse response of this filter has been constructed in a way similar to the inversion of a time-frequency representation.
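The looped delay-line-plus-filter structure can be sketched as follows. This is a minimal illustration, not the authors' implementation: the one-pole averaging lowpass and the constant loss gain are stand-ins for the fitted filter F, whose impulse response the paper derives from a time-frequency analysis.

```python
def waveguide(excitation, delay_len, loss=0.995, n_samples=4000):
    """Run an excitation through a looped delay line with a damping filter.

    The two-point average is an illustrative lowpass standing in for the
    paper's filter F (dispersion, dissipation, boundary conditions).
    """
    line = [0.0] * delay_len
    out, prev = [], 0.0
    for n in range(n_samples):
        x = excitation[n] if n < len(excitation) else 0.0
        y = line[n % delay_len]             # read the end of the delay line
        filtered = loss * 0.5 * (y + prev)  # averaging lowpass + loss gain
        prev = y
        line[n % delay_len] = x + filtered  # feed the filtered wave back
        out.append(y)
    return out

# Impulse excitation: the loop length sets the pitch (fs / delay_len),
# and the loop filter makes the tone decay and darken over time.
sig = waveguide([1.0], delay_len=50)
```

Transient excitations like this impulse are exactly the case where the paper reports excellent resynthesis results; sustained sources are handled separately, as described next.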
Physical models give excellent resynthesis results for transient excitations. However, for sustained sounds it is necessary to extract and model the source independently. Since the physics of the sources is generally very complicated, we have chosen signal models for this purpose.

II.2 - Signal Models

As already mentioned, signal models simulate a perceptive effect with mathematical equations. Such models generally give very good
results, but they are difficult to pilot from an interface since they generally contain many parameters and since these parameters have no physical meaning. In our case we have chosen to separate the source and the resonator, although they interact in a real instrument. The source has been extracted from the signal by deconvolution and further decomposed into a deterministic and a stochastic contribution by adaptive filtering (LMS). The deterministic contribution generally has a nonlinear behavior, and has therefore been modeled by waveshaping synthesis. The distortion index is chosen so that the spectral evolution of the synthetic signal is similar to that of the real signal when the air pressure from the player's mouth changes. This can be done using perceptive criteria such as the tristimulus criterion, which consists in considering the loudness in three separate parts of the spectrum. The distortion index is found by minimising the difference between the tristimulus of the real and the synthetic sounds. The stochastic contribution of the source signal was assumed to be stationary and ergodic, and could therefore be characterized by its power spectral density and its probability density function.

II.3 - Hybrid Models

In the sections above we have briefly described how physical and signal models can be used to simulate sounds. Physical models simulating the propagation of elastic waves (waveguide synthesis models) are well adapted to interfaces imitating traditional musical instruments, since relatively few parameters intervene in the model, and since these parameters are physically meaningful. However, for a great number of instruments, non-linear phenomena related to the source make it difficult to construct a complete physical model of the whole instrument. This is why we have chosen to separate the source and the resonator and to model them independently.
In the final digital instrument we combine these two models into a so-called hybrid model. This model is mainly piloted by two parameters: the frequency for the physical model and the air pressure from the player's mouth for the signal model. In the next section, a detailed description of the interface used for this purpose is given.

III - The Flute Interface

As already mentioned, we wanted to control the sound model with an interface which could be of interest to musicians. Thanks to powerful computers, interesting realisations of new interfaces are possible nowadays. So far, most digital instruments use keyboard interfaces which detect the speed at which the keys are struck. Such interfaces are not satisfactory for modeling sustained instruments, since for such instruments the sound can be modified after the attack. Although the music industry has tried to solve this problem by adding so-called breath controllers or aftertouch systems, instrumental playing is generally closely related to the structure of the instrument. This means, for example, that the linear structure of a keyboard is not easily adapted to trumpet playing, and that the information given by a keyboard is poor compared to the possibilities a flute player has when shaping a sound. This is why we decided to use a classical flute to pilot the hybrid model. A small magnet has been attached to each finger key of the instrument, and a Hall effect sensor has been mounted in front of each magnet so that all the sensors are supported by a rail parallel to the instrument (Figure 1). The output voltage of each sensor gives a measure of the distance between the keypad and the corresponding hole, allowing us to detect whether the hole is open or closed. Furthermore, by regular sampling of this distance, we can also measure the speed at which each key is pressed or released.
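The key-state and key-velocity extraction can be sketched as follows, assuming a normalized distance signal sampled at the 95 Hz rate mentioned below; the threshold value is an illustrative assumption.

```python
SAMPLE_RATE = 95.0       # Hz: the I-Cube rate for 16 sensors (from the paper)
CLOSED_THRESHOLD = 0.2   # normalized distance below which a hole is "closed"
                         # (illustrative value, not from the paper)

def key_events(distance):
    """Return (sample_index, state, speed) for each open/close transition.

    speed is the finite-difference velocity at the transition, in
    normalized distance units per second; negative means closing.
    """
    events = []
    prev_closed = distance[0] < CLOSED_THRESHOLD
    for i in range(1, len(distance)):
        closed = distance[i] < CLOSED_THRESHOLD
        if closed != prev_closed:
            speed = (distance[i] - distance[i - 1]) * SAMPLE_RATE
            events.append((i, 'closed' if closed else 'open', speed))
            prev_closed = closed
    return events

# A key closing quickly and then reopening more slowly:
trace = [1.0, 0.6, 0.1, 0.05, 0.05, 0.3, 0.8, 1.0]
events = key_events(trace)
```

The magnitude of the closing speed is exactly the quantity that can drive the key-pad noise level in the synthesis model, as discussed next.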
This information can be used to control the keypad noise of the synthesis model, which has been found important for the realism of the synthesized sound. In order to measure the acoustic pressure inside the instrument, a microphone has been placed at the embouchure level. More precisely, the original cork of the instrument has been removed and replaced by a home-made assembly carrying the microphone while still allowing fine tuning of the instrument. A microphone able to handle the high acoustic pressure (about 140 dB SPL) inside the flute pipe was chosen. All these modifications have been made in such a way that the instrument remains playable. An I-Cube System (A/D interface) has been used to power the sensors, digitize their output voltages and send these measurements to a MIDI-interfaced computer. The interface allows the sampling of 16 sensors at up to a 95 Hz rate with 7-bit resolution, which has been found sufficient for this application. A Max object (iCube) comes with the hardware, which makes it easy to create a Max patch that processes the sensor data. The processing essentially consists in checking the open or closed state of each hole, then finding the associated pitch in a lookup table. A wrong fingering is thus not recognized; in this case, the last valid pitch remains active. The microphone signal is connected to the sound input of the Macintosh. It can then be sampled at the audio rate and processed by standard MSP objects in order to obtain the pressure envelope. The envelope magnitude is then resampled at a lower rate (50 Hz) and used to trigger note-on and note-off MIDI messages with the pitch given by the state of the holes. Since fluctuations of the envelope signal at about 5 Hz are known to be correlated with tremolo and vibrato, they are used to produce MIDI messages such as aftertouch and pitch bend.
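The fingering lookup and note-triggering logic of the Max patch can be sketched in Python as follows. The fingering table entries and the envelope threshold are illustrative assumptions, not a real flute fingering chart; the original runs as Max/MSP objects, not Python.

```python
# Fingering table: frozenset of closed holes -> MIDI note number.
# These three entries are invented for illustration only.
FINGERINGS = {
    frozenset(range(6)): 60,   # all six holes closed
    frozenset(range(5)): 62,
    frozenset(range(4)): 64,
}
NOTE_ON_LEVEL = 0.1            # envelope threshold (assumed value)

class FluteTracker:
    """One 50 Hz control frame -> at most one MIDI-like event."""

    def __init__(self):
        self.pitch, self.sounding = 60, False

    def update(self, closed_holes, envelope):
        # Unknown (wrong) fingerings keep the last valid pitch active,
        # as described in the text.
        self.pitch = FINGERINGS.get(frozenset(closed_holes), self.pitch)
        if envelope > NOTE_ON_LEVEL and not self.sounding:
            self.sounding = True
            return ('note_on', self.pitch)
        if envelope <= NOTE_ON_LEVEL and self.sounding:
            self.sounding = False
            return ('note_off', self.pitch)
        return None

tracker = FluteTracker()
```

The same envelope stream, band-pass filtered around 5 Hz, supplies the aftertouch and pitch-bend messages mentioned above.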
Because the I-Cube can manage 32 inputs (at the expense of a lower (65 Hz) sampling rate), additional analog inputs can be used for further synthesis parameters or MIDI messages. The player now has a traditional instrument able to send MIDI information about the playing. This could be used to drive any MIDI synthesizer, for performance-analysis purposes, or, with a little more programming, for accompaniment of the flute player (imagine a flutist playing with a Disklavier).

Figure 1: Realisation of an interface based on a flute equipped with sensors at the finger holes and a microphone detecting the internal driving pressure.

IV - Performance

In Figure 2 the hybrid flute model with the flute interface is shown. We recognise the physical model simulating the resonator of the instrument, which consists in a loop with a delay line and a filter. This model is controlled by the player's finger positions; the resulting key-pad noise is added to the input of the resonator. The signal model simulating the source signal is of course also injected into the resonator. It has a stochastic and a deterministic contribution, and it is piloted by the driving pressure measured at the embouchure level. The vibrato estimation is part of this model and is calculated by band-pass filtering the driving pressure, since flutists use pressure fluctuations to produce the vibrato. It is then added to the frequency input of the oscillator which generates the input signal of the waveshaping function. The amplitude input of the oscillator corresponds to the distortion index, which has been calculated using the tristimulus criterion. The arrows in the figure indicate the parameters which are interesting to modify. The frequency of the vibrato can for instance be changed by acting on the filter selecting the fluctuations of the amplitude modulation law corresponding to the internal pressure.
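The vibrato estimation can be sketched as a band-pass filter centred near 5 Hz applied to the 50 Hz pressure-envelope stream. The biquad band-pass below (RBJ cookbook form) is an illustrative choice of filter, not the one used in the paper.

```python
import math

def bandpass_coeffs(fs=50.0, f0=5.0, q=1.0):
    """RBJ constant-peak-gain band-pass biquad coefficients."""
    w = 2 * math.pi * f0 / fs
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w) / a0, (1 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form I biquad filtering of sequence x."""
    xh, yh, out = [0.0, 0.0], [0.0, 0.0], []
    for s in x:
        v = b[0] * s + b[1] * xh[0] + b[2] * xh[1] - a[1] * yh[0] - a[2] * yh[1]
        xh = [s, xh[0]]
        yh = [v, yh[0]]
        out.append(v)
    return out

b, a = bandpass_coeffs()
# Envelope = steady breath level plus a 5 Hz vibrato wobble:
fs, n = 50.0, 200
env = [0.5 + 0.1 * math.sin(2 * math.pi * 5 * k / fs) for k in range(n)]
vib = filt(b, a, env)
```

The filter removes the slow breath level and passes the 5 Hz wobble; the filtered signal can then be scaled into pitch-bend or added to the oscillator frequency, and changing the filter's centre frequency or gain changes the vibrato rate and depth as described in the text.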
By changing the gain of the output filter, the depth of the vibrato can be changed. The distortion index is a very sensitive parameter which has been estimated to fit the spectral evolution of the flute sound. Nevertheless, a change in the correspondence between the internal pressure and the distortion index can be imagined. A brass effect can then be given to the flute sound by increasing the variation domain of the distortion index. The timbre of the deterministic part of the source signal can further be changed by modifying the characteristics of the distortion function. By acting on the noise filter (power spectral density) and on the statistics (probability density function), the characteristics of the noise can be modified. The noise can of course also be completely removed from the sound, or be the only part of the source injected into the resonator. Thanks to the Hall effect sensors connected to the key pads of the flute, the velocity with which a finger hole is closed can be calculated. Thus, the noise from the key pads can be taken into account in the model. The characteristics of the resonator can be altered to create, for instance, cross-synthesis effects. By using a loop filter corresponding to a string with a flute excitation, a very particular sound would be generated, corresponding to blowing into a string. In a similar way, external sources to be filtered by the resonator can be added; this would, for example, allow the reproduction of the noise made by the flutist while playing. The physical model also makes it possible to change the structure of the instrument and play an instrument which is physically impossible to realize or to play (like a flute measuring 30 meters).
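The pressure-to-distortion-index correspondence and the "brass effect" can be sketched as follows. The linear mapping and all numeric ranges are illustrative assumptions; the paper fits this correspondence from the tristimulus criterion rather than choosing it by hand.

```python
def distortion_index(pressure, p_min=0.0, p_max=1.0, i_min=0.2, i_max=1.0):
    """Map normalized breath pressure to a waveshaping distortion index.

    A linear map with hand-picked ranges, for illustration only.
    Widening i_max enlarges the variation domain of the index, which
    is the "brass effect" described in the text.
    """
    p = min(max(pressure, p_min), p_max)
    return i_min + (i_max - i_min) * (p - p_min) / (p_max - p_min)

# Flute-like mapping vs. a widened, "brassy" mapping over a crescendo:
normal = [distortion_index(p / 10) for p in range(11)]
brassy = [distortion_index(p / 10, i_max=3.0) for p in range(11)]
```

With the widened mapping, the same increase in breath pressure drives the waveshaper much further into its nonlinear region, so the spectrum brightens more aggressively with loudness, as it does in brass instruments.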
Figure 2: Hybrid sound model simulating a flute. The arrows indicate some of the possible sound transformations that can be done by acting on the model's parameters.

V - Conclusion

In order to make a digital wind instrument which conserves the playing techniques of a traditional instrument, we have used a sound model which combines physical and signal models. The physical (waveguide) model simulates the resonator of the instrument, which is assumed to be linear. The signal model simulates the source of the instrument, which is separated from the resonator by deconvolution. The source behaves non-linearly and is simulated with waveshaping synthesis, using perceptive criteria such as the tristimulus criterion. In order to pilot the sound model, an interface consisting of a traditional flute equipped with sensors at the finger keys and a microphone at the embouchure level has been constructed. The instrument can thus be played as a traditional one, and can also be connected to a computer to pilot the sound model. In this case the information about the finger positions given by the state of the sensors is transformed into MIDI codes by an I-Cube. These codes, together with the pressure from the microphone, are then processed by a Macintosh running Max/MSP. The instrument can thus be used to resynthesize a given instrument and to manipulate the sound by acting on the parameters of the synthesis model. For that purpose, some commands have been added to the instrument. The instrument can also act as a controller for any MIDI synthesizer. In addition, it can be of importance for performance-analysis purposes, and we expect to use it in the future to better understand the interpretation process of wind instruments.