A HIERARCHICAL SYSTEM FOR CONTROLLING SYNTHESIS BY PHYSICAL MODELING

Perry R. Cook
Stanford CCRMA
email@example.com

ABSTRACT: Sound synthesis using physical models provides many expressive possibilities, with the associated cost that all of the parameters must be controlled, and the parameters sometimes interact in ways that are not obvious. A human player solves many of these problems by listening to (and feeling) the instrument and constantly updating the parameters. This paper describes a software architecture and system containing objects that encapsulate the instrument physics, and the physics, expertise, and perception of the player. Inter-object connections that modify instrument control parameters "on the fly" yield more consistent performances.

1. Introduction

Control parameter values for physical modeling synthesis are difficult to obtain by linear or non-linear system identification techniques because of the rapidly varying nature of transients. Further, since many interesting physical models contain a non-linear oscillator, parameters which work well once may not work the next time the same performance is attempted. A human player solves these problems by setting up instrument control parameters from memory, then adapting them based on auditory feedback from the instrument. Synthesis by physical modeling can also benefit from such a feedback control system, and can benefit further from a more complete simulation of the physics and expertise of the player. This paper describes a software architecture, evolved from the experience of creating several physical models and controllers (Cook92, Cook92b), which contains objects encapsulating the instrument physics, player physics, player expertise, and simple aspects of player perception.

2. Objects in a Complete System

Figure 1 shows a general Performer/Instrument class hierarchy. The blocks are discussed below with examples.
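The feedback loop just described, in which a player object sets parameters from memory and then corrects them by listening to the instrument, can be sketched as follows. This is a minimal illustration, not the paper's actual software: the class and parameter names (`Instrument`, `Performer`, `lip_tension`) are hypothetical, and the "physical model" is replaced by a trivial stand-in whose pitch depends on lip tension.

```python
# Minimal sketch of the Performer/Instrument feedback paradigm.
# All names and constants are illustrative, not taken from the paper's code.

class Instrument:
    """Holds control parameters and 'synthesizes' from them."""
    def __init__(self):
        self.params = {"breath": 0.5, "lip_tension": 0.5}

    def set_param(self, name, value):
        self.params[name] = value

    def render(self):
        # Stand-in for a waveguide physical model: output pitch (Hz)
        # varies with lip tension, as a nonlinear oscillator's might.
        return 220.0 * (1.0 + 0.3 * (self.params["lip_tension"] - 0.5))

class Performer:
    """Sets parameters from 'memory', then adapts them via auditory feedback."""
    def __init__(self, instrument):
        self.instrument = instrument

    def play_note(self, target_hz, steps=50, gain=0.002):
        for _ in range(steps):
            heard = self.instrument.render()              # perception
            error = target_hz - heard                     # flat (+) or sharp (-)
            lip = self.instrument.params["lip_tension"] + gain * error
            self.instrument.set_param("lip_tension", lip)  # correction
        return self.instrument.render()

perf = Performer(Instrument())
final_pitch = perf.play_note(233.0)  # converges near the target pitch
```

The point of the sketch is structural: the Performer never reads the Instrument's internal state directly, only its rendered output, which is the role the Perceptual Model plays in the hierarchy below.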
Figure 1. Class hierarchy for physical models and controls, using a Performer/Instrument paradigm.

A. Instrument Family Acoustics: The physical acoustics characteristics of a particular instrument family, such as the brass family, bowed string family, etc.

Ai. Acoustics of Specific Instruments within a family, containing all the relevant tone-producing musical acoustics. These are waveguide synthesis models (Smith87), such as the ClarIns, HoseIns, FluteIns, and DSPSinger objects used in the ClariNot, HosePlayer, SlideFlute, and SPASM programs.

Aij. Very Specific Instrument Acoustics.

B. Instrument Performance Physics Model: Models the physics of the instrument, other than the acoustics modeled in A. Examples include the mass and damping of the trombone slide or trumpet valve (dynamically modeled), or the length of the guitar fretboard (lookup table), to be used in calculating how long it should take a human arm to slide from one position to another. The bandwidth required for truly coupling the instrument physics to the performer physics could be quite high, although it is unlikely that this level of simulation must take place at audio rates.

Bi. These model a specific member of the family.

C. Performer Physics Model: Contains physical limitations of the player, such as the mass/spring/damping of the arm and hand (Janosy et al. 94). Also contains rules for perceptual/physiological guidance of articulators to targets, using instrument physical descriptions acquired by querying B. For example, the jaw can drop only so quickly, and the tongue body moves slower than the tongue tip.
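The kind of performer-physics limit described in B and C can be sketched as a mass/spring/damper system driven toward a target position, from which a Performer object could estimate how long a movement must take. All constants here are illustrative, not measured values from the paper.

```python
# Sketch of a performer-physics limit: an arm (or trombone slide) modeled
# as a mass/spring/damper driven toward a target position. Constants are
# illustrative; a real system would query the instrument physics model (B).

def slide_travel_time(x0, target, mass=1.0, k=80.0, damping=18.0,
                      dt=0.001, tol=0.005, t_max=5.0):
    """Integrate m*x'' = -k*(x - target) - damping*x' with semi-implicit
    Euler; return the time (seconds) to come within `tol` of the target."""
    x, v, t = x0, 0.0, 0.0
    while t < t_max:
        a = (-k * (x - target) - damping * v) / mass
        v += a * dt
        x += v * dt
        t += dt
        if abs(x - target) < tol:
            return t
    return t_max

short_move = slide_travel_time(0.0, 0.1)  # small slide shift
long_move = slide_travel_time(0.0, 0.7)   # near full extension, takes longer
```

Because the movement takes physical time, a control layer using such a model would schedule parameter changes ahead of the note onset rather than applying them instantaneously.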
The Singer program now includes mass and damping on each articulator (Cook93), so that each articulator drives to its target point according to an individual time constant. Another case might involve modeling the time required for a flute player's finger to move, possibly modeling the interaction of fingers within the hand. Ci. If desired, the specific instrument case could be addressed, noting that a skilled player develops muscles and strategies
unique to their specific instrument.

D. Instrument Family Expert: Contains an expert's knowledge for controlling instruments of the same family. Examples of such families include bowed strings with the same tuning characteristics, three-valved brass instruments, etc. Such knowledge might include coarse settings for lip tension and valve position to form particular notes on brass instruments. The MIDIController object within the HosePlayer program contains a lookup table of slide and lip parameters which gives a coarse setting for each MIDI note number.

Di. Specific Instrument Expert: Detailed settings for the specific member within the instrument family, such as the lip and breath offsets which differentiate a tuba from a trumpet.

Dij. The fine details that cause a good player to be comfortably proficient on a very specific instrument. This part should probably be adaptive, subject to feedback (Szilas et al. 93) (Wessel91), so that the player object can quickly adjust to a new instrument. Some functionality could be implemented by neural networks (Lee et al. 92).

E. Perceptual Model: "Listens" to the output of the physical model and drives control parameters to achieve the desired output. This object should have the capability of a "musically astute" listener, but not necessarily any knowledge of the specific instrument. The Expert objects (D) could be sent messages specifying that the sound is flat, unrich, or too loud. The necessary changes to the control parameters would be processed with input from the Expert objects, then appropriate messages would be sent to the physical model. The Perceptual Model could be composed of a simple time-domain pitch detector coupled to a "classical" expert system, or a full-blown model of the ear and brain (when such exists).

3. Implementation of a Simple Player/Instrument System

Figure 2 shows the software and connection architecture of a simple system including many of the elements described above.
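The interaction between the Expert objects (D) and the Perceptual Model (E) described above can be sketched as follows: a coarse per-note lookup table supplies initial settings, and symbolic perceptual messages such as "flat" or "sharp" nudge a fine offset. The table values and message names here are hypothetical, not taken from the HosePlayer tables.

```python
# Sketch of the Expert (D) / Perceptual Model (E) interaction: coarse
# per-MIDI-note settings from a lookup table, refined by symbolic feedback.
# All table values, names, and step sizes are illustrative.

COARSE_TABLE = {note: {"slide": (note % 12) / 12.0, "lip": note / 127.0}
                for note in range(128)}

class Expert:
    def __init__(self):
        self.lip_offset = 0.0  # fine correction, adapted from feedback

    def settings_for(self, midi_note):
        """Coarse settings for a note, plus any adapted fine offset."""
        settings = dict(COARSE_TABLE[midi_note])
        settings["lip"] += self.lip_offset
        return settings

    def handle_message(self, message):
        # The perceptual model reports a symbolic judgment, not raw samples.
        if message == "flat":
            self.lip_offset += 0.01   # tighten the lips to raise pitch
        elif message == "sharp":
            self.lip_offset -= 0.01

expert = Expert()
expert.handle_message("flat")     # perceptual model heard a flat note
expert.handle_message("flat")
settings = expert.settings_for(60)  # next attempt starts slightly corrected
```

Because the offset persists across notes, the player object "quickly adjusts to a new instrument" in the sense described for Dij: early corrections are remembered rather than rediscovered on every note.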
The PlayerController block does all decision making and processing, and the other blocks respond to queries from the PlayerController. The Expert is a simple lookup table; the Perceptual Model is a simple pitch and power detection program (Cook et al. 93). The link from Player to Instrument is accomplished via a MIDI connection, over which simple MIDI Control Change messages are passed. The Instrument consists of a DSP synthesis instrument, controlled by an InstrumentController, whose purpose is to convert MIDI messages into parameter changes for the synthesis instrument. Figure 3 shows the user interface panels for the programs implementing these Performer and Instrument functions.

Figure 2. Simple Performer/Instrument system implementing a modular control scheme.

Figure 3. Performer/Instrument TPlayer and HosePlayer. TPlayer contains an expert listener, which modifies slide and/or lip parameters.

4. References

(Cook92) P. Cook, "SPASM: a Real-Time Vocal Tract Physical Model Editor/Controller and Singer: the Companion Software Synthesis System," Computer Music Journal, 17: 1.

(Cook92b) P. Cook, "A Meta-Wind-Instrument Physical Model, and a Meta-Controller for Real Time Performance Control," ICMC, San Jose.

(Cook et al. 93) P. Cook, D. Morrill, and J. Smith, "A MIDI Control and Performance System for Brass Instruments," ICMC, Tokyo.

(Cook93) P. Cook, "New Control Strategies for the Singer Articulatory Voice Synthesis System," Stockholm Music Acoustics Conference.

(Janosy et al. 94) Z. Janosy, M. Karjalainen, and V. Valimaki, "Intelligent Synthesis Control with Applications to a Physical Model of the Acoustic Guitar," ICMC, Aarhus.

(Lee et al. 92) M. Lee and D. Wessel, "Connectionist Models for Control of Sound Synthesis," ICMC, San Jose.

(Smith87) J. Smith, "Musical Applications of Digital Waveguides," CCRMA Report STAN-M-39.

(Szilas et al. 93) N. Szilas and C. Cadoz, "Physical Models That Learn," ICMC, Tokyo.

(Wessel91) D. Wessel, "Instruments That Learn, Refined Controllers, and Source Model Loudspeakers," Computer Music Journal, 15: 4, pp. 82-85.