Distal Learning of Musical Instrument Control Parameters

Michael A. Casey
Perceptual Computing Group
MIT Media Laboratory
Cambridge, MA 02139
mkc@media.mit.edu

Abstract

This paper describes a parameter estimation method for sound-generating environments based on distal supervised learning techniques using multi-layer neural networks. The paradigm we have chosen for investigation is that of learning to control a musical instrument in order to produce an intended sound. We present the general framework of distal learning from the contemporary control theory literature and show that musical instrument control is a distal learning problem. Examples of the application of distal learning to the control of various sound synthesis environments are discussed. We also consider representational issues for signal-based learning in neural networks.

1 Introduction

When performers play musical instruments they realize a mapping from an internal representation of sound intentions to a set of motor actions that, when applied to the instrument, create the intended sounds. Clearly this mapping is learned over a significant amount of time, during which the performer practices many musical passages and many forms of articulation appropriate to their instrument. After much training the musician is able to realize novel intentions without having to practice every possible situation in advance. Such is the case in improvisation, for example, where the performer draws on previously learned skills to create new musical outcomes.

In this paper we are primarily concerned with the issue of timbral control of a musical instrument and with modeling the learning process that allows a performer to produce a sound outcome from a sound intention. Space dictates that we limit the discussion to the modeling of static control environments; however, the methods presented here can also be applied to dynamic control environments. We first introduce the distal learning problem and show that it is appropriate for learning to control a musical instrument.

2 Distal Learning

The distal learning problem is illustrated in Figure 1. The learner controls a set of distal variables via a set of proximal variables. The proximal variables are inputs to an environment that produces a distal outcome. Musical performers directly control action parameters such as bow pressure, bowing speed and finger positions. These control parameters pass through the musical instrument, which is a complex dynamical system, and the resulting sound is a transformation of these inputs. Thus the performer has only indirect control over the sound outcome.

The learner holds an internal representation of the sound that they want to produce, i.e. a sound intention, and it is the difference between this intention and the sound outcome that is used to drive the learning of the control parameters to the instrument. We refer to the sound output as y and the sound intention as y*. Thus the error term for learning can simply be stated as:

    E = (y* - y)                                        (1)

The musical instrument is referred to as a plant, or physical environment, for which the learner has to find an inverse model that maps sound intentions to the control parameters that produce them.
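The control loop of Figure 1 can be made concrete with a small numerical sketch. The Python/JAX code below is not taken from the paper; it assumes a toy one-parameter "instrument" (the plant function), small multi-layer networks, and the standard two-phase distal-learning recipe: fit a forward model of the plant from observed (control, sound) pairs, then train the learner by backpropagating the distal error of Eq. (1) through the frozen forward model into the control parameters. All function names, network sizes, and training settings are illustrative assumptions.

    # Minimal distal supervised learning sketch (toy static plant), assumed setup.
    import jax
    import jax.numpy as jnp

    def plant(x):
        # Environment unknown to the learner: proximal control x -> distal outcome y.
        return jnp.tanh(2.0 * x) + 0.3 * x

    def init_mlp(key, n_hidden=32):
        k1, k2 = jax.random.split(key)
        return {"w1": 0.5 * jax.random.normal(k1, (1, n_hidden)),
                "b1": jnp.zeros(n_hidden),
                "w2": 0.5 * jax.random.normal(k2, (n_hidden, 1)),
                "b2": jnp.zeros(1)}

    def mlp(params, x):
        h = jnp.tanh(x @ params["w1"] + params["b1"])
        return h @ params["w2"] + params["b2"]

    def sgd(params, grads, lr):
        return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

    key = jax.random.PRNGKey(0)
    k_fwd, k_inv, k_x, k_t = jax.random.split(key, 4)
    fwd = init_mlp(k_fwd)   # forward model: control x -> predicted sound y
    inv = init_mlp(k_inv)   # learner (inverse model): intention y* -> control x

    # Phase 1: fit the forward model from pairs gathered by "playing" the plant.
    x_train = jax.random.uniform(k_x, (256, 1), minval=-1.0, maxval=1.0)
    y_train = plant(x_train)

    def fwd_loss(fwd, x, y):
        return jnp.mean((mlp(fwd, x) - y) ** 2)

    for _ in range(2000):
        fwd = sgd(fwd, jax.grad(fwd_loss)(fwd, x_train, y_train), lr=0.05)

    # Phase 2: train the learner on the distal error E = (y* - y), propagated
    # through the frozen forward model (only the first argument is differentiated).
    y_star = plant(jax.random.uniform(k_t, (256, 1), minval=-1.0, maxval=1.0))

    def distal_loss(inv, fwd, y_star):
        x = mlp(inv, y_star)   # proposed control parameters
        y = mlp(fwd, x)        # predicted distal (sound) outcome
        return jnp.mean((y_star - y) ** 2)

    for _ in range(2000):
        inv = sgd(inv, jax.grad(distal_loss)(inv, fwd, y_star), lr=0.05)

    # Evaluate through the *real* plant to check the achieved distal error.
    print(float(jnp.mean((y_star - plant(mlp(inv, y_star))) ** 2)))

A design point worth noting in this sketch is that the gradient of the distal error never requires differentiating the real instrument; only the learned forward model is differentiated, which is what makes the approach applicable to physical instruments and black-box synthesis environments.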