Feedback in Musical Computer Applications

Pauli Laine, Kai Lassfolk
Department of Musicology
P.O. Box 35 (Vironkatu 1)
00014 University of Helsinki, Finland
Email: Pauli.Laine, Kai.Lassfolk

Abstract

This paper addresses the issue of various forms of feedback in musical activity and their possible computer simulations. In particular, feedback situations involved in musical performance and in composition are considered. Simulation of human musical performance raises a need for new communication protocols; their basic requirements are described. The concept of dynamic data representation in compositional processing is also discussed.

1 Introduction

Almost all human musical activity, including composition and performance, involves some kind of feedback. Yet most computer music systems are based on one-way communication or decision making. For example, MIDI is principally a one-way communication protocol with a few special-case exceptions. This means that MIDI offers no means for a computer to adjust its behavior to suit the sonic characteristics of an instrument in the way that a human player can.

On a general level, music can be seen as interactive behavior involving control feedback in a variety of forms. A musician uses audio feedback to perceive the response of the instrument (such as attack time, sound and pitch) to a playing action. The musician also uses non-auditory information such as touch and kinesthesis to control the instrument properly. A composer may listen to her music but can also receive visual and symbolic or non-symbolic grouping information from the produced musical event stream. This integrated human musical processing system results in playing and composing behavior that is far more effective than anything achieved by conventional generative algorithms.
Some practical processing examples performed by a human musician include the following: a human player can anticipate the different response times of instruments; a musician can adjust a playing movement so as to reach a desired pitch; a composer notices the formation of scales, patterns, harmonic note series and repetitions of musical patterns resulting from arbitrary compositional decisions.

Feedback or feedback-like behavior has been applied in various ways in existing computer music systems. The classic generate-and-test method can be seen as a feedback-like system in which a randomly generated parameter is compared against a predefined rule set. However, the method alters neither the behavior of the random number generator nor the rule set during the generation process. Thus, the method lacks true feedback. The Markov chain method is a similar example of a random process based on a predefined rule set. Artificial neural networks use true feedback during a dedicated learning phase; in the generative phase, however, the rule set remains static. More genuine musical feedback has been applied in experimental performance systems, e.g. [Goto et al. 1996]. The concept of acoustic feedback has also been applied in instrument modeling. Our study addresses musical feedback on a general level, involving musical communication and decision making in composition, performance and sound generation, thus further elaborating the ideas given in [Laine 1998].

2 Basic concepts

We use the term feedback in the same sense as in cybernetics [Ashby 1968, Wiener 1971]. By the term "integrated feedback" we mean a set of computational techniques in a single system for simulating feedback relations, such as between an intended compositional/improvisational gesture and its perceived acoustic realization. By the term "distributed feedback" we mean a cybernetic relationship between two or more distinct systems, such as between a human player and an instrument.
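For contrast with these cybernetic notions, the generate-and-test method mentioned above can be sketched as follows. This is a minimal illustration with an invented rule set (consonant intervals), not a reconstruction of any particular system; note that neither the random generator nor the rule set is ever modified, which is why the scheme only resembles feedback:

```python
import random

# Static rule set: accept only pitches forming a consonant interval
# with the previous pitch (an illustrative rule, not from the paper).
CONSONANT_INTERVALS = {0, 3, 4, 5, 7, 8, 9, 12}

def generate_and_test(length, seed=0):
    rng = random.Random(seed)
    melody = [60]  # start on middle C (MIDI note number)
    while len(melody) < length:
        candidate = rng.randint(48, 72)                         # generate
        if abs(candidate - melody[-1]) in CONSONANT_INTERVALS:  # test
            melody.append(candidate)
    # Neither `rng` nor CONSONANT_INTERVALS is ever altered during
    # generation, so the loop lacks true feedback in the cybernetic sense.
    return melody
```

In a true feedback system, the test results would be routed back to modify the generator or the rule set themselves.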
A complex system may use both integrated and distributed feedback. In an integrated musical feedback system, each event negotiates with its environment about possible actions. These actions could include the creation of a note, the embellishment of a note, the clarification of the emerging harmony, and the addition of silence to make room for actions by other musical events.

ICMC Proceedings 1999

In a distributed feedback situation, two or more systems use two-way communication links and a common communication protocol for the negotiation process. Here, one of the systems may act as an active "master" while the others (i.e. "slaves") only passively transmit mechanical response messages to the master's command messages.

3 Implementation issues

Implementation of a feedback system requires algorithms different from those of conventional algorithmic composition. There can be no fixed grammar to dictate the flow of events, nor can there be simple feed-forward random decisions based on probabilities. The required system should be able to analyze the context of an event (or group of events) from several viewpoints, feed the analysis data back to a generating apparatus, and adjust the parameters of the apparatus accordingly. This scheme applies to symbolic algorithmic event generation as well as to sound synthesis and computerized playing.

One of the most important and distinguishing factors in musical feedback is that the processed data are very complex (multiply delayed, patterned, symbolic, even non-symbolic). The data may consist of processes or process-control values, like those controlling pattern generation [Laine 1997]. This kind of "polymorphism" creates a very different situation from one where the feedback data consist of some continuous parameter. When considering delayed feedback loops and complex data, one may notice that not only can a certain specific algorithm be implemented using feedback, but the whole program can, and should, be integrated into one complex system with several feedback loops and an analysis system. Such a complex integrated system can lead to very non-linear and unforeseeable program behavior, and programming it is a difficult task [Hofstadter 1995, 123]. It is not possible to control such a system by conventional means of adjusting simple linear parameters (such as maximum pitch or ambitus).
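The analyze-feed-back-adjust scheme described above can be sketched as a small loop in which an analysis stage continuously retunes a generator parameter. All names, the random-walk generator and the "activity" measure are illustrative assumptions, not the authors' implementation:

```python
import random

class PitchGenerator:
    """Random-walk pitch generator; `step` is a feedback-adjustable parameter."""
    def __init__(self, step=2, seed=0):
        self.step = step
        self.rng = random.Random(seed)
        self.pitch = 60

    def next(self):
        self.pitch += self.rng.randint(-self.step, self.step)
        self.pitch = max(36, min(84, self.pitch))  # clamp to a playable range
        return self.pitch

def analyze(window):
    """Analysis stage: mean absolute interval over recent events."""
    return sum(abs(b - a) for a, b in zip(window, window[1:])) / max(len(window) - 1, 1)

def run(n=64, target_activity=1.5):
    gen = PitchGenerator()
    events = []
    for _ in range(n):
        events.append(gen.next())
        if len(events) >= 8:
            activity = analyze(events[-8:])       # analyze the recent context
            if activity < target_activity:
                gen.step = min(gen.step + 1, 12)  # feed back: widen the generator
            elif activity > target_activity:
                gen.step = max(gen.step - 1, 1)   # feed back: narrow it
    return events
```

Unlike generate-and-test, the analysis here modifies the parameters of the generating apparatus itself rather than merely filtering its output.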
4 Problems addressed by musical feedback

The different response times and the sound or pitch response of instruments can cause problems in computerized playing. MIDI-based systems commonly use the same note-on timing, sound and pitch scheme regardless of the type of instrument. This causes difficulties when changing the instrument from, say, vibraphone to strings, or when playing instruments with a non-linear pitch or sound response. If MIDI allowed sending the characteristics of the instrument back to the playing/composing algorithm, the performance would be easier to control algorithmically.

Compared to the actions of a human composer, conventional composing or improvisatory algorithms do not recognize the musical patterns and situations that emerge from rules or grammars. Common rule-based algorithms proceed without analyzing the audible result. In the case of context-sensitive grammars or Markov chains, the feedback from the generated event sequence is used to select items from a predefined rule set, not to modify the parameters of the generating apparatus. If a musical line could be analyzed and the analysis information sent back to the composing algorithm, the composing process would be closer to human compositional processes.

5 Feedback in musical communication

Full-blown simulation of musical feedback in a computer-controlled performance would require a complex performance/synthesis/analysis system including generation of control events, generation of sound based on the control data, and analysis of the audio data for the adjustment of the performance. Technically, this would require highly complex audio analysis systems that are not yet available. Moreover, acoustic signal analysis involves potential loss of data and the handling of external, irrelevant signals. One practical solution is to integrate the concept of feedback into communication protocols.
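One way such a bidirectional protocol might look is sketched below: timestamped, uniquely identified control requests are paired with analysis replies from the instrument. All message fields, names and the mapping table are assumptions for illustration, not an existing specification:

```python
import time
import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)

@dataclass
class Request:
    """Gestural control message, e.g. 'play note 69 at velocity 100'."""
    parameter: str          # e.g. "note_number", "velocity"
    value: float
    request_id: int = field(default_factory=lambda: next(_ids))  # unique request/reply mapping
    timestamp: float = field(default_factory=time.monotonic)     # exact message timing

@dataclass
class Reply:
    """Sonic analysis message sent back by the instrument."""
    request_id: int         # refers to the originating request
    parameter: str          # e.g. "fundamental_frequency", "amplitude"
    value: float
    timestamp: float = field(default_factory=time.monotonic)

# Instrument-specific mapping between control parameters and their sonic equivalents.
SONIC_EQUIVALENT = {
    "note_number": "fundamental_frequency",
    "velocity": "amplitude",
}

def make_reply(req: Request, measured_value: float) -> Reply:
    """The instrument pairs its measured sonic result with the triggering request."""
    return Reply(req.request_id, SONIC_EQUIVALENT[req.parameter], measured_value)
```

With such pairing, the controlling device can compare what it requested against what the instrument actually produced, and adjust subsequent control data accordingly.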
A digital synthesis instrument has the potential to generate detailed descriptive data about the sound it produces. Given a suitable communication protocol, the instrument could transmit an analysis of its actions back to the device that controls it. To achieve this, three basic requirements must be placed on the protocol:

1. The communication protocol should be able to uniquely map a request message to a corresponding reply message.
2. The protocol should be able to express the exact time of both request and reply messages.
3. There should be a scheme for mapping the content of a request to the content of a reply.

The first two requirements have already been addressed in general-purpose computer communication protocols. The third requirement is specific to music. Examples of such mappings are:

1. note number (e.g. as in MIDI) vs. fundamental frequency
2. velocity vs. amplitude
3. timing vs. onset
4. pitch change vs. fundamental frequency

The principle here is that gestural control data are mapped to equivalent or closely related sonic parameters that can be expressed in conventional acoustical units such as Hertz or dB. Control data may, in turn, be expressed in mechanical units, in a similar way as note numbers, velocity values or control change values are represented in MIDI. The protocol specification should give a sonic equivalent for each control parameter, but the exact mapping between, e.g., note numbers and corresponding frequencies would be instrument specific.

MIDI fulfills none of the three requirements above. However, a limited form of musical feedback may still be implemented with the aid of some extensions to the protocol. We intend to address this issue in a later publication. A bidirectional musical communication protocol would, to a large extent, if not completely, solve the problem of determining the roles of sonic and gestural control data. Such a protocol would also enable performance software to adapt its phrasing to suit the sonic characteristics of different instruments, even without prior knowledge of the type of instrument. Adjustment-related data could be obtained either in real time during the performance or from a rule set obtained via a separate training process.

6 Integrated feedback and dynamic data

Defining a feedback system that adjusts the parameters of non-linear sound synthesis (such as a virtual acoustic instrument) requires a new data representation. For example, arriving at a desired pitch might require analyzing the spectral components and amplitude of the generated sound and estimating its pitch. With the term dynamic data representation we refer to the communication system inside an integrated feedback system. On many occasions, feedback data are non-symbolic: the data can be continuous, tactile or sonic [Clynes 1982]. The area of "compositional feedback" involves the representation of symbolic and non-symbolic musical data and processes. Some ideas are presented here as an example of such an integrated compositional algorithmic system.
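The pitch-estimation step mentioned above can be sketched with a simple autocorrelation estimator. This is a minimal, illustrative analysis stage under assumed names, not the authors' implementation, and far simpler than the analysis a real virtual acoustic instrument would need:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono signal by autocorrelation:
    the lag at which the signal best matches a shifted copy of itself
    corresponds to one period of the fundamental."""
    best_lag, best_corr = 0, 0.0
    lo = int(sample_rate / fmax)            # shortest period considered
    hi = int(sample_rate / fmin)            # longest period considered
    for lag in range(lo, min(hi, len(samples) - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag if best_lag else 0.0

# A 220 Hz sine sampled at 8 kHz serves as a test signal.
sr = 8000
signal = [math.sin(2 * math.pi * 220 * n / sr) for n in range(2048)]
```

In an integrated feedback system, the estimate returned here would be compared against the intended pitch and the difference fed back to the synthesis parameters.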
From a cybernetic perspective, every compositional program structure should include a mechanism for possible feedback. The basic requirements for the mechanism include: avoiding infinite recursive feedback loops, negotiation as a form of feedback, and different structural levels of representation.

To test the theoretical generative apparatus, a simple experimental design was devised. The pitches produced by the apparatus were fed to a simple pitch detector device consisting of a row of adaptive artificial neurons representing the pitches of the scale. These artificial neurons were designed so that they become "tired" after constant exposure to identical pitches.

Figure 1: Pitch perceptor with combined perception activation stream and feedback connection shown.

A more detailed example involves a device where the total activity of the pitch perception array can be used to show integrated pitch-distribution/rhythmic activity (figure 1). This continuous pitch perceptron can be used for analyzing the harmonic color and the variation of the pitches used. The resulting perception neuron activity curves can be used either alone or combined to modify the parameters of the pitch stream generator.

7 Implementational questions

The issue of data representation arises when designing feedback-oriented programs. Music representation issues have been discussed by [Honing 1993], who suggests that the paradigm of microworlds and exploratory programming might facilitate approaching a general music representation system. The most difficult problem is to develop a unified method for representing timed polymorphic data. It is necessary to develop a new knowledge representation method which is not declarative and which differs from procedural knowledge representation in that the timed behavior of the system is part of its knowledge and intelligence. This new way of representing musical knowledge, inspired by Minsky [Minsky 1972], can be called rhythmic knowledge representation.
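The "tiring" pitch-perception array described above can be sketched as follows. Fatigue and recovery rates, class names and the linear adaptation rule are illustrative assumptions; the original device's exact dynamics are not specified in the paper:

```python
class TiringNeuron:
    """Adaptive 'neuron' for one scale pitch that fatigues under repetition."""
    def __init__(self, pitch, fatigue=0.3, recovery=0.05):
        self.pitch = pitch
        self.sensitivity = 1.0   # full response when rested
        self.fatigue = fatigue
        self.recovery = recovery

    def perceive(self, pitch):
        if pitch == self.pitch:
            activation = self.sensitivity
            self.sensitivity = max(self.sensitivity - self.fatigue, 0.0)  # get "tired"
            return activation
        self.sensitivity = min(self.sensitivity + self.recovery, 1.0)     # recover
        return 0.0

class PitchPerceptor:
    """Row of tiring neurons, one per scale pitch."""
    def __init__(self, scale):
        self.neurons = [TiringNeuron(p) for p in scale]

    def perceive(self, pitch):
        # Total array activity: repeated pitches yield progressively weaker
        # responses, so the curve reflects pitch variety as well as rhythm.
        return sum(n.perceive(pitch) for n in self.neurons)
```

The activity values returned by `perceive` correspond to the curves that, per the text, could be fed back to modify the parameters of the pitch stream generator.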
We assume that a similar rhythmic knowledge representation is used in the human mind when processing and imagining music. What makes the programming of feedback algorithms more complicated are the delays often occurring in feedback situations (for handling delayed feedback, see [O'Brien 1996]); some delay is often even needed for a proper feedback effect. Communicating the delayed feedback to the algorithm requires time-based, or relative-to-time, behavior even for functions which are not originally time-based.
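The need for time-based behavior can be illustrated with a toy loop in which analysis results reach the generator only after a fixed delay. Names and the one-step adjustment rule are invented for illustration; `level` stands for any feedback-adjusted parameter, such as a velocity value:

```python
from collections import deque

class DelayedFeedbackLoop:
    """Feedback arriving `delay` time steps late: the generator must reason
    about when an analysis result was produced, not just what it says."""
    def __init__(self, delay=3):
        self.pending = deque([None] * delay, maxlen=delay)  # in-flight analyses
        self.level = 64  # the feedback-adjusted generating parameter

    def step(self, analysis_value):
        arrived = self.pending[0]          # oldest queued analysis arrives now
        self.pending.append(analysis_value)  # newest one enters the pipeline
        if arrived is not None:
            # Only the delayed value may adjust the parameter, one step at a time.
            if arrived > self.level:
                self.level += 1
            elif arrived < self.level:
                self.level -= 1
        return self.level
```

During the first `delay` steps the parameter cannot react at all, which is exactly the kind of timed behavior that a purely declarative or procedural representation fails to capture.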

8 Conclusion and further research

Our aim was to investigate design principles leading to computer music programs that are more analogous to human behavior. Some basic issues and problems have been charted in this paper. The main task of defining data representations and communication protocols, as well as the challenging task of designing integrated and more complex systems leading to a more holistic musical system in general, remains. We plan to continue refining the above ideas and may, in the near future, propose a preliminary definition for a musical feedback data representation and a related communication protocol.

References

[Ashby 1968] Ashby, W. Ross: An Introduction to Cybernetics. Methuen, London 1968.
[Clynes 1982] Clynes, Manfred: Music, Mind and Brain. Plenum Press, New York 1982.
[Goto et al. 1996] Goto, M., Hidaka, I., Matsumoto, H., Kurode, Y., Muraoka, Y.: A Jazz Session System for Interplay among Players. Proceedings of ICMC 1996, Hong Kong.
[Hofstadter 1995] Hofstadter, D. R., & The Fluid Analogies Research Group: Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books, New York 1995.
[Honing 1993] Honing, Henkjan: A Microworld Approach to the Formalization of Musical Knowledge. Computers and the Humanities, no. 27, 1993.
[Laine 1997] Laine, Pauli: Generating Musical Patterns Using Mutually Inhibited Artificial Neurons. Proceedings of ICMC 1997, Thessaloniki.
[Laine 1998] Laine, Pauli: Cybernetic Perspective to Music Algorithms: The Control Feedback in Cognitive Modelling. Proceedings of the 5th International Conference on Music Perception and Cognition, Seoul 1998.
[Minsky 1972] Minsky, Marvin: Computation: Finite and Infinite Machines. Prentice-Hall, London 1972.
[O'Brien 1996] O'Brien, K. M.: Task-level Control for Networked Telerobotics. SM Thesis, MIT, 1996.
[Wiener 1971] Wiener, N.: Cybernetics: or Control and Communication in the Animal and the Machine, 2nd ed. Cambridge, Mass. 1971.