Integration of Aid to Composition and Performance Environments: Experiences of Interactions between Patchwork and Max-ISPW

J.B. Barrière & X. Chabot
Ircam, 31 rue St Merri, Paris, 75004 France
Tel: 33 1 44 78 48 23; Fax: 33 1 44 72 68 92

Abstract

While aid-to-composition and performance tools are usually distinct and even opposed, we try to show how they can complement each other, and why they should in fact be integrated in a unified environment to satisfy the compositional need for a better interaction between instrumental writing and sound synthesis. This is illustrated by examples of communication between the Patchwork aid-to-composition environment and the Max-ISPW performance environment, realized with composers at the Ircam Pedagogy department.

1 Introduction: the need for integration of aid to composition and performance

Traditionally, score representations and sound control, structural manipulations and sonic materials, are opposed and therefore processed separately in computer music environments. This separation replicates the actual cut between the score and the instrument in the traditional western instrumental world, which in turn tends to conceal the very nature of the difference between composition, the score as its medium, and interpretation as its realization. Most computer music environments to date tend to maintain, by their implicit or explicit conceptual design, the illusion that this separation between score and orchestra is logical, despite the obvious unification made possible by data processing in computers. We propose here a model of continuity and interaction between aid-to-composition and performance environments, based on experiences realized with Patchwork on the Macintosh [Laurson, 1989][Malt, 1993] and Max on the Ircam Signal Processing Workstation (ISPW) [Lindemann, 1991][Puckette, 1991], which we hope will serve in the future as a prototype of an integrated environment.
We believe that the traditional approach is problematic from both the conceptual and the pragmatic perspective: musicians, during the compositional process, have in mind and manipulate musical ideas as totalities, as sound concepts or images, which for the most part do not isolate the effects and causes of musical production. Therefore we consider that computer music environments should actively attempt to propose a unified way of conceiving music, from the conceptual background (strategies for describing musical ideas, including notes, sketches, and scores) to their practical realization in sound. In that spirit, aid to composition and synthesis control should be unified, at least as closely interacting pieces of software. What this implies goes much beyond the simple insertion of structural generation into synthesis programs; it suggests multiple representations of data and knowledge, connected by the composer as openly and freely as is possible when composing and writing on a piece of paper. The possibility of reaching the complexity required for such a compositional environment has been limited by the current state of computer programming languages, which drastically limits the capacity to add information and knowledge dynamically to an existing corpus. It is also limited by the conservative approach to the problem, which maintains the mentioned dichotomy, and by the aesthetic background of most composers working in this field. The accumulation of musical knowledge that occurs in composition environments such as Patchwork, as opposed to the fixed strategy of traditional positivistic algorithmic composition, clearly shows a way. A programming environment such as Max on the ISPW is perfect for event processing and scheduling, real-time control and sound processing, but is not yet adequate for implementing rule-based procedures.
By connecting these two already complex and mature environments, we propose a model of an integrated environment for composition that lets one prototype both structural manipulations and systems of instrumental control.

8B.3 214 ICMC Proceedings 1993

In this presentation, we will illustrate this approach with musical examples extracted from a series of pieces or sketches realized in the Ircam Pedagogy department. These examples introduce various analysis-transformation-resynthesis-control processes. This research was stimulated by the need for a better use of instrumental timbral properties to control the timbral characteristics of synthesized sounds, the need to relate harmony and timbre, and more generally the need to connect musical form and sound material organically.

3 The interface between analysis techniques, Patchwork and Max

The general scheme with which these examples were realized is the following: analysis data is read into Patchwork, processed, then written directly as Max abstractions (for instance 'qlist' sequence data or message boxes). These abstractions are read into Max-ISPW patches and their data is used to control - and to be controlled by - real-time synthesis or sound transformation.

The analysis data comes mainly from three different sources: the Iana, models of resonance, and FFT-1 programs. Iana is a program developed at Ircam by G. Assayag [Assayag, 1985], which implements the E. Terhardt algorithms [Terhardt, 1982] for the determination of the perceptual weights of partials in complex sounds. These algorithms also allow the distinction between spectral and tonal pitches. Models of resonance is an analysis/synthesis technique developed at Ircam by J.B. Barrière, Y. Potard, and P.F. Baisnée for the modelization of impulse-like sounds (percussions, pizzicati, buzzes, etc.) [Barrière, 1985][Potard, 1986][Baisnée, 1986]. This technique also allows one, using the same models, to process noise and recorded sound sources, and therefore to create continuity between synthesis and transformation. Developed originally as an application of the Chant program under Unix [Rodet, 1984][Barrière, 1991], the analysis was ported to the Macintosh, and the synthesis/transformation part realized as a bank of filters on the ISPW. FFT-1 is an analysis/synthesis package developed at Ircam by Xavier Rodet and Philippe Depalle [Rodet, 1992]. It provides a very efficient additive synthesis process, which allows the noise and harmonic parts of a sound to be controlled separately.

In the Max-ISPW implementation, the models of resonance can be excited by noise or sampled sources, and/or by live instrumental sounds captured on stage by microphones. A library of sampled excitations was also built by collecting various instrumental sources and separating the noisy and periodic parts with FFT-1. These excitations can therefore be triggered on request, for instance by the score-follower, in order to complement the attack of an instrument and to process the combination through the models of resonance.

The data for the models can be produced either directly by analysis and/or compositionally with Patchwork, which offers proper utilities to manipulate data from either the models of resonance analysis, Iana, or FFT-1. For instance, a library of utilities for interpolation techniques between two or more models is available to the composer to prepare new models. Besides, all the resources of Patchwork can of course be used to manipulate or generate this data, with serial, spectral, stochastic, chaotic, or ad hoc procedures [Malt, 1993]. Psychoacoustical control (critical-band masking, perceptual correction of spectral frequencies or amplitudes) can finally be used to process these interpolations. Once the composer has built his collections of models and control data, he can automatically produce Max-ISPW patches. Max control allows the addition of real-time scaling of filter frequencies, amplitudes, and bandwidths, as well as vibrato and jitter, spectral distortion, spectral correction depending on the fundamental and loudness, etc., mapped with pitch, amplitude, and spectral following when technically possible and compositionally relevant.

This basic process has been used to explore the relations between harmony and the timbral characteristics of an instrument. We will now show how through three examples.
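Before turning to the examples, the interpolation utilities mentioned above can be sketched in code. This is a minimal illustration in Python rather than Patchwork's actual Lisp environment; the model data, the partial-by-partial linear interpolation, and the Max 'coll'-style output format are simplified assumptions, not the actual Patchwork utilities.

```python
# Illustrative sketch (NOT Patchwork's actual utilities): linear
# interpolation between two models of resonance, each a list of
# (frequency Hz, amplitude, bandwidth Hz) triples, then written out
# as a Max-readable 'coll' text file ("index, f a bw;" per line).

def interpolate_models(model_a, model_b, t):
    """Interpolate partial-by-partial between two equal-sized models.
    t = 0.0 gives model_a, t = 1.0 gives model_b."""
    assert len(model_a) == len(model_b)
    return [tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))
            for a, b in zip(model_a, model_b)]

def write_coll(model, path):
    """Write a model as a Max 'coll' text file, one partial per line."""
    with open(path, "w") as f:
        for i, (freq, amp, bw) in enumerate(model):
            f.write("%d, %.2f %.4f %.2f;\n" % (i, freq, amp, bw))

# Two hypothetical two-partial models, for illustration only.
model_a = [(440.0, 1.00, 8.0), (880.0, 0.50, 12.0)]
model_b = [(520.0, 0.80, 3.0), (1310.0, 0.60, 5.0)]
halfway = interpolate_models(model_a, model_b, 0.5)
# partials roughly (480, 0.9, 5.5) and (1095, 0.55, 8.5)
write_coll(halfway, "halfway.coll")
```

The same idea applies to 'qlist' sequence data, where each line additionally carries a delta time so that a series of such models can be played back as a sequence.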

4 Three examples of relations between harmony and the timbral characteristic of an instrument

4.1 Interpolation between chords and spectra

The first example comes from Alma-Luvia by Florence Baschet, for voice, clarinet, viola, cristal Baschet, direct-to-disk, and ISPW. In one passage, the composer was looking for a sound layer which would hold together the harmony and the sound of both the tape part and the instruments. The general idea was to analyse with Iana, at regular intervals (500 ms), a clarinet sound used in the direct-to-disk part. The series of spectra thus obtained was read into Patchwork and compared with the two chords surrounding the passage, in order to choose an adequate transposition and to compute the interpolation between the series of spectra and the chords. The output of the process is a new series of spectra whose frequency components are matched; that data is written out in the form of Max qlist sequence data. These Max abstractions representing the interpolation are read into the Max-ISPW patch and are used to control a model of resonance whose input is the live singer plus noise modulated by her voice's amplitude. The process was used at several places in the piece, with various harmonic contexts, and played back at various speeds.

[Figure: Iana analysis of a clarinet sample, transposition, and interpolation.]

4.2 Adding an instrumental attack to a chord

The second example was done in the context of preliminary experiments for a project for marimba and ISPW with the composer Brian Ferneyhough. The aim is to play harmonic material produced by Patchwork and to add the instrumental characteristic of the marimba attack. Spectra of several marimba notes are produced in advance and their partials arbitrarily associated to chord components (chords are considered as spectra).
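This preparation step can be sketched as follows. The paper states that the association was arbitrary, i.e. a compositional choice; the nearest-frequency pairing rule below, like the model data, is therefore only a hypothetical stand-in for illustration (Python sketch, not the Patchwork implementation).

```python
# Hypothetical sketch of the preparation step: each chord component is
# paired with a marimba partial, whose time-varying amplitude can then
# drive an additive oscillator tuned to the chord pitch. The pairing
# rule here (nearest frequency) is an assumption; in the experiments
# described, the association was made arbitrarily by the composer.

def midi_to_hz(p):
    return 440.0 * 2.0 ** ((p - 69) / 12.0)

def associate(chord_midi, marimba_partials_hz):
    """Return (chord frequency Hz, index of nearest marimba partial)."""
    pairs = []
    for p in chord_midi:
        f = midi_to_hz(p)
        idx = min(range(len(marimba_partials_hz)),
                  key=lambda i: abs(marimba_partials_hz[i] - f))
        pairs.append((f, idx))
    return pairs

# Marimba bars are typically tuned with partials near the ratios 1:4:10.
marimba = [261.6, 1046.5, 2616.3]   # C4 fundamental and upper partials
chord = [60, 84, 96]                # C4, C6, C7
print(associate(chord, marimba))
```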
In performance, an additive synthesis instrument is fed on the one hand with frequencies corresponding to the chord pitches, and on the other hand with the time-varying amplitudes of the corresponding partials of the marimba, whose sound is analyzed in real time.

[Figure: real-time ISPW-Max analysis of the marimba feeding an additive synthesis bank at the specified frequencies.]

Note that in the Patchwork-Max-ISPW implementation, the box labeled 'iana spectrum' in the figure above corresponds in fact to a set of spectra associated with a 'keyboard split' similar to that of a sampled instrument. The pitch of the live marimba triggers selection and transposition of the Iana spectrum used to peek into the FFT table. This scheme of precomputing spectra as a 'peek grid' in a real-time FFT table is not too sensitive to the real pitch of the marimba, and was tested using different instruments for the Iana analysis and the real-time stimulation. The additive synthesis instrument also includes simple vibrato and spectral controls.

4.3 Tracking a timbral sequence

Tracking the timbral characteristics of a continuously evolving sound - for example a flute sound going from normal to breathy, or a cello sound going from normal to sul ponticello, as respectively in Kaija Saariaho's NoaNoa and Près [Chabot, 1993] - is especially important if one wants to establish interactions between instrument and computer other than simple triggering. In these first experiments, pitches, articulations, and timbral variations were completely notated in the instrumental score; the basic idea was to analyse the instrumental part beforehand at specific places, memorize the data sets, and match them against the analysis data computed in real time on the performing instrument. A matching criterion tells where the instrument is in the timbral sequence (i.e. which pre-analyzed set the performing instrument is closest to).
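Such a matching criterion can be sketched as follows: a minimal Python illustration that assumes a Euclidean distance over analysis feature vectors. The paper does not specify the actual criterion or features used on the ISPW, so both are assumptions here.

```python
# Illustrative sketch of the matching criterion: the 'timbral indicator'
# is the index of the pre-analyzed data set closest, in Euclidean
# distance, to the current real-time analysis frame. The feature
# vectors (e.g. partial amplitudes) and the distance are assumptions.

def timbral_indicator(frame, reference_sets):
    """frame and each reference set are equal-length feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(range(len(reference_sets)),
               key=lambda i: dist(frame, reference_sets[i]))

# Toy two-element sequence: A = 'normal', B = 'breathy'.
A = [1.0, 0.8, 0.1]   # strong harmonic partials, little noise
B = [0.4, 0.2, 0.9]   # weak partials, strong noise component
frame = [0.5, 0.3, 0.7]
print(timbral_indicator(frame, [A, B]))   # -> 1 (the frame is closer to B)
```

In use, the indicator would be recomputed on every analysis frame, yielding a continuous index into the timbral sequence rather than a one-shot trigger.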

The figure below gives an example where the sequence has two elements A and B.

[Figure: Iana analysis of the live instrument matched against pre-analyzed sets A and B.]

The index in the timbral sequence, or 'timbral indicator', is applied directly to a synthesis or sound transformation parameter: time stretching, harmonizer feedback and transposition, timbral variation of a sampled sequence, or control of the playback of an additive synthesis sequence.

5 Conclusion: toward an integrated compositional software

Despite the interaction potential shown between Patchwork and Max-ISPW, there is a clear limitation to this approach, due to the fact that these are two powerful yet distinct applications, and that the communication is basically unidirectional, without real feedback. What we did was to explore this potential. To go much further clearly demands not only more control structures and signal processing in Max-ISPW and higher-level representations in Patchwork, but a real common language that would at least allow the two applications to interact closely. We hope our experiences will help to make this compositional need understood.

References

[Assayag, 1985] Gérard Assayag, Michèle Castellengo, Claudy Malherbe. Functional Integration of Complex Instrumental Sounds in Musical Writing. Proceedings of the 1985 International Computer Music Conference, Vancouver. Berkeley: Computer Music Association, 1985.

[Baisnée, 1986] Pierre-François Baisnée, Jean-Baptiste Barrière, Olivier Koechlin, Miller Puckette, Robert Rowe. Real-time Interaction between Musicians and Computer: Performance Utilizations of the 4X. Proceedings of the 1986 International Computer Music Conference, The Hague. Berkeley: Computer Music Association, 1986.

[Barrière, 1985] Jean-Baptiste Barrière, Yves Potard, Pierre-François Baisnée. Models of Continuity between Synthesis and Processing for the Elaboration and Control of Timbre Structures.
Proceedings of the 1985 International Computer Music Conference, Vancouver. Berkeley: Computer Music Association, 1985.

[Barrière, 1991] Jean-Baptiste Barrière, Mikael Laurson, Francisco Iovino. A New CHANT Synthesizer in C and Its Control Environment in Patchwork. Proceedings of the 1991 International Computer Music Conference, Montreal. Berkeley: Computer Music Association, 1991.

[Chabot, 1993] Xavier Chabot, Kaija Saariaho, Jean-Baptiste Barrière. On the Realization of NoaNoa and Près, Two Pieces for Solo Instruments and the Ircam Signal Processing Workstation. Proceedings of the 1993 International Computer Music Conference, Tokyo. Berkeley: Computer Music Association, 1993.

[Laurson, 1989] Mikael Laurson, Jacques Duthen. Patchwork: A Graphic Language in PreForm. Proceedings of the 1989 International Computer Music Conference, Columbus. Berkeley: Computer Music Association, 1989.

[Lindemann, 1991] Eric Lindemann. The Architecture of the Ircam Musical Workstation. Computer Music Journal, Vol. 15, no. 3. Cambridge, Massachusetts: MIT Press, 1991.

[Malt, 1993] Mikhail Malt. Patchwork Introduction. Ircam Documentation, 1993.

[Puckette, 1991] Miller Puckette. Combining Event and Signal Processing in the Max Graphical Programming Environment. Computer Music Journal, Vol. 15, no. 3. Cambridge, Massachusetts: MIT Press, 1991.

[Potard, 1986] Yves Potard, Pierre-François Baisnée, Jean-Baptiste Barrière. Experimenting with Models of Resonance Produced with a New Technique for the Analysis of Impulsive Sounds. Proceedings of the 1986 International Computer Music Conference, The Hague. Berkeley: Computer Music Association, 1986.

[Rodet, 1984] Xavier Rodet, Yves Potard, Jean-Baptiste Barrière. The CHANT Project: From the Synthesis of the Singing Voice to Synthesis in General. Computer Music Journal, Vol. 8, no. 3. MIT Press, 1984.

[Rodet, 1992] Xavier Rodet, Philippe Depalle. A New Additive Synthesis Method Using Fourier Transform and Spectral Envelopes. Proceedings of the 1992 International Computer Music Conference, San Jose. Berkeley: Computer Music Association, 1992.
[Terhardt, 1982] Ernst Terhardt, Gerhard Stoll, Manfred Seewann. Algorithm for Extraction of Pitch and Pitch Salience from Complex Tonal Signals. Journal of the Acoustical Society of America, Vol. 71, no. 3, 1982.