On the Realization of NoaNoa and Près, Two Pieces for Solo Instruments and Ircam Signal Processing Workstation
X. Chabot, K. Saariaho, J.B. Barrière
IRCAM, 31 rue St Merri, 75004 Paris, France
Tel: 33 1 44 78 48 23; Fax: 33 1 44 72 68 92

Abstract

NoaNoa and Près are two solo pieces, one for flute, the other for cello, realized during 1992 by Kaija Saariaho with the Ircam Signal Processing Workstation (ISPW, based on the NeXT computer) and with Xavier Chabot of the Ircam Pedagogy department, with the pedagogical aim of exploring the potential of Max on the ISPW, and especially its connection with Patchwork, which was used to prepare data and patches. In the two pieces, the electronic parts amplify and develop the sonic structure of the solo instrument, exemplify the close relationship between sound material and musical structure, and explore real-time strategies for the control of various synthesis and signal-processing algorithms.

1 Introduction: timbre and harmony, from analysis to performance

Kaija Saariaho is interested in timbre as an extension of acoustical instruments and as a globalization of sonic phenomena. Consequently, she is interested in the idea of a timbral space, in which one can define variations and interpolations, and finally in the relations between timbre and harmony, which allow her to unify instrumental and synthetic writing. These ideas are present in nearly all her pieces with sound synthesis. K. Saariaho generally uses two types of models for the control of synthesis: referential models, based on unprocessed analysis data of an instrumental sound; and abstract models, created by applying compositional and psychoacoustical selection criteria to analysis data of the same instrumental sounds. Referential models, for example models of resonance, are used for sound synthesis and sound processing.
For each work a specific set of such models is created in order to define a timbral space which permits timbral interpolations controlled by compositional processes. Synthesis from referential models allows stepping outside the normal behavior of instrumental sounds: for instance, gong models with glissandi of partials, or with timbral and harmonic transformations, processes which, again, are under compositional control. Abstract models can be interpolated with referential models and are used mainly to embody the timbre/harmony relationship. They can be seen as chords which, depending on their content, will fuse into one timbre or separate into a more or less inharmonic chord. The composition-aid environment Patchwork [Laurson 1989] [Malt 1993] is used at all stages of the development process: producing analysis commands; getting, displaying, and processing analysis data; producing synthesis data and control structures; see [Barrière 1993] for examples. Both NoaNoa and Près tend to create a continuum out of very contrasting musical material. Several layers of processes (for instance pitch, rhythm, and playing-mode transformations) add up and evolve independently or in correlation, resulting in a very rich and complex texture, even though it comes from a solo instrument such as the cello or the flute. Each of these interpolation processes implies a movement; one has to invent directionalities that can really be perceived and make sense in the composition, by playing on polarities and contrasts between models. In this way, synthesis becomes a resource integrated into music composition. In NoaNoa and Près, these analysis and interpolation processes are used not only in the instrumental writing or for the production of synthetic models of prepared sounds, but also in real time, with various strategies of realization depending on the musical and technical contexts. For both works the instrumental score was completely written out before development of the live electronics part began.
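As a minimal illustration of such timbral interpolation, two analysis-derived models can be represented as equal-length lists of (frequency, amplitude) partials and interpolated linearly; this is only a sketch (the function name and data format are hypothetical), and the actual Patchwork processes are far richer:

```python
def interpolate_models(model_a, model_b, t):
    """Linear interpolation between two spectral models, each an
    equal-length list of (frequency_hz, amplitude) partials; t in [0, 1]
    moves from model_a (t=0) to model_b (t=1)."""
    return [((1 - t) * fa + t * fb, (1 - t) * aa + t * ab)
            for (fa, aa), (fb, ab) in zip(model_a, model_b)]
```

Sweeping t along a compositional trajectory yields the perceptible 'movement' between models described above.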
Sound synthesis and transformation are not meant to showcase new equipment; in fact they are independent of any particular machine (several versions already exist for various setups). The electronic part is meant to develop, amplify, and underline musical processes, instrumental gestures, and sonic material; it is conceived as resulting from the music composition and the performance rather than the opposite. The two pieces also have to be related to other works by Kaija Saariaho for the same instruments: Laconisme de l'aile (1982) for solo flute, Petals (1988) for cello and electronics, À la fumée (1991) for solo cello, alto flute, and orchestra, and Amers (1992) for solo cello, ensemble, and live electronics.

2 NoaNoa

NoaNoa is a piece for flute and the Ircam Signal Processing Workstation (ISPW, based on the NeXT computer) [Lindemann, 1991]. The basic material includes various classes of playing modes and patterns controlled by several layers of interpolation processes. For example, the 'trill' class includes: mordent, trill, microtonal trill, vibrato, flutter tongue, multiphonic; the 'noisy' class includes: breathy notes, overblowing into harmonics, speaking into the flute, tonguing with spoken phonemes, multiphonics, flutter tongue; the 'pattern' class includes: scale, microtonal scale, glissando, repetitive pattern, etc. The real-time electronics part implements the following modules: a prepared 'sampled flute' able to interpolate continuously between normal flute sound, breathy flute sound, and flute with phoneme tonguing; time-stretching modules used to play back recorded speech at various speeds and to control playback loops; infinite reverberation units with glissando, tremolo, and vibrato features; a convolution module able to sample spectra on the fly and to convolve them with the instrument sound (see below); models-of-resonance modules (banks of filters whose parameters are derived from a specific analysis technique developed at Ircam [Barrière, 1985] [Potard, 1986] [Baisnée, 1986]); and finally, commonly used modules such as harmonizers, delays, and reverberation units.
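The models-of-resonance idea, a bank of resonant filters tuned from analysis data, can be sketched roughly as follows. This is a generic two-pole resonator bank in Python/NumPy, not Ircam's actual implementation; the (frequency, amplitude, bandwidth) triple format is illustrative of the kind of data such an analysis provides:

```python
import numpy as np

def resonator_bank(x, partials, sr=44100):
    """Sum of two-pole resonators excited by x. Each partial is a
    (frequency_hz, amplitude, bandwidth_hz) triple."""
    y = np.zeros(len(x))
    for freq, amp, bw in partials:
        r = np.exp(-np.pi * bw / sr)            # pole radius from bandwidth
        theta = 2.0 * np.pi * freq / sr         # pole angle from frequency
        a1, a2 = -2.0 * r * np.cos(theta), r * r
        y1 = y2 = 0.0
        out = np.zeros(len(x))
        for n, xn in enumerate(x):
            yn = xn - a1 * y1 - a2 * y2         # y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
            out[n] = yn
            y2, y1 = y1, yn
        y += amp * out
    return y
```

Exciting such a bank with noise or with the live signal colors the input with the resonances of the analyzed sound; as the text notes, the real modules derive their parameters from the Ircam analysis technique.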
The control structure is built around the pitch tracker and the score follower [Puckette, 1992], which trigger the enabling and disabling of modules and processes, establish module interconnections, and trigger data uploading. In its latest version the piece is highly interactive, and almost every module is controlled in real time from the flute using pitch inflection, amplitude, trill speed, spectral content, note duration, and articulation. Control data pre-computed by Patchwork consists of pitch and rhythm sequences, timbral interpolation trajectories, reading trajectories in a sampled spoken phrase, and models-of-resonance data; it is prepared and packaged for Max with the aid of Patchwork.

2.1 Example of real-time convolution with Max-ISPW

Let us detail here the convolution module. The multiplied spectrum is that of the performing instrument, while the multiplying spectrum is static and stored in a table, but can be updated in real time. The Max-ISPW 'fft~' module takes a sample stream, or 'signal', at its input and outputs two 'signals', the real and imaginary parts of the transform, plus one output sending a trigger message with each new FFT window. Convolution here is performed by multiplying each sample of both the real and imaginary parts of the FFT by the sample corresponding to the same frequency read from the multiplying spectrum table. Thus the multiplying spectrum table must be read synchronously with the output of the FFT samples. This is done through a Max extern object realized by Z. Settel, which takes the message from the 'fft~' module and outputs samples one by one. The multiplying spectrum table is updated as often as needed, either entirely, or reset and written at selected frequencies. Convolution was used in the following cases.
First, simple filtering, where the spectrum table is set with frequency/amplitude pair lists obtained from analysis with the Iana program [Assayag, 1985] (an implementation of Terhardt's algorithms [Terhardt, 1982]) and processed by Patchwork. Such a filtering is static, but a sequence of frames can also be played back, as in additive synthesis. The result sounds like an efficient implementation of a bandpass filter bank with fixed bandwidths. Second, cross-synthesis between two spectra, where the multiplying spectrum is written on the fly during the performance by another fft module connected to the live instrument input. Several spectrum tables can be sampled and mixed before being written into the multiplying spectrum table.
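Outside Max, the bin-by-bin spectral multiplication described above can be sketched in a few lines of Python/NumPy; this is a simplified offline stand-in for the real-time module, and the window size, hop, and function names are illustrative:

```python
import numpy as np

def spectral_multiply(live, spectrum_table, n_fft=512, hop=128):
    """Multiply each FFT frame of 'live' by a stored multiplying-spectrum
    table (one real gain per bin), then overlap-add the results."""
    assert len(spectrum_table) == n_fft // 2 + 1
    window = np.hanning(n_fft)
    out = np.zeros(len(live))
    for start in range(0, len(live) - n_fft, hop):
        frame = live[start:start + n_fft] * window
        spec = np.fft.rfft(frame)      # real and imaginary parts per bin
        spec *= spectrum_table         # bin-by-bin multiplication
        out[start:start + n_fft] += np.fft.irfft(spec) * window
    return out
```

For the cross-synthesis case, the table itself would be sampled from another source, e.g. `np.abs(np.fft.rfft(vowel_frame * window))`, mirroring the on-the-fly spectrum capture in the piece.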
This has been used to cross-synthesize an instrumental sound with a characteristic spectral shape such as a vowel. For example, in NoaNoa, spectra of vowels sung by the flutist while playing, or of a breathy low C, are memorized and convolved later with the flute sound. The title of the piece refers to a woodcut by Paul Gauguin called NoaNoa. It also refers to a travel diary of the same name, written by Gauguin during his stay in Tahiti in 1891-93. The fragments of phrases selected for the voice part in the piece come from this book. NoaNoa is also a collaborative work: many details in the flute part were worked out with the flutist Camilla Hoitenga, to whom the piece is dedicated. Another version for Macintosh was realized by Alexandre Mihalic; here the performer sends triggers to Max, thus controlling a direct-to-disk system, on which all transformations have been recorded, and a digital reverberation whose parameters are controlled by the amplitude of the flute.

3 Près

Près, for cello, ISPW, and direct-to-disk, was developed at the same time as Amers for solo cello, ensemble, and electronics. The original idea and the basic material of the solo cello part are similar in the two works, but form, structure, sound space, and overall atmosphere are very different. The cello is fitted with a special microphone originally developed for Amers; this microphone is made of four pickups, which allows the audio signals of the four strings to be isolated from one another. Thus a single bow stroke becomes a spatial gesture. The first of the three sections of Près centers around analyses, at successive time intervals, of an evolving cello trill which opens both Amers and Près. The analyzed trill alternates between normal sound and natural harmonic sound, and evolves from normal playing 'sul tasto' to playing with more bow pressure 'sul ponticello'.
Two spectra are deduced from each analysis: a complete spectrum with all components, and a reduced spectrum holding only the components which are perceptually relevant after frequency masking. An analysis set taken at the beginning of the trill is quasi-harmonic, while another set, taken towards the end of the trill when the sound is noisy, is richer in inharmonic partials. For each set, synthesis of the complete spectrum gives a unique timbre, while synthesis of the reduced spectrum generates a set of pitches that are perceived as a harmony. Many more sounds were analyzed to provide material for Amers and Près, but the analysis of this first trill is central to the piece, defining the movement between harmonic relaxation and tension as well as the coherence between instrumental and synthetic sounds. More transformation processes run concurrently with the timbre/harmony duality. First, playing-mode transformations (which induce timbre transformations): for example, normal sound-sul ponticello-sul tasto, or trill-tremolo-glissando-microtones-harmonics, or again normal sound-natural harmonic-underpressured-overpressured bow (the last producing a noisy 'scratched' sound), and more generally the transformation of sound into noise, represented by the 'scratchy' cello sound and its counterpart, samples of ocean waves. Second, rhythmic processes. Third, the opposition between static and dynamic elements. Each of these processes in the cello part has its equivalent in the electronic part. The second section deals with the spatialization of the four strings. The cello part is made of pseudo-regular, repetitive patterns which spread out over the four strings, and thus in space, and overlap with playing-mode transformations as in the first section.
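The complete-versus-reduced distinction can be caricatured in a few lines. The sketch below keeps only partials within a given level of the strongest one, a much cruder criterion than the frequency-masking model actually used; the function name and threshold are illustrative:

```python
def reduce_spectrum(partials, mask_db=20.0):
    """From a complete spectrum (list of (frequency_hz, amplitude) pairs),
    keep only partials within mask_db of the strongest component."""
    loudest = max(amp for _, amp in partials)
    floor = loudest * 10.0 ** (-mask_db / 20.0)
    return [(f, a) for f, a in partials if a >= floor]
```

Synthesizing all the input partials gives the fused timbre; synthesizing only the surviving ones gives the set of pitches heard as a harmony.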
The electronic part is based on a 'sampled cello' which is able to interpolate between sounds with more or fewer harmonics; the sampled cello is controlled by independent processes for rhythm and timbre variation, and creates with the live cello a dense polyrhythmic texture. The pure/noisy contrast is introduced again in the cello part, this time sudden instead of progressive, and is amplified in the electronics by the playback of a 'cluster' sound and the activation of a real-time time-stretching module. The third section summarizes ideas from the first two sections. The research for both Amers and Près was conducted by Kaija Saariaho with the cellist Anssi Karttunen, who also premiered the pieces, and with Ramon Gonzales-Arroyo and Xavier Chabot for the electronic part. The real-time computer processing allows the creation of various textures which start from cello sounds and evolve between noisy and crystalline characters, echoing the violence and the quietness of the sea. The title Près, or rather Près de la mer (close to the sea), refers first to the twin piece. It also refers to the poetry of Saint-John Perse, and especially to his work Amers: "(...) 'Sea of trance and infraction', it initiates into all the experiences that allow man to cross the usual borders" [Sacotte, 1991]. In Amers, the cello is a navigator who steers toward different aims between the waves created by the other instruments and the synthesized sounds. Près concentrates more on the navigator himself, on his thoughts and reactions when looking at the sea, 'diversity in the principle and parity of the being'.

4 Conclusion: composing timbre

Près and NoaNoa fight against the limits of a solo piece. To find ways of extension, Kaija Saariaho uses an electronic setup which allows her to amplify, to multiply, and to extend the sonic structures of her writing for cello or flute. The pieces were made in the spirit of experimentation with instrumental timbre and its relationship to synthesis. Interaction between instrument and computer is not only based on digital signal processing and intelligent feature-detection modules, but relies also on the coherence of the underlying compositional concepts.
References

[Assayag, 1985] Gérard Assayag, Michèle Castellengo, Claudy Malherbe. Functional Integration of Complex Instrumental Sounds in Musical Writing. Proceedings of the 1985 International Computer Music Conference, Vancouver. Berkeley: Computer Music Association, 1985.
[Barrière, 1985] Jean-Baptiste Barrière, Yves Potard, Pierre-François Baisnée. Models of Continuity between Synthesis and Processing for the Elaboration and Control of Timbre Structures. Proceedings of the 1985 International Computer Music Conference, Vancouver. Berkeley: Computer Music Association, 1985.
[Barrière, 1993] Jean-Baptiste Barrière, Xavier Chabot. Integration of Aid to Composition and Performance Environments: Experiences of Interactions between Patchwork and Max-ISPW. Proceedings of the 1993 International Computer Music Conference, Tokyo. Berkeley: Computer Music Association, 1993.
[Laurson, 1989] Mikael Laurson, Jacques Duthen. Patchwork: a Graphic Language in PreForm. Proceedings of the 1989 International Computer Music Conference, Columbus. Berkeley: Computer Music Association, 1989.
[Potard, 1986] Yves Potard, Pierre-François Baisnée, Jean-Baptiste Barrière. Experimenting with Models of Resonance Produced with a New Technique for the Analysis of Impulsive Sounds. Proceedings of the 1986 International Computer Music Conference, The Hague. Berkeley: Computer Music Association, 1986.
[Lindemann, 1991] Eric Lindemann. The Architecture of the Ircam Musical Workstation. Computer Music Journal, Vol. 15, no. 3, 1991.
[Malt, 1993] Mikhail Malt. Patchwork Introduction. Ircam Documentation, 1993.
[Puckette, 1991] Miller Puckette. Combining Event and Signal Processing in the Max Graphical Programming Environment. Computer Music Journal, Vol. 15, no. 3, 1991.
[Puckette, 1992] Miller Puckette, Cort Lippe. Score Following in Practice. Proceedings of the 1992 International Computer Music Conference, San Jose. Berkeley: Computer Music Association, 1992.
[Saariaho, 1984] Kaija Saariaho. Shaping a Compositional Network with Computer. Proceedings of the 1984 International Computer Music Conference, Paris. Berkeley: Computer Music Association, 1984.
[Saariaho, 1990] Kaija Saariaho. Timbre et harmonie. In Jean-Baptiste Barrière (ed.), Le timbre, métaphore pour la composition. Paris: Christian Bourgois, 1990.
[Sacotte, 1991] Mireille Sacotte. Saint-John Perse. Paris: Belfond, 1991.
[Terhardt, 1982] Ernst Terhardt, G. Stoll, M. Seewann. Algorithm for Extraction of Pitch and Pitch Salience from Complex Tonal Signals. Journal of the Acoustical Society of America, Vol. 71, no. 3, 1982.

ICMC Proceedings 1993