IANA: A Real-Time Environment for Analysis and Extraction of Frequency Components of Complex Orchestral Sounds and its Application within a Musical Realization
Todor Todoroff (1), Eric Daubresse (2) & Joshua Fineberg (2)
(1) Faculté Polytechnique and Conservatoire Royal de Mons; (2) IRCAM

Introduction

A new analysis module functioning within the real-time environment of MAX/FTS has permitted the creation of a new type of interaction between an instrument, an ensemble, or an orchestra and a computer. This module was used to produce, at IRCAM, the piece EMPREINTES for instruments and real-time analysis-synthesis.

The piece EMPREINTES seeks to steal bits and pieces of movement and color from a musical evolution, using them to create something new: a sort of halo spiralling around a musical process. At times it may seem simply an imprint of its surroundings, enriching without truly altering its context; at other moments, however, it may become a force of transformation itself. By capturing fragments of one object and transmitting them to another, it changes both itself and the objects from which the material has been captured. Thus textures and colors do not interact directly, as in most pieces: they must pass through this conduit, whose presence may be more or less felt depending on the moment. Evolutions become oblique: objects maintain their own inertia while slowly being pulled toward each other. In the space between and around them is the new object, which exists as a kind of transformed reflection of its surroundings.

Realizing this piece came after more than two years of research and development. Pieces mixing electronics with instrumental writing generally fall into two categories.
The first group works with a sort of pseudo 'real-time' which forces the music to be reduced to an over-simplified schematic representation such as MIDI (representing music as a collection of notes, velocities, attack times, etc.); color, movement and sound quality are put aside to permit extremely precise synchronisation. Equally troubling is the tendency of these pieces to concentrate only on soloists within an ensemble (a choice often made for technical and logistical reasons, not musical ones).

The other approach, which has in fact produced some thoroughly successful pieces, works directly with sonic material. Outside of real-time, these composers make use of sonic analysis, or work directly with the abstract structures of sound, to generate all the musical elements of a piece, regardless of whether the final results are to be produced instrumentally, electronically, or in some combination of the two. Those pieces, however, normally make use of very simple models (instrumental spectra, simple modulations, or distortions); the power of the music is derived not so much from the choice of model as from the beauty of its realization.

ICMC PROCEEDINGS 1995
Therefore, if one wants to capture the essence of a passage of that type, it is essential to look at the final realization and not the original model. The problem, however, is that most tools available for analysis are designed for simple harmonic sounds, not complex orchestral evolutions. Extremely detailed analyses can, of course, follow these movements; the quantity of information they generate is, however, totally unmanageable.

The module we have created uses an algorithm, created by Ernst Terhardt*, for extracting peaks and determining the psychoacoustic importance of each peak. This algorithm allows an analysis with hundreds, or even thousands, of values to be reduced to only the few that are most significant to perception. The module performs the following steps on an incoming signal: (1) analysis of the sound using a fast Fourier transform; (2) extraction of the component frequencies; (3) calculation of masking effects on the amplitude and frequency of each component; (4) attribution of a perceptive weight to the corrected components; (5) extraction of the virtual fundamental. This system allows us to look at an extremely complex sound and, without pretending to present a complete picture, to see clearly the essential imprint of the sound and to exploit it.

An IRCAM ISPW with three boards receives the signal for analysis coming from the ensemble; a Yamaha DMC1000 is used as a matrix, allowing any sub-grouping, from individual instruments to the entire ensemble, to be analysed. Around the analysis module we have implemented four banks of oscillators (42 in total) and a bank of 12 second-order filters excited by an internal sampler. The filters and oscillators can receive a complete array of controls (various envelopes, jitters, waveforms for the oscillators, excitation sources for the filters, etc.). The data coming out of the analysis is not simply resynthesized (a process whose only real interest would be photographic).
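The five-step chain above can be sketched in miniature. The following Python fragment is a loose illustration, not Terhardt's published algorithm: the function name, the simplified masking rule, and the use of the masked level as the "perceptive weight" are all assumptions introduced here, and the virtual-fundamental step is omitted.

```python
import numpy as np

def extract_salient_peaks(signal, sr, n_peaks=8, fft_size=2048):
    """Rough sketch of the iana-style analysis chain:
    FFT -> peak extraction -> masking correction -> perceptive weighting.
    The masking model here is a crude stand-in, not Terhardt's."""
    # (1) fast Fourier transform of one windowed frame
    frame = signal[:fft_size] * np.hanning(fft_size)
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(fft_size, 1.0 / sr)

    # (2) extract component frequencies as local spectral maxima
    peak_bins = [i for i in range(1, len(spectrum) - 1)
                 if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]

    # (3) crude masking correction: attenuate each peak by the level of
    # stronger nearby peaks (a real model would work per critical band)
    levels = []
    for i in peak_bins:
        mask = sum(spectrum[j] / (1.0 + abs(freqs[j] - freqs[i]) / 100.0)
                   for j in peak_bins if j != i and spectrum[j] > spectrum[i])
        levels.append(max(spectrum[i] - mask, 0.0))

    # (4) perceptive weight: here simply the masked level, so that only
    # the few perceptually dominant components survive the reduction
    ranked = sorted(zip(levels, peak_bins), reverse=True)[:n_peaks]
    return [(freqs[i], lvl) for lvl, i in ranked if lvl > 0]

# usage: a 440 Hz tone with a weaker partial an octave above
sr = 44100
t = np.arange(4096) / sr
sig = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)
components = extract_salient_peaks(sig, sr, n_peaks=2)
print(components)
```

The point of the reduction is visible even in this toy version: hundreds of FFT bins collapse to a handful of (frequency, weight) pairs that can drive the oscillator and filter banks.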
We have a battery of treatments which can be performed either on the incoming signal prior to analysis or on the data created by iana. There is, for example, an algorithm for interpolating analysis results with pre-calculated models (implemented by Gérard Assayag, who also worked on the original non-real-time implementation of iana at IRCAM). Other treatments include modulations between elements contained in different windows, distortions of the frequencies, buffers for distancing the extraction of an analysis from its use, and various means of selecting one or more values from within the totality of the analysis.

Summary

By implementing a complex analysis and treatment environment in real-time, our goal is to enable the computer to 'listen' to and, in a sense, 'understand' what the ensemble is playing. This allows the machine to react musically, stealing the bits and pieces of the realization that are most important in the current execution. If pitches are slightly higher or lower, the resulting sounds produced by the computer will be changed (as in an ensemble, where the players constantly adjust their intonation), but much more subtle changes will also be taken into account (the brilliance of the flute, the violinist's vibrato, etc.). The typical master-slave relationship between an ensemble and electronics, where either the musicians must try to synchronize with a precalculated ideal or the machine must do its best to follow a performance to which it is only tangentially connected, may be replaced by a true interaction.

* Terhardt, Ernst, Stoll, Gerhard & Seewann, Manfred, "Algorithm for Extraction of Pitch and Pitch Salience from Complex Tonal Signals," JASA 71 (3), March 1982, pp. 679-688.
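As an illustration of the interpolation treatment described above, here is a minimal sketch of blending a live analysis frame with a pre-calculated model. This is an assumption-laden toy, not Assayag's implementation: the function name, the pairing of partials by index, and the choice of geometric frequency / linear amplitude interpolation are all introduced here for illustration.

```python
def interpolate_partials(analysis, model, alpha):
    """Blend two sets of (frequency, amplitude) partials, sorted by
    frequency. alpha=0 returns the live analysis, alpha=1 the model.
    Hypothetical sketch; partials are paired naively by index."""
    n = min(len(analysis), len(model))
    out = []
    for (fa, aa), (fm, am) in zip(analysis[:n], model[:n]):
        # geometric interpolation of frequency keeps the glide perceptually
        # even (equal steps in pitch); linear amplitude is a simple choice
        f = fa ** (1.0 - alpha) * fm ** alpha
        a = (1.0 - alpha) * aa + alpha * am
        out.append((f, a))
    return out

live  = [(440.0, 1.0), (882.0, 0.4)]   # e.g. from the real-time analysis
model = [(415.3, 0.8), (830.6, 0.6)]   # e.g. a pre-calculated spectrum
blended = interpolate_partials(live, model, 0.5)
print(blended)
```

Sweeping alpha over time would pull the resynthesized partials gradually from the captured sound toward the model, one way such data could feed the oscillator banks.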