Radiation Control on Multi-Loudspeaker Device: La Timée

Nicolas Misdariis, François Nicolas, Olivier Warusfel, René Caussé
IRCAM - Centre Georges Pompidou
email: misdarii@ircam.fr

Abstract

The control of directivity represents a new challenge in the reproduction of sound by electroacoustic devices. We first summarize a previous method based on the approximation of a given radiation pattern by a linear combination of the fields radiated by a set of loudspeakers. Ongoing developments, conducted in a musical context, focus on the general improvement of the system and on the optimization of the control. A new architecture rests on the preliminary constitution of a set of basic directivities with which the cardioicity coefficient and the pattern orientation can be tuned in real time. Furthermore, as it also appears important to tune the spatial and spectral characteristics independently, an automatic analysis/synthesis procedure is adapted to obtain a faithful power-spectrum reproduction while still offering control over the directivity patterns of the device. We then present the musical experience already acquired through the collaboration with a composer: various sketches (spatial counterpoint, duo between a real instrument and its image, etc.) were experimented with and finally took shape in a music performance programmed during summer 2001.

1 Introduction

In the contemporary context, most musical works make acoustical sounds coming from live instruments coexist with electronic sounds diffused by loudspeaker units. In most such cases, the instruments on stage - and sometimes even the voices - must also be amplified to compete with the power and tone of the electroacoustic part. Moreover, the loudspeakers are generally placed around the audience, disregarding the natural architectural space and creating their own sound spatialization. For all these reasons, it may be difficult to mix the live and virtual musical parts coherently. An alternative approach is then to consider a localised sound diffusion device modeled on musical instruments with respect to radiation properties, one that would build upon - rather than constrain - the acoustical features of the room. In fact, when the emission occurs in an enclosed space, directivity becomes an important perceptual parameter, as it affects the time and spectral distribution of the sound reaching the listeners' ears. The current state of the art in this domain assumes that radiation control requires a new generation of transducers based on a multi-driver architecture. Several studies have already produced consistent results, such as creating a sound beam adjustable in frequency and width by means of 1D and 2D loudspeaker arrays, or super-directive devices using ultrasonic transducers. However, the aim of such developments is generally to focus the sound on a limited audience area and to minimize the room effect. In the present paper we propose a slightly different approach, for the intent here is to reproduce or, at least, to control the way sounds interact with the room.

2 Radiation Synthesis Method

The complete method is detailed in Warusfel (1997) and summarized in Misdariis (2001); the following section is just a general overview of its principle. The general framework is the radiation synthesis of a target source T with a composite model built by combination of N elementary sources (loudspeakers), each producing the field P_i(r, ω).
Propagation laws state that if the model generates the exact pressure field on a given surrounding surface S, the target and model radiations will be equal everywhere outside S. This principle can be made explicit by the decomposition of T on the {P_i}, according to the equation:

\sum_{i=1}^{N} a_i(\omega) \, P_i(r_0, \omega) = T(r_0, \omega), \qquad \forall r_0 \in S     (1)

where the {a_i} are coefficients depending on the frequency ω. However, the method leads only to an approximation of T because, as the {P_i} are not a priori a basis of the free-field radiation problem, the existence and uniqueness of the {a_i} are not ensured at all. The solution is therefore optimized to obtain the best reproduction with respect to a given criterion: in practice, equation (1) is read as a distance between target and model, so that its minimization - within an algebraic space - leads, for each ω, to a system of N equations with N unknowns giving the values of the complex coefficients {a_i}:

\left( T - \sum_{j=1}^{N} a_j P_j \;\middle|\; P_i \right) = 0, \qquad \forall i = 1 \ldots N     (2)

where the inner product ( . | . ) is defined by (f | g) = \int_S f(r_0) \, g(r_0) \, dS.

From a signal-processing point of view, the computed {a_i} coefficients are filters to apply to the composite source. Figure 1 shows the implementation used for reproducing a given directivity pattern. The filtering is processed in two main stages: first, a common filter designed from the average magnitude response of the {a_i}, which provides a global spectral equalization of the source and can be modeled by a parametric IIR filter; secondly, specific residual filters that preserve the important phase relations between the different drivers and, in consequence, require an FFT convolution process using their FIR characterization.

Figure 1. Initial DSP implementation.

This global approach was applied to different handmade prototypes, and a Matlab toolbox was developed to simulate the optimization filtering. The results showed quite good agreement between experimental measurements and software simulations, but in a limited frequency range (200-2000 Hz).
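As a minimal illustration of the optimization summarized above - not the authors' implementation - the following Python sketch solves the discrete counterpart of equation (2) by least squares, assuming the target field T and the elementary fields P_i have been sampled (measured or simulated) at M points of the surface S for each frequency bin; array shapes, names and the example data are hypothetical.

import numpy as np

def radiation_weights(P, T):
    """Least-squares estimate of the weighting coefficients {a_i(w)}.

    P : complex array, shape (F, M, N) -- field of each of the N drivers
        sampled at M points of the surface S, for F frequency bins.
    T : complex array, shape (F, M)    -- target field at the same points.

    Returns a : complex array, shape (F, N), minimising ||P a - T|| for
    each frequency bin, i.e. the discrete counterpart of equation (2).
    """
    F, M, N = P.shape
    a = np.zeros((F, N), dtype=complex)
    for f in range(F):
        # lstsq minimises the quadratic distance, which amounts to
        # imposing (T - sum_j a_j P_j | P_i) = 0 for every i
        a[f], *_ = np.linalg.lstsq(P[f], T[f], rcond=None)
    return a

# Hypothetical usage: 6 drivers, 72 points on S, 512 frequency bins
# P = np.load("driver_fields.npy"); T = np.load("target_field.npy")
# a = radiation_weights(P, T)   # one filter frequency response per driver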
3 Getting a Set of Basic Directivities

Building on these foundations, we carried on the work with a derived approach, oriented toward musical applications. The constraints on the accuracy of the directivity reproduction are relaxed in favour of other requirements such as an enlarged bandwidth, real-time processing and versatility of the control. For that purpose, we built a new prototype on a cubic shape, comprising six independently driven sources.

3.1 New prototype & implementation

To increase the bandwidth, this geometrical structure is repeated in three frequency bands - with different sizes - in order to stay, as far as possible, within the limits of spatial aliasing. Hence, the mid frequencies (250 Hz-2 kHz) go through 7" drivers mounted on a 25 cm cube, while an 8 cm cube equipped with tweeters deals with the high frequencies (2 kHz-20 kHz). Finally, a simplified, standard sub-bass system (4 horizontal drivers) is devoted to the low frequencies.

As for the signal processing, a new architecture is designed. Four elementary directivity patterns are permanently synthesised: the monopole H0 (0th-order spherical harmonic) and the dipoles H1 (1st-order spherical harmonics) along the three canonical directions of the cube (X, Y, Z). Moreover, thanks to the specific geometrical symmetry of the cubic shape, the implementation is simplified: H0 requires a common filter for all loudspeakers, while H1 needs identical, but out-of-phase, filters for two opposite sides (fig. 2).

Figure 2. New implementation: combination of basic directivities (H0, H1x/y/z) by a set of coefficients (a0, a1x/y/z) for two input signals S1(t) and S2(t).

Once again, to keep coherent relations between the two directivity families (H0 & H1), the design of the filters must be carefully done, especially in the phase domain: a header module, common to all elementary directivities, delivers a minimum-phase equalization of the composite source, and specific residual filters are dedicated to the monopole and the dipole (fig. 3a). These processes ensure a white spectral behaviour with regard to the average spectrum produced by the two basic patterns, and no phase difference between H0 and a frontal point of the positive lobe of H1 (fig. 3b).
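The mixing stage suggested by figure 2 can be pictured with a minimal sketch, assuming a hypothetical driver ordering and gain layout: each source signal is weighted by its coefficients (a0, a1x, a1y, a1z) and summed onto the four directivity buses, which are passed through the H0/H1 filter modules and then dispatched to the six faces using the cube symmetry (H0 in phase on every face, each H1 component out of phase on one pair of opposite faces).

import numpy as np

# Hypothetical driver ordering: +X, -X, +Y, -Y, +Z, -Z.
# Dispatch matrix from the four directivity buses to the six faces:
# H0 feeds every face in phase, each H1 component feeds one pair of
# opposite faces with inverted polarity.
DISPATCH = np.array([
    #  H0   H1x   H1y   H1z
    [ 1.0,  1.0,  0.0,  0.0],   # +X face
    [ 1.0, -1.0,  0.0,  0.0],   # -X face
    [ 1.0,  0.0,  1.0,  0.0],   # +Y face
    [ 1.0,  0.0, -1.0,  0.0],   # -Y face
    [ 1.0,  0.0,  0.0,  1.0],   # +Z face
    [ 1.0,  0.0,  0.0, -1.0],   # -Z face
])

def bus_signals(sources, weights):
    """Mix S source signals onto the four directivity buses.

    sources : array (S, L) -- time signals
    weights : array (S, 4) -- per-source gains (a0, a1x, a1y, a1z)
    Returns buses : array (4, L).
    """
    return weights.T @ sources

# After the buses have been passed through the H0 / H1x/y/z filter modules,
# the six driver feeds are simply:  drivers = DISPATCH @ filtered_buses

With such a layout, orienting a dipole amounts to choosing (a1x, a1y, a1z) as the components of a unit vector, and the cardioicity control to scaling a0 against that triplet.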

Figure 3a. Magnitude (top) and phase ratio (bottom) of the residual filters for H0 [-] and H1 [- -]. Figure 3b. Power spectrum (top) and frontal phase ratio (bottom) of the synthesised H0 [-] and H1 [- -].

From these elementary directivities, the composite source can simply generate intermediate patterns by weighting them with gains. This avoids complex interpolations between sets of filters related to different directivities: the weight between H0 and H1 controls the cardioicity of the combined pattern, and the weights within H1 rule its orientation. Thus, the overall implementation needs only a limited number of filters, the four filtering modules H0, H1x, H1y and H1z (fig. 2) being processed before dispatching the signal to the six drivers. Moreover, this architecture also allows different source signals to be reproduced simultaneously, each one having its own radiation: a set of weighting coefficients is simply associated with each source signal (fig. 2). Finally, this procedure may be coupled with a simple user interface allowing real-time manipulation: for each source signal, a slider for the cardioicity coefficient and a trackball-like controller for the 3D orientation.

3.2 Results characterization

The performance of the source built in this way is first investigated in terms of reconstruction. Figure 4 shows the synthesis of the basic directivities measured at 500 Hz with the mid-range system. Because of spatial aliasing, these results degrade as the high cut-off frequency (2 kHz) is approached. However, beyond this limit, the process hands over to the smaller HF cube, thus keeping aliasing problems within acceptable limits.

Figure 4. Reproduction of H0 (left), H1 (center), R2 (right) at 500 Hz (theoretical reference in [- -]).

Perceptually, the main effect comes from the control of the direct-to-reverberated energy ratio, which conveys musically relevant notions such as delocalisation, wide or addressed sounds, etc. For the 1st-order spherical harmonics, this degree of freedom is linked to the directivity index (DI) of the source, which can range from +4.7 dB, when the dipole faces the audience, to theoretically -∞, when the listener is in its zero. In order to check the potential of the chosen basic directivities, their impulse responses were measured and analysed with regard to the time vs. spatial distribution of the energy: Figure 5 shows the frontal/lateral energy ratio measured respectively by a cardioid and a bi-directional microphone. As expected, the frontal H1 gives the most precise localization of the source, this tendency being also supported by the first reflections. The lateral H1 presents low values of the frontal/lateral balance from the very beginning of the response, which gives the sensation of a wide source and high room envelopment. As for the vertical H1, in spite of a low direct sound level, it is also associated with a late frontal balance, providing the weakest source-width impression and low room envelopment. This comes from the zero plane of the dipole, which is horizontal in this case, so that no lateral reflections can be created: the resulting impression is like a monophonic sound. All these properties can easily be noticed when listening to sound examples, and allow perceptually relevant identities to be assigned to the sounds created or reproduced by the composite source.

Figure 5. Frontal/lateral energy ratio of the measured impulse responses for the frontal, lateral and vertical dipoles.

This being so, the use of H1 turns out to be critical, especially on stage. Actually, as shown in Figure 4, this pattern has a 30° shadow-zone aperture defining the area where no direct sound comes from the source.
Practically, this means that only a reduced part of the audience would hear most of the directivities as they should be heard. In order to overcome this issue, a third basic directivity, R2, is introduced, resulting from the combination of a 2nd-order spherical harmonic and the monopole. Like H1, this new pattern is bi-directional, but with narrower beams, offering a larger shadow zone, as shown in Figure 4. However, because of the limited number of transducers (6), R2 can only be obtained along the three canonical axes X, Y, Z and cannot be freely oriented in other directions.

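As a quick numerical check of the directivity index figure quoted above - an illustration only, not part of the original study - the short sketch below integrates an axisymmetric pattern over the sphere; for the frontal dipole it recovers 10*log10(3) ≈ 4.8 dB, in line with the value given in section 3.2, and it can be used to evaluate intermediate H0/H1 blends as well.

import numpy as np

def directivity_index(pattern, n_theta=1800):
    """DI (dB) of an axially symmetric pattern D(theta), on-axis reference.

    DI = 10 log10( |D(0)|^2 / ( (1/4pi) * int |D|^2 dOmega ) ), with
    dOmega = 2 pi sin(theta) dtheta for an axisymmetric pattern.
    """
    theta = np.linspace(0.0, np.pi, n_theta)
    weighted = np.abs(pattern(theta)) ** 2 * np.sin(theta)
    mean_sq = np.sum(weighted) * (theta[1] - theta[0]) / 2.0   # spherical average
    return 10.0 * np.log10(np.abs(pattern(0.0)) ** 2 / mean_sq)

print(directivity_index(lambda t: np.cos(t)))              # dipole H1: ~4.8 dB
print(directivity_index(lambda t: 0.5 + 0.5 * np.cos(t)))  # cardioid blend: ~4.8 dB
print(directivity_index(lambda t: np.ones_like(t)))        # monopole H0: 0 dB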
4 Timbre versus Spatial Control

In the case of a musical-instrument simulation, the spatial reproduction also needs to be tuned in the spectral domain. Although the current goal is not a precise reproduction of an instrument's directivity, we first supply the system with frequency-dependent gains (fig. 2) so as to control the cardioicity and/or the orientation with respect to frequency. From this point of view, another important task is to reproduce the power spectrum of the instrument faithfully. Indeed, room acoustics tells us that the perception of timbre in the audience is mainly driven by the source power spectrum. For that purpose, we use an analysis method (Jot et al. 1997) based on diffuse-field recordings of a chromatic scale played on the instrument. It leads to the computation of a correction filter that can be applied to a close-field recording of the same instrument, thus providing a pseudo-anechoic signal with diffuse-field spectral characteristics. This signal may then feed the composite source, previously equalized to deliver a flat diffuse-field response (fig. 3b). Figure 6 illustrates this procedure in the case of the flute.

Figure 6. (top) Diffuse-field [-] and near-field [- -] spectra of a flute (mf, legato) - (bottom) transfer function [-] and parametric filter [- -].
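A rough picture of such a spectral correction stage is sketched below; it is not the procedure of Jot et al. (1997), only a simplified, magnitude-only version under stated assumptions: the long-term spectra of a diffuse-field take and of a close-field take of the same instrument are estimated, and their ratio defines an FIR correction filter to be convolved with the close-field signal (signal names, normalization frequency and filter length are arbitrary).

import numpy as np
from scipy.signal import welch, firwin2

def correction_filter(close_mic, diffuse_mic, fs, n_taps=1025):
    """FIR filter giving the close-field recording the long-term spectral
    balance of the diffuse-field recording (magnitude only).

    close_mic, diffuse_mic : 1-D arrays sampled at fs (Hz).
    """
    f, p_close = welch(close_mic, fs, nperseg=4096)
    _, p_diff = welch(diffuse_mic, fs, nperseg=4096)
    eps = 1e-12
    gain = np.sqrt((p_diff + eps) / (p_close + eps))    # amplitude ratio
    gain /= gain[np.argmin(np.abs(f - 1000.0))]         # 0 dB at 1 kHz (arbitrary)
    return firwin2(n_taps, f / (fs / 2.0), gain)

# Hypothetical usage:
# h = correction_filter(close_take, hall_take, fs)
# pseudo_anechoic = np.convolve(close_take, h)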
5 First Musical Application

Since the beginning of 2000, this study has been conducted in collaboration with the composer François Nicolas in order to explore the musical potential of the new device, named Timée with reference to the work of Plato (the Timaeus). One of the key points was to verify whether spatial attributes could reach the status of a musical vocabulary. Specific tools (time functions, randomized or constrained triggers, etc.) were developed to control different static or dynamic ways of using the Timée. Numerous examples were created, trying to formalize, and even notate, the most relevant effects: spatial counterpoint, assigning directivities to rhythmic or melodic musical streams; cubist portrait, lighting an instrument, thanks to directivity, under peculiar focuses using its constitutive samples; instrumental image, mixing several recording positions, each with its own spatial features; etc. This cooperation culminated in a musical piece (Duelle) for piano, violin, voice and Timée, performed during the 2001 Ircam festival. It was composed like a concerto grosso in which some of the soloists (voice, piano, harpsichord, violin, flute and organ) were pseudo-instruments played by the Timée. In Duelle, François Nicolas investigated an alternative approach to music spatialization: the conventional immersive situation, constrained by distributed loudspeakers surrounding the audience, is replaced by a concerted behaviour where instrument-like diffusion provides a stronger presence of the sources and where the sound relief is obtained by playing with the acoustical properties of the room.

6 Conclusion

Derived from previous studies devoted to the reproduction of sound radiation with 3D loudspeaker arrays, a new method and prototype have been developed. The method consists in creating a subset of canonical directivity functions that can be combined to provide a continuous control of the synthesized pattern. Timbre versus spatial behaviour is also taken into account, through frequency-dependent controllers and a reproduction of the power spectrum. This approach has been explored and validated through a collaboration with a composer and a stage performance.

References

Warusfel, O., Derogis, P., Caussé, R. 1997. "Reproduction of directivity patterns using multi-loudspeaker sources." Proc. 103rd Audio Eng. Soc. Convention, New York, USA.

Misdariis, N., Warusfel, O., Caussé, R. 2001. "Radiation control on multi-loudspeaker device." Proc. of the International Symposium on Musical Acoustics, Perugia, Italy.

Jot, J.-M., Cerveau, L., Warusfel, O. 1997. "Analysis and synthesis of room reverberation based on a statistical time-frequency model." Proc. 103rd Audio Eng. Soc. Convention, New York, USA.