Sound Spatialization in Real Time by First-Reflection Simulation

Oscar Ballan, Luca Mozzoni, Davide Rocchesso
Centro di Sonologia Computazionale
Dipartimento di Elettronica e Informatica
Università degli Studi di Padova
Email: firstname.lastname@example.org

Abstract

In the early 80's, F. R. Moore showed how to simulate a moving sound source in a closed environment by means of an explicit model of first reflections. In this work we show that, through several simplifications, it is possible to obtain an algorithm for first-reflection simulation that runs in real time. We used the IRIS-MARS workstation to implement a self-contained system that can be controlled via MIDI. We also developed a graphical user interface for controlling the movement of the sound source in the virtual environment.

1 Introduction

The perception of sound position in space is one of the most studied topics in psychoacoustics. Fundamental cues contributing to our perception of sound location are: inter-aural time delay, the head-related transfer function, and the ratio between direct and reverberated sound [Blauert, 1983]. An attempt to simulate these cues can be very effective, but it becomes impractical when the audience is relatively large [Chowning, 1977]. F. R. Moore addressed the problem of sound spatialization in a concert situation. He developed the model of the "room within the room", as depicted in Fig. 1. The inner room represents the actual performance space. The outer room represents the virtual acoustic space in which the source moves. The basic assumption is that the sound reaches the audience in the inner room by passing through holes in the walls. These holes coincide with the loudspeakers, and the simulation proceeds by computing all the delays and attenuations for each of the direct and reflected paths connecting the source with the holes. Even if we consider only the first reflections, the amount of computation is large.
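The path enumeration of the two-room model can be sketched with a small image-source computation. The 2-D geometry, room dimensions, speaker coordinates, and function names below are illustrative assumptions, not taken from the original implementation.

```python
import math

# Hypothetical sketch of the two-room geometry: a square outer
# (virtual) room centered at the origin, with four loudspeaker
# "holes" in the walls of the inner room.  All values are
# illustrative; the paper's actual layout may differ.

OUTER_HALF = 10.0  # half-side of the virtual (outer) room, metres
SPEAKERS = [(2.0, 2.0), (-2.0, 2.0), (-2.0, -2.0), (2.0, -2.0)]

def mirror(source, wall):
    """Image of the source across outer wall 0..3 (E, N, W, S)."""
    x, y = source
    if wall == 0:
        return (2 * OUTER_HALF - x, y)    # east wall,  x = +OUTER_HALF
    if wall == 1:
        return (x, 2 * OUTER_HALF - y)    # north wall, y = +OUTER_HALF
    if wall == 2:
        return (-2 * OUTER_HALF - x, y)   # west wall,  x = -OUTER_HALF
    return (x, -2 * OUTER_HALF - y)       # south wall, y = -OUTER_HALF

def path_lengths(source):
    """Direct and first-reflection path lengths to each loudspeaker."""
    direct = [math.dist(source, spk) for spk in SPEAKERS]
    reflected = [[math.dist(mirror(source, w), spk) for spk in SPEAKERS]
                 for w in range(4)]
    return direct, reflected
```

Each length maps to a delay (length divided by the speed of sound) and an attenuation, which is where the bulk of the computation described above comes from: with four speakers and four walls there are 4 direct and 16 first-reflection paths per source.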
Nevertheless, we show in this work that, with a few simplifications, the model can run in real time on today's equipment. We implemented the two-room model using one of the two custom DSP chips of the IRIS-MARS workstation [Armani et al., 1992, IRIS, 1991b]. Our implementation takes a monophonic input and computes a quadraphonic output.

Figure 1: The two-room model

2 The Implementation

When we tackled the problem of rendering the first reflections in real time, we decided to use the IRIS-MARS workstation, performing all computations on its DSP chips. This choice was driven by two considerations: the first was to have a self-contained spatialization system that looks like a regular MIDI expander; the second was to operate at the audio sampling rate, thus increasing the versatility of the tool (e.g. we can move the source at an arbitrary speed). The availability of a complete MIDI implementation allows us to control the model in a live situation using one of the many controllers available on the market. To facilitate usage, we developed a graphical user interface for IBM-compatible computers.

We wrote the program for the first-reflection simulation using the X20 processor's assembly language [IRIS, 1991a]. Running the whole simulation in real time at a sampling rate of about 40 kHz required a great deal of code optimization. We had to make some simplifications to the model, some of which are worth noting. The walls and the speakers are enumerated as in Fig. 1. For each of the direct and reflected paths we compute the length of the path and the attenuation due to the distance. Some of the paths collide with the outer walls of the inner room; these paths should be canceled. For the direct paths we simply cancel the signal going to loudspeaker (q + 2) mod 4, where q is the quadrant from which the source is emitting its sound. For the reflected paths we assume that the signal reflected by wall w is not audible at loudspeaker (w + 2) mod 4. These are strong simplifications, but the results still give a good spatial impression.

We used the lengths of the paths to compute the attenuations to be applied to the input signal. We also approximated the inverse relationship between amplitude and distance with the linear expression

    amplitude = 1 - distance,

where the distance is considered to be normalized to one. This expression is not physically consistent, but in live performances it is very useful to have a linear and normalized behavior of sound levels. The linear expression is also much more efficiently implemented on DSPs, and the acoustic results are satisfactory. In our implementation we also used lowpass filters to simulate the attenuation of high-frequency components during sound propagation. The various paths are obtained from a single tapped delay line, where the positions of the taps depend on the position of the source.
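The cancellation rule and the linear amplitude law described above might be sketched as follows. The speaker layout, quadrant convention, and normalization distance are assumptions for illustration; the original ran as hand-optimized DSP assembly, not high-level code.

```python
import math

# Illustrative sketch of the direct-path gain rules: cancel the
# loudspeaker opposite the source's quadrant, and use the linear
# law amplitude = 1 - distance instead of 1/distance.  Speaker
# coordinates and MAX_DIST are assumed values.

SPEAKERS = [(2.0, 2.0), (-2.0, 2.0), (-2.0, -2.0), (2.0, -2.0)]
MAX_DIST = 20.0  # distance at which the linear law reaches zero

def quadrant(x, y):
    """Quadrant 0..3 of the source, counter-clockwise from (+x, +y)."""
    if x >= 0.0 and y >= 0.0:
        return 0
    if x < 0.0 and y >= 0.0:
        return 1
    if x < 0.0 and y < 0.0:
        return 2
    return 3

def direct_gains(source):
    """Per-loudspeaker gain: cancel speaker (q + 2) mod 4, else 1 - d."""
    q = quadrant(*source)
    gains = []
    for k, spk in enumerate(SPEAKERS):
        if k == (q + 2) % 4:
            gains.append(0.0)  # direct path would cross the inner room
        else:
            d = min(math.dist(source, spk) / MAX_DIST, 1.0)
            gains.append(1.0 - d)  # linear law instead of 1/d
    return gains
```

The same skeleton extends to the reflected paths by applying the (w + 2) mod 4 rule per wall and computing distances from the wall's image source.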
We used linear interpolation to read the signal from the taps. In this way we obtain a smooth Doppler effect for sources moving with a radial component of velocity. The IRIS-MARS workstation allows the simulation of outer rooms with sides up to 2214 meters long.

In our system, the position of the virtual sound source can be varied by means of a generic MIDI controller. We can control the attenuation of the walls in the virtual room in the same way. In order to supply the system with information on source localization and room parameters, we also developed a graphical interface running on IBM-compatible computers. In this environment, the user can move the source, record and play the movements back, change the dimensions of the virtual and listening rooms, and change other parameters of the source and the rooms. The sound source is represented as an object having a programmable inertia. This interface is very useful because it provides visual feedback to the user, which is important in non-ideal listening spaces.

3 Summary

We developed a system for real-time sound spatialization in live concerts. Despite the extreme simplifications we introduced in the model, the acoustic rendering of the position of the sound source is satisfactory. The simulation of reverberant environments requires the addition of diffuse reverb. The availability of another DSP chip on the IRIS-MARS workstation allows us to extend our model in this direction. We are implementing the diffuse part of the reverb in our system using Circulant Feedback Delay Networks [Rocchesso and Smith, 1994].

References

[Armani et al., 1992] F. Armani, L. Bizzarri, E. Favreau, A. Paladin. MARS - DSP Environment and Applications, International Computer Music Conference, San Jose, California, Oct. 1992.
[Blauert, 1983] J. Blauert. Spatial Hearing, The MIT Press, Cambridge, Massachusetts, 1983.
[Chowning, 1977] J. M. Chowning. The Simulation of Moving Sound Sources, Computer Music Journal 1(3): 48-52, 1977.
[IRIS, 1991a] IRIS. ASM20 - User's Guide, 1991.
[IRIS, 1991b] IRIS. SM1000 - Documentation, 1991.
[Moore, 1983] F. R. Moore. A General Model for Spatial Processing of Sounds, Computer Music Journal 7(3): 6-15, 1983.
[Rocchesso and Smith, 1994] D. Rocchesso and J. O. Smith. Circulant Feedback Delay Networks for Sound Synthesis and Processing, International Computer Music Conference, Aarhus, Denmark, Sept. 1994.

ICMC Proceedings 1994, Acoustics