A FLEXIBLE AUTHORING TOOL FOR WAVE FIELD SYNTHESIS

S. Bleda, DFISTS, SST Group, University of Alicante
J.J. Lopez, iTEAM, GTAC Group, Technical University of Valencia
J. Escolano, B. Pueo, DFISTS, SST Group, University of Alicante

ABSTRACT

Wave Field Synthesis (WFS) systems achieve a realistic 3D sound rendering over a wide listening area. However, due to the novelty of the technology, there is a lack of authoring tools for the WFS mixing process. In this paper, several strategies to overcome this drawback are discussed. Based on this analysis, a compromise solution is proposed, consisting of two modules: a standalone WFS rendering server and a source positioning client in the form of a VST plug-in. Any current VST-compatible editing software tool may therefore be used in conjunction with the plug-in to produce WFS projects. Because of the nature of VST plug-ins, the solution is independent of the editing software and of the operating system used. It also allows the Render Server to scale with the number of channels and with the reproduction configuration.

1. INTRODUCTION

Wave Field Synthesis (WFS) is a recently developed technology that achieves a realistic 3D sound scene rendering over a wide area. However, due to its novelty, there is a lack of intuitive tools for the WFS authoring process. Current tools are designed with WFS concepts in mind, which may cause certain inconveniences to sound mixing engineers [1]. In this paper, the basis of WFS is introduced, followed by a discussion of several design strategies to overcome the mentioned problem. Finally, the selected strategy is implemented as a software tool for WFS authoring.

2. WAVE FIELD SYNTHESIS BASIS

Based on Huygens' principle, WFS is one of the most promising 3D audio reproduction systems. Compared with other traditional systems, WFS provides the widest listening area, also called the "sweet spot".
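The synthesis principle can be pictured with a minimal sketch: each loudspeaker of the array re-emits the virtual source signal delayed by the source-to-speaker travel time and attenuated with distance. This is a deliberately simplified illustration, not the authors' implementation; the actual WFS driving functions [2] add a 2.5D amplitude correction and spectral shaping that are omitted here, and all names are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Simplified per-loudspeaker driving parameters for one virtual point
// source: a travel-time delay (in samples) and a 1/sqrt(r) distance gain.
// Illustrative only; real WFS operators are more elaborate.
struct DrivingParams {
    double delaySamples;  // source-to-speaker travel time, in samples
    double gain;          // simplified distance attenuation
};

std::vector<DrivingParams> drivingParams(double srcX, double srcY,
                                         std::size_t numSpeakers,
                                         double spacing,     // speaker spacing, m
                                         double sampleRate,  // Hz
                                         double c = 343.0)   // speed of sound, m/s
{
    std::vector<DrivingParams> out;
    out.reserve(numSpeakers);
    for (std::size_t l = 0; l < numSpeakers; ++l) {
        double spkX = static_cast<double>(l) * spacing;  // array lies along y = 0
        double dx = spkX - srcX;
        double r = std::sqrt(dx * dx + srcY * srcY);     // source-speaker distance
        out.push_back({r / c * sampleRate, 1.0 / std::sqrt(r)});
    }
    return out;
}
```

Feeding the delayed, attenuated copies to the array makes the superposed wave fronts approximate those of the virtual source, which is what lets the whole listening area perceive a consistent scene.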
This means that listeners perceive a proper sound scene independently of their position in the room. To simulate the secondary sources of the Huygens wave front, a set of linear loudspeaker arrays is used [2] (see Figure 1). WFS describes the sound stage from the point of view of the sound sources, following their spatial evolution over time. Since the real sound sources are not present at the playback stage, they are called virtual sound sources. This technology allows the reproduction of live sound recordings [3] or the rendering of dry sources with pre-recorded impulse responses [4], that is to say, auralization. Furthermore, with suitable software, it is also possible to achieve effective room cancellation effects.

Figure 1. Loudspeaker array technology applied to Wave Field Synthesis rendering systems.

Because of the nature of this reproduction system, the WFS authoring process is completely different from the current multitrack authoring process, since it is virtual-source oriented instead of track oriented. In contrast to the multitrack process, in WFS a given number of tracks -or, alternatively, a given number of channels- is defined at the playback stage, not at the production stage.

3. CURRENT WFS AUTHORING TOOLS

Nowadays, there is a limited variety of software authoring tools for WFS production. Like any other tool, there are three sorts of implementations: commercial, open source and research oriented. At the moment, commercial implementations are more user friendly than open-source ones and, of course, even more so than research-oriented ones. However, each implementation differs significantly in the functions provided. In the following, research-oriented implementations are not considered, since they are not intended for general use. All these implementations have a common problem: they are designed with the sole purpose of WFS authoring. Consequently, any sound engineer accustomed to using well-known multichannel software must
be trained again in the new authoring process. Carrying out a typical surround mix with a WFS authoring tool can be a complete nightmare if one is not familiar with WFS basics. The provided facilities, menus, screens, etc. are not designed for the previous style of mixing. This is clearly seen by comparing the interfaces shown in Figures 2 and 3. Furthermore, the remastering of an already mixed sound track must be performed again from scratch, unless a simple stereo or 5.1 setup simulated with WFS is good enough for a given requirement.

Figure 2. Example of a current multitrack authoring software interface.

Figure 3. Presented WFS authoring software interface.

4. STRATEGIES FOR DEVELOPING WFS AUTHORING SOFTWARE

To overcome the mentioned inconveniences, three work strategies are proposed and stated below. These strategies depend on which software is created or modified to allow WFS and multitrack projects at the same time.

4.1. Standalone authoring software

In this strategy, a completely new software tool is designed for WFS production. This software is specially oriented to the new virtual source concept needed by WFS, while keeping the previous multichannel concepts in mind more or less effectively. This approach requires a great deal of effort if the goal is to be a serious alternative to current software: in addition to the WFS tasks, all the current editing facilities, filters, effects, plug-ins, etc. should also be included. Besides, since WFS requires considerable CPU resources, including the multitrack tasks as well raises the CPU requirements to a level that calls for dedicated hardware. This is the strategy followed by current authoring tools.

4.2. Enhancing current software

A second approach is to enhance the facilities of current software with some sort of plug-in, as proposed by Pellegrini and Kuhn [1]. These plug-ins must perform all the WFS processing without interfering with the host software's tasks. In principle, this approach is very user friendly, since all the computation is performed inside well-known software. However, there are some drawbacks that are almost impossible to overcome because of the nature of current software. Current software is track oriented instead of virtual-source oriented. For this reason, the virtual source concept must be provided by the plug-in. Each time a new virtual source is introduced on the scene, a plug-in instance must be interleaved in the desired track's plug-in chain. Therefore, there will be as many plug-in instances as virtual sources. This constitutes an important problem, since the plug-in instances must be synchronized: only one instance can use the sound hardware at a time. Thus, one instance must be promoted to master, with full rights over the hardware, while the remaining instances only pass their data to the master.

Another drawback is that the final aim of current authoring tools is to produce a master recording, but WFS has none, since the sound scene is rendered (mixed) in real time during reproduction. Therefore, the authoring software must accompany the project in order to render the scene at reproduction time, which makes no sense for mass production.

As in the previous strategy, the last downside is the computational cost. Two different software tools, the plug-in and its host, process at the same time on the same computer. Therefore, CPU requirements rise dramatically.

4.3. Hybrid strategy

The third approach is a hybrid of the two previous strategies and consists of two modules: a WFS rendering tool and a virtual source positioning client in the form of a plug-in. Both modules are connected by means of a given protocol. The WFS rendering tool is designed following the requirements of the first strategy. But, as shown in the next
section, these requirements can be relaxed. This tool works as a WFS rendering server, much as the master plug-in of the second strategy did. Consequently, the plug-in now acts in the same manner as the slave plug-ins of the second strategy; that is to say, it only sends its data to the WFS rendering tool. Hence, from the host software's point of view, the plug-in is virtually a bypass and thus does not interfere with its processing tasks. Not only do the two modules work in conjunction to create a WFS project, but the WFS rendering tool can also perform a rendering project in a standalone way, although in that case the utilities provided by the plug-in's host software are not available. If the communication between both tools uses a network protocol [1], each tool can reside on a different computer, which has the advantage of doubling the available CPU power. As a result, the advantages of the two previous strategies are obtained and, at the same time, their drawbacks are minimized.

5. DEVELOPED AUTHORING SOFTWARE

Once the previous strategies have been discussed, the hybrid approach is selected for this work due to its advantages. Thus, two different tools were developed: a WFS rendering server based on the authors' previous work [5], and a plug-in that communicates between the WFS render and the host software. At present, a mature and widespread plug-in technology is the Steinberg VST system. The majority of current audio authoring software on the market is VST compatible, and this is the plug-in technology used in this paper.

Following the directions of the third strategy, the WFS Rendering Server contains all the sound processing algorithms needed to produce the driving signals. These are used to excite the array loudspeakers for multiple virtual sources, as shown in Figure 1. A complete sound stage for L loudspeakers and M virtual sources is easily rendered. Mainly, L is limited by the sound hardware, and M is limited by both the sound hardware and the CPU power; in our implementation, both L and M are 96.

The plug-in is installed on the client computer and can be used by any editing software that can host a VST plug-in. Its interface provides a two-dimensional panel that defines the virtual source's relative position (see Figure 4), allowing spatial motion effects in real time. As stated before, the plug-in must be interleaved in the desired track's plug-in chain; the sound signal that feeds the respective virtual source is defined by the plug-in's position in the track chain. The received sound signal, together with its spatial coordinates and a time stamp, is transmitted through the network to the WFS rendering server (see Figure 5). If necessary, the sound hardware reference clocks of the two computers can be synchronized.

Figure 4. Developed VST plug-in interface.

Figure 5. Plug-in to WFS Rendering Server connection setup.

The source position is handled as an input parameter of the VST standard, so it can be stored automatically within the audio project in progress, allowing complete integration with the host software. Besides, the choice of editing software is not critical, and it is totally independent both of the number of channels of the WFS array and of the array geometry. Additionally, this work scheme allows a WFS mix and a multichannel 5.1 mix to be produced in the same project with current editing tools.

On the other hand, the WFS Rendering Server defines all the parameters referring to the reproduction stage and the sound hardware, as well as the virtual sources' motion and room auralization. Regarding its usage, the software has two operating modes: server and normal mode, i.e., with and without the VST plug-in, respectively. When used in normal mode, the project is managed entirely by the WFS render, including sound data acquisition and virtual source positioning.
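The paper does not detail the wire format between the plug-in and the rendering server. As a hypothetical sketch under that caveat, each audio block sent by a plug-in instance could carry a virtual source id, a sample-accurate time stamp, the source coordinates, and the sample data; all structure and field names below are illustrative assumptions (and the layout assumes both machines share endianness).

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical per-block message from the VST plug-in to the WFS rendering
// server: which virtual source it feeds, when the block starts, where the
// source is, and one block of mono audio.
struct SourcePacket {
    std::uint32_t sourceId;
    std::uint64_t timeStamp;     // sample-accurate project position
    float x, y;                  // virtual source coordinates, in metres
    std::vector<float> samples;  // one audio block
};

static const std::size_t kHeaderBytes = 4 + 8 + 4 + 4;

// Flatten the packet into a byte buffer ready for a network send.
std::vector<std::uint8_t> pack(const SourcePacket& p) {
    std::vector<std::uint8_t> buf(kHeaderBytes + p.samples.size() * sizeof(float));
    std::uint8_t* out = buf.data();
    auto put = [&out](const void* src, std::size_t n) { std::memcpy(out, src, n); out += n; };
    put(&p.sourceId, 4);
    put(&p.timeStamp, 8);
    put(&p.x, 4);
    put(&p.y, 4);
    put(p.samples.data(), p.samples.size() * sizeof(float));
    return buf;
}

// Rebuild the packet on the server side.
SourcePacket unpack(const std::vector<std::uint8_t>& buf) {
    SourcePacket p;
    const std::uint8_t* in = buf.data();
    auto get = [&in](void* dst, std::size_t n) { std::memcpy(dst, in, n); in += n; };
    get(&p.sourceId, 4);
    get(&p.timeStamp, 8);
    get(&p.x, 4);
    get(&p.y, 4);
    p.samples.resize((buf.size() - kHeaderBytes) / sizeof(float));
    get(p.samples.data(), p.samples.size() * sizeof(float));
    return p;
}
```

On the server side, the source id would let the network thread route each block to the corresponding virtual source, while the time stamp allows sample-accurate alignment of blocks arriving from several plug-in instances.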
Once the reproduction stage has been described -the number and position of the loudspeaker arrays, auralization data, virtual sources, etc.-, playback is started. Then, virtual sound sources can be moved freely or follow predefined paths. The definition of these paths can be stored during playback, and the whole set of operations can be stored in a project for future use.

When used in server mode, the playback thread starts and the network thread waits for plug-in connections. As plug-ins connect, consecutive virtual sources are created with the provided position and sound data. Virtual sound sources are positioned and fed according to the data received over the network until playback ends. Again, if so indicated, the server stores all the performed operations in a project, which from then on can be reproduced and/or altered in normal mode.

Owing to the fact that both programs are connected over a network, each one may run on a different operating system. This gives even more flexibility, as it is possible, for example, to connect editing software under Linux with the WFS render under Windows.

6. SYSTEM IMPLEMENTATION

The WFS render has been developed for the Windows operating system under C++ Builder, using the Steinberg ASIO library for sound management. The plug-in is developed using the Steinberg VST SDK and the VSTGUI library under MS Visual C++ (due to the difficulties encountered using VSTGUI under C++ Builder). However, the plug-in could easily be ported with very few modifications.

Regarding network capacity, with 100 Mbps Fast Ethernet interfaces it is possible to transport over fifty 48 kHz audio channels. Using a FireWire IEEE 1394 interface, the capacity rises to about two hundred channels [6], and with a 1 Gbit/s interface to nearly five hundred channels. A wireless network connection is not advisable, since its throughput is lower and its reliability cannot be guaranteed.
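These channel counts can be sanity-checked with simple arithmetic. The assumptions below (32-bit samples at 48 kHz and roughly 80% usable link throughput after protocol overhead) are ours for illustration, not figures from the measurements cited above.

```cpp
#include <cassert>

// Estimate how many uncompressed audio channels fit on a network link.
// bitsPerSample and efficiency are assumed values, not measured ones.
int audioChannelCapacity(double linkBitsPerSecond,
                         double sampleRate = 48000.0,
                         int bitsPerSample = 32,
                         double efficiency = 0.8) {
    double perChannelBps = sampleRate * bitsPerSample;  // payload rate per channel
    return static_cast<int>(linkBitsPerSecond * efficiency / perChannelBps);
}
```

Under these assumptions, a 100 Mbps link carries on the order of fifty channels and a 1 Gbit/s link on the order of five hundred, consistent with the figures quoted above.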
7. CONCLUSIONS

The developed software provides an intuitive tool for carrying out WFS projects using two different approaches: the common WFS approach and a more intuitive one using the VST plug-in. The advantages of this strategy are summarized as follows:

* Scalability. Scalability is achieved through the software implementation: the plug-in causes almost no interference with its host, and the rendering server is multi-threaded to fully exploit the benefits of a multi-processor system.

* Host software independence. Any VST-compatible software can be used to host the plug-in, which assures total independence. Moreover, the plug-in can be ported to different plug-in standards.

* Reproduction stage independence. By design, the host software needs no knowledge of the reproduction stage setup to perform its work.

* Cross operating system. Thanks to the plug-in's portability, it may reside on a different operating system than the rendering server.

* Simplified upmixing. A finished multitrack project can easily be upmixed to a new WFS project by using the plug-in.

8. ACKNOWLEDGEMENTS

Our most sincere thanks to German Ramos from the Technical University of Valencia for his inestimable advice on the software implementation, and to Miguel Roma from the University of Alicante for his advice on usability issues. This work has been supported by the Spanish Ministry of Science and Technology (MCYT) under Project ref. TIC2003-06841-C02-02.

9. REFERENCES

[1] Pellegrini, R. and Kuhn, C. "Wave Field Synthesis: Mixing and Mastering Tools for Digital Audio Workstations", Proceedings of the 116th Audio Engineering Society Convention, Berlin, Germany, May 2004.

[2] Berkhout, A. J.; de Vries, D. and Vogel, P. "Acoustic Control by Wave Field Synthesis", J. Acoust. Soc. Am., vol. 93, 1993, pp. 2764-2778.

[3] Teutsch, H.; Spors, S.; Herbordt, W.; Kellermann, W. and Rabenstein, R.
"An Integrated Real-Time System for Immersive Audio Applications", IEEE Workshop on Applications of Signal Proccesing to Audio and Acoustics (WASPAA '03), New York, USA, 2003. [4] De Vries, D. and Huselbos, E. "Auralization of Room Acoustics by Wave Field Synthesis based on Array Measurement of Impulsive Responses", XII European Signal Processing Conference (EUSIPCO'04), Vienna, Austria, Septiembre 2004. [5] Bleda, S.; Lopez, J.J.; and Pueo, B. "Software for the Simulation, Performance Analisys and Real-Time Implementation of Wave Field Synthesis Systems for 3D-Audio", 6th Int. Conference on Digital Audio Effects (DAFX03), London, UK, September, 2003. [6] "The Software Studio in the Age of Audio Networking", J. Audio Eng. Soc., Vol. 53, No. 1/2, 2005 Jan/Feb, pp. 124-129.