Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK, 31 July - 5 August 2011
SOUND ELEMENT SPATIALIZER
Ryan McGee, Matthew Wright
Media Arts and Technology
University of California, Santa Barbara
ryan@mat.ucsb.edu
[email protected]
ABSTRACT
Sound Element Spatializer (SES) is a novel system for
the rendering and control of spatial audio. SES provides
simple access to spatial rendering techniques including
VBAP, DBAP, Ambisonics, and WFS via OSC. Arbitrary
loudspeaker configurations and an arbitrary number of moving sound sources are supported. SES operates as a cross-platform C++ application that can spatialize sound sources
from other applications or live inputs in real-time. Also
included are means for decorrelated upmixing.
1. INTRODUCTION
Computer music composers have carefully crafted the spatial dimension of sound for decades [2]. Spatialization can contrast
and animate sounds in space in a way comparable to the use of pitch in the frequency domain.
use of pitch in the frequency domain. Spatialization can
clarify dense textures of sounds, make greater numbers
of simultaneous sound elements perceivable, and choreograph complex sonic trajectories [13]. For effective spatialization one must be able to precisely control the trajectories of spatial sound elements, i.e., their spatial positions
as functions of time. Dramatic effects can be achieved
when the movement of sounds is very fast: the "whoosh"
of a high-speed vehicle driving by or the sounds of several
insects flying around one's head, for example. With computers we can simulate moving sound sources beyond the
scope of reality. For instance, one may simulate a violinist flying around a room at 100 mph or decompose a sound
into constituent elements to be individually spatialized.
The goal of SES was to operate as a robust, cross-platform, standalone application that would be easy to integrate with any audio application and scale to support
spatialization of an arbitrary number of sound sources over
an arbitrary loudspeaker configuration, with dynamic selection among spatialization rendering techniques including Vector Base Amplitude Panning (VBAP) [10], Distance Based Amplitude Panning (DBAP) [5], Higher Order Ambisonics (HOA) [6], and Wave Field Synthesis [1].
Control of sound trajectories should not be limited to a proprietary GUI or to programming in a given environment,
but should instead be accomplished via Open Sound Control (OSC)
[16, 17].
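As a minimal sketch of such OSC control (assuming the liblo library; the address pattern "/ses/source/1/position" and port 7770 are illustrative placeholders rather than SES's documented namespace), a client could position a sound source as follows:

#include <lo/lo.h>

int main() {
    // Placeholder host and port for the SES OSC server.
    lo_address ses = lo_address_new("127.0.0.1", "7770");

    // Move source 1 to (x, y, z) = (2.0, 0.5, 1.0) meters.
    lo_send(ses, "/ses/source/1/position", "fff", 2.0f, 0.5f, 1.0f);

    lo_address_free(ses);
    return 0;
}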
1.1. Digital Audio Workstation Software
Popular digital audio workstation (DAW) software packages such as Logic, ProTools, Live, Digital Performer,
etc., though useful for many aspects of layering and shaping sound in time, offer impoverished spatialization functionality. While DAWs sometimes include spatial panning interfaces or plug-ins, these panning methods are
limited to sounds in a 2-dimensional plane and are not
scalable to accommodate loudspeaker configurations beyond consumer formats such as stereo, quadraphonic, 5.1,
7.1, and 10.2 [8]. DAW packages may include integrated
automation editors for the time-dependent control of spatialization parameters, but such automation becomes cumbersome when implementing complex geometric trajectories for several sound sources or when specifying intricate trajectories procedurally, as in the sketch below.
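To illustrate the procedural alternative, the following sketch (again assuming liblo and the same placeholder OSC address space) orbits a single source on a 2 m circle, a trajectory that a DAW automation lane would need many breakpoints to approximate:

#include <lo/lo.h>
#include <cmath>
#include <chrono>
#include <thread>

int main() {
    lo_address ses = lo_address_new("127.0.0.1", "7770");
    const double pi = 3.14159265358979323846;
    const double radius = 2.0;     // orbit radius in meters
    const double rate_hz = 0.5;    // one revolution every two seconds
    const double update_hz = 50.0; // position updates per second

    for (int i = 0; i < 500; ++i) {              // ten seconds of motion
        double t = i / update_hz;                // elapsed time in seconds
        double angle = 2.0 * pi * rate_hz * t;   // current orbit phase
        float x = static_cast<float>(radius * std::cos(angle));
        float y = static_cast<float>(radius * std::sin(angle));
        lo_send(ses, "/ses/source/1/position", "fff", x, y, 0.0f);
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }

    lo_address_free(ses);
    return 0;
}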
1.2. Related Work
Outside the world of DAWs, there are several spatial audio platforms, including BEASTmulch [15], Zirkonium [11], Jamoma [8], Spatialisateur [3], and OMPrisma [14]. While these systems greatly extend the compositional capabilities for spatialization well beyond those of DAWs, each currently falls short in at least one area of usability, scalability, flexibility, or control, as shown in Table 1.
[Table 1: feature matrix comparing DAWs, BEASTmulch, Zirkonium, Jamoma, Spatialisateur, OMPrisma, and SES]
Table 1. Comparison of Current Spatialization Systems