Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK, 31 July - 5 August 2011
2.3. Mixed-Reality Performance
The combination of live musical performers on
traditional instruments with virtual performers in
rendered environments has been explored in a series of
mixed-reality works by Stanford's SoundWire [3] and
Music in Virtual Worlds research groups. These works
combine high-quality, low-latency audio streaming
software [2] with fully rendered, OSC-enabled
environments supporting ensembles of virtual
performers. Musical spaces built within the Sirikata [6]
engine were used to control rich spatialized sound
synthesis engines in a number of mixed-reality musical
performances, including a series of performances at the
2009 MiTo Festival in Milan, Italy [8].
3. UNREAL DEVELOPMENT KIT (UDK)
The Unreal Development Kit (UDK) is a next-generation
professional development framework for creating
visually rich networked gaming and simulation
environments. Built upon Unreal Engine 3, the engine
powering commercial gaming titles such as Gears of
War, BioShock and Unreal Tournament, the UDK offers
tools for building fully rendered, complex virtual
architectures within the Windows operating system
(XP/7), workflows for designing AI and animation, a
robust multi-user networking layer, and a powerful
object-oriented scripting language called UnrealScript
[12]. The UDK is available at no cost for educational
and non-commercial use and is updated monthly with
new features, ranging from enhancements to the lighting
and modeling tools to integration with Apple's iOS,
which allows works created in the UDK to be published
to iPhone, iPad and iPod touch devices.
4. UDKOSC SYSTEM OVERVIEW
UDKOSC combines an Open Sound Control
implementation with a customized UDK codebase
featuring enhanced user, environment and object
controls and behaviors, affording users finer control
over the motions and actions that can be used to drive
musical systems. OSC messages are encapsulated as
UDP packets and passed along their own network
threads rather than piggy-backing on existing network
calls made by the engine itself.
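As an illustration of the encapsulation described above, the following Python sketch packs an avatar-position message into a single UDP datagram by hand. The address pattern `/udkosc/pawn/loc` and the destination port are hypothetical, not drawn from the UDKOSC implementation:

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    """Null-terminate a string and pad it to a 4-byte boundary,
    as the OSC 1.0 specification requires."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Pack an OSC message whose arguments are all float32."""
    packet = osc_string(address)
    packet += osc_string("," + "f" * len(args))  # type-tag string
    for f in args:
        packet += struct.pack(">f", f)  # OSC numbers are big-endian
    return packet

# Hypothetical avatar-position message: /udkosc/pawn/loc x y z
msg = osc_message("/udkosc/pawn/loc", 128.0, -64.0, 32.0)

# Send as one UDP datagram, mirroring the dedicated network thread
# described above (localhost and port 57120 are assumptions).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
sock.close()
```

Each message is thus a self-describing datagram: padded address, padded type-tag string, then the raw big-endian argument bytes.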
4.1. Sound and Spatialization
In UDKOSC, avatar motion and gesture in virtual space
is rendered and subsequently spatialized across a
multi-channel sound field using a second-order
ambisonics encoder and decoder built in the
SuperCollider programming language [10]. In a
multi-channel sound system powered by ambisonics, all
speakers contribute to the spatialization of each
individual sound, creating a stable, localized sound field
surrounding a given space or audience. Sounds moved
through the virtual space are correlated with movements
through the physical concert hall: as an avatar traverses
a rendered environment and interacts with objects
within it, the audience perceives the sound moving
across the physical hall.
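The SuperCollider encoder itself is not reproduced here, but the channel gains of a horizontal-only second-order ambisonic encode can be sketched as follows (Furse-Malham channel naming; the mapping from game coordinates to azimuth is an assumption):

```python
import math

def encode_2nd_order_2d(signal: float, azimuth: float) -> dict:
    """Encode one mono sample into horizontal second-order ambisonic
    channels (Furse-Malham naming). azimuth is in radians,
    counter-clockwise from straight ahead."""
    return {
        "W": signal * (1.0 / math.sqrt(2.0)),   # omnidirectional, FuMa-weighted
        "X": signal * math.cos(azimuth),        # first-order cosine
        "Y": signal * math.sin(azimuth),        # first-order sine
        "U": signal * math.cos(2.0 * azimuth),  # second-order cosine
        "V": signal * math.sin(2.0 * azimuth),  # second-order sine
    }
```

As the avatar moves, recomputing the azimuth and re-applying these gains cross-fades the source smoothly around the loudspeaker ring, which is why every speaker participates in localizing each sound.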
4.2. OSC Communication
UnrealScript has a provision for binding Windows
Win32 DLLs to UnrealScript classes, enabling code such
as the OSCPack C++ implementation of Open Sound
Control to provide bi-directional communication
between the game engine and sound server. A custom
Windows oscpack_1_0_2.dll was compiled with specific
extern methods and mirrored data structures to
communicate with the game engine, both to stream
specific game data out as OSC-formatted messages over
UDP (including Pawn XYZ coordinate positions, XYZ
coordinate tracking for in-game projectiles, and state
and XYZ coordinate information for the Static Mesh and
Interp actor classes) and to send control data from iPad
controllers and from the audio engine itself back into
the game to control global GameInfo parameters, such
as world Gravity and GameRate, as well as specific
positioning information for projectiles and static actors.
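The receiving side of such a bridge must unpack each incoming datagram back into an address pattern and arguments before handing values to the engine. A minimal Python sketch, assuming messages carry only float32 arguments (the `/gravity` address and its value are hypothetical):

```python
import struct

def parse_osc(packet: bytes):
    """Parse a simple OSC message containing only float32 ('f')
    arguments. Returns (address, [floats])."""
    def read_padded_string(buf: bytes, offset: int):
        end = buf.index(b"\x00", offset)
        s = buf[offset:end].decode("ascii")
        # Skip past the null terminator(s) to the next 4-byte boundary.
        return s, (end + 4) & ~3

    address, offset = read_padded_string(packet, 0)
    typetags, offset = read_padded_string(packet, offset)
    args = []
    for tag in typetags[1:]:  # skip the leading ','
        if tag == "f":
            args.append(struct.unpack_from(">f", packet, offset)[0])
            offset += 4
    return address, args

# A hypothetical world-gravity control message as the DLL might
# receive it from an iPad controller or the audio engine.
addr, args = parse_osc(
    b"/gravity\x00\x00\x00\x00,f\x00\x00" + struct.pack(">f", -520.0)
)
```

The parsed values would then be copied into the mirrored data structures that the extern methods expose to UnrealScript.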
4.3. Client/Server Architecture
The Unreal Engine makes use of a client-server
architecture: there exists a "server" game instance with
knowledge of position and state for every connected
client actor and environmental event. As latencies
between connected clients have the potential to vary
widely, each client's local view of the current game
state is constantly being readjusted through
communication with the server, potentially offering
slightly different views of a given environment across
client instances at any given time. OSC communications
are initialized from within the server game instance
rather than within each client instance so that an
authoritative and shared view is used to generate the
outgoing OSC data stream.
4.4. OSC Mappings
At this point, OSC hooks have been written for various
actor and event classes within the UDK framework
allowing for a number of control mappings.
4.4.1. Actor motion in coordinate space
As user avatars move through three-dimensional
coordinate space, each avatar's X, Y and Z location data
is streamed over OSC to the ambisonic sound-server.
This location data can be used to control positional
ambisonic spatialization of a continuous sound-source
around a multi-channel speaker setup, to trigger pre-rendered audio files, or to signal large-scale sectional
divides in musical structure.
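One plausible mapping from this streamed location data to panner controls can be sketched as follows; the listener-at-origin convention and the use of only the X/Y plane are assumptions, not details from the paper:

```python
import math

def xyz_to_pan(x: float, y: float, listener=(0.0, 0.0)):
    """Convert an avatar's X/Y coordinates into an azimuth (radians)
    and a distance for an ambisonic panner centred on the listener."""
    dx, dy = x - listener[0], y - listener[1]
    azimuth = math.atan2(dy, dx)    # angle around the listener
    distance = math.hypot(dx, dy)   # e.g. for amplitude or reverb scaling
    return azimuth, distance
```

The azimuth would feed the ambisonic encoder directly, while the distance could scale amplitude, filtering, or reverberant mix to reinforce the sense of motion through the hall.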
4.4.2. Projectile/Environment tracking and collision
While projectiles launched from a client avatar's
position are typically associated with violent action in
the context of video games, modified projectile code can
be repurposed as an enactive interface for sound creation.
Points of collision in coordinate space between