Proceedings ICMC|SMC|2014, 14-20 September 2014, Athens, Greece
The Procedural Sounds and Music of ECHO::Canyon
Robert Hamilton
Stanford University, CCRMA
rob@ccrma.stanford.edu
ABSTRACT
In the live game-based performance work ECHO::Canyon,
the procedural generation of sound and music is used to
create tight crossmodal couplings between mechanics in
the visual modality, such as avatar motion, gesture and
state, and attributes such as timbre, amplitude and frequency
from the auditory modality. Real-time data streams representing user-controlled and AI-driven avatar parameters of motion, including speed, rotation and coordinate location, act as the primary drivers for ECHO::Canyon's fully-procedural music and sound synthesis systems. More intimate gestural controls are also explored through the paradigms
of avian flight, biologically-inspired kinesthetic motion and
manually-controlled avatar skeletal mesh components. These
kinds of crossmodal mapping schemata were instrumental in the design and creation of ECHO::Canyon's multi-user, multi-channel dynamic performance environment, using techniques such as composed interaction, compositional mapping and entirely procedurally-generated sound and music.
1. INTRODUCTION
From a creative musical standpoint, the relationship between physical motion and action in space and the production and manipulation of musical sound has historically been one of necessity: for most pre-digital musical systems, physical gesture was an inherent component of instrumental performance practice. From the sweep of a bow across strings, to the swing of a drumstick, to the arc of a conductor's baton, action and motion in space were directly coupled as physical or intentional drivers to the mechanical production of sound and music [1].

Copyright: ©2014 Robert Hamilton et al. This is an open-access article distributed under the terms of an open-access license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The introduction of computer-based musical systems has
removed the necessity for such direct couplings, allowing
abstract data-analysis or algorithmic process to both instigate and manipulate parameters driving musical output.
However, artists seeking to retain some level of human-directed control within the digital context often develop and employ mapping schemata linking control data to musical form and function. Such mappings provide interfaces
between human intention and digital process that range
from the simple to the complex, from the distinct to the
abstract.
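Such a mapping schema can be illustrated with a minimal sketch: avatar motion parameters are linearly rescaled into synthesis-parameter ranges. All parameter names and numeric ranges here are illustrative assumptions for exposition, not ECHO::Canyon's actual implementation.

```python
def lin_map(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi],
    clamping to the input range first."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def map_motion_to_synth(speed, rotation, x_pos):
    """Map hypothetical avatar motion data to synthesis parameters.

    speed    (0..100 units/s)  -> amplitude (0..1)
    rotation (0..360 degrees)  -> frequency (110..880 Hz)
    x_pos    (-500..500 units) -> stereo pan (-1..1)
    """
    return {
        "amplitude": lin_map(speed, 0.0, 100.0, 0.0, 1.0),
        "frequency": lin_map(rotation, 0.0, 360.0, 110.0, 880.0),
        "pan":       lin_map(x_pos, -500.0, 500.0, -1.0, 1.0),
    }

# An avatar at mid-speed, facing halfway around, centered on the x axis:
print(map_motion_to_synth(speed=50.0, rotation=180.0, x_pos=0.0))
# {'amplitude': 0.5, 'frequency': 495.0, 'pan': 0.0}
```

A "distinct" mapping in the sense above might stop at such direct one-to-one scalings, while a more "abstract" one would interpose statistical or gestural analysis between the control stream and the synthesis parameters.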
Choreographies of music and action found in dance and
film commonly make use of a reactive association between
gesture and sound: dancers' reactions - spontaneous or
choreographed - to a musical event or sequence of events
often form physical motions or gestures with direct temporal correspondence to the onset, duration or contour of
a sounding event [2]. In the same way, events in static visual media such as film, music video and some computer
games are often punctuated by the synchronization of visual elements with auditory or musical cues, linking the
audio and visual in our perception of the event without any
causal relationship existing between the two modalities.
Interactive virtual environments, and the tracking of actor motion and action within those environments, afford
composers and sound designers another approach to the
mapping of physiological gesture to parameters of sound