Managing Complexity with Explicit Mapping of Gestures to Sound Control with OSC
Matthew Wright, Adrian Freed, Ahm Lee, Tim Madden, Ali Momeni
CNMAT, UC Berkeley, 1750 Arch St., Berkeley, CA 94709, USA
email: {matt,adrian,ahm,tjmadden,ali}@cnmat.berkeley.edu
Abstract
We present a novel use of the OpenSound Control (OSC)
protocol to represent the output of gestural controllers as
well as the input to sound synthesis processes. With this
scheme, the problem of mapping gestural input into sound
synthesis control becomes a simple translation from OSC
messages into other OSC messages. We provide examples
of this strategy and show benefits including increased
encapsulation and program clarity.
1. Introduction
We desire expressive real-time control of computer
sound synthesis and processing from many different
gestural interfaces such as the Boie/Mathews Radio Drum
(Boie et al., 1989), the Buchla Thunder (Buchla, 2001),
Wacom Tablets (Wright et al., 1997), gaming joysticks, etc.
Unlike acoustic instruments, these gestural interfaces have
no inherent mapping between the gestures they sense and
the resulting sound output. Indeed, most of the art of
designing a real-time-playable computer music instrument
lies in designing the mapping between sensed gestures and
control of sound generation and processing.
We believe that OpenSound Control (OSC) (Wright and
Freed, 1997; Wright, 1998) provides many benefits to the
creators of these gesture-to-sound-control mappings. It is
general enough to represent both the sensed gestures from
physical controllers and the parameter settings needed to
control sound synthesis, so it provides a uniform syntax and
conceptual framework for this mapping. The symbolic
names for all OSC parameters make explicit what is being
controlled and can make programs easier to read and
understand. An OSC interface to a gestural-sensing or
signal-processing subprogram is a powerful form of
abstraction that can expose all of the important features
while hiding the implementation.
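For example (the following addresses are hypothetical, chosen only to illustrate the naming style, and are not drawn from any particular CNMAT system), a tablet-sensing subprogram might publish its state as messages such as

    /tablet/pen/x 0.42
    /tablet/pen/pressure 0.87

while a synthesizer might expose parameters such as

    /synth/voice/3/pitch 60.0

Reading a mapping then amounts to reading which named gesture drives which named synthesis parameter.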
We will present a paradigm for using OSC for this
mapping task and give a series of examples culled from
several years of live performance with a variety of gestural
controllers and performance paradigms.
2. OpenSound Control
OpenSound Control (OSC) was originally developed to
facilitate the distribution of control structure computations
to small arrays of loosely coupled heterogeneous computer
systems. A common application of OSC is to communicate
control structure computations from one client machine to
an array of synthesis servers. The abstraction mechanisms
built into OSC (a hierarchical name space and regular-
expression message targeting) have also proven to be
useful in implementations running entirely on a single
machine. In this context we have discovered a particularly
valuable application of the OSC client/server model in the
organization of the gestural component of control structure
computations. The basic strategy is to:
* Translate all incoming gestural data into OSC
messages with descriptive addresses
* Make all controllable parameters in the rest of the
system OSC-addressable
Now the gestural performance mapping is simply a
translation of one set of OSC messages to another. This
gives performers greater scope and facility in choosing how
best to effect the required parameter changes.
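As a concrete sketch of this strategy (in Python, using the third-party python-osc library; the addresses, port numbers, and scalings here are invented for illustration and do not come from the original CNMAT systems):

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    # Client aimed at the synthesis server's OSC port (hypothetical port).
    synth = SimpleUDPClient("127.0.0.1", 7400)

    def pen_pressure(address, pressure):
        # Translate stylus pressure (0..1) into the gain of voice 1.
        synth.send_message("/synth/voice/1/gain", pressure)

    def pen_x(address, x):
        # Translate horizontal pen position (0..1) into pitch (MIDI note number).
        synth.send_message("/synth/voice/1/pitch", 48.0 + 24.0 * x)

    # Route incoming gesture messages, by name, to translation handlers.
    dispatcher = Dispatcher()
    dispatcher.map("/tablet/pen/pressure", pen_pressure)
    dispatcher.map("/tablet/pen/x", pen_x)

    # Listen for gesture OSC messages and re-emit synthesis OSC messages.
    server = BlockingOSCUDPServer(("127.0.0.1", 7300), dispatcher)
    server.serve_forever()

Because both sides speak OSC, replacing the controller or the synthesizer changes only this translation layer, not the structure of the surrounding program.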
3. An OSC Address Subspace for Wacom Tablet Data
Wacom digitizing graphic tablets are attractive gestural
interfaces for real-time computer music. They provide
extremely accurate two-dimensional absolute position
sensing of a stylus, along with measurements of pressure,
two-dimensional tilt, and the state of the switches on the
side of the stylus, with reasonably low latency. The styli