Sparkler: An Audio-Driven Interactive Live
Computer Performance for Symphony Orchestra
Tristan Jehan, Tod Machover, Mike Fabio
MIT Media Laboratory
email: {tristan, tod, revrev} @media.mit.edu
Abstract
This paper describes the design, implementation,
and application of an audio-driven computer-based
system for live and interactive performance with an orchestra. We begin by describing the compositional and
musical goals, emphasizing the electronics aspect. We
then discuss the challenges of working with an orchestra and integrating the electronics in a concert hall. We
describe the hardware setup and then detail the software implementation, from audio analysis and "perceptual parameter" estimation to the generative algorithm.
1 Introduction
The orchestra is a musical institution that has the
potential to take the lead in experimenting with the integration of technology and new music. However, it has
not been at the forefront of the computer-music community. We have therefore endeavored to develop a
computer-based system that interacts with a full symphony orchestra in the context of a live performance.
This system was developed for Tod Machover's
piece Sparkler for orchestra and interactive electronics
(Machover 2001). Sparkler was premiered on October
14, 2001 by the American Composers Orchestra, conducted by Paul Lustig Dunkel in Carnegie Hall, New
York City. It was commissioned by the American Composers Orchestra for its Orchestra Tech program, and
was designed to be the opening work of a larger project
called Toy Symphony that was premiered in Berlin
on February 24, 2002 by the Deutsches Symphonie-Orchester, conducted by Kent Nagano.
2 Musical Goals
Sparkler was written to explore many different relationships between acoustic orchestral sound and electronic sound, sometimes contrasting the two, sometimes complementing them, and at other times blending them into a new whole. Most previous work
involved specially designed electronic instruments to
complement the orchestra (Madden, Smith, Wright,
and Wessel 2001), or musical events synchronized to a
score follower (Puckette 1992), or solo instruments that
were enhanced in some way (e.g. amplified or electronically processed). Our piece uses microphones to capture the audio of the entire orchestra which is then analyzed in real time and formulated into perceptual parameters through software. These instrumental sound
masses, which are performed with a certain freedom by players and conductor, generate and control complex electronic extensions, turning the whole ensemble
into a kind of "hyperorchestra" (Whiting 2002).
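The paper does not specify how the perceptual parameters are computed; as a rough illustration only, the sketch below estimates two commonly used ones, loudness (frame RMS in decibels) and brightness (spectral centroid), from a single audio frame. The function name and parameter choices are ours, not taken from the Sparkler implementation.

```python
import numpy as np

def perceptual_features(frame, sr=44100):
    """Estimate two simple perceptual parameters from one audio frame:
    loudness (RMS level in dB) and brightness (spectral centroid in Hz).
    Illustrative only; not the actual Sparkler analysis code."""
    # Loudness: root-mean-square energy, converted to decibels
    rms = np.sqrt(np.mean(frame ** 2))
    loudness_db = 20 * np.log10(rms + 1e-12)

    # Brightness: amplitude-weighted mean frequency of the magnitude spectrum
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return loudness_db, centroid

# Example: a full-scale 440 Hz sine frame has RMS near -3 dB
# and a centroid near 440 Hz
t = np.arange(2048) / 44100
loud, bright = perceptual_features(np.sin(2 * np.pi * 440 * t))
```

In a live setting, such features would be computed on overlapping frames of the microphone signal, producing the continuous parameter streams the piece relies on.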
The musicians play their traditional acoustic instruments without any modifications or additions. There
are only a few carefully placed microphones used to
capture large sections of the orchestra sound. We generate the electronic sounds through flexible algorithms
that take in streams of analyzed features from the audio and create complex sound textures that evolve over
the course of the piece. At the climactic section, the
orchestra shapes a kind of "texture blob" by bringing out different instrumental timbres and creating dramatic accents. To both the players and audience it is
quite clear that the orchestra is directly controlling the
electronics and is dramatically shaping this expressive
enhancement of its own playing.
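The mapping from analyzed features to electronic sound is not detailed in this section; as a hedged sketch of the general idea, the class below smooths incoming feature streams and maps them onto two hypothetical texture controls (grain density and filter cutoff). The class, parameter names, and ranges are our own assumptions, not the piece's actual mappings, which were composed per section.

```python
import numpy as np

class TextureController:
    """Minimal sketch of feature-to-texture mapping: smooth the incoming
    perceptual parameter streams, then map them onto synthesis controls.
    Illustrative only; not the actual Sparkler generative algorithm."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing  # one-pole low-pass coefficient in [0, 1)
        self.loudness = 0.0         # smoothed feature state
        self.brightness = 0.0

    def update(self, loudness_db, centroid_hz):
        a = self.smoothing
        # One-pole smoothing keeps the texture from jumping on every transient
        self.loudness = a * self.loudness + (1 - a) * loudness_db
        self.brightness = a * self.brightness + (1 - a) * centroid_hz
        # Louder orchestral playing -> denser texture (grains per second)
        density = np.interp(self.loudness, [-60, 0], [1, 100])
        # Brighter orchestral timbre -> higher filter cutoff (Hz)
        cutoff = np.interp(self.brightness, [100, 4000], [200, 8000])
        return density, cutoff
```

Feeding such a controller a stream of per-frame features yields control curves that follow the orchestra's overall dynamics and timbre rather than individual notes, which matches the "sound mass" behavior described above.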
3 Challenges
There are practical, logistical, and technical challenges that come up when working with orchestras.
Concert halls are not always set up with amplification
and microphones, and it can be difficult to incorporate
even the simplest piece of equipment on stage. Amplified synthetic sounds are not easy to mix with an
orchestra. The acoustics of a concert hall and the dynamic range of an orchestra do not necessarily fit well with an electronic setup. It is definitely not easy to mike an orchestra and achieve accurate differentiation between instrumental groups given the amount of reverberation present. Fortunately, we were able to experiment
with several types of microphones and placements with
the local MIT Symphony Orchestra in the early stages
of our work. Rehearsal time was extremely limited, so
reliability and flexibility were key. There were several
parameters and ranges that could only be set during a