Proceedings of the International Computer Music Conference (ICMC 2009), Montreal, Canada
August 16-21, 2009
MUBU & FRIENDS - ASSEMBLING TOOLS FOR CONTENT BASED
REAL-TIME INTERACTIVE AUDIO PROCESSING IN MAX/MSP
Norbert Schnell, Axel Röbel, Diemo Schwarz, Geoffroy Peeters, Riccardo Borghesi
IRCAM - Centre Pompidou
Paris, France
ABSTRACT
This article reports on developments conducted in the framework of the SampleOrchestrator project. For this project, we assembled a set of tools that allow for the interactive real-time synthesis of automatically analysed and annotated audio files. Rather than a specific technique, we present a set of components that support a variety of interactive real-time audio processing approaches such as beat-shuffling, sound morphing, and audio musaicing.
We pay particular attention to the design of the central element of these developments: an optimised data structure for the consistent storage of audio samples, descriptions, and annotations, closely linked to the SDIF file standard.
1. INTRODUCTION
The tools produced by past and current research in music information retrieval offer manifold descriptions that permit the transformation of recorded sounds into interactive processes based on signal models, sound and music perception, musical representation, and further abstract mechanical and mathematical models. A variety of interesting approaches to interactive real-time processing are based on the transformation of certain aspects of a sound (e.g. its pitch or rhythm) while preserving others.
The SampleOrchestrator project¹ has allowed for the convergence of several music information retrieval and audio processing techniques into a set of complementary tools summarizing research carried out over the past ten years.
While our work has been focused on experimentation with
the use of audio descriptors and annotations for the interactive transformation and hybridization of recorded sounds,
other parts of the project aimed at the development of novel
techniques and tools for automatic orchestration as well as
for the organization of large databases of sounds.
Given the availability of an adequate set of audio processing tools, the need to consistently represent audio data together with audio descriptions and annotations within real-time processing environments such as Max/MSP emerged as a critical issue and led us to develop the MuBu data structure.
¹ http://www.ircam.fr/306.html?&L=1
2. THE MUBU MULTI-BUFFER
The MuBu data structure has been designed as a generic container for sampled sounds as well as for any data that can be extracted from or associated with sampled sounds, such as audio descriptors, segmentation markers, tags, music scores, and motion tracking data. MuBu is a multi-track container representing multiple temporally aligned (synchronized) homogeneous data streams, similar to the data streams represented by the SDIF standard [2], which associates each element with a precise instant in time.
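The idea of a multi-track container of temporally aligned streams can be sketched as follows. This is an illustrative Python sketch, not MuBu's actual implementation; all class and method names are hypothetical.

```python
from bisect import bisect_left
from dataclasses import dataclass, field

@dataclass
class Track:
    """One homogeneous data stream: a value for each time instant."""
    name: str
    times: list = field(default_factory=list)    # seconds, strictly increasing
    values: list = field(default_factory=list)   # one element per time point

    def append(self, time, value):
        assert not self.times or time > self.times[-1], "times must increase"
        self.times.append(time)
        self.values.append(value)

    def value_at(self, time):
        """Most recent value at or before `time` (None if before the first)."""
        i = bisect_left(self.times, time)
        if i < len(self.times) and self.times[i] == time:
            return self.values[i]
        return self.values[i - 1] if i > 0 else None

class MultiBuffer:
    """Container holding several temporally aligned tracks."""
    def __init__(self):
        self.tracks = {}

    def add_track(self, name):
        self.tracks[name] = Track(name)
        return self.tracks[name]

    def snapshot(self, time):
        """All track values at a given instant, e.g. to drive synthesis."""
        return {name: t.value_at(time) for name, t in self.tracks.items()}
```

Because every track shares the same time axis, a query at one instant yields a consistent cross-section of audio descriptors, markers, and other annotations.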
The implementation of MuBu overcomes restrictions of
similar existing data structures provided in Max/MSP such
as the buffer~ and SDIF-buffer [9] modules as well as the
data structures provided by the Jitter and FTM [3] libraries.
Special attention has been paid to the optimization of shared RAM-based storage and access in the context of concurrent multi-threaded real-time processing. All access to the data is thread-safe, and the access methods used in real-time signal processing have been implemented in a lock-free way.
A particular challenge in the design of MuBu has been to provide a generic container that can represent and give access to any kind of data, while also allowing the implementation of specific access and processing methods for tracks of particular data types. For example, an additive synthesis module connected to a MuBu container should be able to automatically find and connect to a track of partials, or alternatively to a fundamental-frequency track together with a track of harmonic amplitudes. The synthesis module can be configured automatically depending on the additive representation loaded into one or multiple tracks of the container.
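The automatic configuration described above could look like the following sketch, in which a module inspects the container for tracks of known data types. The type names ("partials", "f0", "harm-amps") and all function names are hypothetical, chosen only to illustrate the lookup logic.

```python
class TypedTrack:
    def __init__(self, name, dtype):
        self.name = name
        self.dtype = dtype   # e.g. "partials", "f0", "harm-amps"
        self.frames = []

class Container:
    def __init__(self, tracks):
        self.tracks = list(tracks)

    def find(self, dtype):
        """Return the first track of a given data type, or None."""
        return next((t for t in self.tracks if t.dtype == dtype), None)

def configure_additive_synth(container):
    """Pick an additive representation from whatever tracks are present:
    prefer a partials track, else fall back to f0 + harmonic amplitudes."""
    if (partials := container.find("partials")) is not None:
        return {"mode": "partials", "tracks": [partials.name]}
    f0, amps = container.find("f0"), container.find("harm-amps")
    if f0 is not None and amps is not None:
        return {"mode": "harmonic", "tracks": [f0.name, amps.name]}
    raise ValueError("no additive representation found in container")
```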
So far, MuBu implements import methods for audio files in various formats, SDIF files, standard MIDI files, and generic XML files.
We have developed a Max/MSP module integrating the MuBu library and implementing a set of messages to manage and access the data. In addition to the audio processing tools described below, we defined a set of Max/MSP modules that refer to the MuBu module and access its data by copying to and from Max/MSP data structures.