IRIN: Micromontage in a Graphical Sound Editing and Mixing Tool
Carlos Caires
CICM - Centre de Recherche Informatique et Création Musicale
Université de Paris VIII
carlos.caires@wanadoo.fr
2, Rue de la Liberté, 93526 Saint-Denis Cedex 02, France
Abstract
The micromontage technique allows the composer to work on a musical figure point by point, shaping each sound particle with microscopic precision. The software presented in this paper combines graphic and script editing with algorithmic generation and manipulation of sound sequences. It provides several tools to enhance both the creation and the organic development of musical material under this compositional paradigm, within a user-friendly visual environment.
1 Introduction
IRIN is a composition tool implemented as a Max/MSP standalone (Zicarelli 1998), designed to enhance several composition operations encompassed by the micromontage paradigm.¹ It offers a comprehensive user interface containing several editing windows that allow the composer to assemble, view, play back and modify several kinds of sound objects.
Generally speaking, the micromontage technique consists of extracting sound samples from sound files and then rearranging them in time. Each extracted sound sample can be multiplied and transformed through operations such as speed variation, filtering, panning and amplitude-envelope editing (Roads 2001). These very simple sound-processing operations are typically enough to build a wide-ranging sound catalogue. Chosen groups of samples from the pre-composed catalogue are then arranged in time into more complex sound structures on a higher time scale, which can subsequently also be transformed and multiplied through several variation operations (Vaggione 1995; Vaggione 1996).
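The chain of operations just described (excerpting a region from a sound file, applying speed variation, an amplitude envelope and panning, then arranging the resulting copies in time) can be summarized in a short sketch. The following Python/NumPy code is only an illustration of the technique, not IRIN's implementation (IRIN is a Max/MSP standalone); all function names and parameter values are hypothetical.

```python
# Minimal micromontage sketch: extract a region, transform it, place copies in time.
import numpy as np

SR = 44100  # sample rate in Hz

def extract(buffer, start_s, dur_s):
    """Cut a region out of a source buffer (times given in seconds)."""
    a = int(start_s * SR)
    return buffer[a:a + int(dur_s * SR)].copy()

def vary_speed(grain, factor):
    """Resample a grain by linear interpolation (factor > 1 = faster and shorter)."""
    n = max(1, int(len(grain) / factor))
    src_idx = np.linspace(0, len(grain) - 1, n)
    return np.interp(src_idx, np.arange(len(grain)), grain)

def envelope(grain, attack=0.1, release=0.3):
    """Apply a simple attack/sustain/release amplitude envelope."""
    n = len(grain)
    env = np.ones(n)
    a, r = int(n * attack), int(n * release)
    env[:a] = np.linspace(0.0, 1.0, a, endpoint=False)
    env[n - r:] = np.linspace(1.0, 0.0, r)
    return grain * env

def place(out, grain, onset_s, pan=0.5):
    """Mix a mono grain into a stereo output at a given onset (constant-power pan)."""
    i = int(onset_s * SR)
    g = grain[: len(out) - i]
    out[i:i + len(g), 0] += g * np.cos(pan * np.pi / 2)
    out[i:i + len(g), 1] += g * np.sin(pan * np.pi / 2)

# A synthesized source buffer stands in for a loaded sound file.
source = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)

out = np.zeros((3 * SR, 2))                         # 3 s of stereo output
sample = envelope(extract(source, 0.2, 0.25))       # one edited sample
for k, onset in enumerate([0.0, 0.4, 0.9, 1.5]):    # multiply and arrange in time
    place(out, vary_speed(sample, 1.0 + 0.25 * k), onset, pan=k / 3)
```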
This kind of compositional approach calls for a working environment that is able to keep track of all important sound operations. Considering that sound transformation, far from being a mere "effect", is an act of composition, a memory of all the actions and data involved in creating even the tiniest sound particle is needed, so that a consistent proliferation of the musical material can be achieved. All data concerning sound manipulation is therefore stored in and accessed through IRIN's graphic interface. IRIN was designed to offer control over both the micro-time and macro-time levels of composition, allowing the composer to browse smoothly between them with an analogous outlook on the musical material.

¹ IRIN was presented for the first time as "Mixage" (Caires 2003). Besides introducing several new features, this version now runs under Mac OS X.
2 Basic IRIN features and use
The functioning of IRIN can be summarized as follows:
1. Load up to 16 sound files of any size, depending on the available RAM.
2. From each sound Buffer, select a region and edit it in several ways to obtain a Sample.
3. Place Samples on any of the 4 available tracks; tracks are polyphonic, and sound parameters are a track-independent feature.
4. Store edited Samples in a sample library. Every stored Sample (in a track or in the library) can be retrieved for later use or further manipulation.
5. Encapsulate sound sequences into a Figure object and submit them to several variation operations. Store Figures in a library.
6. Encapsulate up to 8 Figures into a Meso-structure object and submit it to several variation operations. Store the Meso-structures in a library (the resulting Sample/Figure/Meso-structure hierarchy is sketched after this list).
7. Add customizable shapes and colours to every Sample. Shapes are used for display in the Timeline window in "shapes view" mode.