The BBCut Library
Nick Collins
Rm 159, Middlesex University, Cat Hill, Barnet, Herts, EN4 8HT, UK
email: [email protected], http://www.axp.mdx.ac.uk/~nicholas15/
Abstract
To facilitate work on automated breakbeat cutting, it
was expedient to establish a general framework
promoting better code reusability. This framework is
a publicly released collection of SuperCollider
classes and help files called the BBCut Library.
Whilst notionally for the cutting of breakbeat
samples, its remit is much wider, extending to the use
of algorithmic composition techniques to cut up any
source audio.
The library is based upon a specific hierarchy of
phrase/block/cut, sufficient to implement a wide
variety of cut procedures. Hierarchical information
allows cut-aware effects that can update parameters
in coordination with rhythmic events.
The benefits of the library include the
interchangeable use of any type of synthesis and
source with any cut procedure. This makes it much
simpler to write a new cut procedure which is
immediately able to cut any target signal.
1 Background
The BBCut Library grew out of work on an
algorithm to simulate the automatic cutting of
breakbeats in the style of early jungle or drum and
bass [Collins 2001a]. As the present author began to
write further types of cut sequence generator, it
became apparent that the same basic nested spawn
structure and cut synthesis code were being repeated
each time. Separating cut synthesis from cut choice
gave much better software reuse, and a better
paradigm for thinking about cut decisions.
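By way of illustration, here is a minimal sketch of that separation in plain SuperCollider; it does not use the library's actual classes, and chooseCuts and renderCut are hypothetical stand-ins. A cut-choice function builds a phrase as offset/length pairs, and a separate rendering function consumes them, so either half can be replaced without touching the other.

// A minimal sketch of the cut choice / cut synthesis separation, not the
// library's actual API: chooseCuts stands in for a cut procedure, renderCut
// for whatever synthesis or effect routine consumes the chosen cuts.
(
var chooseCuts, renderCut, phrase;

// cut choice: build one phrase as [offset, length] pairs in beats,
// picking cut lengths until the phrase duration is filled
chooseCuts = { |phraseLength = 4|
	var cuts = List.new, remaining = phraseLength, offset, len;
	while { remaining > 0 } {
		offset = 8.rand * 0.5;	// start point within the source, in beats
		len = [0.5, 1.0, 1.5].choose.min(remaining);
		cuts.add([offset, len]);
		remaining = remaining - len;
	};
	cuts
};

// cut synthesis: a stub that just posts its instructions; any sample
// playback or effect code could be substituted without touching chooseCuts
renderCut = { |offset, length|
	("play source from beat % for % beats".format(offset, length)).postln;
};

phrase = chooseCuts.value;
phrase.do { |cut| renderCut.value(cut[0], cut[1]) };
)

This role separation is what, in the library proper, lets any cut procedure drive any synthesis or effect routine.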
The library is publicly available under the GNU
General Public License and is a collection of
SuperCollider 2 [McCartney 1998] classes with help
files available from the author's web site quoted
above. This paper cannot possibly go into great detail
on every assumption and method of the library, but
should provide a technical introduction in
combination with the many help files and the
commented code itself.
To set this work in the context of algorithmic
composition research, let us borrow terms from
[Pierce 2001], a paper which attempts to use neural
nets to fashion some basic semiquaver resolution
drum and bass drum loops. Existing cut procedures
implemented in the library [Collins 2001b] are those
of an 'active style synthesiser' rather than an
'empirical style modeller'. However, the library itself
is neutral as regards what algorithmic composition
methodology is utilised in cut procedures. It is a tool
to assist research and composition, and one could
imagine an implementation of a neural net trained on
cut patterns from drum and bass classics as a
NeuralNetCutProc. There is little academic research
on dance music or electronica, which are fast-evolving
current styles rather than 'dead' musics for
dissection. Hence the practical, experimental approach
of this work. It is not enough to model early drum
and bass without allowing extrapolations of new
techniques.
The sort of audio cutting assumed here is usually
at haptic or human rhythmic rates, with the obvious
capability to reach inhuman speeds. In the main, then,
there is a macro-level structural view rather than the
microrhythms of granularisation [see Roads 1996, pp
180-184 especially]. Some overlap with granular
techniques is nevertheless evident through a very
familiar effect of current electronica (Aphex Twin,
Squarepusher et al.): extremely fast iterated repeats
of a small chunk of source, implemented in the
library's WarpCutProc. Played at audio-rate speeds,
these repetitions translate to specific pitches from a
noisy wavetable. Further, at inhumanly fast tempi any
cut procedure begins to lose its rhythmic sense and
becomes more like textural granularisation. Human
perceptual limits for this are around semiquavers at
250 bpm; see for instance the table of discrimination
of the ear in [Pierce 2001].
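As a rough illustration of this rhythm-to-pitch transition, the following sketch (again plain SuperCollider, with repeat durations chosen purely for the example rather than taken from the library) posts the repetition rate implied by back-to-back repeats of a given chunk length, together with the duration of a semiquaver at 250 bpm.

// Rough numbers behind the rhythm-to-pitch transition; the repeat
// durations below are illustrative, not values used by the library.
(
[0.25, 0.06, 0.02, 0.005].do { |repeatDur|
	var rate = repeatDur.reciprocal;	// back-to-back repeats recur at 1/duration Hz
	("repeat length % s -> % repeats per second (%)"
		.format(repeatDur, rate.round(0.01), if(rate < 20) { "rhythmic" } { "pitched" })).postln;
};

// a semiquaver at 250 bpm, the perceptual limit quoted above
(60 / (250 * 4)).postln;	// 0.06 seconds per semiquaver
)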
2 A Summary of Capabilities
This section lists some features and philosophies
of the BBCut Library.
(i) Support for composers who wish to experiment
with their own automatic cutting algorithms.
(ii) Separation of effects and synthesis from the
algorithmic composition routine so as to allow any
audio source to be cut by any cut procedure.
(iii) Effects processing on cuts is responsive to the
hierarchical levels.
(iv) Everything works in realtime (this is
SuperCollider after all!). The code has been tested
over many months in live situations.
(v) SuperCollider is a very beautiful language;
functions are easily passed as arguments making