
    Chapter 16: Composing Using Dynamic Musical Structure

    16.1 Dynamically Creating Musical Structure

    The Common Music function sprout may be used to create musical structure. The template for sprout is:
    (sprout object)

    object represents a Common Music object. In the following examples, we use containers as objects. Each container has its own initialization parameters that follow the object name.

    Example 16.1.1 uses the Common Music function sprout to conditionally create one of two unnamed threads. The threads are unnamed because the container class thread is followed by nil. One thread is created when the value of the note slot is either 60 or 64. The other thread is created when the value of the note slot is 62. When the generator is mixed, the C and E are harmonized by a C major triad and the D is harmonized by a G7 chord. Although this example is very simple from a melodic and harmonic standpoint, it illustrates a very powerful feature of Common Music.

    Example 16.1.1: harmonize.lisp

    (generator harmonize midi-note (start 0 length 3 channel 0 rhythm 1 amplitude .7 duration .75)
      (setf note (item (items 60 62 64)))
      (if (or (= note 60) (= note 64))
          (sprout (thread nil (start time)
                    (dolist (chord-member '(48 52 55))
                      (object midi-note
                              note chord-member
                              rhythm 0
                              duration 1
                              amplitude .5
                              channel 0)))))
      (if (= note 62)
          (sprout (thread nil (start time)
                    (dolist (chord-member '(47 50 53 55))
                      (object midi-note
                              note chord-member
                              rhythm 0
                              duration 1
                              amplitude .5
                              channel 0))))))

    audio file harmonize.mp3
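The conditional logic of Example 16.1.1 can also be traced outside of Common Music. The following Python sketch is an illustration only, not Common Music code; the (time, note) event-list representation and the function name are assumptions made for the sake of the example:

```python
# Sketch of the control flow of Example 16.1.1 (illustration only,
# not Common Music code). The melody advances one beat per note
# (rhythm 1), and every chord member sounds together with its melody
# note, mirroring rhythm 0 inside the sprouted thread.

def harmonize():
    events = []
    time = 0
    for note in (60, 62, 64):          # the C, D, E melody
        events.append((time, note))
        if note in (60, 64):           # C and E take a C major triad
            chord = (48, 52, 55)
        else:                          # D takes a G7 chord
            chord = (47, 50, 53, 55)
        for member in chord:
            events.append((time, member))
        time += 1                      # rhythm 1
    return events

print(harmonize())
```

The branch on the value of note plays the same role here as the two if expressions in the generator: the choice of harmony depends on the melody note just computed.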

    16.2 Encapsulation

    Encapsulation is the act of placing one thing inside another. In the case of Common Music, we use the term encapsulation to mean that a container is created inside of a Common LISP function.

    In Example 16.2.1, DEFUN is used to define a Common LISP function called ENCAPSULATION. The function has two required arguments (THE-NAME and REPETITION) and an optional argument START-TIME, which has a default value of 0. The variable THE-NAME is used in conjunction with the Common Music function name to name the generator. When REPETITION is greater than one, the Common Music function sprout is called, which in turn calls the Common LISP function ENCAPSULATION. Because ENCAPSULATION is called inside of itself, this is an example of recursive encapsulation.

    Example 16.2.1: encapsulation.lisp

    (defun encapsulation (the-name repetition
                          &optional (start-time 0))
      (generator (name the-name) midi-note (start start-time
                                            length 15)
        (setf note (* (item (items 46 42 36 57 in heap)) repetition))
        (setf amplitude (random 1.0))
        (setf rhythm (item (rhythms s s s s. s. s. e e e. e. q. in heap)))
        (setf duration rhythm)
        (setf channel 0)
        (when (> repetition 1)
          (sprout (encapsulation nil (decf repetition) time)))))
    Stella [Top-Level]: (encapsulation 'test 2)
    #<GENERATOR: Test>
    Stella [Top-Level]: mix test 0

    audio file encapsulation.mp3
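The recursive pattern of Example 16.2.1 can be modeled in a language-neutral way. This Python sketch is an illustration only, not Common Music code; the event times and the simplification of the heap and rhythm patterns are assumptions:

```python
import random

# Sketch of the recursion in Example 16.2.1 (illustration only, not
# Common Music code; times and rhythms are simplified assumptions).
# Each call stands in for one generator of 15 notes. While REPETITION
# is greater than 1, the generator "sprouts" another copy of itself
# with REPETITION decremented, mirroring the call to ENCAPSULATION
# inside its own definition.

def encapsulation(repetition, start_time=0, length=15):
    events = []
    for step in range(length):
        # the real example scales a heap choice of 46 42 36 57 by repetition
        pitch = random.choice((46, 42, 36, 57)) * repetition
        events.append((start_time + step, pitch))
    if repetition > 1:
        # recursive "sprout"; here the child simply starts after the parent
        events.extend(encapsulation(repetition - 1, start_time + length))
    return events

print(len(encapsulation(2)))  # 15 notes from each of the two levels -> 30
```

Calling it with a repetition count of 2, as in the Stella transcript above, produces two levels of 15 notes each; each additional repetition adds one more level.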

    Example 16.2.2 is taken from the Common Music Stella Tutorial "Describing Music Algorithmically." This example gracefully demonstrates recursive encapsulation in the creation of a musical fractal.

    Example 16.2.2: sierpinski.lisp

    (defun sierpinski (nam dur key amp rep &optional (tim 0))
      (algorithm (name nam) midi-note (start tim rhythm dur amplitude amp)
        (setf note (item (intervals 0 11 6 from key) :kill t))
        (when (> rep 1)
          (sprout (sierpinski nil (/ rhythm 3) note amp (1- rep) time)))))
    Stella [Top-Level]: (sierpinski 'main 12 'c2 .5 5)
    #<Algorithm: Main>
    Stella [Top-Level]: mix main 0

    audio file sierpinski.mp3
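The self-similar structure of Example 16.2.2 is easy to see when the recursion is modeled directly. In this Python sketch (an illustration only, not Common Music code; taking C2 as MIDI key 36 is an assumption), each level plays the interval pattern 0 11 6 above its key, and every note of that pattern spawns a transposed copy of the pattern one level down, three times faster:

```python
# Sketch of the Sierpinski recursion in Example 16.2.2 (illustration
# only, not Common Music code; C2 is taken to be MIDI key 36, an
# assumption). Each level plays the interval pattern 0 11 6 above its
# key; while REP is greater than 1, every note sprouts a child pattern
# built on that note at one third of the rhythm -- the source of the
# fractal's self-similarity.

def sierpinski(dur, key, rep, tim=0):
    events = []
    for interval in (0, 11, 6):
        note = key + interval
        events.append((tim, note, dur))     # (start, MIDI key, rhythm)
        if rep > 1:
            # recursive "sprout" at the current time, three times faster
            events.extend(sierpinski(dur / 3, note, rep - 1, tim))
        tim += dur
    return events

print(len(sierpinski(12, 36, 5)))
```

Because each of the three notes at a level spawns a complete child pattern, the number of notes grows as 3 at the deepest level, then 3 + 3 times the previous count at each level above it: 3, 12, 39, 120, 363 notes for the five levels of the transcript above.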

    16.3 Compositional Environments

    Quite often, the output of an algorithm does not result in an entire composition. Higher-level compositional environments such as MIDI sequencers or multi-track digital audio workstations may be used to edit, process, or assemble the output of your algorithms. Post-processing the output of compositional algorithms implies that the algorithms themselves are parts of a whole. For this reason, the composer must think carefully about the formal structure of a composition and how the output of an algorithm relates to the composition as a whole.

    Algorithms that output MIDI data may be positioned onto the tracks of a MIDI sequencer. By positioning the data in a time-domain representation, the composer can readily experiment with the placement of events in time and the density of those events. A MIDI sequencer also allows for graphic editing of MIDI data, making small changes to the output of an algorithm straightforward.

    Figure 16.3.1 shows an example of the output of two algorithms positioned in the time-domain representation of a MIDI sequencer.

    Figure 16.3.1

    Because Common Music outputs MIDI data as well as data that may be used as input to sound synthesis languages such as Csound, the composer may wish to assemble a composition using a digital audio workstation that imports MIDI. As in Figure 16.3.1, the user interface of a digital audio workstation generally uses a time-domain representation of audio and MIDI, allowing the composer great freedom in the organization of musical events.

    16.4 Suggested Listening

    The 2nd movement of American Miniatures by David A. Jaffe uses a drum pattern derived from Congolese music, combined with an algorithmic drum improvisation. The latter was done by systematically performing random perturbations on the drum pattern, with the perturbations becoming denser and denser, along with an increase in tempo. The output of this Common Music program was a Music Kit scorefile that was used to drive the Music Kit "mixsounds" program. Each "note" in the file was an individual drum sample. [Jaffe, 1992]

    Eulogy by Mary Simoni integrates processed speech and algorithmic processes to create a tribute commemorating the funeral Mass of her father. Csound was used to process the speech written and spoken by her siblings. Common Music was used to generate a recitative-like accompaniment to the processed speech. The composition was assembled using a MIDI sequencer that supports digital audio. [Simoni, 1997]