A Framework for Developing Signal Processing
and Synthesis Algorithms for the Motorola 56001
Kurt J. Hebel
Symbolic Sound Corporation, P. O. Box 2530, Champaign, IL 61825-2530
Tel: (217) 328-6645 Electronic Mail: symbolic.snd@applelink.apple.com
The developer's version of the Kyma System provides an object-oriented
framework for interactively developing and testing digital signal processing
and synthesis algorithms written in the assembly language of the Motorola
56001 digital signal processor. There are several advantages to developing
code within this framework. The framework handles memory allocation, input
and output functions, and task scheduling; the programmer develops short
code segments, each accomplishing a single function, and plugs them into the
framework to test them. The multiprocessor hardware (the Capybara) provides the computational power to develop and test these segments interactively. The large set of code segments already contained within the framework
allows the programmer to quickly test a new algorithm on a variety of input
signals and in a variety of contexts.
Introduction
The Kyma System is a highly flexible, open-ended
environment for sound computation. Among its
strengths are its direct manipulation user interface
and its real-time software (direct) synthesis capabilities [1, 2, 3, 5].
The Kyma System is composed of software (the
Kyma language) and hardware (the Capybara).
The Kyma language combines software synthesis,
digital recordings, real-time processing of A/D inputs, MIDI, and algorithmic composition in one
environment. The Capybara is a high performance
parallel processor containing from two to nine Motorola 56001 digital signal processors.
The Kyma language is based on objects called
Sounds that represent streams of digital audio samples. Sounds are analogous to functions: every
Sound is either a 0-ary function (and is therefore
called an atomic Sound) or a function of one or
more other Sounds (and is therefore called a composite
Sound). Because of their functional nature, Sounds
can be combined or shared with other Sounds to
construct complex networks that can describe any
level of detail from signal processing to compositional processes [6]. This functional representation
also makes it possible to partition the sample
stream computations for execution on the multiple
processors of the Capybara.
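As a rough illustration of this atomic/composite distinction, consider the following Python sketch. The class names and the `sample` protocol are invented for the sketch and are not Kyma's actual interface:

```python
# Hypothetical sketch of Kyma-style Sound objects: atomic Sounds are
# 0-ary sample generators; composite Sounds are functions of other Sounds.
import math

SAMPLE_RATE = 44100.0


class Oscillator:
    """Atomic Sound: takes no input Sounds, produces samples directly."""

    def __init__(self, freq):
        self.freq = freq

    def sample(self, n):
        return math.sin(2.0 * math.pi * self.freq * n / SAMPLE_RATE)


class Sum:
    """Composite Sound: a function of two other Sounds."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def sample(self, n):
        return self.a.sample(n) + self.b.sample(n)


# Because a Sound can be shared among several parents, the result is a
# network (a directed acyclic graph) rather than a strict tree.
shared = Oscillator(440.0)
mix = Sum(shared, Sum(shared, Oscillator(660.0)))
print(mix.sample(0))  # -> 0.0 (all sine oscillators start at zero)
```

The functional, side-effect-free structure of such a network is what makes it straightforward to partition the sample-stream computation across multiple processors.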
The generality of Sounds allows them to represent
any stream of digital audio samples including entire
compositions [4].
Every Sound object is an instance of a specific
class. The class defines the structure and behavior
shared by all of its instances; the structure contains
the parameters of the Sound, and the behavior describes, among other things, how a Sound of that
class produces its stream of samples.
One can create new classes of Sounds from a
combination of other Sounds. In this manner, signal
processing and generation algorithms can be defined in terms of pre-existing Sounds [7].
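For example, a new Sound class might simply wrap a fixed network of existing Sounds. In this hypothetical Python sketch, `Tremolo`, `Oscillator`, and `Product` are stand-ins, not Kyma's actual class library:

```python
# Hypothetical sketch: defining a new Sound class as a fixed combination
# of pre-existing Sounds. Users of Tremolo need not know that it is
# built from an Oscillator and a Product internally.
import math

SAMPLE_RATE = 44100.0


class Oscillator:
    def __init__(self, freq):
        self.freq = freq

    def sample(self, n):
        return math.sin(2.0 * math.pi * self.freq * n / SAMPLE_RATE)


class Product:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def sample(self, n):
        return self.a.sample(n) * self.b.sample(n)


class Tremolo:
    """A new Sound class defined entirely in terms of existing Sounds:
    a carrier oscillator amplitude-modulated by a slow oscillator."""

    def __init__(self, carrier_freq, rate):
        self._network = Product(Oscillator(carrier_freq), Oscillator(rate))

    def sample(self, n):
        return self._network.sample(n)


trem = Tremolo(440.0, 5.0)
print(trem.sample(0))  # -> 0.0
```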
There are times, however, when one would like to
develop algorithms directly in the DSP assembly
language: combinations of small general-purpose
Sounds are not as efficient as a highly specialized
monolithic Sound, and it may not always be possible to construct an arbitrary algorithm out of the
built-in Sounds.
This paper describes extensions to Kyma that provide an object-oriented framework for interactively
developing and testing digital signal processing and
synthesis algorithms written in the assembly language of the Motorola 56001.
The Kyma Virtual Machine
In the Kyma language, the signal processor is
treated as a virtual machine, that is, a computer
whose "machine language" consists of digital signal processing and synthesis algorithms (e.g. Sum,
Product, Oscillator, SecondOrderFilter, etc.).
When a Sound is played on the Capybara, it is
compiled into a program consisting of sequences of
these machine language instructions.
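A toy version of this compilation step can be sketched as a post-order traversal of the Sound network that emits one virtual-machine instruction per node, so that each Sound's inputs are computed before the Sound itself. The `Node` class, opcode names, and traversal below are illustrative only, not Kyma's actual compiler:

```python
# Toy illustration: "compile" a Sound tree into a linear sequence of
# virtual-machine instructions (Oscillator, Product, Sum, ...) by
# post-order traversal, so every input is computed before its consumer.

class Node:
    def __init__(self, opcode, *inputs):
        self.opcode = opcode
        self.inputs = inputs


def compile_sound(sound, program=None):
    if program is None:
        program = []
    for inp in sound.inputs:
        compile_sound(inp, program)   # inputs first (post-order)
    program.append(sound.opcode)      # then the Sound itself
    return program


tree = Node("Sum",
            Node("Oscillator"),
            Node("Product", Node("Oscillator"), Node("Oscillator")))
print(compile_sound(tree))
# -> ['Oscillator', 'Oscillator', 'Oscillator', 'Product', 'Sum']
```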
ICMC 509