A Framework for Developing Signal Processing and Synthesis Algorithms for the Motorola 56001

Kurt J. Hebel
Symbolic Sound Corporation, P. O. Box 2530, Champaign, IL 61825-2530
Tel: (217) 328-6645
Electronic Mail: firstname.lastname@example.org

The developer's version of the Kyma System provides an object-oriented framework for interactively developing and testing digital signal processing and synthesis algorithms written in the assembly language of the Motorola 56001 digital signal processor. There are several advantages to developing code within this framework. The framework handles memory allocation, input and output functions, and task scheduling; the programmer develops short code segments accomplishing a single function and plugs them into the framework to test them. The multiprocessor hardware (the Capybara) provides the computational power to develop and test these segments interactively. The large set of code segments already contained within the framework allows the programmer to quickly test a new algorithm on a variety of input signals and in a variety of contexts.

Introduction

The Kyma System is a highly flexible, open-ended environment for sound computation. Among its strengths are its direct manipulation user interface and real-time software (direct) synthesis capabilities [1, 2, 3, 5]. The Kyma System is composed of software, the Kyma language, and hardware, the Capybara. The Kyma language combines software synthesis, digital recordings, real-time processing of A/D inputs, MIDI, and algorithmic composition in one environment. The Capybara is a high performance parallel processor containing from two to nine Motorola 56001 digital signal processors. The Kyma language is based on objects called Sounds that represent streams of digital audio samples.
Sounds are analogous to functions: all Sounds are either 0-ary functions (and are therefore called atomic Sounds) or functions of one or more other Sounds (therefore called composite Sounds). Because of their functional nature, Sounds can be combined or shared with other Sounds to construct complex networks that can describe any level of detail from signal processing to compositional processes. This functional representation also makes it possible to partition the sample stream computations for execution on the multiple processors of the Capybara. The generality of Sounds allows them to represent any stream of digital audio samples, including entire compositions. Every Sound object is an instance of a specific class. The class defines the structure and behavior shared by all of its instances; the structure contains the parameters of the Sound, and the behavior describes, among other things, how a Sound of that class produces its stream of samples. One can create new classes of Sounds from a combination of other Sounds. In this manner, signal processing and generation algorithms can be defined in terms of pre-existing Sounds. There are times, however, when one would like to develop algorithms directly in the DSP assembly language: combinations of small general-purpose Sounds are not as efficient as a highly specialized monolithic Sound, and it may not always be possible to construct an arbitrary algorithm out of the built-in Sounds. This paper describes extensions to Kyma that provide an object-oriented framework for interactively developing and testing digital signal processing and synthesis algorithms written in the assembly language of the Motorola 56001.

The Kyma Virtual Machine

In the Kyma language, the signal processor is treated as a virtual machine, that is, a computer whose "machine language" consists of digital signal processing and synthesis algorithms (e.g. Sum, Product, Oscillator, SecondOrderFilter, etc.).
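The atomic/composite distinction described above can be sketched in a few lines of Python (an illustrative model only; Kyma itself is written in Smalltalk-80, and the class and method names here are invented):

```python
# Illustrative model of Sounds as functions over sample streams
# (invented names; not Kyma's actual API). Atomic Sounds take no
# Sound arguments; composite Sounds are functions of other Sounds.

class Constant:
    """Atomic (0-ary) Sound: depends on no other Sounds."""
    def __init__(self, amplitude):
        self.amplitude = amplitude

    def next_sample(self):
        return self.amplitude  # stand-in for a real generator

class Product:
    """Composite Sound: a function of two other Sounds."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def next_sample(self):
        return self.a.next_sample() * self.b.next_sample()

# Sounds combine into networks; a sub-Sound may be shared by
# several parent Sounds without being copied.
carrier = Constant(0.5)
envelope = Constant(0.25)
network = Product(carrier, envelope)
print(network.next_sample())  # 0.125
```

Because each node depends only on the outputs of its inputs, disjoint subtrees of such a network can be evaluated independently, which is the partitioning property that makes multiprocessor execution possible.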
When a Sound is played on the Capybara, it is compiled into a program consisting of sequences of these machine language instructions.
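This compilation step (turning a network of Sounds into a linear sequence of virtual machine instructions) can be sketched as a post-order traversal. The following is a hypothetical Python model; the instruction names are taken from the text, and everything else is invented:

```python
# Hypothetical sketch of compiling a Sound network into a linear
# program of virtual-machine instructions. Visiting each Sound after
# its inputs (post-order) guarantees that every instruction's operands
# are computed before the instruction itself runs.

def compile_sound(sound, program=None):
    if program is None:
        program = []
    for inp in sound.get("inputs", []):
        compile_sound(inp, program)   # compile operands first
    program.append(sound["instruction"])
    return program

osc1 = {"instruction": "Oscillator"}
osc2 = {"instruction": "Oscillator"}
filt = {"instruction": "SecondOrderFilter", "inputs": [osc1]}
mix = {"instruction": "Sum", "inputs": [filt, osc2]}
print(compile_sound(mix))
# ['Oscillator', 'SecondOrderFilter', 'Oscillator', 'Sum']
```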
All signal processing and generation algorithms must be made up of some combination of these instructions. Each instruction corresponds to a Sound class in Kyma; the instruction, known as a MicroSound class definition, implements an algorithm that computes the next sample of the Sound's output stream. By using the virtual machine model of the hardware, the Kyma language remains independent of the actual signal processing hardware. Since the virtual machine instructions correspond to Sound classes, compilation is a translation of one network of Sounds into another network containing only Sounds that have corresponding MicroSounds.

[Figure 2. The user MicroSound browser. The top portion of the browser lists all of the known MicroSound class definitions (here: Cube, MagnitudeSquared, OneZeroFilter, Polynomial, Square, ExponentialDecay); the bottom portion is a text editor on the assembly language of the selected MicroSound class definition. Shown here is the definition for a simple exponential signal generator:

ExponentialDecay classDef
    ; Decay the initial value by a constant factor.
    ; Parameters: amplitude decayFactor
    ; Computes amplitude := amplitude * decayFactor.
    move    x:(paramPtr)+,x0
    move    x:(paramPtr)-,y0
    mpyr    x0,y0,a
    move    a,x:(paramPtr)
    move    x0,x:(outputPtr)
    move    x0,y:(outputPtr)
    rts
endClass]

One of the main benefits of using a virtual machine is the insulation of large portions of the software from the specific details of the signal processing hardware. Since Kyma builds programs in the virtual machine's instruction set, adding new signal processing and generation algorithms is the same as extending the instruction set of the virtual machine. The virtual machine interpreter can be divided into two parts: the Executive and the virtual machine instruction set (see Figure 1).
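The division of labor between the Executive and the instruction-set subroutines might be modeled as follows. This is a Python sketch under invented names; on the Capybara both parts are 56001 code, and real instructions receive pointer registers rather than dictionaries:

```python
# Sketch of an Executive dispatch loop over an extensible instruction
# set (invented names). Each instruction is a subroutine taking a
# parameter block and an output buffer, loosely mirroring the
# paramPtr/outputPtr convention described later in the text.

def exponential_decay(params, output):
    amp = params["amplitude"]
    params["amplitude"] = amp * params["decayFactor"]  # update state
    output.append(amp)                                 # emit one sample

# Adding a new algorithm amounts to registering one more subroutine.
INSTRUCTION_SET = {"ExponentialDecay": exponential_decay}

def executive_run(program, n_samples):
    output = []
    for _ in range(n_samples):           # one pass per sample tick
        for opcode, params in program:   # fetch and decode
            INSTRUCTION_SET[opcode](params, output)
    return output

prog = [("ExponentialDecay", {"amplitude": 1.0, "decayFactor": 0.5})]
print(executive_run(prog, 4))  # [1.0, 0.5, 0.25, 0.125]
```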
The Executive is responsible for handling communications among the various DSPs and input/output devices (such as MIDI and the D/A converters). It performs task scheduling and memory allocation as well as fetching and decoding the virtual machine instructions and their operands. The virtual machine instruction set is a collection of 56001 subroutines; each subroutine corresponds to a virtual machine instruction.

[Figure 1. The Kyma virtual machine contains the Executive (task scheduling, memory management, interprocessor communication, device I/O) and the virtual machine instruction set (subroutines implementing each instruction of the virtual machine instruction set).]

Adding New MicroSound Classes

There are several decisions to be made when designing new signal processing or synthesis algorithms. First, one must decide how to partition the problem. The built-in Sounds can often be combined to yield a solution; it may be that only one small portion of the algorithm needs to be coded in assembly language. The object-oriented nature of Sounds and MicroSounds encourages greater reusability of small, general-purpose MicroSounds, while larger, more specialized MicroSounds offer more efficiency. Additionally, the computation of an algorithm made up of many small MicroSounds can occur on different processors in parallel, whereas the computation of a monolithic algorithm must occur on a single processor. All of these considerations must come into the decision of how to factor the algorithm. Second, the assembly language portion of the algorithm must be coded. This involves deciding on the number, kind, and order of parameters in the MicroSound. The user MicroSound browser (see Figure 2) contains a text editor and 56001 assembler for entering the program, obtaining program listings, and detecting syntax errors. Third, the parameters in the Kyma language must be provided to the new MicroSound class when an instance is played. The Sounds DSPProgram (for atomic Sounds) and DSPProgramWithInputs (for composite Sounds) provide ways to instantiate any MicroSound class by setting the initial values of the new MicroSound's parameters (see Figure 3). The initial parameter values are set through a short Smalltalk-80 program that can draw upon any of the features of Smalltalk and Kyma to map user-specified parameters into the values needed by the assembly language subroutine.

[Figure 3. The DSPProgram Sound provides an interface between Kyma and the MicroSound class in the Capybara. The fields in this editor specify the MicroSound class name, the duration of this instance, the number and types of wavetables, and the Smalltalk-80 code for creating the initial values of the MicroSound's parameters.]

An Example

One way to implement an exponential generator is to repetitively multiply some initial amplitude value by a decay factor. For example, if the initial amplitude value is 1 and the decay factor is 0.5, the sequence of output amplitudes will be 1, 0.5, 0.25, 0.125, 0.0625, 0.03125, ... A straightforward, though inefficient, implementation in 56001 assembly language is:

ExponentialDecay classDef
    move    x:(paramPtr)+,x0
    move    x:(paramPtr)-,y0
    mpyr    x0,y0,a
    move    a,x:(paramPtr)
    move    x0,x:(outputPtr)
    move    x0,y:(outputPtr)
    rts
endClass

This algorithm is shown in the MicroSound browser in Figure 2. We have decided that the two parameters, the current amplitude value and the amplitude decay factor, occupy consecutive X memory locations in the MicroSound.
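The arithmetic of this MicroSound can be checked with a small fixed-point model. The following Python sketch is illustrative only: it assumes SignalProcessor maximumAmplitude is 2**23 - 1 (the largest positive value in the 56001's 24-bit fractional format) and models the rounding multiply, mpyr, as a round-half-up fractional multiply:

```python
# Fixed-point model of the ExponentialDecay recurrence (illustrative;
# assumes the maximum amplitude is 2**23 - 1 and models the 56001's
# rounding multiply, mpyr, as a round-half-up fractional multiply).

MAX_AMP = 2**23 - 1

def fractional_multiply(a, b):
    # 24-bit x 24-bit fractional multiply, rounded back to 24 bits
    return (a * b + (1 << 22)) >> 23

def decay_sequence(factor, n):
    amp, out = MAX_AMP, []
    for _ in range(n):
        out.append(amp)
        amp = fractional_multiply(amp, factor)
    return out

factor = round(0.5 * MAX_AMP)     # 50% decay per sample, as in the text
print(decay_sequence(factor, 4))  # [8388607, 4194304, 2097152, 1048576]

# Decay factor for reaching a minimum amplitude m after d samples:
# f = m ** (1 / (d - 1)), so that 1 * f**(d - 1) == m.
def decay_factor(m, d):
    return m ** (1.0 / (d - 1))
```

Here decay_factor mirrors the Smalltalk-80 expression `?minAmp raisedTo: (?duration inSamples - 1) reciprocal` used in the parameter-mapping code below.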
The Executive calls the subroutine with pointer registers set to point to the parameter values (called paramPtr), any input sample streams (called inputPtr), and the address at which to write the output sample (called outputPtr). Lines 1 and 9 of the assembly language are required to name and delineate the algorithm definition. Lines 2 and 3 fetch the current amplitude and the amplitude decay factor values from the parameters, and line 4 computes the next amplitude value. Line 5 saves the amplitude value back into the MicroSound's parameters, and lines 6 and 7 output the left and right channel values. Since this MicroSound is atomic, i.e. it does not operate on an input sample stream, we will use the DSPProgram (rather than DSPProgramWithInputs) Sound. The assembly language expects the amplitude value in the X half of the first parameter word, and the multiplication factor for the exponential decay in the X half of the following word. To generate a decaying exponential that starts at 1 and decays at the rate of 50% per sample, we could use the following Smalltalk-80 code in the DSPProgram Sound to specify the initial parameter values:

| maxAmp factor |
maxAmp := SignalProcessor maximumAmplitude.
factor := (0.5 * maxAmp) rounded.
self initialValueAt: 0 xPut: maxAmp yPut: 0.
self initialValueAt: 1 xPut: factor yPut: 0.

An intuitive way to set the parameters of the exponential would be to specify that the amplitude should decay from the value 1 to some minimum amplitude over some duration. The decay factor f can be found from:

    f = m^(1/(d-1))

where m is the desired final amplitude of the exponential, and d is the desired duration in samples of the exponential. The following Smalltalk-80 code, which uses variables for the duration and the minimum amplitude
value could be used to specify the initial parameter values (Figure 3):

| maxAmp factor |
maxAmp := SignalProcessor maximumAmplitude.
factor := ((?minAmp raisedTo: (?duration inSamples - 1) reciprocal) * maxAmp) rounded.
self initialValueAt: 0 xPut: maxAmp yPut: 0.
self initialValueAt: 1 xPut: factor yPut: 0.

This exponential could be combined with a Product Sound to form an exponential envelope, or used in conjunction with any other Sound in Kyma. A DSPProgram Sound with this code could also be used as the basis for a lightweight exponential signal generator class; instances of the lightweight class would have a graphical editor with fields for the name, duration, and minimum amplitude.

Summary

There are three tools for adding assembly language algorithms to Kyma. The user MicroSound browser is used to enter and debug the assembly language algorithm. The DSPProgram and DSPProgramWithInputs Sounds are used to map user-specified parameters into the parameters needed by the assembly language subroutine. Finally, the lightweight class editor is used to create a new Sound class from the DSPProgram Sound; the new class then has its own icon and visual editor, making it indistinguishable from the built-in Sounds.

Conclusion

There are several advantages to developing code within the framework just described. The framework handles memory allocation, input and output functions, and task scheduling; the programmer develops code segments and plugs them into the framework to test them. The multiprocessor hardware provides the computational power to develop and test these segments interactively. The large set of built-in Sounds within the Kyma language allows the programmer to quickly test a new algorithm on a variety of input signals and in a variety of contexts.

References

[1] Scaletti, C. 1987. "Kyma: An Object-oriented Language for Music Composition." In Proceedings of the 1987 International Computer Music Conference. San Francisco: ICMA.
[2] Scaletti, C., and R. E. Johnson. 1988. "An Interactive Graphic Environment for Object-oriented Music Composition and Sound Synthesis." In Proceedings of the 1988 Conference on Object-Oriented Programming Languages and Systems. New York: Association for Computing Machinery.
[3] Scaletti, C. 1989. Kyma: An Interactive Graphic Environment for Object-oriented Music Composition and Real-time Software Sound Synthesis Written in Smalltalk-80. Technical Report. Urbana: University of Illinois Computer Science Department.
[4] Scaletti, C. 1989. "Composing Sound Objects in Kyma." In Perspectives of New Music, vol. 27, no. 1.
[5] Scaletti, C. 1991. "The Kyma/Platypus Computer Music Workstation." In The Well-Tempered Object: Musical Applications of Object-Oriented Software Technology, S. Pope, editor. Cambridge: MIT Press.
[6] Scaletti, C., and K. Hebel. 1991. "An Object-based Representation for Digital Audio Signals." In Representations of Musical Signals, G. De Poli, A. Piccialli, and C. Roads, editors. Cambridge: MIT Press.
[7] Scaletti, C. 1991. "Lightweight Classes Without Programming." In Proceedings of the 1991 International Computer Music Conference. San Francisco: ICMA.
[8] Motorola, Inc. 1988. DSP56000/DSP56001 Digital Signal Processor User's Manual. Austin, TX: Motorola, Inc.