An Implementation of Real-time Granular Synthesis on a Multi-processor Network

Takebumi ITAGAKI#&, Peter D. MANNING#, Alan PURVIS&
Durham Music Technology Group*
# Department of Music, University of Durham, Palace Green, DURHAM DH1 3RL, UK, phone +44 191 374 3221, FAX +44 191 374 3219
& School of Engineering, University of Durham, South Road, DURHAM DH1 3LE, UK, phone +44 191 374 2570, FAX +44 191 374 3838

ABSTRACT
This project is part of on-going research into real-time audio synthesis using a multi-transputer based audio processor. An implementation of real-time granular synthesis on the 160-transputer network is presented. An interface for the control of granular synthesis from a MIDI keyboard is under development, with a view to creating a flexible performance tool.

1. INTRODUCTION
1.1 The 160-Transputer Network
This research group has presented a series of papers concerning multi-transputer audio processors. At the 1990 ICMC, a prototype module consisting of 16 x T800 transputers was demonstrated [Bailey et al.]. A T800 transputer has four communication links, which permit the processors to be connected as a ternary tree. Four transputers form a single element, and four of these elements are situated on a single 3U printed circuit board, 16 transputers in total. The original rationale behind this design was that a real-time system should not require a large amount of memory for storage; therefore, there is no memory other than the 4 kbytes integrated into each transputer, 64 kbytes per board in total. The prototype was subsequently developed into an audio processor using 10 of these cards, i.e. 160 transputers. An application of real-time additive synthesis for the network, using recursive sine oscillators, was reported and demonstrated at the 1994 ICMC [Itagaki et al.].

1.2 Granular Synthesis
Granular synthesis was originally proposed as a representation of "acoustical quanta" [Gabor].
More recently, applications in digital form and the granulation of sampled sound have proved of increasing interest to composers. These developments have led at least one commercial company to manufacture a custom-designed audio platform for their wider exploitation [Bartoo et al.]. The signal processing requirements of this application set tough demands for both hardware and software engineers. In particular, mapping suitable algorithms onto a variety of parallel architectures provides an interesting challenge.

2. IMPLEMENTATION
2.1 Preliminary Experience
Our research group has been studying granular synthesis as part of on-going investigations into real-time audio synthesis using a multi-transputer based audio processor. This has been inspired by the multiprocessor application of granular synthesis reported in [Bartoo et al.]. In the initial phase of this project, the granulation of a short, fixed-length sound sample was implemented on a segment of our tree-structured network. The parameters that describe a grain (its amplitude, frequency, frequency range, ramp or amplitude envelope, duration and duration range) are controlled from a PC keyboard. From this experiment we ascertained that it is possible to implement nine voices of real-time granular synthesis, using a set of short fixed sound samples, on a sixteen-transputer network. As a first step towards improving control flexibility, two coefficients, amplitude and frequency, can be communicated to the network through a MIDI-to-Transputer interface. Possibilities exist for controlling other key parameters through MIDI using programme change or system exclusive messages.

* Durham Music Technology Group is a collaboration between the School of Engineering and the Department of Music at the University of Durham.

ICMC PROCEEDINGS 1995    493
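The grain parameter set listed above (amplitude, frequency, ramp envelope, duration) can be sketched as follows. The paper's implementation runs on transputers; this is only an illustrative Python analogue, and the function and parameter names are assumptions, not the authors' code.

```python
import math

def make_grain(amplitude, frequency, duration_ms, ramp_ms, sample_rate=44100):
    """Synthesise one grain: a sine tone shaped by linear attack/decay ramps.

    Parameter names are illustrative; the paper controls amplitude,
    frequency, ramp (amplitude envelope) and duration per grain.
    """
    n = int(sample_rate * duration_ms / 1000.0)
    r = max(1, int(sample_rate * ramp_ms / 1000.0))
    samples = []
    for i in range(n):
        # linear ramp up, sustain at 1.0, linear ramp down
        env = min(1.0, i / r, (n - 1 - i) / r)
        samples.append(amplitude * env *
                       math.sin(2.0 * math.pi * frequency * i / sample_rate))
    return samples

# e.g. a 30 ms grain at 440 Hz with 5 ms ramps
grain = make_grain(amplitude=0.5, frequency=440.0, duration_ms=30, ramp_ms=5)
```

A granular texture would then be built by scheduling many such grains, with frequency and duration drawn from the specified ranges.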

It is anticipated that the final configuration of this system will allow the real-time performance of at least 13 voices of granular synthesis on the 160-transputer network, controlled either via MIDI or the host PC, with the option to process source data dynamically as a stream of digitised sound information rather than as a fixed-length sample, as is the case at present.

2.2 Revised Implementation
In the prototype configuration, some of the transputers are assigned to sound sample storage. This has not proved particularly efficient, since only 800 samples in 32-bit floating-point format can be accommodated per transputer. It also runs against the original rationale that a real-time system should not require large memory storage; however, it has become clear that the minimum memory requirements for granular synthesis are greater than those provided internally by the configuration of the 16-transputer cards. In the revised configuration, a transputer with 128 or 256 kbytes of external memory is connected to three transputers at the tree-bottom level, which share this extra memory. At present, nine of these transputers with external memory are available; they cover a third of the network and provide at least 27 voices. The sound samples stored in the memory are refreshed through the other end of the network, as shown in figure 1.

[Figure 1: Diagram of the granular synthesis system, showing transputers without external memory, transputers with external memory, the extent of a PCB, the MIDI keyboard and host, and the DAC.]

3. SUMMARY
Real-time granulation of sound has been implemented on a part of the 160-transputer network. A preliminary configuration, with some transputers assigned to memory storage, works with short fixed sound samples. The revised configuration, using transputers with external memory, should allow more flexible applications, such as time-shifting and time-stretching of long sampled sounds.

4.
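The storage limit quoted above follows directly from the transputer's 4 kbytes of on-chip RAM. A quick check of the arithmetic (sample and memory sizes as stated in the text; the interpretation of the unused remainder as code/workspace is an assumption):

```python
ON_CHIP_BYTES = 4 * 1024   # 4 kbytes internal RAM per T800
SAMPLE_BYTES = 4           # one sample in 32-bit floating-point format

# Samples that would fit if the whole on-chip RAM held sound data:
max_samples = ON_CHIP_BYTES // SAMPLE_BYTES        # 1024

# The paper reports 800 samples per transputer, i.e. about 3.1 of the
# 4 kbytes; presumably the remainder is needed for code and workspace.
stored_bytes = 800 * SAMPLE_BYTES                  # 3200 bytes

# External memory in the revised configuration (smaller option),
# shared by three tree-bottom transputers:
external_samples = 128 * 1024 // SAMPLE_BYTES      # 32768 samples
```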
ACKNOWLEDGEMENTS
The authors acknowledge the generous donation of processors from INMOS Ltd., UK, and financial support from the University of Durham.

REFERENCES
[Bartoo et al.] Bartoo, T., Murphy, D., Ovans, R. and Truax, B. 1994. "Granulation and Time-shifting of Sampled Sound in Real-time with a Quad DSP Audio Computer System." Proceedings of the 1994 International Computer Music Conference, pp. 335-337.
[Bailey et al.] Bailey, N.J., Bowler, I., Purvis, A. and Manning, P.D. 1990. "An Highly Parallel Architecture for Real-time Music Synthesis and Digital Signal Processing." Proceedings of the 1990 International Computer Music Conference, pp. 169-171.
[Gabor] Gabor, D. 1947. "Acoustical Quanta and the Theory of Hearing." Nature 159 (4004): 591-594.
[Itagaki et al.] Itagaki, T., Purvis, A. and Manning, P.D. 1994. "Real-time Synthesis on a Multi-processor Network." Proceedings of the 1994 International Computer Music Conference, pp. 382-385.

A MAX Counterpoint Generator for Simulating Stylistic Traits of Stravinsky, Bartok and Other Composers

Malcolm E. Bell
Prairie Bible College, P.O. Box 4116, Three Hills, Alberta, Canada T0M 2N0, (403) 443-5511

ABSTRACT
A real-time, interactive MAX program has been developed to simulate the contrapuntal characteristics of compositions by numerous composers, including Palestrina, Bach, Stravinsky and Bartok, and also to explore new contrapuntal styles. While the program user performs a melodic line on a MIDI keyboard, the MAX program generates the desired contrapuntal line to accompany it. (Alternatively, the program may receive the melodic line from a MIDI sequencer.) The program allows the user to specify vertical and horizontal intervals, rhythmic patterns, and all occurrence probabilities. Any of these parameters may be altered during a real-time performance.

INTRODUCTION
While following a melodic input, from either a live MIDI synthesizer performance or a stored MIDI sequencer file, this interactive MAX program can generate an accompanying contrapuntal line in real-time. The user can specify musical parameters for vertical intervals between the melodic input and the generated counterpoint line, horizontal intervals between adjacent pitches in the counterpoint line, and rhythmic patterns in the counterpoint line. If the melodic input and the user-specified intervallic and rhythmic parameters all adhere to a particular style of counterpoint (i.e. Palestrina, Bach or Bartok), then the program's output will be characteristic of the desired contrapuntal style. The successful simulation of a particular contrapuntal style depends first on a proper contrapuntal analysis of the style, in order to provide the musical parameters necessary to imitate it. Any contrapuntal musical example may be analyzed to determine its most frequently used vertical and horizontal pitch class intervals. For example, in Bartok's Sixth Quartet, mvt.
I, measures 214-221, the most frequent vertical interval classes are 3, 8, 0, and 6 (semitones), while the most frequent horizontal interval classes are 1, 2, 3, and 4 (semitones). In Stravinsky's Sonata for Two Pianos, measures 1-13, the most frequent vertical interval classes are 7, 11, 2, and 3 (semitones), while the most frequent horizontal interval classes are 2, 4, 1, and 3 (semitones). Rhythmic patterns are also a distinguishing characteristic of a contrapuntal style. For example, Stravinsky's Sonata for Two Pianos, measures 1-13, employs four successive sixteenth-notes 17% of the time, two successive eighth-notes 27% of the time, two sixteenth-notes followed by an eighth-note 21% of the time, and a dotted eighth-note followed by a sixteenth-note 8% of the time.

PARAMETER SPECIFICATION
Prior to the real-time generation of the counterpoint line, the user must specify 8 rhythmic patterns (for the free counterpoint mode of operation), 4 vertical pitch intervals, and 4 horizontal pitch intervals. Any or all of these parameters may be changed at any time during program operation. For example, the program might first be loaded with values to generate counterpoint in the style of a Bach invention. While the program is generating Bach counterpoint, the user can instantly change all parameters, immediately switching to a Bartok contrapuntal style, or the user can gradually change one parameter at a time, slowly transforming the output into a Stravinsky contrapuntal style, or a newly-discovered contrapuntal style.
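The weighted pattern probabilities above amount to drawing each rhythmic unit from a discrete distribution. The program itself is a MAX patch; the following Python sketch only illustrates the idea, using the measured Stravinsky percentages (the remaining 27% of patterns is lumped together here as "other" for illustration):

```python
import random

# Pattern frequencies measured in Stravinsky's Sonata for Two Pianos,
# mm. 1-13 (percentages from the analysis above).
PATTERNS = {
    "four sixteenths": 17,
    "two eighths": 27,
    "two sixteenths + eighth": 21,
    "dotted eighth + sixteenth": 8,
    "other": 27,
}

def choose_pattern(rng=random):
    """Draw one rhythmic pattern according to its measured weight."""
    names = list(PATTERNS)
    weights = [PATTERNS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Sampling many times reproduces the style's rhythmic profile.
counts = {name: 0 for name in PATTERNS}
for _ in range(10000):
    counts[choose_pattern()] += 1
```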

[Figure 1. Block diagram of the MAX counterpoint generator: weighted vertical intervals V1-V4 and horizontal intervals feed the pitch generator; weighted rhythmic patterns R1-R8 feed the rhythm generator; note-against-note and free counterpoint modes; each candidate pitch is tested against the intervallic criteria before pitch and rhythm are combined into the counterpoint output.]

PITCH AND RHYTHM GENERATORS
The vertical and horizontal intervals, along with their distribution probabilities, are used to generate the contrapuntal pitches for a specific style. Each new pitch from the input melody triggers one of the four weighted vertical intervals. The chosen interval is added to (or subtracted from) the input melody pitch to generate the next contrapuntal pitch, which is then checked against the four allowable horizontal intervals. If this pitch does not meet both vertical and horizontal intervallic criteria, it is discarded and replaced by a second pitch calculated from one of the remaining three weighted vertical intervals, and likewise tested for horizontal compatibility. If this second pitch does not meet the intervallic criteria, it, too, is discarded, and replaced by a pitch calculated from one of the two remaining vertical intervals. As soon as a newly-generated pitch meets both vertical and horizontal intervallic criteria, it is passed on to the output, where the rhythm generator governs the exact moment at which the pitch will sound. If no pitch exists for which there is both vertical and horizontal compatibility, the user may opt to allow the program to momentarily ignore the horizontal interval parameter, allowing a pitch to be generated, or may instead have the program insert a rest at that point in the counterpoint line. The user may choose between note-against-note counterpoint and free counterpoint. For note-against-note counterpoint, the counterpoint line is given the same rhythm as the input melody line.
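The generate-and-test loop described above can be sketched as follows. The actual program is a MAX patch; this Python analogue is illustrative only, and the function signature is an assumption:

```python
import random

def next_pitch(melody_pitch, prev_cp_pitch, vertical, horizontal, rng=random):
    """Pick the next counterpoint pitch, mirroring the paper's loop:
    draw a weighted vertical interval, add or subtract it from the
    melody pitch, test the result against the allowed horizontal
    intervals, and fall back to the remaining vertical intervals.

    `vertical` maps interval (semitones) -> weight; `horizontal` is the
    set of allowed interval classes in the counterpoint line.
    Returns None to signal a rest (one of the paper's two fallbacks).
    """
    candidates = dict(vertical)
    while candidates:
        intervals = list(candidates)
        weights = [candidates[i] for i in intervals]
        iv = rng.choices(intervals, weights=weights, k=1)[0]
        # the chosen interval may be added to or subtracted from the melody
        for pitch in (melody_pitch + iv, melody_pitch - iv):
            if abs(pitch - prev_cp_pitch) % 12 in horizontal:
                return pitch
        del candidates[iv]     # discard and try a remaining interval
    return None                # no compatible pitch: insert a rest

# e.g. the Bartok-style interval classes from the analysis above
pitch = next_pitch(60, 57, vertical={3: 4, 8: 3, 0: 2, 6: 1},
                   horizontal={1, 2, 3, 4})
```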
In the free counterpoint mode, the program generates a variety of contrapuntal rhythmic patterns in accordance with the specified style. For this mode of operation, the program contains a table of 52 rhythmic patterns employing quarter, eighth, sixteenth, triplet and sextuplet patterns. The user may add additional patterns to this table. Up to eight weighted rhythmic patterns, characteristic of the style to be generated, may be specified.

SUMMARY
The counterpoint line is generated in real-time and output for playback on a MIDI synthesizer. The rhythm generator governs the timing and duration of each pitch, and the pitch generator governs the value of each pitch. The velocity (volume) of each counterpoint pitch matches the velocity of its companion pitch in the input melody line. This program has realistically simulated contrapuntal styles characteristic of Palestrina, Bach, Stravinsky and Bartok, and has proven valuable for exploring new contrapuntal relationships.

REFERENCES
Clark, Thomas. Arrays. Wm. C. Brown Communications, Inc., 1992.
Puckette, Miller and Zicarelli, David. An Interactive Graphic Programming Environment: MAX. Opcode Systems, Inc., 1990-91.

PPP - a Framework for Algorithmic Composition

Thomas Neuhaus
Institut für Computermusik und elektronische Medien, Folkwang-Hochschule Essen
Phone (49)-201-4903170

Abstract
PPP is a language designed as a framework for the evaluation and testing of compositional algorithms. It is based on a table-oriented model of abstract musical scores; the output can thus easily be converted into formats like CSound score files or standard MIDI files. It is easily expandable at the source-code level (C++) through the use of simple parser tables. This approach implies the need to distinguish between the implementation language and the application language for a given compositional algorithm. On the other hand, it helps the programmer to concentrate on the design of compositional algorithms without having to bother with more complicated programming issues, e.g. the scanning and parsing of input text.

The Score Model
A score in PPP is regarded as a collection of sequences, each starting at a virtual time of 0. A sequence is a time-ordered collection of events of similar inner structure. Each event consists of a time field indicating the entry delay to its successor and an arbitrary number of parameter fields describing the behaviour of the event. Events in the same sequence have the same underlying structure, i.e. the same number of parameter fields, whose positions indicate similar semantics. A PPP input file describes
1. the structure of the events in each sequence
2. how the events of these sequences get their exact values
Several such descriptions may exist, resulting in different sequences. The first part of such a description consists of a declaration of parameters and the (symbolic) values which the corresponding parameter fields of the resulting events may take. The second part references a sequence of algorithms which are to be carried out in order to actually generate the events.

Parameter- vs.
Event-oriented Algorithms
Algorithms supported by PPP can broadly be divided into two classes. One class regards the succession of values in a single parameter, independent of whatever happens in the other parameters, as the main compositional focus. This approach resembles the polyphony of parameters in classical serialism (although the algorithms themselves need not be serial in nature). Examples of this class are permutations, tendency masks, weighted distributions of values and the like. The other class of algorithms regards the whole event (or subsequences thereof) as the main entity and thus always generates complete and self-contained events. Examples are sequencing, Markov chains, vector manipulations and the like.
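A tendency mask is a simple representative of the parameter-oriented class: it generates values for one parameter alone, between bounds that evolve over the course of the sequence. PPP itself is implemented in C++; this Python sketch, with illustrative names, only shows the idea:

```python
import random

def tendency_mask(n, lo_start, lo_end, hi_start, hi_end, rng=random):
    """Parameter-oriented generation: draw n values for ONE parameter,
    each uniform between linearly interpolated lower and upper bounds
    (a tendency mask), independent of all other parameters."""
    values = []
    for i in range(n):
        f = i / (n - 1) if n > 1 else 0.0
        lo = lo_start + f * (lo_end - lo_start)   # moving lower bound
        hi = hi_start + f * (hi_end - hi_start)   # moving upper bound
        values.append(rng.uniform(lo, hi))
    return values

# e.g. a pitch parameter whose register widens from [48, 52] to [60, 84]
pitches = tendency_mask(16, lo_start=48, lo_end=60, hi_start=52, hi_end=84)
```

An event-oriented algorithm, by contrast, would emit complete events (all parameter fields at once), as with a Markov chain over whole events.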

Both approaches are supported in PPP, although the different nature of these classes of algorithms does not permit them to be used in parallel. In both classes it is possible to refer to previously generated values or events and use them as arguments for subsequent algorithms.

The Parser Interface
The language PPP itself does not permit a composer to formulate algorithms of his own. It was designed to simplify the use, and to evaluate the usefulness, of algorithms in conjunction or combination with others. The inclusion of self-written algorithms, however, is supported and encouraged at the source-code level. Great care has been taken in the design of the language parser, which is completely table-driven. The table that lists all available algorithms has been further simplified, so that the addition of a simple data structure to this table (and a recompilation, of course) is enough to integrate any user-defined algorithm into PPP. (Space is too short here to give a more detailed description of this interface.) As PPP is available in source code and expected to run on several architectures, the author expects the number of algorithms to grow over time. Eventually PPP will then appear as a large toolbox of compositional algorithms.

Converters
The general approach of the score model (and thus the output) of PPP makes it necessary to provide versatile and flexible converter programs that can transform such output into various formats recognised by standard tools, which in turn use them to produce sound, MIDI data, common western music notation or other representations of the intended music. At present, two such converters exist: one produces standard MIDI files, the other produces CSound score files.
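A converter's core task is a walk over PPP's table-oriented score model, turning entry delays into the absolute times that a MIDI file or CSound score needs. A minimal Python analogue of that model (PPP itself is C++; the field names "pitch" and "amp" are illustrative, not part of PPP):

```python
# Each event carries an entry delay to its successor plus a fixed set
# of parameter fields; a sequence starts at virtual time 0.
def make_event(entry_delay, **fields):
    return {"entry_delay": entry_delay, **fields}

def sequence_to_absolute(events):
    """Convert entry delays into absolute start times from virtual
    time 0, as a backend converter would before writing its format."""
    t, out = 0.0, []
    for ev in events:
        out.append({"time": t,
                    **{k: v for k, v in ev.items() if k != "entry_delay"}})
        t += ev["entry_delay"]
    return out

seq = [make_event(0.5, pitch=60, amp=0.8),
       make_event(0.25, pitch=62, amp=0.7),
       make_event(1.0, pitch=64, amp=0.6)]
score = sequence_to_absolute(seq)
```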
These converter programs share a lot of code and are written in quite a modular way, so that extending them or adding new backends for other output formats should not be too difficult (though not as simple as adding algorithms to PPP, due to the peculiarities of the different output formats).

Educational Issues
PPP was developed at the Institut für Computermusik und elektronische Medien at the Folkwang-Hochschule in Germany. This institute offers a program for the education of composers, many of whom have no prior knowledge of computer science or programming. It is the intent of the institute to supply these students with at least some basic knowledge of these topics. But trying to make practical use of basic programming techniques to compose music is hardly possible and often leads to frustration on the students' side. Within the framework of PPP, however, the students may concentrate on the development of the algorithms themselves without having to bother about the environment into which they are embedded. This enables the students to directly evaluate and incorporate their algorithms in concrete compositional projects.

Final Remarks
By the time of this ICMC, PPP and the above-mentioned converter programs will be freely available via anonymous ftp (URL: or... /pub/unix) under the terms and conditions of the General Public License of the Free Software Foundation. The author would like to thank everyone who made this development possible, especially Prof. Dirk Reith for his critical advice and helpful hints.

Graphical Control of Unit Generator Processes on the MIDAS System: A Digital VCS-3 Demonstrator

Ross Kirk, Paul Whittington, Andy Hunt, Richard Orton
Music Technology Group, University of York, YO1 5DD, UK

ABSTRACT: The system described in this paper extends the unit generator concept to incorporate graphical unit generators which can be freely integrated into networks of audio unit generators. These new elements can therefore associate graphical operations with electroacoustic processing, for image output or user input, extending composition and performance into the multimedia domain. Based on the MIDAS system, the networks of unit generators can be partitioned across heterogeneous multiprocessor networks, some nodes specialised for image generation, some for audio processing. A screen-based emulation of the VCS-3 analogue synthesiser, forming a demonstrator of the concept, is also described.

Introduction
The MIDAS system (Musical Instrument Digital Array Signal processor) is based on the unit generator paradigm familiar from the Music 'N' languages. MIDAS extends this concept into a real-time performance medium by providing facilities to distribute a network of unit generators (known in MIDAS as ugps, unit generator processes) across the nodes of a multiprocessor network (Kirk, Orton, 1990). These nodes provide the computational power needed to support the data throughput necessary for real-time operation. Mechanisms are provided to synchronise the ugps so that samples are processed across the network in a coherent manner. Nodes intercommunicate by means of standardised protocols. These are network messages which allow ugp structures to be built, modified, controlled and to exchange data at run time. Musical applications (in the form of ugp networks) can be defined at run time, entirely through the use of the protocols; applications are not compiled before being run, as they are in other systems.
MIDAS is a heterogeneous system: processor nodes can be of different types, perhaps specialised for particular tasks (eg graphics), as long as they can support the ugp types allocated to them, and as long as they can interact correctly with the protocols. The current implementation of MIDAS exists firstly as a prototyping environment running as an application under UNIX on Silicon Graphics (SGI) machines. The intention is that this would normally be used to develop musical applications (ie protocol structures) before they are loaded onto a multiprocessor system for performance, although limited real-time operation is available on the SGIs. A multiprocessor environment based on the use of transputers has been produced, and work is in hand to provide a DSP multiprocessor hosted on PCs. This paper describes work which extends the unit generator process concept to include graphical functions, and illustrates the way in which these may be used to provide screen-based control panels for audio applications. An accompanying paper (Kirk et al 1995) describes the use of graphical unit generators to provide visual output within multimedia compositions.

Implementation of Graphical Unit Generator Processes
The graphical ugps are based on a portable graphics library which has itself been used for graphical applications running on a number of machines, including PCs, Atari Falcons and Silicon Graphics workstations. The library provides common graphical objects (lines, rectangles, mouse click buttons, mouse co-ordinate functions etc) which are defined in terms of a minimal set of pixel-based primitives. To move the graphical environment to another platform, it is only necessary to rewrite the primitive functions; the higher-order graphical objects will then transfer directly to the new machine.
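The port-by-rewriting-primitives design described above can be sketched as a two-layer structure. The class and function names below are illustrative, not the actual library's API:

```python
# Higher-order objects are written only against a minimal primitive
# interface, so porting means rewriting just the primitive layer.
class PixelPrimitives:
    """Per-platform layer: only this must be rewritten to port."""
    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]

    def set_pixel(self, x, y, colour):
        self.pixels[y][x] = colour

def draw_line(prims, x0, y0, x1, y1, colour):
    """Higher-order object built solely on the primitives
    (horizontal/vertical lines only, for brevity)."""
    if y0 == y1:
        for x in range(min(x0, x1), max(x0, x1) + 1):
            prims.set_pixel(x, y0, colour)
    else:
        for y in range(min(y0, y1), max(y0, y1) + 1):
            prims.set_pixel(x0, y, colour)

def draw_rectangle(prims, x0, y0, x1, y1, colour):
    """Built from lines, which are built from primitives."""
    draw_line(prims, x0, y0, x1, y0, colour)
    draw_line(prims, x0, y1, x1, y1, colour)
    draw_line(prims, x0, y0, x0, y1, colour)
    draw_line(prims, x1, y0, x1, y1, colour)

screen = PixelPrimitives(32, 32)
draw_rectangle(screen, 2, 2, 10, 8, colour=1)
```

Only `PixelPrimitives` touches the hardware; `draw_line`, `draw_rectangle` and the rest carry over unchanged to a new machine.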
We have encapsulated some of the higher-order graphical objects within the standard ugp data structure format, so that these objects can be created and integrated within a ugp network just like any other unit generator. Ugp

networks can thus provide sonic and visual output, forming aspects of dynamically variable electroacoustic instruments. The 'slider' unit generator is a simple example of a graphical ugp created in this way. Its ugp inputs control the X and Y position of the slider and the maximum and minimum output values. The output of the ugp is the numeric value obtained by controlling the slider position with the mouse. Because the ugp adopts the standard ugp format, the output can be connected into other ugp inputs (eg oscillators) by the use of appropriate protocols, to control various associated parameters (eg amplitude). Protocols can be used to create a bank of such sliders by instantiating multiple copies of the slider ugp, each with a unique X and Y position. Like any other ugp network, this network of graphic ugps is dynamic. A slider can be moved around the screen using the mouse, and the overall appearance could be changed by (dynamically) deleting some slider ugps and creating new ones. A hierarchical set of control panels could be selectively displayed in this way, controlled by mouse clicks on graphical control buttons. A number of these ugps are used in the VCS-3 sample application described below. These graphical ugps presently run on the SGI prototyping environment. We plan to integrate SGIs as specialised graphics nodes into the multiprocessor network, so that SGI-based graphics applications can interact with sound synthesis and transformation ugp networks running on DSP nodes.

A MIDAS Digital VCS-3: a Demonstrator for Graphical UGPs
The intention of this demonstrator is to emulate the operation of the VCS-3 analogue synthesiser, and thus prove the concept of the integration of graphical and sound generation ugps. A set of graphical ugps has been produced which provides the major elements of the VCS-3 front panel on the graphics screen.
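The slider ugp described above can be sketched as a unit generator whose inputs set screen position and output range, and whose output is the value at the current mouse-controlled position. The class and field names here are illustrative, not the actual MIDAS ugp structure:

```python
class SliderUgp:
    """Sketch of a graphical ugp: inputs are X/Y screen position and
    the min/max output values; the output is the numeric value at the
    current slider position."""
    def __init__(self, x, y, out_min, out_max):
        self.x, self.y = x, y              # screen position inputs
        self.out_min, self.out_max = out_min, out_max
        self.position = 0.0                # normalised 0..1, set by mouse

    def set_position(self, p):
        """Mouse handler: clamp the normalised slider position."""
        self.position = min(1.0, max(0.0, p))

    def tick(self):
        """Like an audio ugp, produce the current output value, ready to
        be patched into another ugp's input (eg an oscillator's amplitude)."""
        return self.out_min + self.position * (self.out_max - self.out_min)

# e.g. a slider mapped onto an oscillator's amplitude input
amp_slider = SliderUgp(x=10, y=40, out_min=0.0, out_max=1.0)
amp_slider.set_position(0.25)
```

A bank of sliders would simply be multiple instances, each with its own X and Y inputs, created and deleted dynamically via the protocols.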
In addition to the slider ugp described above, we have produced a patch-bay ugp which allows a mouse click to place a 'connection' at the intersection of a row and column on the patch bay, to connect a signal source to an output channel (for instance). There is also a joystick pad ugp and various switch and button ugps. A screen-dump of the panel is shown below.

[Screen-dump of the emulated VCS-3 panel: amplitude sliders for the oscillator waveforms (Osc 1 sine/sawtooth, Osc 2 triangle/square, Osc 3 square) and noise; filter cutoff, ring modulator and attenuation sliders; a patch bay routing the oscillators, noise, filter and ring modulator to the output channels; input frequency and output level controls.]

In keeping with the wish to emulate the VCS-3, we have used sine/square/triangle waveform sound synthesis ugps and ring modulator and filter sound transformation ugps. It would be a straightforward matter to extend the functionality of the VCS-3: to extend any aspect of the panel (bigger patch bay, more sliders etc), to redefine the oscillators so that they consist of complex networks of unit generators, implementing AM/FM-based voices for instance, and to construct more sophisticated transformation algorithms. We also have a MIDI ugp which would allow any parameter on any ugp to be mapped to MIDI control.

Conclusion
The MIDAS system has provided a robust and flexible framework for integrating graphics and sonic applications on multiprocessor systems. The demonstrator has successfully emulated the operation of the VCS-3, and has thus proved concepts which could be applied to other functionally comparable systems: for instance, a screen-based mixing desk whose graphical and signal processing configuration could be defined dynamically at run-time.
References:
Kirk, P.R.; Orton, R. (1990). "MIDAS: A Musical Instrument Digital Array Signal processor." Proceedings of the ICMC, Glasgow.
Kirk, Hunt, Orton (1995). "Audio-Visual Instruments in Live Performance." Proceedings of the ICMC, Banff.