An Open Multiprocessing Architecture for Realtime Music Performance

Rupert C. Nieberle and Paul Modler
Camp, Technische Universität Berlin
H 51, Straße des 17. Juni 135
1000 Berlin 12, West Germany
Phone: (0)30/314-25681
UUCP: ...yrarnmid!tub!tubvax!carnmo

Abstract

We present the hardware design of a multiprocessor architecture for realtime audio signal processing. The goal of the device is the interactive processing of sound in music performance and speech. With a flexible operating system and a high-level music language (HLML), the system is able to distribute various musical events to dedicated processing units. Special effort is made to provide full interactive control of all relevant musical parameters. The extreme processing power and high throughput this requires can only be achieved by a multi-signal-processor architecture (MSP). To keep the system affordable, we used standard, highly integrated hardware components.

1. Introduction

Over the last ten years, dramatic changes have taken place in the area of electronic musical instruments. The evolution from earlier analog devices (e.g. the Minimoog) to today's low-cost digital synthesizers (e.g. DX7, D50) and sampling devices brought advantages in the complexity of sound generation, but also deficiencies in sound-control abilities. Although we work with various commercially available MIDI devices, we cannot overcome the restrictions of the MIDI standard and the limited choice of synthesis algorithms. We found that special efforts should be made to address the following deficiencies of existing music composing and performing systems:
- realtime sound processing and control of time-variant parameters
- multidimensional variations in timbre (e.g. piano)
Recent progress in digital signal processors (DSPs) encouraged us to design a multi-signal-processor architecture. At the moment, most of the available devices use one or two VLSI chips beside specially dedicated hardware.
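The abstract states that the operating system and the high-level music language distribute musical events to the dedicated processing units, but gives no code for this. The following C fragment is a minimal, hypothetical sketch of such an event dispatcher; all names (msp_unit, music_event, choose_unit, dispatch_event) and the least-loaded-unit policy are our own illustrative assumptions, not part of the system described in the paper.

/* Hypothetical sketch: routing musical events from a host
 * to several dedicated signal-processing units. */
#include <stdio.h>

#define NUM_MSP 4               /* assumed number of signal-processor units */

typedef struct {
    int unit_id;                /* index of the DSP board */
    int pending_events;         /* crude load measure */
} msp_unit;

typedef struct {
    int pitch;                  /* MIDI-style note number */
    int velocity;               /* note-on velocity */
    double timbre_param;        /* a time-variant control parameter */
} music_event;

/* Pick the least-loaded unit so interactive control stays responsive. */
static int choose_unit(const msp_unit units[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (units[i].pending_events < units[best].pending_events)
            best = i;
    return best;
}

static void dispatch_event(msp_unit units[], int n, music_event ev)
{
    int target = choose_unit(units, n);
    units[target].pending_events++;
    /* A real system would write the event into the unit's command queue. */
    printf("event pitch=%d vel=%d -> MSP %d\n", ev.pitch, ev.velocity, target);
}

int main(void)
{
    msp_unit units[NUM_MSP] = { {0, 0}, {1, 0}, {2, 0}, {3, 0} };
    music_event ev = { 60, 100, 0.5 };
    dispatch_event(units, NUM_MSP, ev);
    return 0;
}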