MixNet: A Comprehensive Digital Audio Production System
Peter Otto, Rick Bidlack, Stephen Master
Computer Music Studios, SUNY at Buffalo
222 Baird Hall, Buffalo, NY 14260
email@example.com, firstname.lastname@example.org

ABSTRACT

The authors are developing an optimized real-time multi-channel digital audio mixing and production environment based on the NeXT Computer and the IRCAM Signal Processing Workstation (ISPW). The current implementation allows real-time mixing of up to 16 channels of digital audio signals using an external hardware interface controlling an automated mixing application running on the NeXT computer, which hosts one or more ISPW cards. The MixNet system also currently supports limited sampling, synthesis, and hard-disk recording/playback. The authors plan to expand the system to allow a minimum of 24 channels of full digital audio mixing and processing, with a complement of sonic processing utilities running on a single NeXT cube. Powerful, large-scale networked systems are also envisioned.

1. INTRODUCTION

We are intrigued by the potential of the NeXT/ISPW [Lindemann et al. 1991] as a powerful platform for a suite of audio engineering tasks, including mixing, editing, signal analysis, denoising, sample-rate conversion, time compression and expansion, filtering, mix automation, pitch shifting, and other "effects" such as reverberation, localization, etc. In addition to providing the already acknowledged flexibility of the ISPW for user-programmable live, interactive musical performance generation and processing, an integrated digital audio workstation based on the NeXT/ISPW holds several advantages over existing Macintosh- and PC-based systems. These include process control, backgroundable tasking, greater ease of software customization, higher throughput, a less tiring visual display, inter-machine resource sharing and networking, and other advantages available under NeXT's MACH operating system (read: UNIX).
Such a platform would offer a cost-effective alternative for musicians who are committed to the NeXT platform for computer music creation but whose work can often benefit from the advantages of an optimized real-time multi-channel digital audio mixing and production environment.

2. CURRENT IMPLEMENTATION

As a first step along this path, we chose to implement a 16-channel automated mixer. Each input "slice" includes a fader, a stereo panner, one band of EQ, four aux sends, mute and solo switches, and an input select structure that can access a tone generator, a noise generator, ADC input, and one of four hard-disk playback channels. A master section includes left and right faders and four aux masters (each with send, return, and pan controls), and can write to a stereo soundfile on hard disk. The automation section allows all control changes on the mixer to be recorded, overdubbed, and played back with accuracy. Several additional MixNet configurations also exist; one is an eight-channel mixer with several FM synthesis and sampling modules simultaneously available under MIDI control. Localization and other processing modules have been implemented in other eight-channel MixNets.

A crucial part of MixNet is the physical interface, for which we use the Audiomatica Contact Fader Panel [Otto, Cavalli 1989]. The Fader Panel is a general-purpose, highly programmable MIDI controller of professional quality, furnished with a large number of sliders, knobs, and buttons. Although all graphical objects on MixNet's "virtual mixer" are mouse-controllable, the Contact Fader Panel offers obvious advantages in tactile feedback and bandwidth of data input.

The MixNet software is functionally divided into two major parts. The "front end" is constructed as a normal NeXT application, using AppKit objects. MusicKit objects are used for scheduling and MIDI I/O.
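The automation section's record/overdub/playback behavior can be illustrated with a minimal sketch. MixNet's actual implementation uses MusicKit scheduling objects on the NeXT; the Python class and method names below are purely illustrative assumptions, showing only the core idea of a time-sorted event list of control changes:

```python
import bisect

class AutomationTrack:
    """Records timestamped control changes and replays them in time order.

    A minimal sketch of mix automation, not MixNet's actual code: events
    are kept sorted by time so that overdubbed passes merge into a
    single timeline.
    """

    def __init__(self):
        self.events = []  # sorted list of (time, control_id, value)

    def record(self, time, control_id, value):
        # Insert in sorted position; an overdub interleaves with
        # previously recorded moves rather than appending at the end.
        bisect.insort(self.events, (time, control_id, value))

    def playback(self, start, end):
        """Return all control changes in the half-open window [start, end)."""
        lo = bisect.bisect_left(self.events, (start,))
        hi = bisect.bisect_left(self.events, (end,))
        return self.events[lo:hi]

# Example: record some fader and pan moves, then replay the first second.
track = AutomationTrack()
track.record(0.5, "pan1", -0.3)
track.record(0.0, "fader1", 0.8)   # recorded "late" but sorted into place
track.record(1.2, "fader1", 0.6)
print(track.playback(0.0, 1.0))
```

In a real mixer the playback window would be driven by the audio clock and the returned events dispatched to the DSP back end; here the windowed query stands in for that dispatch loop.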
The front end presents an attractive visual analog of a mixer, and contains all the logic pertaining to automation recording and playback. The "back end" of MixNet is written in MAX, and runs on the ISPW board. It is here that all signal processing tasks are carried out. The MAX implementation of MixNet allows the potential user to "pop the hood" and customize details of the signal path and processing algorithms. Communication between the front and back ends is carried out directly over the system bus.

3. AUDIO I/O ISSUES

The experience of writing MixNet has brought two very important issues to our attention, both of which must be addressed before an application such as MixNet will be fully useful in a professional environment. One is the need for higher-bandwidth hard-disk I/O than is currently possible with the ISPW. Although there are clear drawbacks to having a separate file system on the ISPW distinct from the NeXT's own file system, this probably represents the best way of obtaining wide-bandwidth multi-channel hard-disk throughput. The other urgent necessity is a multi-channel (minimum eight) audio interface for the ISPW, with corresponding multi-channel MAX objects. There has been considerable agreement among interested parties that these hardware deficiencies should be corrected; thus far, no actual implementation exists.

4. USER CONFIGURABLE MIX ENVIRONMENTS

Our exploration of mixing and processing on the ISPW has also given rise to new concepts concerning the implementation of mixing consoles in software. The notion of a customizable console, in which users may build their own input slices by choosing modules from an on-screen palette containing faders, panners, aux sends and returns, EQ sections, user-defined MAX "patchers", etc., is completely feasible given the programming environment now offered by MAX and NeXTStep. In this manner, signal processing resources may be allocated where they are needed (ten bands of EQ and eight aux sends on a single channel should be allowable in a truly open software-based mix environment), and signal paths may be arbitrarily constructed or dynamically reconstructed as the user sees fit. Perhaps even more important, however, are the control capabilities and processing linkages offered by a mixer composed of configurable parts. We envision a mix environment in which it is easy to specify that the directives for a process control come from a scaled remote fader (perhaps referencing a transfer function), a panner, an audio signal, or a user-supplied function, as well as from a standard MIDI device or mouse.

5. NETWORKING

MixNet may be particularly well suited for multiuser audio workstation installations. Mixing and processing four-channel soundfiles spooled via EtherNet in MixNet (and writing a stereo soundfile onto the host hard drive) was easily implemented. Multiuser background tasking and central soundfile archiving are also straightforward.
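The configurable input slice argued for in Section 4, a signal path assembled from palette modules in any number and order, can be sketched as function composition. In MixNet these modules would be MAX patchers running on the ISPW; the Python below is an illustrative assumption only, using an equal-power pan law as the terminal stereo module:

```python
import math

def fader(gain):
    """Palette module: scale a mono sample by a linear gain."""
    return lambda x: x * gain

def equal_power_pan(position):
    """Palette module: split a mono sample to (left, right) channels.

    Uses a constant-power (sin/cos) pan law; position runs from
    -1.0 (hard left) to +1.0 (hard right)."""
    theta = (position + 1.0) * math.pi / 4.0   # maps [-1, 1] to [0, pi/2]
    left, right = math.cos(theta), math.sin(theta)
    return lambda x: (x * left, x * right)

def build_slice(*modules):
    """Chain any number of modules left to right into one input slice."""
    def process(x):
        for module in modules:
            x = module(x)
        return x
    return process

# A slice with two gain stages and a centered panner. Nothing limits
# how many modules of one kind a slice may contain, which is exactly
# the openness the text argues a software console should allow.
slice1 = build_slice(fader(0.5), fader(2.0), equal_power_pan(0.0))
left, right = slice1(1.0)
# At center, the constant-power law yields equal levels of 1/sqrt(2).
```

The same composition idea extends to the control linkages envisioned above: a module's parameter could itself be fed by a fader, a transfer function, or another signal, rather than a fixed constant.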
MultiCube DSP resource deployment is a longer-term priority for exploration, but seems feasible with hardware enhancements.

6. CONCLUSIONS

The digital audio workstation of the future will easily accommodate mixing, processing, recording, real-time interaction, sampling, and synthesis, simultaneously and in real time. All of these separate activities can be integrated into a flexible environment that allows a complete reconfiguration of functionality from user to user and from application to application. Early experiences with MixNet indicate that it is not unrealistic to expect such an integrated, fully digital production and composition environment in the relatively near future. In the more distant future, we would like to see a fully networkable audio processing and performance environment, with hundreds of channels of deployable real-time processes available to users across a local network, where considerable archival and soundfile resources can be pooled and accessed for projects of large or small magnitude.

7. ACKNOWLEDGEMENTS

Tony Agnello, David Felder, Kerry S. Grant, Paul Lansky, Stephen Manes, Miller Puckette, and Zack Settel have all contributed support or encouragement for this project.

8. REFERENCES

Lindemann, E., et al. 1991. "The Architecture of the IRCAM Music Workstation." Computer Music Journal 15(3), pp. 41-49.

Moore, F. R. 1987. "What is a Computer Music Workstation?" Proceedings of the Audio Engineering Society 5th International Conference on Music and Digital Technology. Los Angeles.

NeXT Computer, Inc. 1990. NeXT Developer's Library. Redwood City, CA.

Otto, P., Cavalli, M. 1989. "Contact: A Programmable Interface Panel for MIDI Control." VII Colloquio di Informatica Musicale. Associazione Spaziomusica L.S.R.M.

Puckette, M. 1988. "The Patcher." Proceedings of the 1988 International Computer Music Conference. San Francisco: Computer Music Association, pp. 420-429.

Puckette, M. 1991. "Combining Event and Signal Processing in the MAX Graphical Programming Environment." Computer Music Journal 15(3), pp. 68-77.

Puckette, M. 1991. "FTS: A Real-Time Monitor for Multiprocessor Music Synthesis." Computer Music Journal 15(3), pp. 58-67.

Smith, B. 1990. "A Universal Recorder for the IRCAM Musical Workstation." Proceedings of the 1990 International Computer Music Conference. San Francisco: Computer Music Association.