ICAST: TRIALS & TRIBULATIONS OF DEPLOYING LARGE SCALE COMPUTER-CONTROLLED SPEAKER ARRAYS
monitors. Many composers did not take advantage of the speaker array for diffusion, in part because they were unfamiliar with either (a) the specific layout of the diffusion array, or (b) the general concept of multichannel diffusion. In either case, a 15-30 minute rehearsal window was insufficient for many composers to develop a diffusion plan for their work, so they set a simple static mix that would suffice for the entire piece.

[Figure 3. Speaker Configuration for ICMC 2006 (speakers on stage at various heights).]

2.4. Electric LaTex 2006

Less than a week after ICMC, LSU hosted Electric LaTex, a small regional festival comprising graduate student works from LSU, Tulane, Rice, UT-Austin, and the University of North Texas. This concert employed 26 speakers plus a subwoofer. To date, this is the largest array driven by ICAST. The limit on ICAST speaker control is 44 speakers (if external Digital-to-Analog converters are added, this number rises to 52 at 96 kHz).

2.5. March 2007 - Vector Fading

Jeff Stolet was the guest composer for the March 2007 concert held at the Shaw Center. This concert used 24 speakers with a subwoofer, and mimicked the September 2006 concert in speaker placement, with two exceptions: the front center speaker was turned to face away from the audience, and the subwoofer was placed in a corner of the room rather than at the foot of the stage.

More of the dynamic object-creation code was ported from Max scripts to Java. The reduction in Max objects decreased the startup time of the client by 9.1%.

A new type of diffusion mode was introduced during this concert: vector fading. Programmed in Java, vector fading is based on theatrical light board "scenes," where one fader can control multiple instruments at various preset levels. Thus an "audio scene" consists of a single fader that scales the preset levels of up to 24 audio channels, making it possible to mix between groups of speakers. Currently, a limit of 24 vectors ("audio scenes") can be programmed.

Some issues arose with vector fading. First, to allow multiple users to keep individual vector programs, a method of storage is needed. The second issue is performance: vectors are computationally expensive and could not maintain real-time performance under a heavy load. During the concert, Max crashed as a result of the expensive vector computations. Because of the client/server architecture of ICAST, the piece continued with the levels set at the time of the crash, albeit without user control. We intend to explore performing the vector calculations in Jitter as a possible solution to this problem.

[Figure 4. Speaker Configuration, LSU Recital Hall.]

3. FUTURE DIRECTIONS

One question that commonly arises when discussing ICAST is "why not just use a digital mixer?" Good question. At the time, most digital mixers lacked the flexibility for which we were searching. We also wanted to eliminate as many cables as possible to the mixing console. ICAST's design separates controllers from servers, and modularizes the component parts for easy access, repair, and upgrade. By building the application on a general-purpose CPU, we can construct our own system in order to explore new interfaces, new DSP algorithms, and new performance practices.

3.1. Alternative Controllers

In the quest to break from the fader-board paradigm, we are exploring new control devices, such as the Lemur. The client software currently uses the built-in MIDI capabilities of the TASCAM-2400 and Max. Expansion of these to other objects creates a new level of complexity for both the programmer and the end user. A technique for utilizing multiple controller devices needs to handle simultaneous controller commands without conflicts. Any performer using a non-standard controller needs time to practice and understand the response of the controller, making it difficult for guests to use alternative controllers with ICAST. A user might have a device that he or she wishes to use, but without knowing the proper messaging commands, even standard MIDI devices would take time to reprogram for use with ICAST. To facilitate differing control devices, an API is being developed to allow performers to bring in a controller foreign to ICAST and connect it with minimal difficulty.

3.2. Ambisonics

ICAST can accommodate pieces in many forms without changing the position of the speaker array. ICAST already has two-dimensional controllers in the form of XY joysticks, and while these have not yet been employed, they are available for use in Ambisonic performance. Code has been written, but not yet fully tested, for real-time encoding of mono and stereo inputs into first-order Ambisonic fields. Accompanying this is a first-order decoder for an arbitrary speaker array. This code is currently being tested for accuracy. Progressing to higher-order encoder-decoder fields, and being able to manipulate multiple fields in performance, is a priority.

3.3. Simplification of Use

We are in a constant process of improving the ICAST audio server, and as a result we have not focused much on "ease of use." Launching ICAST requires Max/MSP (the client), Audio/MIDI Setup (playback synchronization and control with Logic), and Terminal (to control the server) to be running. End users must also have a strong knowledge of SuperCollider in order to run and monitor scsynth properly on the server. Our goal is to reduce these end-user requirements to a single launch command called from within Max/MSP. There are other aspects of ICAST that need optimization.
For example, storage of user information is cumbersome, and the assignment of operational controls is tedious. One solution is to create user templates for the common venues where the system is deployed.

3.4. Intel Processors and Apple Computers

As Apple has moved all of its new computer lines to Intel processors, testing must be done to ensure continued operation on the new architecture. The MacBook Pro line currently has multi-core processors and a faster internal bus, which may alleviate some of the vector-fading load and other client-side and server-side performance issues.

4. CONCLUSION

Sound diffusion is the performance practice of electroacoustic music, and ICAST is one of a growing collection of performance instruments in the United States. ICAST is an attempt to create a class of "diffusion instruments" that facilitate complex sound mixing through traditional and novel interfaces. As others begin to develop similar systems, we look forward to sharing our experiences and collaborating on their development, with the hope that a collective approach will spawn more, and improved, instruments.

We gratefully acknowledge the Center for Computation & Technology at LSU and its Laboratory for Creative Arts & Technologies for supporting this project.

5. BIBLIOGRAPHY

Austin, L. (2001). "Sound diffusion in composition and performance practice II: An interview with Ambrose Field." Computer Music Journal 21(4), 21-30.
Beck, S. D., J. Patrick, K. Malveaux, and B. Willkie (2006). "The Immersive Computer-controlled Audio Sound Theater: Experiments in multi-mode sound diffusion systems for electroacoustic music performance." Proceedings of the 2006 International Computer Music Conference, New Orleans, LA.
Chadabe, J. (1997). Electric Sound: The Past and Promise of Electronic Music. Prentice Hall.
Clozier, C. (2001). "The gmebaphone concept and the cybernephone instrument." Computer Music Journal 21(4), 81-90.
Gerzon, M. (1983).
"Decoders for Feeding Irregular Loudspeaker Arrays." United States Patent 4,414,430.
Harrison, J. (1999). "Diffusion: theories and practices, with particular reference to the BEAST system." eContact! 2.4.
Malham, D. G. and A. Myatt (1995). "3D sound spatialization using ambisonic techniques." Computer Music Journal 19(4), 58-70.
McCartney, J. (1996). "SuperCollider: a new real time synthesis language." In Proceedings of the International Computer Music Conference.
Moore, F. R. (1989). "A general model for spatial processing of sounds." In C. Roads (Ed.), The Music Machine. MIT Press.
Roads, C. and J. Strawn (1985). Foundations of Computer Music. MIT Press.
Stampfl, P. and D. Dobler (2004). "Enhancing three-dimensional vision with three-dimensional sound." In SIGGRAPH Proceedings.
Ullmer, B., S. D. Beck, E. Seidel, and S. Iyengar (2005). "Development of 'viz tangibles' and 'viznet': Implementation for interactive visualization, simulation and collaboration." NSF MRI Award CNS-0521559.
Wright, M. and A. Freed (1997). "OpenSoundControl: A new protocol for communicating with sound synthesizers." In Proceedings of the International Computer Music Conference.
Zicarelli, D. (1997). Max/MSP Software. San Francisco.