Interactive Immersive Environments: A Composer's Journey

Anne Deane Berman
Virtual Reality Applications Center and Department of Music, Iowa State University
email@example.com

Abstract

This paper explores the research of the American composer and media artist Anne Deane Berman. It addresses two of the author's recent collaborative works, which offer dramatic interactive experiences to any participant regardless of musical knowledge or talent. The research demonstrates that music serves as the primary art form, enriched through a unique blending of supporting elements: visual art and narrative with emerging technologies. The paper illustrates two models, one using a transparent sensor-based tracking system in a real-world environment and the other using a handheld navigational device and head-mounted display to experience a 3-D Virtual Reality environment.

1. Introduction

As emerging technologies redefine the arts, new models of application are being developed that augment reality using interactive sound systems. Interactivity is defined differently in the field of electronic music than in the electronic arts. In music, gesture-based interactive instruments and experiences have been created which generate sound. In most cases the user must be musically trained in order to 'compose' completely new sounds by deploying real-time synthesis through gestural control (Impett, 1994; Andrews, 2001). In the visual art world, the term tends to refer to 'individually-determined navigation' that does not require special training to navigate through an experience, whether using video, Virtual Reality, or physical media in sculpture and/or architectural spaces.
(Waters, 2000) In his paper "The Musical Process in the Age of Digital Intervention," Simon Waters (2000) presents a detailed analysis of individuals who have defined interactive music systems, referring to earlier research by Xavier Berenguer, Franca Garzotto et al., John Bowers, and Sten-Olof Hellström, among others. Additional individuals who have defined interactive music systems include Todd Winkler (1998), Trevor Wishart (1996), Robert Rowe (1994), and Denis Smalley (1986). Beryl Graham, in her 1997 thesis, summarized certain theorists of interactive visual art, including John Stevenson (1993), Steve Bell (1991), Brenda Laurel (1989b), Roger Malina (1988), Myron W. Krueger (1983), Stroud Cornock and Ernest Edmonds (1973, 1977), and Roy Ascott (1967).

The author's interactive art/music installations are unique because 1) the interactivity is designed in such a way that the novice is able to create unusually sophisticated musical experiences (creating new timbres as well as structural sections), and 2) at some critical point in the experience, all interaction ceases and the composer takes control over the drama, just as she would in composing a traditional score. During the interactive sections of Deane Berman's work, participants actuate the unfolding of the piece on both the micro and macro levels in a seamless and transparent manner. At the micro level, participants 'compose' different timbres depending on their navigation. At the macro level, they decide when and if various sections of the work occur, with the exception of certain pre-determined climactic moments. When the interactivity stops near the end of the work, the user is free to experience the climactic moment without any encumbrances. Since the participants are in conversation with the environment for most of the piece, they perceive that they are partners in creating dramatic and poignant moments as they navigate through the space.
By combining interactivity and a dramatic score, the work is a hybrid form drawn from both the art and music worlds.

2. Beloved Mnemosyne (2000 and 2002)

For her first interactive sonic experience, Deane Berman collaborated with UCLA's Film and Television Department's HyperMedia Studio and a visual artist from Santa Barbara's Perch Contemporary Art & Design Studio. The prototype combined wireless participant tracking technology, embedded wireless sensors, and computer-controlled production equipment, creating an interactive piece about memory. Lighting and sonic events were triggered as participants walked through the space and manipulated the objects they found there. These technologies were being explored by Fabian Wagmister and Jeff Burke at the UCLA HyperMedia Studio (Burke, 2002; Wagmister and Burke, 2002).

© 2004 by Anne Deane Berman. Proceedings ICMC 2004.
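The mapping just described, in which movement through the space and the manipulation of objects trigger lighting and sonic events, can be pictured as a simple cue-dispatch table. Everything below (the object names, cue names, and the `dispatch` helper) is hypothetical and invented for illustration; the actual installation used the HyperMedia Studio's sensing and show-control infrastructure, not anything this minimal.

```python
# Hypothetical sketch of sensor-event dispatch for an installation like
# Beloved Mnemosyne: each (object, event) pair maps to a sound cue and a
# lighting cue. All object and cue names are illustrative, not actual assets.

CUE_MAP = {
    ("photo_album", "proximity"): {"sound": "memory_01.wav", "light": "dim_amber"},
    ("photo_album", "touch"):     {"sound": "memory_02.wav", "light": "warm_fade_up"},
    ("fountain",    "touch"):     {"sound": "climax.wav",    "light": "slow_blackout"},
}

def dispatch(object_id, event_type):
    """Look up the sound and lighting cues for a sensor event (None if unmapped)."""
    return CUE_MAP.get((object_id, event_type))
```

A show-control loop would poll the trackers and call `dispatch` for each incoming event, forwarding the resulting cues to the audio and lighting systems.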
The technology enabled an unprecedented level of flexible, rapid control over the production environment, which was transparent to the user. The participant wears a wireless microphone that picks up ultrasonic frequencies from hidden speakers at each of the four corners of the space. Other sensors are invisibly mounted on objects, so that when the participant touches these objects, different sound and lighting events unfold. In addition to touch sensitivity, proximity to objects could trigger different sound and lighting events. As participants walked through the space and moved toward each object, the stories associated with the objects unfolded, along with various lighting cues. Depending on the participant's level of interactivity, more information was discovered. If an object was touched or picked up, new material offering a transitional moment in the story was revealed, offering a kind of climax or moment of closure (Figure 1).

Figure 1: The participant drinks from the fountain to trigger a climactic moment.

2.1 Sonic Design

Centered around interviews with nine brothers and sisters whose birth dates span three important decades of American history (the 1950s, '60s, and '70s), the piece uncovers their relationships with their deceased parents. Their stories reveal the elusive memory of the father for the youngest child, Billy, who grew up with a father who had no memory: in the last years of his life, the father lost his memory to Alzheimer's disease. The interviews of the siblings were used to create 15- to 30-second sonic tone poems that can be arranged over time in multiple ways, so that the more a person interacts with an object, the deeper into the story they go, with very little repetition.

The music was designed in three levels (background, foreground, and spoken memories) so that, when applied to the invisible grid of the space, the mixing of sound files would make sense to the listener even when more than one sound was triggered at a time. Background music made up of processed water sounds found in nature (a lake lapping, ocean waves crashing, streams rushing, rain falling on leaves, etc.) was used to distinguish which version of the piece was played. This way, if someone entered multiple times, she would hear a different set of stories told by the adult children. In one particular version, for example, the ocean version, ocean sounds were mapped into every square so that no matter where someone walked, they would hear the welcoming, lush ocean sounds. Different foreground music and memories were triggered in each version. The foreground material was made up of more complicated sounds that worked as a counterpoint to the spoken memories. These sounds were made from both synthesized and processed mechanical sounds of water (a faucet dripping, faucets turning on and off, water fountains, boiling water, etc.). Often these sounds formed cadences in relationship to the memory phrases, since they would occur when the user moved from one square of the grid to another, moving from spoken memory to sound.

Spoken memories were short stories or partial stories extracted from the interviews. In order not to overwhelm the user with these stories (if they moved through the space quickly), memories were mapped onto only three or four squares of the grid for each version of the piece. Once a sound file was played, it would not repeat, so that if the user moved into the same grid square a second time, a new story would be told. However, if the sound file had not finished before the user moved away, and the user then moved back into the grid space, they would hear the remainder of the sound file. This gave the user the experience of sometimes being able to recall a story, and sometimes not, depending on where she moved and the length of the sound file.

Sculptor Bill McVicar built the physical environment. With each new mounting of the work, McVicar engineered an analog "cave" environment, complete with lights aimed at water trays with dripping bottles, backlit on a scrim that surrounded the audience member. Performances took place at the Ex'pression Center for New Media (MB5 2000 Conference); a global technology and content conference presented by the University of California (DIGIVATIONS); the San Jose Tech Museum's Art & Technology Network conference, presented by GroundZero and New York's The Kitchen; the Santa Barbara Perch Contemporary Art & Design Studio; and the 2002 International Computer Music Conference at Sweden's Göteborgs Konstmuseum.

This experience led Deane Berman to take a faculty position at Iowa State University, where she would work with CAVE software developer Professor Carolina Cruz-Neira, Associate Director of ISU's Virtual Reality Applications Center (VRAC). (Cruz-Neira et al., 1993) A portable virtual reality system consisting of four self-contained display modules was used for Ashes to Ashes. The piece was implemented using a wireless head-mounted tracker, which maintains the driver's correct perspective as one moves into and around objects that appear within the virtual space. The
system also tracks the location of various input devices, such as wands and gloves (Figure 2).

Figure 2: The portable mini-CAVE showing the 3-D image of the user interface for Ashes. The driver stood on the virtual platform wearing a head-mounted display and used the wand to select stories, represented by floating spheres.

3. Ashes to Ashes (2002)

Ashes to Ashes combines music, narrative, design, and engineering to provide a real-time, interactive, immersive Virtual Reality environment (Cruz-Neira, Deane, and Williams, 2003). The historical theme of Ashes to Ashes is hope and pain. The composer invited those who endured the tragedy of the September 11th attacks to tell their stories of survival. The intention was to create a cathartic experience for both the informants and the users of the immersive piece. The anonymous interviews were recorded and catalogued. Once the stories and their ordering were identified, the composer experimented with multiple sound processing methods to build various new "instruments" that were combined to create musical excerpts enhancing the emotional aspects of the stories. Words and phrases were processed with sounds of subway cars, buses, and busy streets. Sometimes one word could create a dramatic 45-second backdrop that was used in variation among the different navigable stories within an Act, providing continuity. For example, one rescue worker said, "they know they're digging in a graveyard." The word "graveyard" was processed beyond recognition, offering an ominous musical soundscape for other memories of the rescue effort. Purely musical episodes allow for moments of reflection within the collage of memories. After the first building falls in the piece, Deane Berman adds a musical interlude that resembles thousands of young voices rising out of the rubble. The final Act is accompanied by the poignant ending of Deane Berman's cello concerto, Reaching Antares, where the cello seems to struggle to find resolution.
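As a rough illustration of how a single short word can be stretched into a long backdrop, the sketch below applies a naive granular time-stretch: small windowed grains of the source are overlapped in the output at a slower rate than they are read from the input. This is not the composer's actual processing chain; the use of NumPy, the grain sizes, and the 45-second target are all assumptions made for illustration.

```python
# A naive granular time-stretch: grains are read from the input every
# `hop` samples but written to the output every `hop * stretch` samples,
# so the result lasts roughly `stretch` times as long. Parameter values
# are illustrative only.
import numpy as np

def granular_stretch(signal, stretch, grain=2048, hop=512):
    """Stretch `signal` by the factor `stretch` using overlapping windowed grains."""
    window = np.hanning(grain)
    out_len = int(len(signal) * stretch)
    out = np.zeros(out_len + grain)
    out_hop = int(hop * stretch)          # grains land farther apart in the output
    pos = 0
    for start in range(0, len(signal) - grain, hop):
        out[pos:pos + grain] += signal[start:start + grain] * window
        pos += out_hop
        if pos + grain > len(out):        # stop before writing past the buffer
            break
    return out[:out_len]

word = np.random.randn(44100)             # stand-in for a 1-second recording
backdrop = granular_stretch(word, stretch=45.0)
# backdrop lasts ~45 s at a 44.1 kHz sample rate
```

Tools of the period (phase vocoders, granular synthesis environments) offer far more control over pitch and texture; this sketch shows only the time-scale idea of turning one word into a long soundscape.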
By coloring the bare voices in an interplay of sound images, the listener is encouraged to focus on poignant aspects of the stories. This music formed the basis for the development of the immersive environment, which captures the horror of that day and celebrates the resilience of the human spirit as the healing from that horror continues.

3.1 Interactive Narrative Design

Ashes to Ashes uniquely integrated interactive sonic and narrative design: how the story and music unfold through navigation. The integration of narrative and interactivity is part of a general development in the arts called "digital storytelling" (Miller, 2004). Larry Tuch, Head Writer and Creative Consultant for the Paramount Simulation Group, collaborated with the author in creating the narrative design made from the source interviews. A coherent ten- to twenty-minute narrative that would work interactively was distilled from more than 30 hours of recorded interviews with more than 20 witnesses. The viewer had control over how many stories they heard in certain Acts; some stories, however, are mandatory, giving the piece its dramatic shape. The first performance of the prototype for Ashes to Ashes was held at the 2002 International Supercomputing Conference, with a subsequent installation at the 2004 Mundo Internet Conference in Madrid, Spain. The movie version of the piece is on display at the 2004 International Computer Music Conference.

4. User Response to Both Works

In both prototypes, visitors were not burdened with a description of the underlying technology before they began the pieces. Rather, as in the case of Beloved, they were presented with a shoulder strap for the wireless microphone and a set of wireless headphones, and informed that their movement would be tracked throughout the space. They were told they would be hearing various memories of grown children from a family of nine.
When they wanted to end their experience, they could either sit in the red chair or take a drink from the water fountain and listen for the dramatic moment of the piece. They did not have to control the client device other than by moving from place to place. The aim was for the technology to disappear into the background, leaving the user free to take in the experience.

In the case of Ashes to Ashes, the best view of the piece was through the driver's experience. However, the most engaging experience was from the onlooker's perspective, unencumbered by the user interface; onlookers were free to watch and listen. The driver had to learn to navigate from story to story through a complex set of triggers using a handheld device, similar to a computer game interface. At the time of the first prototype, the interface was more of a distraction than an enhancement to the experience. With Ashes, the collaborators hope to add a gestural interface with a haptic glove in order to create a more seamless interface for the user.

It was observed that both installations did indeed attract and maintain interest, although Beloved seemed to offer the more intimate experience for the user. This was true for
several reasons. First, the user was alone in the darkened space with headphones, allowing her to experience the stories at her own pace. Visitors moved in a variety of ways: some would move mechanically through the space, listening for how their direction and speed affected the audio; others seemed almost to dance in the space, compelled by the quality of the music. Secondly, participants did not have to learn a new interface, making navigation through the space intuitive. In fact, many participants commented on how the technology was a perfect metaphor for the concept of loss of memory: while walking through the space, memories could sometimes be retrieved by moving back into a sound zone, and other times not. This physical manifestation of retrieving memories, or not, gave the participant first-hand experience with memory loss.

5. Conclusion

The research uniquely applies sensing and information technologies to the compositional process to provide new, experiential, and deeply moving sonic environments. This is accomplished by using music as the primary vehicle, supported by elements such as interactive sonic and narrative design and visual and lighting cues in both Virtual and real-world spaces. Together, these elements offer dramatic immersive experiences to the participant, regardless of musical knowledge or talent, transforming audiences from passive to active participants. Applying the innovators' tools to content-rich audio environments provides a testing ground for new computing devices. This testing enables identification and capture of invaluable aspects of human interaction, communication, and thought. The creation of new media-rich musical interactive spaces, deliberately designed to offer audiences stirring experiences, will eventually emerge as a dominant art form through the invention and provocative adaptation of new technologies.

6. Acknowledgments

For Beloved Mnemosyne: Visiting Assistant Professor Jeff Burke, UCLA Department of Film and Television's HyperMedia Studio; and Bill McVicar, Founder, Perch Contemporary Art and Design Studio. Funded by: UCSB's CREATE, UCSB's Humanities Division and Department of Music, UCLA's HyperMedia Studio, Iowa State University's Division of Letters of Arts and Sciences, and Berman & Co. Deepest appreciation to the McVicar family for sharing their memories with the author and allowing her to use their stories as the basis for the piece.

For Ashes to Ashes: Associate Professor Carolina Cruz-Neira, Associate Director, Virtual Reality Applications Center; and Valerie Williams, Director of the Co'Motion Dance Theatre. Student team: David Kabala, Andres Reinot, Evan Rothmayer, Jenny Brooks, Brian Christianson, Chad Jacobsen, and Yifei Wang. Special thanks to Larry Tuch for his help in defining the script and for his many creative discussions. New York support: Dawn Haines, Ken Locker, Mayra Langdon Riesman, Sue Pinco, and Ed Ruppert. Photos of Ashes to Ashes were taken by Carolina Cruz-Neira and are used by permission. Funded by: Iowa State University's Department of Music, Virtual Reality Applications Center, SPRIG Grant, and College of Engineering; the Iowa Arts Council; the Iowa State University Foundation, Edgar Fund; Berman & Co.; Procter & Gamble; the University of New Hampshire's Center for Humanities and Department of English; the University of California at Santa Barbara's CREATE; and the Ames Commission on the Arts. The author wishes to extend deepest appreciation and respect to all the WTC survivors who had the strength and will to share their stories for the creation of this memorial. Gratitude to my husband, Steve Berman, for inspiring me to pursue this line of research, for his vision with production issues for both works, and in particular for his connections leading to many of the New York interviews for Ashes to Ashes.

References

Burke, J.
(2002). "Dynamic performance spaces for theater production." Theatre Design & Technology (U.S. Institute of Theatre Technology), 38(1), 26-35.

Cruz-Neira, C., Deane, A., and Williams, V. (2003). "Ashes to Ashes - Dance Driving: Documenting Historical Events Through Virtual Experiences." In Proceedings of the Hawaii International Conference on Arts & Humanities.

Cruz-Neira, C., Sandin, D., and DeFanti, T. (1993). "Surround-screen projection-based virtual reality: the design and implementation of the CAVE." In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 135-142.

Graham, C.E.B. (1997). "A Study of Audience Relationships with Interactive Computer-Based Visual Artworks in Gallery Settings, through Observation, Art Practice, and Curation." Doctoral dissertation, University of Sunderland.

Miller, C.H. (forthcoming 2004). Digital Storytelling: A Creator's Guide to Interactive Entertainment. Focal Press, Burlington, MA.

Wagmister, F. and Burke, J. (2002). "Networked multi-sensory experiences: Beyond browsers on the web and in the museum." Museums and the Web Conference, Boston, MA.

Waters, S. (2000). "The Musical Process in the Age of Digital Intervention." From http://www.ariada.uea.ac.uk.