THE SUICIDED VOICE - THE MEDIATED VOICE
Mark Alexander Bokowiec
Department of Music
University of Huddersfield
Huddersfield HD1 3DH, UK

Julie Bokowiec
EDT
email@example.com
www.geocitices.com/marekbuk/edt.html

ABSTRACT

The Suicided Voice (2004) is an interactive multi-media electroacoustic music piece in five movements. The piece explores the various and fluid identities of voice, the notion of voicing and the digital embodiment of the voice beyond the larynx. In this piece the acoustic voice of the performer is processed and manipulated live on the limbs of the performer via the Bodycoder System, the first generation of which was developed by the artists in 1996. The Bodycoder System is a sensor array worn on the body that sends data generated by switched commands and movements of the performer to an MSP environment via radio. In The Suicided Voice, extended vocal techniques are coupled with live sound processing and manipulation to expose the multiplicity, potency and depth of the voice's digital Otherness. The paper provides detailed information about the composition of the piece, the technology used, the nature and consequences of the interactivity employed and the artists' use of the Bodycoder System.

1. THE BODYCODER SYSTEM

The Bodycoder System is a sensor array designed to be worn on the body of a performer that combines switches and movement-detection sensors to provide the performer with decisive and precise control of an original interactive MSP environment. The Bodycoder is a flexible system that can be reconfigured according to creative and aesthetic requirements. Input is limited to 16 channels, but this can be any combination of switched or proportional sensor information. For pieces such as Lifting Bodies (1999/2002), sensors were placed across the body in order to work with large levers and to manipulate sound across the whole of the body, while switch navigation was limited to one hand to step through presets and control the expressivity of the sensors.

For The Suicided Voice, four sensors are located on the torso, focusing physical expression in the top half of the body (see Figure 1). The more intimate axes of the neck and the wrists are coupled with the larger levers of the elbows. This immediately changed the emotional quality of the physicality. It is particularly powerful in the final movement of the piece, where reverb is operated on the neck sensor: tilting the neck back increases reverb, broadening the acoustic of a sound and giving the illusion of the sound expanding in space as the physical consequence of looking up, perhaps in supplication to the sound. In The Suicided Voice, twelve switches are used on two gloves to give the performer greater navigational and decisive control over a more complex MSP environment, as well as the obligatory control over the proportional sensors which is always a feature of our work.

1.1. Sensors and Switches

The Bodycoder System uses small resistive bend sensors backed with spring steel; these are placed on the performer's joints. The bend sensors are accompanied by four to twelve switch elements that are housed within a pair of gloves (see Figure 1). The switches can be assigned a variety of functions from piece to piece, from software patch to patch, or from preset to preset. Similarly, the expressivity/sensitivity and range of each of the bend sensors can be changed, pre-determinately or in real time, from patch to patch during the course of a piece. Switches provide the performer with the means of orchestrating and determining the composition of the work: to initiate live sampling, to access sound synthesis parameters, and to control and move between MSP patches from inside the performance.

1.2. Radio System

Because the performer requires maximum mobility, a radio system is employed to convey data generated by the sensors and switches to hardware and computer systems.
The radio transmitter/receiver utilizes license-exempt 433 MHz circuitry. A sixteen-channel transmitter and sensor interface (worn as a small belt pack) accepts switch inputs and/or proportional resistive information from up to sixteen sensors. Various interfaces have been constructed which can accommodate a large variety of switch/proportional options. The interface/transmitter has a range of approximately 100 m; its signal is decoded at the receiver, which uses the Ethernet UDP protocol to communicate with the host computer, a PowerBook hosting MSP. Each sensor channel has a resolution of 10 bits and the receiver communicates with the host computer at 10 MB/s. The UDP data is received and decoded using the OTUDP and Open Sound Control objects within MSP.
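In the actual system the UDP stream is decoded inside MSP by the OTUDP and Open Sound Control objects. As a rough illustration only, the Python sketch below shows the shape of such a receiver: unpacking a sixteen-channel frame, masking each channel to its 10-bit resolution, and scaling a reading onto a parameter range of the kind the sensor subpatches maintain per preset. The packet layout (sixteen unsigned 16-bit big-endian words) and the port number are assumptions for the sake of the example, not documentation of the Bodycoder protocol.

```python
import socket
import struct

# Assumed frame layout: sixteen channels packed as unsigned 16-bit
# big-endian words, each word carrying one 10-bit reading (0-1023).
NUM_CHANNELS = 16
FRAME_FORMAT = ">16H"
FRAME_SIZE = struct.calcsize(FRAME_FORMAT)   # 32 bytes per frame

def decode_frame(datagram: bytes) -> list[int]:
    """Unpack one sensor frame and mask every channel to 10 bits."""
    words = struct.unpack(FRAME_FORMAT, datagram[:FRAME_SIZE])
    return [w & 0x3FF for w in words]

def scale_channel(raw: int, low: float, high: float) -> float:
    """Map a 10-bit reading onto an arbitrary parameter range,
    e.g. a pitch variable bounded differently in each preset."""
    clamped = min(max(raw, 0), 1023)
    return low + (high - low) * (clamped / 1023.0)

def listen(port: int = 7000) -> None:
    """Receive frames from the radio receiver and print the channels.
    The port is illustrative; the real receiver's port is not stated."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        datagram, _addr = sock.recvfrom(64)
        print(decode_frame(datagram))
```

The `scale_channel` helper corresponds to the calibration and scaling work that the paper assigns to the sensor subpatches; for instance, a pitch variable bounded to 0.7 and 1.28 in one preset would simply be a different `(low, high)` pair.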
Figure 1. Sensors/Switches, MSP Function/Control.

2. THE SUICIDED VOICE

The Suicided Voice was developed during a three-week residency at The Banff Centre, Canada, in 2003 and completed in the electroacoustic music facilities of the University of Huddersfield in 2004. In The Suicided Voice the Bodycoder is used to remotely sample, process and manipulate the live vocals of the performer using a variety of processes within MSP. In this piece the acoustic voice of the performer is 'suicided', or given up to digital processing and physical re-embodiment. Sometimes dialogues are created between acoustic and digital voices; gender-specific registers are subverted and fractured. Extended vocal techniques make available unusual acoustic resonances that generate rich processing textures. Aboriginal modes of vocalization such as open-throat and overtone singing are transformed and spiral into new acoustic and physical trajectories that traverse culturally specific boundaries, crossing from the human into the virtual, from the real to the mythical and from the human to the inhuman. In The Suicided Voice the voice, transformed and re-embodied within the interactive medium, becomes a fluid originality that is defined only by its own transmutations.

There are no pre-recorded sound files used in this piece and no sound manipulations external to the performer. The piece is fully scored, with few moments of improvisation. There is no unconscious data acquisition; instead the performer is fully aware and in control of all data/movement and sound generation/manipulation. The piece also features live video streaming from a miniature radio camera focused on the mouth, which can be switched on and off by the performer, as well as still images and QuickTime movies which are mapped to sensor control.
The performer is required to multitask: generating acoustic vocal sounds and manipulating processed sound on the body while navigating a complex live and interactive MSP environment. The piece requires a very distinct aural, technical and cross-modal perception on the part of the performer.

3. PROTOCOLS AND FUNCTIONS

The fluidity of the composition, that is, the ability to access different types and ranges of processing by moving freely between patches and subpatches in a non-linear manner, is made possible through a combination of hardware capabilities and by building dimension into the design of the MSP environment. Removing the rigidity of linear progression through a single patch and series of subpatches, coupled with the ability to freeze and/or sample and loop sequences while moving between subpatches to extemporize and add other processing layers to the soundscape, is once again both a hardware and software design strategy. Moving through presets with inherently different sensor ranges also produces distinct qualities of physical expression, so that there is a palpable change in focus and performance modality.

The Suicided Voice MSP environment consists of a main control patch which receives the UDP input from the radio receiver and routes the switch and proportional data to one of three sensor subpatches holding the calibration, scaling and mapping needed for each of the three audio processing patches. The main control patch also receives the radio-mic signal, which is processed with a comprehensive filter patcher. This filter receives variables set up for each preset within each audio processing patch, so that the live signal is timbrally optimised for each effect process in the piece. The three
sensor subpatches each hold a subpatcher that sends MIDI note and controller information to a computer hosting the visuals.

Figure 2. Main MSP Control Patch.

The main control patch also routes both audio and MIDI exclusively to one of the three audio processing patches. These are:

* A granular synthesis patch with live sampling capabilities and two looping buffers.
* A second patch that uses several pitch-changing abstractions in conjunction with another set of looping buffers.
* A third patch using a set of Stutter objects and a sampling reverb and delay.

Each audio processing patch contains a variety of preset states containing elements such as delay times and filter frequencies; these can be accessed and controlled in real time by the choices and actions of the performer. Once a preset is recalled, selected variables can be controlled within ranges set for each preset. For example, in preset one of patch one the first pitch variable can be changed between 0.7 and 1.28, while in preset two of patch one the same variable can be changed between 0.5 and 0.88. Sensors are used to manipulate variables such as:

* Loop pitch - buffers 1 and 2
* Loop level - buffers 1 and 2
* Reverb time
* Reverb balance
* Filter frequency
* Sample scrubbing
* Sample playback speed
* Sample pitch
* Random playback speed of sample grains

The eight switches on the left hand are used to initiate and control a number of functions and variables depending on the patch in use. These are:

* Sample into looping buffer 1 or 2
* Sample into a granular buffer
* Freeze/un-freeze reverb
* Recall one of processing patches 1 - 3
* Advance a preset within the active patch
* Toggle (enable/disable) the live head-cam video stream

The four finger switches on the right-hand glove are used to enable/disable individual proportional/bend sensors.

4. CONSEQUENCES

The Bodycoder gives one the sense of hearing sound in the bend of an arm, of seeing and hearing reverb in the movement of the neck. While this can be explained as simple movement mapping to sound processing, the psychophysical sensation for the performer is undeniable. The re-embodied processed voice is 'felt' as a psychophysical resistance. This is not the resistance of the steel-backed bend sensors, but something that happens in the mind: it is the conscious perception of
feeling sound in areas of the body. This type of cross-modal perception is experientially synaesthetic, and because it is an abnormal perceptual state it is disorientating and sensually overwhelming without study and practice. Working within such a unique quality of perception is one of the features of interactive systems like the Bodycoder. The development of particular psychophysical skills and perceptions is an important part of the emerging practice associated with interactive systems. In terms of the Bodycoder System, the perception and sensation change in subtle but significant ways each time the Bodycoder is reconfigured for a new piece. A processing/sound/physical bias in one piece (for instance, placing melody in the area of the right elbow) may be radically changed within the context of another, where the same type of operation could be assigned to a knee. In each case the performer must adjust themselves to the change in sensation and the reconfigured field of perception.

This may seem strange: within the context of classical music, musicians are not normally required to radically reconfigure the physical design of their instruments. Traditionally, musicians learn to master the fixed protocols of their instrument, then the technical and expressive virtuosity of the music composed for it. The Bodycoder System is not a fixed-protocol instrument but a flexible interface that enables the performer to embody audio and other media expressions in various ways. Developing a flexible approach, relearning protocols from piece to piece, and adjusting to and re-orientating qualities of physical expression and focus according to the re-location of sensors are some of the skills required of the Bodycoder performer.
Both the system and the electroacoustic music and multi-media pieces created for the Bodycoder present the performer with expressive challenges that continue to confront both the senses and the intellect, challenging the performer to stretch further their musical, physical and perceptual skills. The Bodycoder, like all good interface and interactive system designs, engages the senses as well as the intellect and grows in complexity as the user's skill base and mastery of the system increase.

Unlike previous works for the Bodycoder, which have either utilized soundfiles or computer-generated synthesis, the sound source for The Suicided Voice is the performer's own acoustic voice. The close interactive relationship between live and processed sound, between the single sound that comes from the larynx and the multiple sounds that are re-embodied, processed and manipulated on the limbs, requires precision across a range of perceptual, physical, technical and musical skills. Because the processed voice or voices come back to the performer sometimes instantaneously and at other times seconds later, depending on loop buffer/sample length (the longest sample length in this piece is twelve seconds), the performer is effectively working across a range of response times which may or may not correlate to the tempi and rhythms of the actual sound composition that the audience is witnessing. The performer is very much the architect at the center of the sound construction, generating and manipulating sound cells and layers often in advance of the audience hearing them, vocalising and constructing new sonorities while manipulating foreground sounds on the body.

Because of the interdisciplinary nature of the work, the compositional and rehearsal processes are one and the same. Compositional ideas can only be fully realized with the performer in the space with the system.
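The relationship between sample length and response time described above can be made concrete with a toy model. The Python sketch below is an illustrative stand-in for one of the looping buffers, not the MSP implementation: it caps recording at the twelve-second maximum used in the piece, loops playback at a variable pitch, and reports how long a freshly sampled cell takes to come back round. The sample rate is an assumption; the paper does not state one.

```python
SAMPLE_RATE = 44100             # assumed; not stated in the paper
MAX_FRAMES = SAMPLE_RATE * 12   # longest sample in the piece: 12 s

class LoopBuffer:
    """Toy stand-in for a sample-then-loop buffer."""

    def __init__(self) -> None:
        self.frames: list[float] = []
        self.position = 0.0

    def record(self, block: list[float]) -> None:
        """Append incoming audio, truncating at the 12-second cap."""
        room = MAX_FRAMES - len(self.frames)
        self.frames.extend(block[:max(0, room)])

    def play(self, n: int, pitch: float = 1.0) -> list[float]:
        """Read n frames, wrapping round; pitch scales the read step."""
        out = []
        for _ in range(n):
            out.append(self.frames[int(self.position) % len(self.frames)])
            self.position += pitch
        return out

    def response_delay(self) -> float:
        """Seconds before a newly sampled cell loops back to the start."""
        return len(self.frames) / SAMPLE_RATE
```

A short sample loops back almost immediately while a full twelve-second buffer returns its material twelve seconds later, which is exactly the spread of response times the performer has to anticipate; a `pitch` step of 2.0 skips every other frame, a crude analogue of octave-up loop pitch.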
The compositional/rehearsal process focuses not only on the over-arching compositional structure of the piece, but on the interior qualities and the physical and acoustic movement of the sounds. Filter and pitch ranges mapped to the physical gestures of the performer have to be defined and refined alongside more conventionally composed passages of vocalization. This is a painstaking process that often entails working through each strand and quality of sound, often within complex multi-phonic landscapes. Ultimately we are interested in exploring the interiority of sound alongside the physical gesture that interacts with, generates and manipulates it.

5. REFERENCES

Our development work with regard to the Bodycoder System, creative collaborative processes and the aesthetics of our work to date is well documented in performances and publications:

Bokowiec, M., and Wilson-Bokowiec, J. (2003). "Spiral Fiction". Organised Sound 8(3). Cambridge University Press, UK.

Wilson, J., and Bromwich, M. (2000). "Lifting Bodies: interactive dance - finding new methodologies in the motifs prompted by new technology - a critique and progress report with particular reference to the Bodycoder System". Organised Sound 5(1). Cambridge University Press, UK.

Wilson, J., and Bromwich, M. (1999). "Inside/Outside - The Bodycoder System for real-time manipulation of sound and images within an electronic theatre environment". In Global Village - Global Brain - Global Music. KlangArt, Osnabrueck, Germany.

Hemment, D. (1998). "Corpus of Sound". Mute 10.

Bromwich, M. (1998). "Bodycoder: a sensor suit and vocal performance mechanism for real-time performance". Proc. Int. Computer Music Conference, San Francisco.

Bromwich, M. (1995). "A single performer controlled interface for electronic dance/music theatre". Proc. Int. Computer Music Conference, San Francisco.