USING TECHNOLOGY TO RIDE THE REFLEX/VOLITIONAL CONTINUUM IN IMPROVISED MUSICAL PERFORMANCE

Dr Tim Sayer
University College Plymouth St Mark and St John, UK

ABSTRACT

Modern performance contexts provide environments in which highly complex functionality has to be mediated through bespoke interfaces that can be configured to respond to the performer's musical gesture. In such mediated environments there is the potential for multiple mapping stages to exist. The calibration and representation of functionality can be manipulated on the performer's side of the interface, while the grouping of parameters and functional response can be manipulated on the other. It is also possible to have environments where the relationship between the performer and the interface is configurable by manipulating semantic or metaphorical meaning. The question I explore in this paper is: how can human-computer interaction be used to influence a performer's internal behavioural mapping by exploring reflex and volition in improvised musical performance? Theoretical material relating to the experiences of performers, composers and systems developers, and to the field of cognitive science, will be presented in the context of an experimental computer-based performance environment. The environment, called Milieu [1], employs an expanded notion of a performance parameter space that includes the responsive and auto-responsive behavioural activity of the performer.

1. INTRODUCTION

This paper reflects an ongoing interest I have in the design of human-computer interfaces for improvising musicians who perform using acoustic instruments. I am specifically interested in the construction of conceptual tools rather than physical tools such as tracking devices, sensors and the like.
This paper will draw together three themes which have emerged in my work: the concept of eco-systemic design, the notion of a conceptual tool, and positioning the practice of improvised musical performance on a continuum between reflex and volitional behaviour.

2. ECO-SYSTEMIC DESIGN

In acoustic musical instruments, physics drives the mapping between motor skills and sound: acoustic instruments are typically implementations of mechanical systems. As Hunt and Wanderley observe, "The art of mapping is as old as acoustic instrument design itself, but it is only since the invention of real-time electronic instruments that designers have had to explicitly build it into each instrument." [2] The introduction of computer technology into the field means it is now possible to build interfaces that are active, not just reactive. In this sense they can respond to a performer and performance environment in accordance with a predetermined parameter map, in ways of which the performer may or may not be consciously aware. Many computer-based interfaces continue to evolve tightly coupled gestural mapping using a variety of peripheral devices such as data gloves, motion detectors and velocity sensors. Computer-based technologies also, however, afford opportunities to explore the relationship between performer and sound source through the construction of responsive environments, in a manner Di Scipio refers to as 'eco-systemic' [3]. In this environment the performer and computer both exist in a relationship of 'ambient coupling'; that is, the computer is responsive not purely to the performer but to the performer in the context of the performing environment. Di Scipio's model of a performance environment in some ways breaks down the traditional notion of agency in an electroacoustic performance setting. His vision, however, is not one of denying a performer agency in their relationship with technology, but of mediating that agency through the performance environment.
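The idea of ambient coupling can be made concrete in a small sketch. This is not Di Scipio's implementation; it is a minimal illustration, with invented names, of a system whose only input is the overall sound level of the room, so that the performer's influence is always mediated by the ambience rather than exerted through a dedicated control channel.

```python
def room_level(samples):
    """Root-mean-square level of a block of ambient audio samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

class EcoSystemicResponder:
    """Hypothetical sketch of 'ambient coupling': an internal synthesis
    parameter tracks the combined level of performer and environment.
    There is no direct control channel, so no single gesture maps
    one-to-one onto the output."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing  # slow response: the ambience, not a trigger
        self.density = 0.0          # illustrative output parameter

    def update(self, samples):
        level = room_level(samples)
        # exponential smoothing: the system responds to the evolving
        # ambience rather than to any individual event
        self.density = self.smoothing * self.density + (1 - self.smoothing) * level
        return self.density
```

Because the state drifts towards the ambient level rather than jumping to it, the relationship between a performer's action and the system's response stays indirect, in the spirit of the eco-systemic design described above.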
Such a performance paradigm is particularly appropriate in the context of my work as it allows for the indiscriminate mediation of behaviour fuelled by reflex as well as volitional processes. As Di Scipio asserts, this design concept does not preclude unmediated agency but subsumes it within the greater whole: "Direct man machine interactions (via control devices) are optional to an ecosystemic design, as they are replaced with a permanent indirect interrelationship mediated by the ambience." [3] I have adapted and augmented the notion of eco-systemic design in constructing performance environments for improvising musicians. Although Di Scipio's original concept was developed as a means of exploring man-machine interaction, it is a suitable framework in which experimental work examining performers' reflex and volitional response systems can be undertaken. The causal relationship between conscious behaviour, whether volitional or not, and environmental sensory information can be explored effectively using this architecture. There is no requirement on the part of the performer to contend with sensors, triggers or switches, merely to look, listen, feel and play.

3. REFLEX AND VOLITION IN IMPROVISED MUSICAL PERFORMANCE

The debate surrounding volitional and reflex behaviour has been around since at least the seventeenth century, when Descartes put forward his mechanistic notions of

behaviour. Various theories over the years have tried to offer a neurological explanation for terms such as volition, the soul, consciousness and the mind. Until the early nineteenth century the most commonly accepted theory regarding voluntary and reflex behaviour was that they involved two completely separate brain functions. This was superseded by the work of Charles Bell and François Magendie, which led to a unified theory classifying all brain functionality as an interface between sensory input and motor output [4]. According to this interpretation all behaviours are responses to stimuli; all that changes is the stimulus which initiates the behaviour. "The reflex/voluntary distinction derived from the sensorimotor hypothesis of neuroscience is not absolute; all behaviours fall on a continuum from purely reflex to purely voluntary and none is purely one or the other." [5] The defining characteristic of a behaviour at the reflex end of the spectrum is that the stimulus is likely to be recent and the ensuing behaviour predictable, brought about by a simple neuronal connection. Picking up something very hot could produce such behaviour. At the voluntary end of the spectrum behaviours are much more difficult to predict and could be initiated by a combination of recent and historical stimuli. Even a dramatic stimulus such as witnessing a robbery will produce unpredictable behaviour, based on a large body of historically referenced stimuli using long and complex neural connections. Orthogonal to the reflex/volitional continuum, I suggest, is a scale which maps the extent to which behaviour becomes manifest in conscious awareness, positioning behaviour between 'biomechanical' and 'cognate-mechanical'. The biomechanical extreme represents a collection of physically embedded systems, such as those that handle digestion and blood supply, moving through those that are more disposed to external monitoring, such as motor control.
Processes at this end of the continuum tend to be initiated and monitored outside of personal awareness. Cognate-mechanical behaviour, on the other hand, is presented to personal awareness for monitoring and feedback, and as such has a greater impact on cognitive load. This includes behaviours with an element of perceived choice, such as coughing and laughing, through to those that are perceived to be personally initiated and controlled, as in the case of improvised music making. Viewing the human activity of playing a flute in this way would reveal bio-mechanical processes taking oxygen to the lungs, maintaining the balance of the instrument and sending blood to the muscles, while the cognate-mechanical processes determine choice of notes, timbre, speed of execution and so on. It seems likely, given that human cognitive capacity is a finite resource, that there is an efficiency incentive for cognate-mechanical processes to migrate to the biomechanical domain once they no longer require monitoring and feedback, thus freeing resources. The imperative for this to happen becomes more acute as the number of cognate-mechanical processes happening simultaneously increases. There are a number of practical reasons in improvised instrumental music why this transference needs to take place. The speed at which acts are executed has significant implications for which type of mechanical behaviour governs them. Consider that a semiquaver played at a moderate tempo of 120 beats per minute takes only around 125 milliseconds to execute. Individual actions at this speed exist on the edge of what a person can be aware of and feel control over. Pressing states that "speeds of approximately 10 actions per second and higher involve virtually exclusively pre-programmed actions. An informal analysis of jazz solos over a variety of tempos supports this ball-park estimate of the time limits for improvisational novelty" [6].
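The tempo arithmetic behind this figure is easy to verify: at 120 crotchet beats per minute each beat lasts 500 ms, and a semiquaver occupies a quarter of a beat. A minimal calculation (the function name is mine, not from the literature):

```python
def note_ms(bpm, fraction_of_beat):
    """Duration in milliseconds of a note occupying the given fraction
    of one beat (crotchet) at the given tempo in beats per minute."""
    return 60_000 / bpm * fraction_of_beat

# a semiquaver is a quarter of a crotchet
semiquaver = note_ms(120, 0.25)  # 125.0 ms, i.e. eight notes per second
```

Eight notes per second sits just under Pressing's ten-actions-per-second threshold, which is why such passages lie on the boundary between monitored and pre-programmed action.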
Evidence from other fields, such as the work of Cabrera and Milton, supports the notion of diminished volitional control in human beings when undertaking processes at speed. Analysing the movements of a hand balancing a stick, they concluded that the compensating hand movements were so fast that conscious intervention in the process was not possible. They established that 98% of the hand movements required to keep a stick balanced were faster than the 100 milliseconds it takes for a human to respond to a visual stimulus; the compensating movement of the hand comprises a number of small random movements which keep the stick in a constant state of instability [7]. Within the realm of improvisation this facility is of course a double-edged sword. A trade-off exists which has a significant effect on the improviser's practice. Transferring acts from the cognate-mechanical to the bio-mechanical sphere will undoubtedly increase processing speeds and in turn motor facility. What it also does, however, is render each act a fixed entry in a library of pre-constructed components. As speed increases, the level of control perceived by the player diminishes, replaced by a stream of learnt material which is less likely to be context specific. It is interesting that this dichotomy hardly exists outside the realm of improvised music, except perhaps for speech. Pressing points to the speed of the biological processes that underpin improvised music making as evidence that the field of music and the act of improvisation are natural allies: "Processing speed seems to be greatest for audition and touch/kinaesthesia, of all the possible sensory systems. These are precisely the elements involved in musical improvisation and provide a vivid psychological interpretation for the historical fact that music, of all art and sport forms, has developed improvisation to by far the greatest degree."
[6] On a pragmatic level the need to reduce the cognitive burden of conscious monitoring at high speeds is clear, but it is nevertheless a problematic notion for many improvising musicians. It is probably fair to say that for some musicians there is almost an ethical principle at stake in calling a musical activity improvisation when it overtly draws on 'learnt'

material. The saxophonist Steve Lacy, in conversation with Derek Bailey, expresses his frustration accordingly: "Why should I want to learn all those trite patterns? You know, when Bud Powell made them, fifteen years earlier, they weren't patterns. But when somebody analysed them and put them into a system it became a school and many players joined it. But by the time I came to it, I saw through it - the thrill was gone. Jazz got so that it wasn't improvised any more." [8]

4. CONCEPTUAL TOOLS

Betty Edwards has, for many years, been involved in the development of conceptual tools to improve people's ability to draw. The theoretical principles that underpin her work were first developed by the Nobel Prize-winning psychobiologist Roger W. Sperry, who undertook research into brain-hemisphere functions in 1968. She said of him: "His stunning findings, that the human brain uses two fundamentally different ways of thinking, one verbal, analytic, and sequential and one visual, perceptual, and simultaneous, seemed to cast light on my questions about drawing. The idea that one is shifting to a different-from-usual way of thinking/seeing fitted my own experience of drawing and illuminated my observation of my students." [9] Inspired by this research, Edwards developed a range of conceptual tools for use in her classes. She developed observational techniques which avoided the creation of symbolic representations of visual information, thus allowing the artist to draw what they saw from primitive visual information rather than from pre-constructed units. A side-effect of these techniques was often to induce a trance-like state: Edwards describes her students experiencing an unawareness of the passing of time and, if voices were present, being able to perceive their sound but not their meaning.
The conceptual tools developed by Edwards are predicated on an unproven assumption: that her techniques can move cerebral processing from the left to the right hemisphere of the brain. This simplistic interpretation of the results she observed does not detract from the fact that her students experienced a significant improvement in their ability to draw. Invoking 'visual disorientation' in this way is obviously more suited to visual than to non-visual tasks. There is, however, a certain amount of anecdotal evidence to suggest that the use of distraction can perform a similar function when working with sound. Stan Tracey has for more than six decades been a hugely influential pianist in British jazz and is also revered for his compositions. In a television interview he stated: "I write far better stuff and more logical, watching television. I can watch a television program and I'll drift off the program in my mind onto the music that I've been writing. Because I'm not concentrating so hard on doing it ideas come easier. A lot of the stuff I've written has been done while I've been watching television, it really works." [10] Tracey seems to have intuitively found a technique which lubricates the flow of ideas, based on the diversion of conscious awareness away from the task in hand. Robert Zatorre has undertaken an extensive examination of the auditory cortical regions of the brain, using brain imaging techniques to monitor brain activity under a range of conditions. He identified a general level of auditory perception which provokes brain activity in both hemispheres prior to any interpretation of the sound; on the recognition of the sound as speech, processing shifts to the left hemisphere. "Thus, the predominant role of the left hemisphere in many complex linguistic functions might have arisen from a slight initial advantage in decoding speech sounds.
The important role of the right hemisphere in aspects of musical perception - particularly those involving tonal pitch processing - might then have been in some sense a consequence of, and is complementary to, this specialization of language." [11] It seems that for Tracey, giving the left hemisphere something to do prevents it from stifling creative musical processes in the right hemisphere. It is also significant that what he gives it to 'do' is almost certainly going to involve listening to and decoding human dialogue, and this could offer an explanation of why the technique is so successful for him.

5. AN EXPERIMENTAL COMPUTER BASED PERFORMANCE ENVIRONMENT

At this point I would like to describe my approach to the development of cognitive tools for use by an improvising musician: tools that can be assimilated into the design of a performance architecture broadly based on the eco-systemic principles discussed earlier. The framework under development is called Milieu. It is a tool to create performance environments for use by a single improvising musician playing an acoustic musical instrument, and has been prototyped using Director. The key to understanding the type of performances that might emanate from this environment is in the way

[Figure 1. Performance parameter space: environmental data, audio-visual output, parameter map, perceptual map, and a behavioural map spanning bio-mechanical and cognate-mechanical behaviour.]

information is mapped between the various components (see Figure 1). The mapping of information between components of the environment is configurable at two points: at the interface between the acoustic environment and the computer (the perceptual map), and between sensory information and performer (the behavioural map). The software has a simple agent-based architecture which spawns instances of agents from a number of agent classes. The agents' behaviour generates visual content in response to the sound of the performance environment, and sonic output which is generated in response to the visual output. A piece constructed using Milieu comprises one or more sections of fixed duration, with each section containing up to eight conceptual agents. Each conceptual agent has around twenty-eight audio-visual parameters that govern its internal state and behaviour. The visual parameters control the probability of the child objects changing size, opacity, colour and spawn rate, their movements and their responsiveness to the acoustic signal. At present the sonic output is MIDI, and the parameters determine the probability of change in pitch, duration, size of pitch movement, change in volume, which graphic parameter the sound responds to and its level of responsiveness. In each of the conceptual agents (parent classes) a single parameter is chosen to be responsive to the acoustic signal and another is chained to a single sonic parameter. The parameter responding to the acoustic signal may or may not be the same as the one chained to the sonic output. In accordance with eco-systemic principles, the only live data input into the computer is the acoustic activity in the performance space, which is of course primarily but not exclusively produced by the performer.
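The agent architecture just described might be sketched as follows. The prototype itself was built in Director; this Python fragment is purely illustrative, with hypothetical class and parameter names, showing how each parent agent selects one parameter to respond to the acoustic signal and chains another to a sonic parameter.

```python
import random

class ConceptualAgent:
    """Illustrative parent class whose parameters govern probabilistic
    audio-visual behaviour. One parameter is chosen to respond to the
    acoustic signal and one is chained to a sonic (MIDI) parameter;
    they need not be the same parameter."""

    VISUAL_PARAMS = ["size", "opacity", "colour", "spawn_rate", "movement"]
    SONIC_PARAMS = ["pitch_change", "duration", "pitch_interval", "volume_change"]

    def __init__(self):
        # each visual parameter holds a probability of change per update
        self.params = {p: random.random() for p in self.VISUAL_PARAMS}
        self.responsive_param = random.choice(self.VISUAL_PARAMS)
        self.chained_param = random.choice(self.VISUAL_PARAMS)

    def update(self, acoustic_level):
        # only the chosen parameter tracks the room's acoustic activity
        self.params[self.responsive_param] = acoustic_level
        # the chained visual parameter drives a sonic parameter's probability
        return {"sonic_target": random.choice(self.SONIC_PARAMS),
                "probability": self.params[self.chained_param]}

class Section:
    """A section of fixed duration containing up to eight conceptual agents."""
    def __init__(self, duration_s, n_agents):
        assert 1 <= n_agents <= 8
        self.duration_s = duration_s
        self.agents = [ConceptualAgent() for _ in range(n_agents)]
```

Because the responsive and chained parameters are chosen independently, the performer's acoustic influence on the sonic output may be direct or routed through an unrelated visual behaviour, which is one source of the non-explicit action-response relationship discussed below.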
The computer will respond sonically and visually to the information it receives, but although the performer can affect the behaviour of the computer, the relationship between action and response is not likely to be explicit. The software has the capacity to make 'autonomous' decisions and can also vary its behaviour over the course of the performance. The second configuration point within the architecture is the mapping of environmental data onto the performer's behaviour, which takes the form of a hypnotic script delivered prior to the performance. Experiments at this configuration point are at an early stage. I have taken a systemic approach to hypnotherapy devised by Dylan Morgan [12], and used the notion of metaphoric stories described by Haley [13], based on the work of Milton Erickson. In this mapping component I have not adopted what might be regarded as 'hard-core' post-hypnotic suggestion in the scripts I have developed. My reasoning has been guided by my belief that the function of my research is to emancipate a performer's musical endeavour from the constraints of their conscious and mechanistic behaviour, rather than to impose upon them constraints which I have constructed. To date I have prepared performances with two different instrumentalists, one playing tenor saxophone and one shakuhachi. On each occasion the project was undertaken in four stages. Stage one was an initial meeting at which the performer observed and experimented with the interface and decisions were made about the design of the performance. The second stage was a rehearsal using the configured system. At this stage the performer engaged with the first hypnotic script. The script related to improvisation in general and introduced the performer to the concept of the piece; it was played on headphones, after which a rehearsal was undertaken. The third stage was the actual performance using appropriate sound and projection facilities.
The performer listened to the second hypnosis script before embarking upon the performance. The hypnotic script was specific to the piece and provided suggestions about how the performer might engage with, influence and generally interact with the visual agents. This is the point at which the performer's relationship with the interface is suggested. In the first performance the hypnotic script was delivered prior to the performance and suggested a subtle and gentle manner in which to engage with the interface. In the second performance the script led the performer straight into the performance and was increased in intensity in an attempt to exert a more profound influence on the performer. This led to far more confrontational behaviour by the performer and was ultimately to the detriment of the performance. The fourth stage was a filmed interview, based loosely on a prepared questionnaire, in which the performer relayed their experience of performing the piece.

6. CONCLUSION

The experimental work undertaken to date has produced more questions than answers, which is perhaps indicative of a rich seam of enquiry. I intend to collect quantitative data and anecdotal evidence from continued performances in order to inform the continued development of this performance paradigm. I have made a conscious decision to introduce specific behavioural mappings into the scripts gradually, so as to monitor their effect on the performer. My technological interventions to date have therefore erred towards the volitional rather than the reflex end of the performer's behavioural spectrum. The experience of the second performance has made me only too aware that exploiting the potential of technological environments to interact directly with human subsystems can be very unpredictable and has to be handled with extreme care. Perhaps my focus so far has been skewed towards the creation of a technological framework, at the expense of an integrated and coherent working methodology.
In realising this I feel a certain philosophical empathy with Marc-Williams Debono, who wrote in Leonardo: "the dynamic interrelation of cognition and art is now a new way to investigate levels of perception or reality and will probably bring to light new epistemological fields." [14]

REFERENCES

[1] Sayer, T., "A Conceptual Tool for Improvisation". Contemporary Music Review, 2006. 25(1/2): p. 163-172.
[2] Hunt, A. and Wanderley, M., "Mapping performer parameters to synthesis engines". Organised Sound, 2002. 7(2): p. 97-108.
[3] Di Scipio, A., "'Sound is the interface': from interactive to ecosystemic signal processing". Organised Sound, 2003. 8(3): p. 269-277.
[4] Clarke, E. and Jacyna, L.S., "Nineteenth-Century Origins of Neuroscientific Concepts". 1987, Berkeley: University of California Press.
[5] Prochazka, A., Clarac, F., Loeb, G.E., Rothwell, J.C. and Wolpaw, J.R., "What do reflex and voluntary mean? Modern views on an ancient debate". Experimental Brain Research, 2000. 130(3): p. 417-432.
[6] Pressing, J., "Improvisation: methods and models", in Generative Processes in Music: The Psychology of Performance, Improvisation and Composition, J.A. Sloboda, Editor. 1988, Clarendon Press: Oxford. p. 129-178.
[7] Cabrera, J.L. and Milton, J.G., "On-off intermittency in a human balancing task". Physical Review Letters, 2002. 89: 158702.
[8] Bailey, D., "Improvisation: Its Nature and Practice in Music". 2nd ed. 1992, London: The British Library Sound Archive.
[9] Edwards, B., "Drawing on the Right Side of the Brain". 2001, London: HarperCollins.
[10] Tracey, S., "Originals - Stan Tracey". BBC4: London, 2004.
[11] Zatorre, R.J., Belin, P. and Penhune, V.B., "Structure and function of auditory cortex: music and speech". Trends in Cognitive Sciences, 2002. 6(1): p. 37-46.
[12] Morgan, D., "The Principles of Hypnotherapy". 1996, London: Eildon Press.
[13] Haley, J., "Uncommon Therapy". 1973, New York: W.W. Norton & Company.
[14] Debono, M.-W., "From Perception to Consciousness: An Epistemic Vision of Evolutionary Processes". Leonardo, 2004. 37(3): p. 243-248.