HEARING EMERGENCE: TOWARDS SOUND-BASED SELF-ORGANISATION

Tom Davis and Pedro Rebelo
Sonic Arts Research Centre
Queen's University Belfast
Belfast BT7 1NN
Email: tdavis01@qub.ac.uk, P.Rebelo@qub.ac.uk

ABSTRACT

A fascination with models derived from the natural organisation of organisms has a long history of influence in the arts. This paper discusses emergence as a complex behaviour and its manifestations in the sonic domain. We address issues inherent in the use of visual/spatial metaphors for sonic representation and propose an approach based on sound interaction within biological complex systems.

1. INTRODUCTION

In recent years there have been numerous implementations of algorithmic systems that model the phenomenon of emergent behaviour. In an attempt to reproduce the necessary conditions for emergent behaviour to occur, these systems employ bottom-up strategies through the definition of simple local rules for the behaviour of agents. This type of characteristically local behaviour has been paralleled with the social interactions at play in live music performance, particularly in improvisatory contexts [2], [8]. Musical applications have been widespread and include Swarm Intelligence for the selection of musical parameters [13], emergent behaviours for use in sound synthesis [9], and complex-system modelling of human interaction in a performance environment [1], [2], [8]. Despite the large number of existing systems that exploit the properties of emergent behaviour, there seems to be a lack of systems that actually represent the perpetual novelty and innovation of emergent behaviour within a sonic context.

2. MAPPING AND METAPHOR

The systems mentioned above utilise emergent behaviour based on examples in which emergent qualities become apparent through a user's perception of a graphical display. The properties of these graphical displays are then mapped in different ways to try to produce a sonic result that conveys their graphical emergent qualities in audio form. This mapping process often tries to derive a direct correlation between qualities of spatial position in the graphical model and a sonic characteristic in the audio model [2], [13]. This seemingly arbitrary one-to-one mapping of spatial to audio characteristics can be regarded as a sonification of primarily graphic systems rather than the design of emergence in the sound domain.

We propose that a re-thinking of the mapping process is essential in order to address the potential of emergence as a sonic construct. John Holland, in his book Emergence: From Chaos to Order [6], suggests that for a model to be "successful" it should provide a metaphor for a system that enables us to see new connections with, or add new meaning to, processes in the already existing system. He goes on to say that "deeper extended metaphors [should] allow for a profound re-conception of the subject matter" [6]. Holland outlines three postulates for the source (original) and target (modelled) systems:

1. There is a source system with an established aura of facts and regularities.
2. There is a target system with regularities and perhaps facts that are difficult to perceive or interpret.
3. There is a translation from source to target that suggests a means of transferring inferences for the source into inferences for the target. [6]
He suggests that if the source system has common underlying qualities with the target system, then it is more likely for this transference of inferences to occur and therefore more likely for the model to facilitate a profound re-conception of the subject matter. This change in perception facilitated by the model allows for a greater creative and imaginative exploration of the system space. The proposed relationship between source and target systems suggests a translation of function rather than a mapping of results. Our research addresses systems that seek to exhibit emergence in the sonic domain by referring to common underlying qualities with the original nature-derived model.

Expanding on the notion of "metaphorical systematicity", Lakoff and Johnson note that "to comprehend one aspect of a concept in terms of another... will necessarily hide other aspects of the concept" [4]. In other words, a metaphorical relationship is dependent on the identification of concepts that come into focus when considering two different systems. The model of the emergent system needs to act as a metaphor of the original system.

Take, for example, the flocking motion apparent in a graphical representation generated by Craig Reynolds' Boids algorithm [11]. The Boids (flocking agents) in this algorithm follow three simple rules:

1. Collision Avoidance: avoid collisions with nearby flockmates.

2. Velocity Matching: attempt to match velocity with nearby flockmates.
3. Flock Centering: attempt to stay close to nearby flockmates.

These three simple rules, when applied individually to each agent in the system, lead to the emergence of complex flocking behaviour such as that found in flocking birds, swarming bees and schooling fish. When viewed in graphical form, the novelty and fascination of this system derive from the changing positions of the many agents in two- or three-dimensional space. Although the interactions occur between an agent and its neighbours on a local level, it is only from the more removed perception of the scene as a whole that the seemingly complex flocking behaviour, with its constant reorganisation yet retention of shape and its fascinating navigation of the available space, becomes apparent. Craig Reynolds described this system in his 1987 paper as follows: "...it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized... all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world" [11].

3. SPATIAL AND AURAL PERCEPTION

The perception of emergent systems can be said to be intrinsically related to engagement. Whereas we are easily able to detect visual patterns in behaviours such as the flocking of birds, due to our ability to perceive all elements of the scene at once, it is unclear how the ear might be sensitive to emergent patterns that depend on this overview. Although much has been written about the differences between the eye and the ear in the context of perception, the way we understand patterns through sound is largely unknown. The notion of one's body as the centre of perception, Edmund Husserl's zero point of orientation [3], is particularly relevant to sound. Whereas one can be a (visual) observer, treating the world in front of us as a spectacle viewed from a certain perspective, aural stimuli are mapped around our own body. This difference in role raises issues of engagement and participation, which suggests that the ear needs to be treated differently from the eye. It is the trajectory performed by the ear, from a subtle tilt to the movement of the whole body, that becomes an active participant in the perception of an auditory scene. In the same way that behaviours such as flocking are better understood from a distance, we argue that sonic emergence can only be perceived when considering the listener as an agent of that very behaviour. The ear does not act as a stethoscope, listening in from the outside, but rather as a participant in a space in which it takes the role of one of many agents.

4. SWARM LAB

The prototype Swarm Lab focuses on the maintenance of particular qualities found in the Boids system. The beauty of this model becomes apparent through the agents' spatial and temporal relationships. In order to ensure a commonality of underlying mechanisms, we opted for a literal correlation of the spatial relationships in this model to those of a model of sound spatialisation within a multi-loudspeaker space. This model retains all the aspects found in the graphical representation of the flock and re-presents them in a sonic format. The spatialisation of the flock or swarm within a space allows the listener to appreciate the same novelty and fascination in the spatial positioning of the sonic image as found in the graphical representation.
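As a point of reference for the behaviour being spatialised, the following is a minimal Python sketch of the three Boids steering rules listed above. It is an illustrative reconstruction, not the Max/MSP Boids object used in the prototype, and all parameter values (radii, weights, flock size) are arbitrary assumptions.

```python
# Minimal 2-D sketch of the three Boids rules: collision avoidance,
# velocity matching and flock centering. Constants are illustrative only.
import random

NEIGHBOUR_RADIUS = 5.0   # how far an agent can "see"
AVOID_RADIUS = 1.0       # distance below which agents steer apart
MAX_SPEED = 0.5

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-0.2, 0.2), random.uniform(-0.2, 0.2)

def step(boids):
    # Note: velocities are updated in place, so later agents see some already
    # updated values; acceptable for a sketch.
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < NEIGHBOUR_RADIUS ** 2]
        if not near:
            continue
        n = len(near)
        close = [o for o in near
                 if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < AVOID_RADIUS ** 2]
        # 1. Collision avoidance: steer away from very close flockmates.
        ax = sum(b.x - o.x for o in close)
        ay = sum(b.y - o.y for o in close)
        # 2. Velocity matching: steer towards the neighbours' mean velocity.
        mx = sum(o.vx for o in near) / n - b.vx
        my = sum(o.vy for o in near) / n - b.vy
        # 3. Flock centering: steer towards the neighbours' mean position.
        cx = sum(o.x for o in near) / n - b.x
        cy = sum(o.y for o in near) / n - b.y
        b.vx += 0.05 * ax + 0.05 * mx + 0.01 * cx
        b.vy += 0.05 * ay + 0.05 * my + 0.01 * cy
        speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
        if speed > MAX_SPEED:                      # clamp speed
            b.vx, b.vy = b.vx / speed * MAX_SPEED, b.vy / speed * MAX_SPEED
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```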
The listener is positioned amongst the agents and is hence able to perceive the spatial distribution of the flock, the change from flocking behaviour to random behaviour, the position of the flock in the space, and the tightness or diffusion of the flocking. The model was designed for SARC's Sonic Laboratory, using an eight-channel loudspeaker system at floor level. The prototype is implemented in Max/MSP, primarily utilising the spatialisation object vbap~ [15]. The interactions between swarm agents are governed by the Boids object created by Eric Singer [12], an implementation of Craig Reynolds' Boids algorithm for Max/MSP. Our implementation explores the rendering of a sounding swarm by employing Doppler, reverb and volume effects on top of the vbap~ spatialisation. This sonification of the flocking motion of the Boids adds new ways of creatively exploring the output produced by the Boids algorithm.

The listener becomes an inhabiting agent rather than a voyeur, as the sound envelops the body in the environment more significantly than the graphical display. The spatial perception of the swarm is almost as strong, yet there is an emergence of new interactions not present in the original graphical representation. The Boids themselves now interact on an audio signal level. Each agent emits its own sound wave (a filtered noise impulse), which is separately spatialised according to its position; these sound waves interact constructively or destructively depending on their timing and amplitude. This leads to interesting phase effects and to changes in sound corresponding to variables such as the speed of the swarm, the minimum distance between Boids and the flocking strength of the swarm. The sonic exploration of these variables suggests new avenues in the creative exploration of the system and helps promote types of perceptual engagement not present in the original model.
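The per-agent control mapping can be pictured roughly as follows. This Python sketch derives per-loudspeaker gains, a distance attenuation and a propagation delay (whose variation over time yields the Doppler effect) from an agent's position. A simple pairwise equal-power pan stands in for vbap~, so this is an illustrative approximation rather than the Max/MSP patch itself; the ring layout and all constants are assumptions.

```python
# Illustrative control-rate mapping from agent position to rendering parameters
# for an eight-loudspeaker ring with the listener at the origin.
import math

SPEAKER_ANGLES = [i * 45.0 for i in range(8)]  # ring of 8 speakers, degrees
SPEED_OF_SOUND = 343.0                          # m/s
SAMPLE_RATE = 44100

def render_params(x, y):
    """Return (per-speaker gains, distance gain, propagation delay in samples)
    for an agent at position (x, y) in metres."""
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    distance = max(math.hypot(x, y), 0.1)        # avoid division by zero

    # Pan between the two adjacent speakers with an equal-power law
    # (a stand-in for the vector base amplitude panning of vbap~ [15]).
    lower = int(azimuth // 45) % 8
    upper = (lower + 1) % 8
    frac = (azimuth - SPEAKER_ANGLES[lower]) / 45.0
    gains = [0.0] * 8
    gains[lower] = math.cos(frac * math.pi / 2)
    gains[upper] = math.sin(frac * math.pi / 2)

    distance_gain = 1.0 / distance               # roughly 6 dB drop per doubling
    delay_samples = distance / SPEED_OF_SOUND * SAMPLE_RATE
    return gains, distance_gain, delay_samples

print(render_params(3.0, 4.0))
```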

5. SONIC FROGS

A system currently under development addresses the modelling of a natural emergent listening environment as found in a frog ecology. This is a system that considers the listener as an agent, relies on aural feedback and some level of social interaction, and is innately emergent on a sonic level, i.e. there is no high-level mapping between algorithmic and sound parameters.

5.1 A Listening Ecology

Through a study of current research into the mating of frogs we have found that female frogs select a mate from within the male frog chorus according to the temporal and spectral characteristics of their calls. Frogs have a complex auditory system that is designed to help them recognise and respond to calls of their own species. They have a variety of different calls for situations such as mating, distress, release, warning, rain and the definition of territory. The calls of different species are distinct in temporal and spectral characteristics, which helps the frogs recognise calls of their own species from others within a dense chorus [10].

The mating calls of the frog species under study can be characterised by four main parameters: dominant call frequency (the frequency with the highest spectral intensity), pulse rate, call rate and call duration. Dominant call frequency relates to frog size, pulse rate relates to the ambient temperature of the environment (for example the water temperature), and call rate and call duration relate to the preference of each individual animal [7]. The characteristics found to have the most effect on female choice of mate were dominant call frequency and call length [7]: females preferred longer, lower calls, the pitch of a call having a strong correlation with the body size of the male and thus his success in mating.

Wollerman has found that a "female frog could detect a single male's calls mixed with the sounds of a chorus when the intensity of the calls was equal to that of the chorus noise" [14]. Given that there is a 6 dB fall-off in signal intensity with each doubling of distance, this means that for an averagely spaced chorus (0.08 males per square metre [14]) the female can only distinctly 'hear' between three and five males at any one time. It has also been proposed that female frogs prefer 'leading' males, i.e. male calls that precede others in the group and do not overlap with other males' calls. Thus the male frogs within the chorus try not to call at the same time as other frogs. Being subject to the same auditory masking effect of the chorus as their female counterparts, they can only hear their nearest neighbours, but will actually change their call rate so as not to coincide temporally with them [5].

5.2 Formation of Rule Set

We took a distillation of these rules as a rule set for our system. We proposed that males can only hear their two nearest neighbours and that they modulate their call rate so as not to coincide temporally with their neighbours' calls. This was done using an implementation of a resettable oscillator as outlined in Greenfield et al. [5]; a minimal sketch of this call-timing rule is given below. This type of local interaction is characteristic of emergent systems. The female states a preference for a certain male based on an "analysis" of the male's calls. Due to the auditory masking effects of the chorus the female can only hear the nearest males, dependent on male spacing and the loudness of their calls, and so has to explore the sonic space created by the males' interactions before making a choice. She is thus acting as a listening agent within an emergent environment.
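The following discrete-time sketch illustrates one way such a resettable-oscillator rule can be realised: each male is a free-running oscillator that calls when its phase wraps, and hearing one of its two nearest neighbours call knocks its phase back, which tends to produce alternating, non-overlapping calls. This is an assumption-laden illustration of the kind of mechanism described by Greenfield et al. [5], not our Max/MSP implementation, and all constants are arbitrary.

```python
# Sketch of an inhibitory, resettable-oscillator rule for male call timing.
import random

NUM_MALES = 8
CALL_PERIOD = 100        # timesteps between calls when undisturbed
RESET_PHASE = 20         # phase a male is knocked back to on hearing a call

positions = [random.uniform(0.0, 10.0) for _ in range(NUM_MALES)]
phases = [random.uniform(0, CALL_PERIOD) for _ in range(NUM_MALES)]

def two_nearest(i):
    """Indices of the two nearest neighbours of male i (the only ones it hears)."""
    others = sorted(range(NUM_MALES),
                    key=lambda j: abs(positions[j] - positions[i]))
    return [j for j in others if j != i][:2]

for t in range(1000):
    callers = []
    for i in range(NUM_MALES):
        phases[i] += 1
        if phases[i] >= CALL_PERIOD:     # phase wraps: this male calls now
            callers.append(i)
            phases[i] = 0
    # Any male that hears a nearest neighbour call is reset, delaying its own call.
    for i in range(NUM_MALES):
        if i not in callers and any(j in callers for j in two_nearest(i)):
            phases[i] = min(phases[i], RESET_PHASE)
    if callers:
        print(f"t={t}: males {callers} call")
```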
On choosing a male, the female and male 'mate' using a simple Genetic Algorithm (GA) with one-point crossover (a minimal sketch of this step is given at the end of this section). The outcome of this GA affects the parameters of a granular synthesis based on different recordings of frog calls; a successful mating thus results in a change in the timbre and temporal characteristics of the frog calls. The female also carries her own set of characteristics, some of which affect her search criteria. The outcome of this GA process is at the centre of the exploration of the large-scale temporal structure of the model and can be approached in different ways:

- Only the male's characteristics are affected by the mating procedure, so the female carries on her search for another mate unchanged. This tends to lead her to revisit the same frog, but as she is occasionally attracted to others, and with the probability of mutation set to just 2%, there is over time a noticeable homogenisation of the originally wide diversity of calls.

- A number of female frogs with different characteristics and preferences compete to mate with the males. Under these conditions the system is less likely to stabilise on one sort of call, but instead fluctuates as the different females mate with the available males.

- The mating process affects the female's characteristics, such that her preferences change during the Genetic Algorithm. This gives a greater variety of possible outcomes, with more uncertain results and thus a likelihood of more interesting developments in the frogs' calls.

This system has again been designed for an eight-channel loudspeaker system in SARC's Sonic Laboratory, using the same vbap~ [15] implementation as the earlier Swarm Lab model. This implementation enables the listener to perceive the model from the female frog's perspective and thus to become engaged in the emergent world, acting as a participant within the space.
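For illustration, the one-point crossover and mutation step might look like the sketch below. The parameter names follow the call parameters described in Section 5.1, but the encoding, value ranges and the way the 2% mutation is applied are assumptions for the purpose of the example, not a description of our implementation.

```python
# Sketch of one-point crossover plus mutation over a frog-call parameter genome.
import random

PARAM_NAMES = ["dominant_freq", "pulse_rate", "call_rate", "call_duration"]
MUTATION_PROB = 0.02

def crossover(male, female):
    """One-point crossover of two equal-length parameter lists."""
    point = random.randint(1, len(male) - 1)
    return male[:point] + female[point:]

def mutate(genome):
    """Perturb each gene with 2% probability."""
    return [g * random.uniform(0.8, 1.2) if random.random() < MUTATION_PROB else g
            for g in genome]

def mate(male, female):
    child = mutate(crossover(male, female))
    # The offspring parameters would then re-tune the male's granular synthesis
    # (grain pitch, call timing and so on), changing the timbre of later calls.
    return dict(zip(PARAM_NAMES, child))

male = [900.0, 40.0, 0.5, 1.2]        # Hz, pulses/s, calls/s, s (illustrative)
female_pref = [700.0, 35.0, 0.4, 1.8]
print(mate(male, female_pref))
```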

Figure 1. Flow chart of the Sonic frog model.

6. CONCLUSION

This paper addresses the notion of emergence by referring to models intrinsically based on sonic interaction. This has suggested possibilities beyond those of high-level mappings from visually oriented models to sonic phenomena. The two models presented here attempt to incorporate non-linearities that are characteristic of natural behaviours. Events such as non-linear mutations or environmental changes must be intrinsic to metaphors of emergence and are central to the design of temporal structures that are not only based on smooth evolutionary patterns but also introduce elements of disruption and intervention.

7. REFERENCES

[1] Biles, J. "GenJam in Transition: from Genetic Jammer to Generative Jammer." Proceedings of Generative Art, 2002.

[2] Blackwell, T.M. "Swarm Music: Improvised Music with Multi-Swarms." Symposium on Artificial Intelligence and Creativity in Art and Science, 2003, pp. 41-49.

[3] Holenstein, E. "The Zero-Point of Orientation: The Placement of the I in Perceived Space", in Welton, D. (ed.), 1999.

[4] Lakoff, G. and Johnson, M. Metaphors We Live By. The University of Chicago Press, Chicago, 1980.

[5] Greenfield, M.D., Tourtellot, M.K. and Snedden, W.A. "Precedence effects and the evolution of chorusing." Proceedings of the Royal Society (London), 264, pp. 1355-1361, 1997.

[6] Holland, J.H. Emergence: From Chaos to Order. New York: Basic Books, 1999.

[7] Howard, R.D. and Young, J.R. "Individual variation in male vocal traits and female mating preferences in Bufo americanus." Animal Behaviour, 55, pp. 1165-1179, 1998.

[8] Impett, J. "A meta-trumpet(-er)." Proceedings of the 1994 ICMC, 1994.

[9] Miranda, E. "At the Crossroads of Evolutionary Computation and Music: Self-Programming Synthesizers, Swarm Orchestras and the Origins of Melody." Evolutionary Computation, 12(2), pp. 137-158, 2004.

[10] Olmsted, D.D. "Frog auditory behavior." 28/8/2000. Available online: http://www.neurocomputing.org/amphibian_neurobiology/Frog_Auditory_Behavior/body_frog_auditory_behavior.html [accessed 02/02/2005].

[11] Reynolds, C.W. "Flocks, herds and schools: A distributed behavioral model." Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pp. 25-34, 1987.

[12] Singer, E. "Boids for OSX." Available online: http://www.ericsinger.com/cyclopsmax.html [accessed 03/2005].

[13] Spector, L. and Klein, J. "Complex Adaptive Music Systems in the BREVE Simulation Environment." Workshop Proceedings of ALife VIII, pp. 17-23, 2002.

[14] Wollerman, L. "Acoustic interference limits call detection in a Neotropical frog Hyla ebraccata." Animal Behaviour, 57, pp. 529-536, 1999.

[15] Pulkki, V. "Virtual sound source positioning using vector base amplitude panning." Journal of the Audio Engineering Society, 45(6), pp. 456-466, June 1997.