Khorwa: A Musical Experience with "Autonomous Agents"

Mikhail Malt
Ircam
email@example.com

Abstract

This text is a discussion of, and a reflection on, the use of a certain category of artificial-life models, the autonomous agents, in musical composition, and on their implementation in the Max/MSP environment (Cycling '74 & Ircam, http://www.cycling74.com). The work was carried out in the framework of the development of a real-time sound installation, the "Khorwa" installation, built in the Max/MSP software environment.

1 Introduction

One of the main problems arising in a creative process that uses the computer concerns the difference between the language a human uses to express himself (Levy 1987), that is, natural oral and written language, often filled with analogies, metaphors and ambiguities, and the formal languages that govern the behavior of computers. The computer belongs to a purely syntactic world governed by strict rules of transformation and calculation. This very particular universe tolerates no ambiguity: every message sent to a system must conform to a pre-established code, and instructions must be formally clear and free of ambiguity. The machine cannot make decisions for which it has not been programmed. The composer who wants to use the computer in his work is therefore asked, more and more, to clarify and formalize his thought. What was until now an aesthetic or personal choice (namely, formalization) is now a necessity, even an imperative, in order to communicate with the machine. The novelty in the use of the computer in music, in the second half of the twentieth century, did not consist in the formalization of theory, but in the formalization of practice and of know-how, the traditional kingdom of personal experience. The composer who wants to take advantage of the new possibilities offered by computers must go beyond know-how and manage to formalize this "how-to-do" knowledge.
The composer should develop a new musical solfège: a "solfège of models" (Malt 2000). What do we mean by a "solfège of models"? We are not speaking of a solfège in the sense of a catalog of static models, a solfège resulting from a fixed typology. We are speaking, rather, of the development of knowledge and of the intellectual and cognitive abilities that would allow the composer to control and master either the musical result stemming from some generative model, or the link between the graphic and/or textual representations of a specific piece of music software and a musical result. The model is to be more than an intermediary between different instances of reality; more than the material support of a musical thought; and more than the explanation of a practice. The model should also be a tool allowing the computer to handle a part of this artistic thought. In our case, we wanted a model that could be more than a simple simulation of a composer's possible choices: a control model for the musical material, bringing us closer to a generative model able to simulate a musical writing. The use of a model based on autonomous agents seemed to us an interesting solution, since it relies on the evolution of small musical entities that interact with each other to produce a complex structure. The only agent model available for a musical application, at the level of musical writing and in real time, was the "boids" model by Craig Reynolds (Reynolds 1987), implemented in Max by E. Singer. However, as that implementation had an explicitly graphic orientation, it could not be extended to other uses, which led us to develop the present model.
2 The autonomous agents

Since the appearance of this concept in the middle of the seventies, in the field of distributed artificial intelligence, the denomination "agent" has been used to refer either to software entities or to material, hardware entities (such as robots). There is, as yet, no general consensus on the definition of an
"agent". For the needs of our work we have used the definition given by Wooldridge: "An agent is a computer system, situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives" (Weiss 2000, 29). We will add that "autonomy" means that the agent must be able to act, react and interact in its environment without the intervention of other systems, whether human or computational. An agent is thus an entity possessing a dynamic internal state and behavior rules.

2.1 Why "agents"?

One interest in the use of agent models for the control of musical events comes from the fact that they belong to the so-called "individual-based models" (Reynolds 2001). These models are simulations based on the global consequences of local interactions between the members of a population. Control is exercised at the level of the individual, that is, on the rules that command its behavior. This is of major importance if one compares this type of model with statistical models. Although both types are intended to generate control information for global situations, in statistical models control acts on average characteristics of the population, whereas in individual-based models it acts on the behavior rules of the individual. Among the individual-based models there is also the cellular automata model, still used by many composers (Miranda 1994). However, this model is not made to represent complex (multidimensional) data and was not adapted to our needs: all the "cells" are similar, able only to represent a binary state, not a "diversity" of individuals. The agent model, on the other hand (each agent can be viewed as an "object", in the computing sense of the word), can handle multidimensional spaces and represent a large diversity of individuals.
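To make the contrast concrete, here is a minimal, hypothetical sketch (the class, names and numeric choices are ours, not part of the installation): each agent applies a single local rule, drifting toward the average state of the neighbors inside its listening radius, and grouping emerges globally without any controller acting on population averages.

```python
import random

class Agent:
    """One individual: a one-dimensional state (e.g. a central pitch)
    and a listening radius restricting which neighbors it perceives."""

    def __init__(self, pitch, radius=5.0):
        self.pitch = pitch
        self.radius = radius

    def neighbours(self, others):
        # Only agents inside the listening radius are "heard".
        return [a for a in others
                if a is not self and abs(a.pitch - self.pitch) <= self.radius]

    def step(self, others, rate=0.1):
        # Local "sociable" rule: drift toward the mean of the neighbors.
        near = self.neighbours(others)
        if near:
            mean = sum(a.pitch for a in near) / len(near)
            self.pitch += rate * (mean - self.pitch)

random.seed(1)
agents = [Agent(random.uniform(48, 84)) for _ in range(20)]
for _ in range(200):
    for a in agents:
        a.step(agents)

pitches = sorted(round(a.pitch, 1) for a in agents)
```

After the run, `pitches` typically shows a few tight clusters rather than the initial uniform spread: the global structure is a consequence of the local rule alone, which is the sense in which control is performed "at the level of the individual".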
From this point of view, this model can simulate the evolution of musical "entities" in a more complete way, with all the parameters necessary for their representation.

3 The artistic framework

3.1 Artistic concepts

The autonomous-agents model was implemented in the framework of the development of a real-time installation in the Max/MSP software environment: the "Khorwa" installation. "Khorwa" is a real-time sound installation based on the creation of a society of musical beings, using an artificial-life model, the autonomous agents. The installation was presented at the "Résonances" event that took place at Ircam, Paris, in October 2003. The project was conceived as a human/digital reflection on the concept of life, proposing, in real time and without human intervention, a musical evolution of a musical material, in the same manner that one cultivates microorganisms in a laboratory. Each "musical being" is born, lives, interacts with its environment and is influenced by it, has a name (a genealogy is built), reproduces itself and dies. The only human intervention is the possibility, for the visitor, to record his voice from a microphone placed in the center of the room. This musical material is kept in a memory, something like a "collective unconscious", to be used later. In this way the installation also evolves with regard to its sampled sound material. "Khorwa" is a Tibetan word with the following meanings: I. to turn around; to circumambulate, to walk all round; also to elapse, to be completed. II. the world; rotatory existence; the round of transmigration within the six classes of beings (Das 2000). In this installation, the only visual element is the projection of the graphic program used to build the artificial environment (the Max patch).
The visual space is kept to a minimum, with the intent of polarizing the visitor's perception towards the quadraphonic sound space (Malt 2003a, 2003b).

3.2 The music formalization

This project was articulated mainly around three questions: 1) Can one imagine a music that evolves continuously, like a living being? 2) What are the possible relationships between formal models and music? 3) How can a musical "writing" be simulated in real time? The generated musical surface should also have as a goal the formalization of certain aspects of our musical writing, based mainly on the use of small musical gestures that evolve in time (Malt 1996) and on the concepts of global and local time. Regarding the formalization of the concepts just exposed (global and local time, flows of events, and a musical writing based on gestures), three basic assumptions guided our work:
1) Any musical process can be analyzed as the concatenation and superposition of the evolution of several "flows", each one having its own evolution;

2) It is possible to represent this kind of evolution by a complex dynamic-system model; and

3) The autonomous agents could be a suitable choice for this kind of model.

4 The implementation

4.1 The technical framework

The implementation of this model was made in the Max/MSP environment. The agents' environment is a Max window (figure 1), and every agent is a "patcher" (an encapsulation of a process in a graphic object) dynamically loaded using the "newex" script command. This message takes as parameters the "newex" command (as a symbol), the relative position of the patcher in pixels (x and y), the parameters for the patcher's width and font size, the name of the patcher, and the list of parameters (chromosomes Ch1 to Ch9) that we want to associate with the agent.

Figure 1: The environment with 10 agents

4.2 The genetic structure

At its "birth" (that is, at its creation), each agent possesses a genetic material encoded in nine chromosomes, distributed as follows (figure 3):

Ch1: one gene. A name, allowing its identification and defining the variables for sending and receiving "personal" messages.

Ch2: two genes. A fixed spatial position, providing virtual landmarks, and a region of listening, whose radius restricts the "cognitive" ability of each agent.

Ch3: two genes. A maximal life span, genetically determined (that is, inherited from the agent's "parents"); as an agent grows older, its probability of death increases. And the cyclic time for the repetition of its task: all agents repeat their task in an irregular cyclic mode.

Ch4: two genes. Gender and a reproducing potentiality.
The second third of the maximal foreseen life is the "fertile period"; during it, each agent can reproduce, passing its genetic material to its offspring.

Ch5: two genes. A task (a musical gesture): currently there are 14 different tasks (the task definitions are still evolving, so this number may well change soon), each one associated with a musical gesture (the simple event, the cloud, a "proto-melody", elements of synthesis, samples, etc.). And a behavior: each agent is able to know which agents are inside its zone of listening, and to modify some characteristics of its behavior and its task according to its neighbors. For example, in a "sociable" mode of behavior, each agent tends to develop its musical gesture towards the average of the "central pitches" of its neighbors; an "anti-social" mode, on the other hand, moves the agent in the contrary direction from its "mates".

Ch6: a first set of task parameters, divided into three genes, 6a, 6b and 6c. The meaning of this parameter set (and of Ch7, Ch8 and Ch9) changes according to the "musical task" (Ch5).

Ch7: a second set of task parameters, divided into three genes, 7a, 7b and 7c.

Ch8: a first gestural profile ("coll" name).

Ch9: a set of evolution curves ("coll" name).

Figure 3: An agent and its "chromosomes"

The final result of the interaction among the different agents is the construction of a "musical surface". Each agent or "individual" can be generated in either of two ways: the first is called "God-generated", and the second, a sexual reproduction, "Self-generated". The "God-generated" mode allows the initialization of the system, the control of the number of agents (avoiding both the disappearance of the created "society" and overpopulation), and also the introduction of new elements (genes), favoring variety. In this mode the generation is not random: the system takes into account the general state of the
environment. The "Self-generated" mode allows the agents' reproduction, perpetuating the names, tasks, behaviors, positions and the various inherited genetic materials.

5 Conclusions

According to these preliminary results, the autonomous-agent model offers advantages beyond its control possibilities. It is a model with a memory: it allows a musical material to evolve by genetic transmission, thereby associating the musical ideas of variation and interpolation. It also incorporates a notion of "unity" of the musical material. The fact that each agent can "communicate" with the other agents means that the material generated by one entity is always correlated with the materials generated by its neighbors. This correlation depends to a large extent on the behavior rules imposed on the agents. As the action of each agent always depends on the other agents, the notion of emergence was fundamental in this experiment. This notion expresses the appearance of a new meaning during the aggregation of elements within a given context: a meaning that was explicitly absent from the individual elements and results from the interaction between them. The spatial dispersion of the agents also induces the emergence of several musical layers, characterized by well-defined "familiar" groups. From these experiments it is possible to put forward the hypothesis that a "musical surface" could be seen as a system with an unstable dynamics, driven by a multiplicity of interacting forces. Composition is then seen as a process in permanent movement, a permanent search for meaning between the different levels of the considered musical space, with moments of stabilization, moments of destabilization and, above all, phenomena of emergence. To compose is to create, to weave, to give a meaning to a musical material.
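The genetic structure and the "Self-generated" reproduction described in section 4 can be sketched as follows. This is an illustrative Python model built on our own assumptions (the field names, the death-probability formula and the per-chromosome crossover are ours); the installation itself encodes all of this inside Max/MSP patchers.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Genome:
    name: str                     # Ch1: identification / message routing
    position: tuple               # Ch2: fixed (x, y) virtual mark
    radius: float                 # Ch2: listening radius
    lifespan: int                 # Ch3: maximal life span (inherited)
    cycle: float                  # Ch3: cyclic time of the task
    gender: int                   # Ch4
    fertility: float              # Ch4: reproducing potentiality
    task: int                     # Ch5: one of the musical gestures
    behavior: str                 # Ch5: "sociable" / "anti-social"
    params1: list = field(default_factory=list)  # Ch6: genes 6a-6c
    params2: list = field(default_factory=list)  # Ch7: genes 7a-7c
    profile: str = "profile0"     # Ch8: gestural profile ("coll" name)
    curves: str = "curves0"       # Ch9: evolution curves ("coll" name)

def is_fertile(age, lifespan):
    # The second third of the foreseen life is the fertile period.
    return lifespan / 3 <= age < 2 * lifespan / 3

def death_probability(age, lifespan):
    # Assumed formula: probability of death grows with age,
    # reaching certainty at the maximal life span.
    return min(1.0, age / lifespan)

def reproduce(a: Genome, b: Genome, child_name: str) -> Genome:
    # "Self-generated" mode: each chromosome is drawn from one parent,
    # perpetuating names, tasks, behaviors and positions.
    pick = lambda x, y: random.choice((x, y))
    return Genome(child_name, pick(a.position, b.position),
                  pick(a.radius, b.radius), pick(a.lifespan, b.lifespan),
                  pick(a.cycle, b.cycle), pick(a.gender, b.gender),
                  pick(a.fertility, b.fertility), pick(a.task, b.task),
                  pick(a.behavior, b.behavior), pick(a.params1, b.params1),
                  pick(a.params2, b.params2), pick(a.profile, b.profile),
                  pick(a.curves, b.curves))

adam = Genome("adam", (0, 0), 5.0, 120, 1.5, 0, 0.8, 3, "sociable",
              [60, 12, 0.5], [0.1, 0.2, 0.3])
eve = Genome("eve", (10, 4), 7.0, 90, 2.0, 1, 0.6, 7, "anti-social",
             [72, 7, 0.9], [0.4, 0.5, 0.6])
child = reproduce(adam, eve, "adam-eve-1")
```

The "God-generated" mode would instead construct a `Genome` directly, with values chosen from the current state of the population rather than drawn from two parents.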
The composer needs, in his process, to create or to give a meaning to the musical material. He needs a meaning to emerge from the relation between the multiple dimensions he is handling (abstract entities such as concepts, and acoustic material). The various meanings created are stages of a compositional process, through which the composer must pass to arrive at the final work. From this point of view, a model (from the composer's point of view) is also a support, an assistant to thought, for the generation of meaning during a process of composition. The produced music, which we could call "the listened musical surface", is only one aspect of the process of composition. It is, naturally, in most cases the final objective; nevertheless, seen globally and from the point of view of the process that created it, this result is only a small aspect of the whole. The development of this work currently continues with research into possible representations for "adaptive" behavior rules, the representation and coding of different musical materials (MIDI and audio) in order to study the possibilities of genetic transmission, and the search for links between the generated control structures and the musical meaning produced.

References

Bilotta, E., and Pantano, P. (2001). Artificial life music tells of complexity. In E. Bilotta, E. R. Miranda, P. Pantano, and P. M. Todd (Eds.), ALMMA 2001: Proceedings of the workshop on artificial life models for musical applications (pp. 17-28). Cosenza, Italy: Editoriale Bios.

Das, C. (2000). Tibetan-English Dictionary. Delhi.

Epstein, J. M., and Axtell, R. (1996). Growing Artificial Societies: Social Science from the Bottom Up. Washington, D.C.: Brookings Institution Press; distributed by MIT Press.

Langton, Christopher (Ed.) (1996). Artificial Life: An Overview. MIT Press.

Levy, P. (1987). La Machine Univers. Paris: Éditions La Découverte.
Malt, M. (1996). "Lambda 3.99 (Chaos et composition musicale)". In Troisièmes Journées d'Informatique Musicale (JIM 96), Île de Tatihou, Normandie, France.

Malt, M. (2000). "Les mathématiques et la composition assistée par ordinateur: concepts, outils et modèles". PhD thesis in Music and Musicology of the XXth Century, École des Hautes Études en Sciences Sociales, thesis supervisor Marc Battier, Paris, France.

Malt, M. (2001). "In Vitro: Growing an Artificial Music Society". In Artificial Life Models for Musical Applications, workshop of the 6th European Conference on Artificial Life, Prague, Czech Republic, September 2001.

Malt, M. (2003a). Khorwa-video-extract01. Video, Ircam.

Malt, M. (2003b). Khorwa-video-extract02. Video, Ircam.

Miranda, E. R. (1994). "Music composition using cellular automata". Languages of Design, Vol. 2, pp. 105-117.

Miranda, E. R. (Ed.) (2000). Readings in Music and Artificial Intelligence. Contemporary Music Series Vol. 20. Amsterdam: Harwood Academic Publishers.

Miranda, E. R., and Todd, P. M. (2003). "A-Life and Musical Composition: A Brief Survey". IX Brazilian Symposium on Computer Music, Campinas, Brazil.

Reynolds, Craig W. (1987). "Flocks, Herds, and Schools: A Distributed Behavioral Model". Computer Graphics 21(4) (SIGGRAPH '87 Conference Proceedings), pp. 25-34. http://www.red3d.com/cwr/boids.html

Reynolds, Craig W. (1999). "Steering Behaviors for Autonomous Characters". In Proceedings of the 1999 Game Developers Conference, pp. 763-782.

Reynolds, Craig W. (2001). "Individual-Based Models, an annotated list of links". http://www.red3d.com/cwr/ibm.html

Weiss, Gerhard (Ed.) (2000). Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. MIT Press.

Proceedings ICMC 2004