Anitoo: Some Analysis Tools

Damian Keller
Departamento de Música, Universidade de Brasília
E-mail: d3121966@guarany,cpd.tunb. br
ICMC Proceedings 1996, pp. 198-201

Abstract

Anitoo is a group of Max patchers that extracts pitch, duration and intensity values from a MIDI note sequence, performs some transformations and provides statistical information about those values. Its main aim is to allow hypotheses on musical structure to be tested in a simple and direct way, so it can be described as a first step toward the development of tools that help define more comprehensive musical models. It is part of a framework in progress which analyzes musical structure resulting from the interaction of sound, perceptual and social systems. Some theoretical issues are discussed and a brief description of the patchers is given.

1. Introduction

This paper reports some results of a framework in progress that approaches musical structure as an interaction of three interdependent systems: sound, perceptual and social. These are dynamically interlinked and their interaction transforms the state of each system [Keller and Silva, 1995]. The sound system is defined as a time series - variations of a chosen variable as a function of time - that is, the unfolding of a sequence of acoustical signals [Boon et al., 1990]. Information provided by the sound system is processed by a perceptual system whose dynamics are described by its states - or representations - which are modified by processes - or operations. Global constraints - established by the social system - set the range of possible perceptual states, and thus the probability that the whole musical process will take place in a specified way. In previous work we defined musical information as a measure of the elements' range and rate of transformation in the sound system [Keller and Silva, 1995]. Periodicity and entropy are the two modeling forces that act on it.
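To make the role of entropy as a modeling force concrete, the following sketch (plain Python of our own, not part of Anitoo) estimates the Shannon entropy of a pitch sequence from its value histogram:

```python
from collections import Counter
from math import log2

def shannon_entropy(values):
    """Shannon entropy (in bits) of the value distribution of a sequence."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

periodic = [60, 62, 60, 62, 60, 62, 60, 62]  # highly periodic sequence
varied = [60, 62, 64, 65, 67, 69, 71, 72]    # no repeated values

print(shannon_entropy(periodic))  # 1.0 bit (two equiprobable values)
print(shannon_entropy(varied))    # 3.0 bits (eight equiprobable values)
```

A highly periodic sequence yields low entropy, while a sequence with no repeated values yields the maximum for its length: the two extremes between which the modeling forces operate.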
Three levels of observation are defined for this system: sound, syntax and morphology [Todd and Loy, 1991; Clynes, 1995; De Poli et al., 1991; Georgescu and Georgescu, 1990]. The task of building a theoretical structure that could account for musical phenomena in either traditional or contemporary forms of organization in western music brought up the need for quantification tools [Boulez, 1992; Dempster and Brown, 1990]. When the concept of a dynamical system is applied to music, the characteristics of the system should be inferred from musical data. Until this has been done, any observation about the behavior of a musical system, or any prediction of the results of a specific transformation, might not be sufficiently supported [Aiello, 1994; Keller and Silva, 1995]. Max, a graphical programming language for the Macintosh, appeared as the best environment in which to develop tools for this purpose. Here we discuss part of the theoretical background behind Anitoo, and we refer the reader to Keller and Silva [1995] for a complete description of the framework. A short explanation of Anitoo is included. Finally, some implications and perspectives for development are discussed.

2. Musical Process

When music is considered as a time series, the transformation of a variable in relation to time can be observed. This allows us to look for regularities in the sound, syntax or morphology of a musical signal. If a musical piece is understood as a system in equilibrium, its structure would be a time-invariant representation embodying all states. Their time history represents the dynamic process of the system. A plot of this process in a three-dimensional graph shows the evolution of the system as a trajectory in the phase space sustained by the x, y, z axes. This space equals the range of the system [Boon et al., 1990].
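The phase-space picture can be made concrete with a delay embedding: each point of the trajectory groups a value with its successors. The sketch below is our own illustration in Python, not code from the tools described here:

```python
def delay_embed(series, dim=3, lag=1):
    """Map a 1-D time series to points in a dim-dimensional phase space.

    Point i is (x_i, x_{i+lag}, ..., x_{i+(dim-1)*lag}); plotting these
    points shows the trajectory of the system within its range."""
    span = (dim - 1) * lag
    return [tuple(series[i + k * lag] for k in range(dim))
            for i in range(len(series) - span)]

pitches = [60, 64, 67, 64, 60, 64, 67, 64]  # a periodic four-note loop
for point in delay_embed(pitches):
    print(point)  # each (x, y, z) triple is one state of the trajectory
```

The periodic melody revisits the same points, tracing a closed orbit; a less correlated sequence would scatter across a wider region of the space.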
From this perspective a process can be defined as a finite group of relationships in transformation that establish a perceptual unit [Tenney and Polansky, 1980]. Variables in large-scale musical processes approximate a 1/f correlation. That is halfway between unpredictable dynamics, as in white noise, and very high correlation, which does not provide enough information to catch a listener's attention [Schmuckler and Gilden, 1993]. The stability of a musical system is proportional to its entropy. Almost completely entropic systems are the most stable ones, so strong - highly energetic - processes are needed to disrupt their equilibrium; in stochastic music or random noise, foreign events can be introduced without greatly modifying the system's behavior. On the other hand, a system with high periodicity will suffer a strong impact from small perturbations [Keller and Silva, 1995].

3. Information Filters

Correlation has been defined from two different perspectives: statistics and signal processing. From the statistical point of view, the distance of the values of X from their mean is calculated to find the standard deviation of X. Here we are dealing with the probability of occurrence of each value of X. A comparison of the standard deviations of X and Y provides a measure of their correlation [Tabachnick and Fidell, 1989]. Cross-correlation can be thought of as a similarity test between two waveforms and as a way to find their common frequency components [Ramirez, 1985]. Using these tools from statistics and signal processing, we can measure the correlation of musical structures at the level of both their micro- and macro-structure. In this way we can establish the behavior of a variable and measure its information content, defined by the range and rate of transformation of its elements [Knopoff and Hutchinson, 1981; Lutfi, 1992; Green, 1988]. Once this analysis has been done, musical data can easily be organized by processes that act directly on the range and rate of transformation of each variable. Thus we can define a musical structure by using filters to control musical information.

4. Parameters

The development of extra-musical systems - mathematical, physical, biological - which could realize any valid musical structure, new or already existing, is hampered by the difficulty of defining a direct correlation between extra-musical and musical variables [Widmer, 1995]. The difference between physical variation of parameters and the corresponding perceptual outcome should be taken into account [Aiello, 1994; De Poli et al., 1991; Massaro and Cowan, 1993]. Generally, values from a control system are directly matched to the variation of a musical variable within a predefined absolute range [Bidlack, 1992; Gogins, 1991].
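The direct matching just described can be sketched in a few lines: a control value is rescaled linearly into a fixed MIDI range, ignoring context. This is an illustrative Python fragment of our own, not code from any of the cited systems:

```python
def map_to_range(value, src_lo, src_hi, dst_lo=0, dst_hi=127):
    """Linearly rescale a control value into a predefined absolute range."""
    frac = (value - src_lo) / (src_hi - src_lo)
    return round(dst_lo + frac * (dst_hi - dst_lo))

# e.g. the output of a control system in [0.0, 1.0] driving a MIDI pitch:
print(map_to_range(0.25, 0.0, 1.0))  # -> 32
```

Whatever the control system produces, the musical result is fixed by the chosen range; nothing in the mapping responds to musical context, which is precisely the limitation discussed here.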
This procedure does not consider the influence of context and interaction observed in musical systems. In fact, much experimental work has shown that pitch, intensity, duration and timbre constantly interact [Melara and Marks, 1990]. Furthermore, micro-variations in the timbre of each sound event serve as important cues for identifying instruments, and it is likely that they help construct a mental image of musical structure [Clynes, 1995; Tróccoli and Keller, 1995].

5. Anitoo - Analysis Tools

Anitoo's patchers are divided into three functional groups: the sequencer itself, where notes are recorded and played back; the analysis-playback group, which parses pitch, velocity and duration, displaying each parameter on a graph (table); and the statistics group, which calculates the mean and standard deviation of the values provided by the notes. The difference between the last two groups is that the second is an irreversible process that does not allow the original data to be recovered [Bennett, 1988]. Therefore, transformations of musical parameters are handled by the first group and analytical comparisons by the second. Once parameters have been parsed, the basic framework can be enhanced by introducing new mechanisms for organizing and transforming musical data.

5.1 Materials

Anitoo was developed on a PowerPC 7100/AV, using Max version 2.52. The external object LitterStats by Peter Castine was used in some patchers. The MIDI keyboard was a Roland JD-800. All these resources belong to the Laboratory of Electroacoustic Music of the University of Brasilia.

5.2 Pre-processing

Anitoo does not make any prior assumptions about the size or range of the MIDI sequence. Therefore some preliminary calculations have to be done before all parameters can be parsed and displayed correctly. This is why it does not work in real time.
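As an illustration of what such a preliminary pass involves, the hypothetical Python sketch below (our own analogue, not the actual Max patchers) scans a finished note list, derives per-note durations, and rescales them so that the longest fits the 0-127 display range:

```python
def note_durations(events):
    """events: chronological (time_ms, pitch, velocity) triples, where
    velocity 0 is a note-off. Returns one (duration, articulation) pair
    per note: duration is note-on to next note-on, articulation is
    note-on to note-off. Assumes a monophonic sequence, as Anitoo does."""
    ons = [(t, p) for t, p, v in events if v > 0]
    offs = [t for t, p, v in events if v == 0]
    result = []
    for i, (t, p) in enumerate(ons):
        articulation = offs[i] - t
        # the last note has no following note-on; fall back to its note-off
        duration = ons[i + 1][0] - t if i + 1 < len(ons) else articulation
        result.append((duration, articulation))
    return result

def scale_to_127(values):
    """Set the longest value equal to 127 and scale the rest as fractions."""
    longest = max(values)
    return [round(v * 127 / longest) for v in values]

events = [(0, 60, 100), (400, 60, 0), (500, 64, 100), (950, 64, 0),
          (1000, 67, 100), (1400, 67, 0)]
pairs = note_durations(events)
print(pairs)                                # [(500, 400), (500, 450), (400, 400)]
print(scale_to_127([d for d, _ in pairs]))  # [127, 127, 102]
```

Note that two passes are unavoidable: the scaling factor depends on the longest value, which is known only after the whole sequence has been read, which is why such an analysis cannot run in real time.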
To calculate note durations and display them on a 0-127 scale (the range the table object allows), we first find the longest value in the sequence, set it equal to 127, and calculate all the other values as fractions of 127. Once this has been done, all values can be sent directly to table to be displayed. We encountered some difficulty obtaining the duration between note-on and note-off, and the total duration of each MIDI note - from one note-on to the next note-on - when reading standard MIDI files with Borax. Thus we set upon the task of developing a timing mechanism inside Max to calculate these durations from note-on and note-off messages. Nevertheless, we should state that its precision is limited to 5 ms, the shortest interval measured by the clocker object.

5.3 Pitch, Intensity, Duration and Articulation

Pitch values are sent to a table to be shown on a time axis. Thus a melodic contour equivalent to the succession of pitch values is displayed on the screen.
This contour information serves as a quick visual reference for making overall comparisons among melodies. The same procedure was implemented for intensity and duration, although a conceptual distinction between duration and articulation was established. Time intervals from note-on to note-on are used as a measure of note duration, and articulation is given by note-on to note-off intervals. This means that a legato sequence and a staccato sequence can be represented by the same duration values.

5.4 Statistics

The simplest way to observe the distribution of values in a sequence is by means of the Histo object, which keeps track of the number of times each value is repeated. When its output is displayed on a table, we have a rough view of how the values are distributed and which values are most frequent for the parameter observed. The LitterStats object, by Peter Castine, provides the mean and standard deviation of all the values it is given. This is an alternative way to compare the information content of two sequences, centering on overall parameter variation rather than on each individual value.

6. Summary

Anitoo is a group of Max patchers that extracts pitch, duration and intensity values from a monophonic MIDI sequence, performs some transformations and provides statistical information about those values. Our main aim in developing it was to allow hypotheses on musical structure to be tested in a simple and direct way, so it can be described as a first step toward the development of tools that help define musical models. This work forms part of a broader theoretical framework that brings together physical, perceptual and social information about musical structure. The sound system is divided into three levels: sound, syntax and morphology. Entropy and periodicity are used to define the stability and behavior of the sound system. Fusion and parsing, as high-level processes, control the structure of perceptual mechanisms.
Constraints defined by the social system influence the musical process in a specific environment. Many limitations can be pointed out regarding the use of the MIDI protocol and automatic musical production; therefore, the tools provided serve only as a starting point for further investigation. The analysis of real music is suggested as a strong methodology for establishing comparisons with algorithmic production and for developing comprehensive and precise musical models. As work in interpretation analysis has shown, micro-structure plays an important role in musical organization [Clynes, 1995; Todd, 1992]. This micro level should be combined with observation of the dynamics of events and of macro-structure to provide a complete description of musical phenomena.

Acknowledgments

Thanks to Dr. Karlheinz Essl for sending LitterStats. Thanks to Prof. Conrado Silva, Ana Lucia and Nahuel for invaluable help and food. CNPq-UnB provided partial financial support.

References

[Aiello, 1994] R. Aiello. Musical Perceptions. Oxford University Press, New York, NY, 1994.
[Bennett, 1988] C.H. Bennett. Notes on the history of reversible computation. IBM Journal of Research and Development, 32(1): pp. 16-23, 1988.
[Bidlack, 1992] R. Bidlack. Chaotic systems as simple (but complex) compositional algorithms. Computer Music Journal, 16(3): pp. 33-47, 1992.
[Boon et al., 1990] J.P. Boon, A. Noullez and C. Mommen. Complex dynamics and musical structure. Interface, 19: pp. 3-14, 1990.
[Boulez, 1992] P. Boulez. Hacia una Estética Musical [Toward a Musical Aesthetic]. Monte Avila Editores, Caracas, 1992.
[Clynes, 1995] M. Clynes. Microstructural musical linguistics: composers' pulses are liked most by the best musicians. Cognition, 55: pp. 269-310, 1995.
[De Poli et al., 1991] G. De Poli, A. Piccialli and C. Roads. Representations of Musical Signals. MIT Press, Cambridge, MA, 1991.
[Dempster and Brown, 1990] D. Dempster and M. Brown. Evaluating musical analyses and theories: five perspectives. Journal of Music Theory, 34(2): pp. 247-279, 1990.
[Georgescu and Georgescu, 1990] C. Georgescu and M. Georgescu. A system approach to music. Interface, 19: pp. 15-52, 1990.
[Gogins, 1991] M. Gogins. Iterated functions systems music. Computer Music Journal, 15(1): pp. 40-48, 1991.
[Green, 1988] D.M. Green. Profile Analysis: Auditory Intensity Discrimination. Oxford University Press, New York, NY, 1988.
[Keller and Silva, 1995] D. Keller and C. Silva. Theoretical outline of a hybrid musical system. Proceedings of the II Brazilian Symposium on Computer Music, Canela, RS, 1995.
[Knopoff and Hutchinson, 1981] L. Knopoff and W. Hutchinson. Information theory for musical continua. Journal of Music Theory, 25(1): pp. 17-43, 1981.
[Lutfi, 1992] R.A. Lutfi. Informational processing of complex sound. III: interference. Journal of the Acoustical Society of America, 91(6): pp. 3391-340, 1992.
[Massaro and Cowan, 1993] D.W. Massaro and N. Cowan. Information processing models: microscopes of the mind. Annual Review of Psychology, 44: pp. 383-425, 1993.
[Melara and Marks, 1990] R.D. Melara and L.E. Marks. Perceptual primacy of dimensions: support for a model of dimensional interaction. Journal of Experimental Psychology: Human Perception and Performance, 16: pp. 398-414, 1990.
[Ramirez, 1985] R.W. Ramirez. The FFT: Fundamentals and Concepts. Prentice Hall, Englewood Cliffs, NJ, 1985.
[Schmuckler and Gilden, 1993] M.A. Schmuckler and D.L. Gilden. Auditory perception of fractal contours. Journal of Experimental Psychology: Human Perception and Performance, 19(3): pp. 641-660, 1993.
[Tabachnick and Fidell, 1989] B.G. Tabachnick and L.S. Fidell. Using Multivariate Statistics. HarperCollins, New York, NY, 1989.
[Tenney and Polansky, 1980] J. Tenney and L. Polansky. Temporal gestalt perception in music. Journal of Music Theory, 24(2): pp. 205-241, 1980.
[Todd, 1992] N.P.M. Todd. The dynamics of dynamics: a model of musical expression. Journal of the Acoustical Society of America, 91(6): pp. 3540-3550, 1992.
[Todd and Loy, 1991] P.M. Todd and G.D. Loy (eds.). Music and Connectionism. MIT Press, Cambridge, MA, 1991.
[Tróccoli and Keller, 1995] B.T. Tróccoli and D. Keller. A função da familiaridade no reconhecimento do timbre [The role of familiarity in timbre recognition]. Anais da XXV Reunião Anual da Sociedade Brasileira de Psicologia, Ribeirão Preto, SBP, 1995.
[Widmer, 1995] G. Widmer. Modelling the rational basis of musical expression. Computer Music Journal, 19(2): pp. 76-96, 1995.