GENERALIZED TIME FUNCTIONS

Peter Desain & Henkjan Honing
COCO Foundation, P.O. Box 1037, NL-3500 BA Utrecht
Center for Knowledge Technology, Lange Viestraat 2B, NL-3511 BK Utrecht

A solution is presented to problems that arise when continuous control of parameters is linked to a discrete, note-based composition framework. It is argued that control functions that take multiple times (absolute and relative) as parameters nicely evade the problems and keep the language transparent and parallelizable.

Introduction

In the early history of computer music composition, the available systems took either a monolithically continuous approach, inspired by signal processing (Mathews & Moore, 1970; Berg, 1979), or used a discrete, note- or event-based technique (Hiller, Leal & Baker, 1966; Koenig, 1970). Although some early work stressed the importance of hybrid systems (Mathews, 1969; Buxton, Sniderman, Reeves, Patel & Baecker, 1978), the division became even more obvious once MIDI had lured designers into building composition systems close to this note-based protocol. With the recent advent of cheap signal-processing hardware that allows more natural continuous control over parameters during each note's evolution, the quest for elegant constructs for composition languages supporting both worlds is on again. Throughout this history some researchers foresaw these developments and made attempts to bridge the gap, by stating the problems (Dannenberg, Dyer, Garnett, Pope & Roads, 1989; Honing, 1991) and by proposing solutions to them (Dannenberg, McAvinney & Rubine, 1986; Anderson & Kuivila, 1989). In our opinion, most remain unsatisfactory, for several reasons. Sometimes a solution is based only on global time, preventing the composer from thinking in terms of local constructs.
Solutions proposed for object-oriented composition systems often suffer from a declarative/procedural confusion whereby transformations and musical objects form no orthogonal sets (each new transformation added has to take all object types and all existing transformations into account). This inevitably leads to a situation in which some transformations cannot be applied twice, or some combinations cannot be applied in an arbitrary order. Sometimes a way of communicating information between objects and transformations is proposed with which the problems are solved, though in a nontrivial, almost procedural way. But let us first explain a problem that became a touchstone in the descriptions of these systems.

The problem

A natural thought, when bored with note-based discrete systems, is to pass each note continuously variable functions of time as parameters instead of constant values. In this way continuous control can be brought to systems based on discrete events. The control functions passed are functions of the actual time, and elegant ways can be given to build and transform them. One can regard these functions as control signals that depend on only one parameter: the time elapsed since the start of the note. When such a control function is coupled to a note that may later change its duration or timing in a transformation, the resulting behavior of the time function is problematic. For example, when a transformation stretches the duration of a note, several alternatives are possible, as is shown in Figure 1a.

ICMC 348
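The ambiguity can be made concrete with a small sketch (illustrative Python; the note model and function names are ours, not the paper's). A control function takes a single argument, the time elapsed since the start of the note, and each plausible meaning of "stretch" becomes a different transformation on it:

```python
import math

# A control function of one argument: time elapsed since note onset (seconds).
def vibrato(t):
    return 0.5 * math.sin(2 * math.pi * 6.0 * t)  # a 6 Hz vibrato

# Three plausible meanings of "stretch this note from old_dur to new_dur",
# each returning a new one-argument control function.

def stretch_repeat(f, old_dur, new_dur):
    # The function just repeats itself for as long as the note lasts.
    return lambda t: f(t % old_dur)

def stretch_elastic(f, old_dur, new_dur):
    # The function stretches along with the note.
    return lambda t: f(t * old_dur / new_dur)

def stretch_clip(f, old_dur, new_dur):
    # The function stays in place and is zero for the rest of the time.
    return lambda t: f(t) if t < old_dur else 0.0
```

Which of the three is "correct" depends entirely on the musical role of the function, which is exactly the ambiguity sketched in Figure 1a.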
Figure 1. Two examples of time transformations on an arbitrary time function: a) stretching the duration, and b) shifting the onset of the discrete structure.

The time function might just repeat itself for as long as the note lasts. It could also stretch along with the note. Or it could stay in place and define a zero value for the rest of the time. All these behaviors (and mixtures of them) are sensible in certain musical contexts. If the time function is intended as a vibrato, it should repeat itself. If it has the characteristic of a glissando, an elastic stretch is more appropriate. If it is meant to define an initial ornament or the attack of the note, the third alternative is the correct behavior. This issue is often called the "vibrato problem": a vibrato should not slow down when a note is stretched, but a glissando, defined by the same means, should.

A solution

A well-known approach to this issue defines different stretch transformations, one for each alternative. But instead of defining different transformations on a simple time function, it might be better to look for a simple transformation on a more complex time function. The solution we propose is to generalize each control function to a function of more parameters, each parameter reflecting a different aspect of time. Let us look first at functions of two parameters: the duration of the note and the time elapsed since the start of the note. A time function then becomes a surface, as is shown in Figure 2. After a stretch transformation or a setting of the note's duration, a simple look-up of a cross-section of this surface yields the control envelope.

A related concern is the use of a relative or absolute start-point for the time-base used. The use of an absolute time scale is sometimes preferred by composers because of the (false) impression of total control.
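A minimal sketch of the cross-section idea (illustrative Python; not the implementation promised by the authors): each generalized time function takes both the elapsed time and the note's duration, so a stretch needs no special-cased transformation at all and reduces to looking up the cross-section of the surface at the new duration.

```python
import math

# Generalized time functions: functions of (elapsed, duration).

def vibrato(elapsed, duration):
    # Rate ignores the duration argument: it does not slow down under stretch.
    return 0.5 * math.sin(2 * math.pi * 6.0 * elapsed)

def glissando(elapsed, duration):
    # Spans the whole note: stretches elastically with it.
    return elapsed / duration

def ornament(elapsed, duration):
    # Confined to the first 0.1 s, zero afterwards, whatever the duration.
    return 1.0 - elapsed / 0.1 if elapsed < 0.1 else 0.0

def envelope(f, duration, steps=4):
    # Cross-section of the surface at a given duration: the control envelope.
    return [f(duration * i / steps, duration) for i in range(steps + 1)]

# "Stretching" the note from 1 to 2 seconds is just a different look-up:
print(envelope(glissando, 1.0))  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(envelope(glissando, 2.0))  # same shape: an elastic stretch
```

The three definitions differ only in how they use the duration argument, which is what makes each behave appropriately under a stretch without any per-function transformation machinery.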
However, it implies that an envelope has to be redefined each time it is used at another point in time. This can be avoided by using a time-base relative to the object under construction; this is what we did when we used "the time elapsed since the start of the note" as one of the time parameters. This does not mean that the notion of absolute time control can be ignored. It is indeed indispensable when time
relations with events outside the musical piece (say, the midnight church bells) are to be taken into account or, as is more often the case, when relations between different musical objects have to be maintained (e.g. a synchronized vibrato between different voices).

Figure 2. Surfaces representing vibrato, glissando, and ornament time functions as a function of duration and the elapsed time since the start of, e.g., the note.

The question now becomes what happens if a note, and its attached time function controlling one of its attributes, is shifted (or positioned) in absolute time. Again several alternatives make musical sense: an ornament should shift along invariantly with the note, while a vibrato that is intended to synchronize among voices should not (see Figure 1b). The same approach applies here: time functions should have one more parameter specifying the absolute start time of the note they are attached to. One of the advantages of generalized time functions over the use of different stretch and shift transformations is that the approach still works for hybrid, or composed, time functions.

Time function composition

The control functions shown so far are stylized examples, rudimentary in their musical value. Much more elaborate envelopes are needed, but they can all be based on the same idea, because the solution still works under time function composition. Building a comprehensive set of musically useful time functions can best be done by supplying some simple, basic time functions and ways of building complex ones by transforming and combining them. The simplest ways are plain additive and multiplicative combinations. More elaborate mixtures, like interpolation, are easily built on top of them. Concatenation, or switching between different time function sections as in the building of a traditional ADSR envelope, has a straightforward generalization to multiple parameters. An even richer world of possibilities opens up when time functions accept time functions as arguments; their parameters may then change over time as well. Consider a vibrato that is given a ramp function to control its rate, from the beginning of the note to the final rate reached at the end of the note. If the note is made longer by a stretch transformation, the evolution of the resulting vibrato will indeed slow down, reaching its final rate at a later time, but the pitch movements themselves will pass through more cycles instead of being stretched.

This composability of behavior (combinations of time functions preserve, in a compound way, the different ways in which their constituent components deal with time) is an important characteristic of the proposed solution.

Conclusion

We showed how generalized time functions that take multiple times as parameters nicely evade the problems that arise when transforming them. Transformations acting on time functions are completely independent of the time functions themselves (transformations and
functions form orthogonal sets); their behavior is encapsulated and provided automatically. The choice of the precise time parameters to use is more or less arbitrary (e.g. absolute start time, absolute end time, and actual time could be used instead of elapsed time since start, duration, and absolute start time). This allows for easy integration into existing computer composition frameworks. From the hardware perspective the approach has the distinct advantage of being easy to adapt to run on parallel architectures: each note can be handled by a different processor, without the need for information passing between them, since the generalized time functions provide the necessary behavior. To enable the reader to check and experiment with the ideas presented, a rudimentary implementation of generalized time functions and more elaborate examples will be given in Desain & Honing (forthcoming). The full implementation will be part of the COCO composition system, the successor of LOCO (Desain & Honing, 1988).

Acknowledgements

Thanks to Stephen Pope for the enlightening discussions on music representation during his visit to the Center for Knowledge Technology in the summer of 1990. Roger Dannenberg deserves a special mention for his very useful comments on this work.

References

Anderson, D. A. & R. Kuivila. (1989) Continuous Abstractions for Discrete Event Languages. Computer Music Journal 13(3).
Berg, P. (1979) PILE: A Language for Sound Synthesis. Computer Music Journal 3(1).
Buxton, W., R. Sniderman, W. Reeves, S. Patel & R. Baecker. (1978) The Use of Hierarchy and Instance in a Data Structure for Computer Music. Computer Music Journal 2(4).
Dannenberg, R., L. M. Dyer, G. E. Garnett, S. T. Pope & C. Roads. (1989) Position papers. In Proceedings of the 1989 International Computer Music Conference. San Francisco: Computer Music Association.
Dannenberg, R., P. McAvinney & D. Rubine. (1986) Arctic: A Functional Language for Real-Time Systems.
Computer Music Journal 10(4).
Desain, P. & H. Honing. (forthcoming) Time Functions Function Best as Functions of Multiple Times. To appear in Computer Music Journal.
Desain, P. & H. Honing. (1988) LOCO: A Composition Microworld in Logo. Computer Music Journal 12(3).
Hiller, L., A. Leal & R. A. Baker. (1966) Revised MUSICOMP Manual. Technical Report 13. University of Illinois, School of Music, Experimental Music Studio.
Honing, H. (1991) Issues in the Representation of Time and Structure in Music. In Proceedings of the 1990 Music and the Cognitive Sciences Conference, edited by I. Cross. Contemporary Music Review. London: Harwood Press. (forthcoming)
Koenig, G. M. (1970) Project 2: Computer Programme for Calculation of Musical Structure Variants. Electronic Music Reports 3. Utrecht: Institute of Sonology.
Mathews, M. V. & F. R. Moore. (1970) GROOVE: A Program to Compose, Store and Edit Functions of Time. Communications of the ACM 13(12).
Mathews, M. V. (1969) The Technology of Computer Music. Cambridge, Mass.: MIT Press.