SEE - A Structured Event Editor: Visualizing Compositional Data in Common Music
Tobias Kunze
CCRMA, Stanford University
tkunze.stanford.edu
http://www.stanford.edu/~tkunze

Heinrich Taube
School of Music, University of Illinois
hktcmp-nxt1.music.uiuc.edu

Abstract

Highly structured music composition systems such as Common Music raise the need for data visualization tools which are general and flexible enough to adapt seamlessly to the (at times very unique) criteria composers employ when working with musical data. The SEE visualization tool consists of an abstracting program layer that allows for the construction of custom musical predicates out of a possibly heterogeneous set of data, and a separate program module that controls their mapping onto a wide variety of display parameters. The current version is being developed on an SGI workstation using the X11 windowing system and the OpenGL and OpenInventor graphics standards, but portability is highly desired and upcoming ports will most probably start with the Apple Macintosh platform.

1 Introduction

Among the vast variety of ways in which music has been put in relation to the visual senses, only a few have been researched or otherwise developed to a noticeable extent. Today, sound and graphics interconnect most prominently in the audiovisual domain, that is, in multimedia applications and, more recently, in the area of data sonification, but also in the more arcane areas of music visualization and graphical user interface design, as well as in art. These domains, however, are not unrelated: sonification of data may be taken as an inverse process of music visualization, and music visualization itself leads seamlessly to music data manipulation, as in some GUI designs: it may form half of a genuine music authoring environment.
Musical data visualization, it seems, could on the other hand profit from the extensive computer graphics technology and visualization experience that scientific data visualization projects rely upon today. In contrast to scientific visualization applications, however, music visualization does not typically deal with an enormous amount of "flat" data such as the data masses acquired by satellite photography, surveys, or oceanographic sonic measurements: musical datasets are most often comparably small, but generally include heterogeneous and not necessarily commensurable datatypes such as, for instance, notes and rests. They also tend to call for interpretation processes that evolve over time to model the changing belief contexts that characterize musical hearing. In short, music visualization differs from data visualization in that it deals with our understanding of music. And musical data, unlike scientific data, may be arbitrarily changed according to the aesthetic criteria we decide to apply.

2 Visualization Today

Although a number of promising approaches to signal visualization exist to facilitate the process of sound (re)creation, research in the visualization of compositional data is rare and focuses on musicology as opposed to creative applications. More recent examples include the analysis of features of music by Bartók and Webern in the graphical plane by A. Brinkman and M. Mesiti [2], and J.-P. Boon's interesting examination of significant differences between three-dimensional phase portraits of selected three-part compositions by Bach, Mozart, and Schumann [1]. The majority of musicological analysis toolkits, however, does not go beyond a symbolic representation of their results (cf., for instance, [3]). Alternative approaches to signal visualization, leading to graphical representations of higher-order features of sound data that approximate more complex musical predicates in a rough manner, have been presented previously by J. Pressing et al. and B.
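The point about heterogeneous, non-commensurable datatypes can be made concrete with a toy model: a note carries a pitch while a rest does not, so any visualization mapping has to branch on the kind of event. The following Python sketch is purely illustrative (the paper's actual representations are Common Music Lisp classes; all names here are invented):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Note:
    duration: float
    pitch: int          # a rest has no counterpart to this field

@dataclass
class Rest:
    duration: float     # commensurable with Note only in duration

Event = Union[Note, Rest]

def displayable(event: Event) -> bool:
    """Only events carrying a pitch can be placed on a pitch axis."""
    return isinstance(event, Note)

events = [Note(1.0, 60), Rest(0.5), Note(0.5, 64)]
visible = [e for e in events if displayable(e)]
```

A real system would, as the text suggests, need far richer and possibly open-ended sets of such event kinds rather than this fixed pair.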
Mont-Reynaud [8, 7], as well as, most recently, by B. Feiten and G. Behles [5]. Finally, I. Choi et al. [4] and Y. Horry [6] document some specific research into the application of musical concepts using graphical controllers to generate both MIDI and digital sound output.

ICMC Proceedings 1996, Kunze & Taube

Graphical user interfaces for composition-oriented software have today begun to venture into the domain of 3D, with more or less success and not much inspiration. EMAGIC's Notator Audio, for instance, consistently encourages a three-dimensional, albeit conceptual, view of the compositional data. Nevertheless, it sticks with a set of two-dimensional editors to support editing operations and sells a simple two-dimensional control for the independent transposition and stretching of soundfiles in an unnecessary perspective 3D look.

Visualization is a central field in the analysis of particularly vast scientific data sets, and as such features the most advanced visualization solutions found today. 3D rendering techniques, as well as techniques originally intended to simulate environmental data, such as transparency, haze, fog, and texture mapping, are widely used to represent abstract qualities such as density or velocity. Also, most scientific visualization packages allow multiple different graph styles to be combined in a single display. In addition, scientific visualization had and still has a strong influence on the design and development of graphics packages. As a result, music visualization software wanting to take advantage of these highly optimized graphics engines has to adapt to a set of primitives that has mostly been designed with the rendering of artificial, "virtual reality" worlds in mind. A particularly blatant problem is the missing support in 3D toolkits for seemingly endless scrolling: in a framework that implies the construction of a perspective, simple scrolling does not make sense. Most limitations, however, prove in hindsight to be solvable in terms of a different concept, and scrolling (as opposed to panning) then translates into a matter of animation.
" 3D animation is believed to be extremely useful but is still too unexplored to be included in this version of the SEE design " extensibility of the 3D object library is highly desired on both a scripting and a programming level " similarly, the 3D toolkit should have provisions to access underlying (2D) graphics libraries " display styles as well as annotations of axes and various grids should be easy to specify " different scenes need support for different viewing modes such as spin, fly-through, etc. through reusable components or external applications The Openlnventor graphics standard meets these guidelines and offers in addition a straightforward interchange file format as well as near compatibility with the current VRML standard. 3 Graphing Paradigms Features and Since using a 3D graphics library like OpenInventor implies such an essential commitment to its underlying paradigms and since these paradigms are not necessarily congruent with the demands of a music visualization tool, it is wise to render an account of what its needs are and what can expect. The current design of the SEE visualization tool followed these major guidelines: " although the 3D paradigm introduces fundamental changes in the representation of music and imposes particularly high demands on the system, it may be "frozen" to a two-dimensional scene by using an orthographic camera from a front view position and kept reasonably efficient by providing the graphics library with optimizing hints regarding the (invisible) 3D information " color is widely available today and thus highly recommended for use in data visualization; a particular visualization, however, does not have to use color (Data Figure 1: SEE Architecture 4 SEE Architecture Figure 1 gives an coarse overview of the steps involved in the process of creating a visualization and the tasks SEE has to perform. 
For the translation from raw data into a unified lattice of raw data structures, SEE provides standard readers as well as a programming interface for adding custom data readers. Whether or not it is desirable to have a higher-level interface is as yet unclear, but such an interface might be added at any point. Fully programmable and easily customizable style files then control the construction of the actual 3D data structures. Graphics tasks, such as global geometry conversion and rendering itself, are taken care of by the OpenInventor toolkit and other custom components, as is the conversion to the Inventor interchange file format. Since major parts of SEE are implemented as an extension to Common Music's graphical interface, Capella, the data, the data readers, and the visualization style files are written in Lisp. SEE uses Common Music's score representation toolbox and class hierarchy to implement readers with a high degree of polymorphism.

Figure 2: Automaton in pseudocode notation

    p1 ← 32
    p2 ← 27
    p3 ← 36
    i ← 0
    while i < 300 do
        p ← p2 − p3
        if p > 0 then
            p ← p − 11
        else
            p ← p + 13
        p ← p + p1
        if p < 24 or p > 108 then
            p ← (p − (36 + mod[p, 12])) + 1
        p3 ← p2 ← p1 ← p
        WRITENOTE(note: p, time: i)
        i ← i + 1
    return

Figure 3: Automaton in traditional notation

5 An Example Run

To give a complete example of a process of music visualization using the SEE tool, consider the algorithm given in Figure 2. It sets up a history of three pitch variables, initialized to 32, 27, and 36, respectively, to generate 300 notes of a monophonic, pulsating line according to the code in the while loop. For simplicity, amplitude, duration, and instrumentation have been assigned default values. Figure 3 gives a rendering of this (deterministic) algorithm's musical output in common music notation. Not obvious from the code, but readily readable from the score, are the all-intervallic structure of the melodic sequence and the strong reduction of the musical material.

In a first step, a default reader translates every event into a cube of unit size and no color, using exclusively the time and pitch information to map it onto the x and y axes, respectively.
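The pseudocode of Figure 2 can be transcribed into a short runnable sketch. This Python version is illustrative only: WRITENOTE is stood in for by collecting (time, pitch) pairs, and the chained assignment p3 ← p2 ← p1 ← p is read left to right, i.e. as a shift of the three-element pitch history.

```python
def run_automaton(n_notes=300):
    """Illustrative transcription of the Figure 2 pseudocode."""
    p1, p2, p3 = 32, 27, 36       # three-element pitch history (initial values from the text)
    notes = []                    # stands in for the WRITENOTE output stream
    for i in range(n_notes):
        p = p2 - p3               # interval between the two older pitches
        if p > 0:
            p -= 11
        else:
            p += 13
        p += p1
        if p < 24 or p > 108:     # fold p back toward a keyboard-like range
            p = (p - (36 + p % 12)) + 1
        p3, p2, p1 = p2, p1, p    # history shift, reading the chained assignment left to right
        notes.append((i, p))      # WRITENOTE(note: p, time: i)
    return notes

notes = run_automaton()           # first events: (0, 36), (1, 30), ...
```

Note that Python's mod differs from mathematical mod for negative arguments, so the fold-back branch is only a best-effort reading of the garbled original formula.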
With the camera set to orthographic mode, the display resembles traditional piano-roll notation, showing three different rising patterns and striking symmetries everywhere that were hard to spot in the score (cf. Figure 4). Spinning the model in all three dimensions gives an even better overview of the combined pitch-time structure. Figure 5 zooms in on an angle from the bottom-left corner along the slope of the rising patterns and makes the mechanical aspect of the cellular automaton's repetitiveness visible. Moreover, rotating the model clockwise and to the left reveals another set of symmetry axes, perpendicular to each pattern's slope (cf. Figure 6). A zoom on the first pattern, together with an added headlight, shows the slant of the current viewpoint (cf. Figure 7). As a last example, Figure 8 uncovers the common focal point of every other triad around the center of pattern 3, approximately an octave above the middle note.

Figure 4: Automaton in traditional "piano-roll" notation

Using different data reader methods, the same automaton has been colored according to each note's pitch class and grouped with a vertical line indicating each note's interval in relation to the preceding note. To facilitate optical references, a grid of dotted ledger lines, solid piano system lines, and vertical reference lines has been added (cf. Figure 9). A bottom view of this model finally reveals the two dovetailed states of the automaton, each of which cycles through all intervals, alternating between positive and complementary negative values (cf. Figure 10).

References

[1] Jean-Pierre Boon et al.: "Complex Dynamics and Musical Structure" in Interface, Vol. 19, pp. 3-14 (Lisse, The Netherlands: Swets & Zeitlinger B.V., 1990)
[2] Alexander R. Brinkman and Martha R. Mesiti: "Computer-Graphic Tools for Music Analysis" in Proceedings of the 1991 ICMC, pp. 53-56 (Montreal, Canada: McGill University, 1991)
[3] Peter Castine: "Whatever Happened to CMAP for Macintosh" in Proceedings of the 1993 ICMC, pp. 360-362 (Tokyo, Japan: Waseda University, 1993)
Figure 5: View from the bottom-left corner
Figure 6: Skewedness of the current angle of view (Figure 7); alternate symmetry axes revealed
Figure 8: Using perspective: focal point of the rising lines in pattern 3
Figure 9: Colored model with lines indicating each note's interval in relation to the preceding note
Figure 10: Bottom view shows the behaviour of the automaton's two substates

[4] Insook Choi et al.: "A Manifold Interface for a High-Dimensional Space" in Proceedings of the 1995 ICMC, pp. 385-392 (Banff, Canada: The Banff Centre for the Arts, 1995)
[5] Bernhard Feiten and Gerhard Behles: "Organizing the Parameter Space of Physical Models with Sound Feature Maps" in Proceedings of the 1994 ICMC, pp. 398-401 (Aarhus, Denmark: Danish Institute of Electroacoustic Music, 1994)
[6] Youichi Horry: "A Graphical User Interface for MIDI Signal Generation and Sound Synthesis" in Proceedings of the 1994 ICMC, pp. 276-279 (Aarhus, Denmark: Danish Institute of Electroacoustic Music, 1994)
[7] Bernard Mont-Reynaud: "SeeMusic: A Tool for Music Visualization" in Proceedings of the 1993 ICMC, pp. 457-460 (Tokyo, Japan: Waseda University, 1993)
[8] Jeff Pressing et al.: "Visualization and Predictive Modelling of Musical Signals using Embedding Techniques" in Proceedings of the 1993 ICMC, pp. 110-113 (Tokyo, Japan: Waseda University, 1993)