"Blip, Buzz, Blurp": The Challenge of Teaching New Ways to Listen

Bonnie Miksch (1), Leon W. Couch III (2)
(1) Department of Music, Mercer University
(2) Department of Music, Luther College

Abstract

When first hearing electroacoustic music, many music theory students lack strategies to help them focus on sound parameters, identify musical gestures, and improve their overall perception. One difficulty stems from traditional theory's tendency to overlook musical elements that are essential to an enriched hearing of electroacoustic music. With the goal of redirecting and revising traditional listening approaches, our method begins with an explicit list of sound parameters categorized as either fundamental or interpretive. After establishing this initial focus, we suggest new ways to encourage students to actively engage sounds and increase their retention through the use of visual and movement-based modes of analysis.

I. Introduction

The challenge of teaching new ways to listen discourages many music theory teachers from including electroacoustic music in the classroom. Students' initial reactions to first hearings are often overly judgmental, sometimes including unflattering comparisons between sounds in a composition and disturbing real-life sounds. "Blip, buzz, blurp" may have been a sophisticated musical gesture, but the student hears only the sound her car makes when it needs a new axle. Thus, teachers who expand their curricula to include electroacoustic music must introduce new models for listening and analysis. Ideally, students will learn to move freely between different listening modes. Denis Smalley identifies three basic relationships between the listener and sounds: indicative, reflexive, and interactive.¹ In the indicative mode, listeners attempt to identify the sound sources and any relevant associations.
Whether the music employs easily identifiable or heavily processed sounds, electroacoustic compositions often trigger the indicative mode. In contrast, the reflexive mode focuses on emotional responses. In the absence of other relationships, this subject-centered mode seduces the listener into a passive role. The interactive relationship involves a more formalistic contemplation of the sound object itself. Although music students learn to enter the interactive mode in their analysis of Beethoven and Brahms, the unfamiliarity of electroacoustic music (exacerbated by a lack of scores) makes this mode less accessible. Instead, they rely on indicative and reflexive modes, failing to develop a well-rounded approach.

One challenge students face involves a shift to aural analysis. Those accustomed to scores find themselves lost without a visual reference, and lack of retention becomes an obstacle. As a remedy, students can develop skills for representing aural experience visually. Our approach begins with notation of fundamental parameters, such as amplitude and frequency spectrum, and moves towards more subjective discoveries, such as form and phrasing. The body's ability to translate sound into movement may be equally useful, allowing kinesthetic memory to replace visual notation. We propose that teachers incorporate movement as a mnemonic and analytical device.

II. Parameters

Since students rarely engage the interactive relationship when first listening to electroacoustic music, we propose that they begin by concentrating on fundamental parameters of the sound object. A sound-centered focus allows students to describe music with a common language and steers them towards greater objectivity and open-mindedness. Judgmental attitudes that prevent appreciation of unfamiliar sounds may be replaced by active involvement in the analytical process. Table 1 lists fundamental parameters, which we have categorized into five domains essential to sound.
One may wonder why we have omitted timbre as a domain. The crucial elements of timbre, namely attack and release, spectrum, and density, are included as parameters within the domains of amplitude, frequency, and texture. Although students should consider all
ICMC Proceedings 1999 - 515 -

Table 1: Fundamental Parameters

DOMAIN      PARAMETER            CONTINUUM
Texture     Vertical Density     Thick ---------- Thin
            Horizontal Density   Busy ----------- Sparse
Amplitude   Dynamics             Loud ----------- Soft
Frequency   Pitch                Pitched -------- Non-pitched
            Spectrum (Range)     Narrow --------- Wide
Space       Distance             Close ---------- Far
[The remaining table rows are illegible in the scanned source.]

parameters, they may find some less relevant to particular pieces. We have ordered the parameters from the most readily apparent to the more detailed, encouraging students to begin by describing how time and texture function within a given piece. An evaluation of temporal progression, for example, generates a rewarding discussion because continuity and disjunction profoundly affect our musical impressions.

Observations of fundamental parameters provide a solid foundation but lack interpretation. At this point, we encourage students to approach the next stage of analysis from three vantage points: object-centered, subject-centered, and context-centered (as shown in Table 2). In the object-centered mode, students synthesize their observations and express opinions about parameters such as form. In the subject-centered mode, listeners are free to connect emotional and physical reactions to music. The context-centered mode invites listeners to develop extra-musical associations and consider artistic intention.

Consider a piece that contains sounds of farm animals juxtaposed against the clinking of silverware and bits of conversation at a dinner table. First, a student would focus on quantifiable features in the piece.
For example, he might notice how the composition gradually grows from quiet and sparsely arranged animal sounds to a loud and dense cacophony of bleating and clucking. By assimilating this information, he would form an object-centered interpretation. He might notice a three-section arch form: animal noises climax in part I and recede in part III, contrasted by dinner sounds in part II. Entering the subject-centered mode, he may feel hungry or nostalgic for his childhood days on the farm. When considering contextual issues, he may decide that the work shows a kindred spirit between animals and people, perhaps even revealing a pro-vegetarian agenda.

III. New Methods of Representation

Visual representation of music helps listeners to locate and recall sound events. Non-technical scores such as Ligeti's Artikulation illustrate an intuitive relationship between sound and graphics. Other composers prefer sonograms, which plot frequency and amplitude over time.² This method illuminates fundamental parameters but allows the listener to construct her own interpretation. In her graphical representations of Mellipse and Dragon of the Nebula, Mara Helmuth combines sonograms with analytical markings and comments.³ In addition to clarifying elements that sonograms lack, these markings help to demonstrate the composer's intentions. With access to tools that generate sonograms, students can add markings of their own, demonstrating aural recognition of fundamental and interpretive parameters.

If visual representations are not available, students can draw upon their listening experiences to construct one. Visual models following the familiar layout of traditional scores and sonograms will prove most successful. For this reason, we display time on the horizontal axis and frequency on the vertical axis.
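As a minimal illustration of the sonogram layout just described (time horizontal, frequency vertical), the magnitude spectrogram of a recording can be computed with a short-time Fourier transform. The sketch below uses NumPy; the function name and window/hop choices are our own assumptions for demonstration, not part of any tool mentioned in the text.

```python
import numpy as np

def sonogram(signal, win=1024, hop=512):
    """Magnitude spectrogram laid out as the text suggests:
    time on the horizontal axis (columns), frequency on the
    vertical axis (rows)."""
    window = np.hanning(win)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + win] * window))
        for start in range(0, len(signal) - win + 1, hop)
    ]
    # Transpose so rows = frequency bins, columns = time frames.
    return np.array(frames).T

# One second of a 440 Hz sine tone at an 8000 Hz sample rate.
sr = 8000
t = np.arange(sr) / sr
S = sonogram(np.sin(2 * np.pi * 440.0 * t))

# The loudest frequency bin of the first frame should lie near 440 Hz.
peak_hz = S[:, 0].argmax() * sr / 1024
```

Such an array could then be rendered as an image (for example with a plotting library, drawing low frequencies at the bottom) so that students can annotate it by hand, as Helmuth does with her marked sonograms.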

Table 2: Potential Interpretive Parameters

LISTENING MODE     PARAMETERS
Subject-centered   Gut reactions, Physical responses, Emotions, Personal imagery
[The object-centered and context-centered rows are illegible in the scanned source; the surrounding text mentions form, extra-musical associations, and artistic intention.]

This layout makes time and relative pitch a constant consideration. We begin with a focus on time and texture, asking students to discern the number and temporal placement of sound events. Each sound is assigned a shape and placed along both axes. The shape's form derives from the sound's amplitude envelope, while its height represents density.⁵ Next, students fill in the shapes with darkness representing dynamics. Finally, an indication of sound location can be marked with letters. For instance, stereo pieces simply call for L, R, and C, with arrows to show panning. More involved placements could use coordinate systems.

After mapping fundamental parameters, students can add interpretation to their graphs. We recommend the use of a highlighter pen to illuminate foreground, or three separate colors to distinguish between primary, secondary, and background events. Following Helmuth's example, we include a line at the top of the page with brackets to show phrasing, resting points, and larger sections. In addition, a particular piece might occasionally demand notation of exact pitches, meters, or rhythms on a staff. To document their individual experiences of the piece, students describe tone color,⁴ emotional responses, and other interpretive parameters with text.

The relationship between electroacoustic music and other time-based art forms such as dance, film, or computer animation can also deepen students' listening experiences. In the classroom, teachers frequently avoid the obvious connection between sound and the body. A student lacking technical vocabulary may express what he has heard more easily with physical movements.
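The event-mapping procedure described above (one shape per event, vertical position for register, fill darkness for dynamics, letters for location) can be thought of as building a small record per sound before drawing it. The sketch below is our own hypothetical data layout; every field and function name is an illustrative assumption, not part of the original method.

```python
from dataclasses import dataclass

@dataclass
class SoundEvent:
    onset: float         # seconds -> horizontal (time) axis
    duration: float      # horizontal extent of the shape
    register: float      # relative pitch height -> vertical axis (0.0-1.0)
    density: float       # vertical extent (height) of the shape
    dynamics: float      # 0.0 (silent) .. 1.0 (loud) -> fill darkness
    location: str = "C"  # "L", "R", or "C"; arrows would show panning

def foreground(events, threshold=0.7):
    """Select events loud enough to mark as foreground,
    akin to picking them out with a highlighter pen."""
    return [e for e in events if e.dynamics >= threshold]

events = [
    SoundEvent(0.0, 2.5, register=0.2, density=0.1, dynamics=0.3, location="L"),
    SoundEvent(1.0, 0.5, register=0.8, density=0.4, dynamics=0.9, location="R"),
    SoundEvent(3.0, 4.0, register=0.5, density=0.9, dynamics=0.8),
]
primary = foreground(events)
```

A list like this, sorted by onset, contains everything needed to draw the graph by hand or by machine; the primary/secondary/background distinction then becomes a simple filter over the dynamics field.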
This correlation works naturally for pulsed music, but movement can also represent continuously evolving gestures. Classroom inhibitions or excessive silliness may initially hamper students' ease with physical expression, so it is important to find movements that are comfortable for the class. Group involvement multiplies the possibilities for physical representation. For example, each student could enact a single sound event, moving only for the duration of that event and remaining still for the rest of the piece. Conversely, a single sound event could be represented with multiple bodies. Students could collectively respond to a quiet and sparse texture that grows in volume and intensity by beginning in all corners of the room and moving inwards to form a group huddle.

IV. Conclusion

We want to emphasize that our methods of representing music serve the purpose of helping undergraduate students appreciate the soundworld of electroacoustic music. We hope our method of observing fundamental parameters will lead easily to interpretive conclusions. As a result, we ask students to produce visual graphs and physical analogs that make intuitive sense and require minimal practice with complicated notation systems. Graphs should be easy to read, and movements should correspond naturally to sound events. At this point, our study privileges the visual approach. We welcome suggestions on how one could map sound to motion systematically.

In our experience as teachers who include electroacoustic music in theory classes, we find students need direction when approaching this unfamiliar and often frightening musical realm. Seven years after the death of John Cage and thirty-three years after the death of Edgard Varèse, listeners still need to be reminded that musical expression need not be limited to traditional instrumental and vocal sounds.
Instead of succumbing to exasperation, teachers should offer strategies to help students achieve more meaningful understandings of electroacoustic music.

End Notes

1. Denis Smalley, "The Listening Imagination: Listening in the Electroacoustic Era," in Companion to Contemporary Musical Thought, ed. John Paynter, Tim Howell, Richard Orton, and Peter Seymour (New York: Routledge, 1992), pp. 519-520.

2. Peter Lunden and Tamas Ungvary, "MacSonogram: A Programme to Produce Large Scale Sonograms for Musical Purposes," Proceedings: 1991 International Computer Music Conference (Montreal: McGill Univ., Computer Music Association, 1991), pp. 554-554d. This article mentions the discrepancy between sonograms and aural experience, and calls for solutions that account for psychoacoustic effects.

3. Mara Helmuth, "Multidimensional Representation of Electroacoustic Music," Journal of New Music Research, vol. 25, no. 1 (Mar. 1996), p. 77.

4. An interpretation of timbre, "tone color" refers to non-technical descriptions, such as fat, grainy, and wet. Although electronic music courses would include sound processing as a parameter, the subject is beyond the scope of most music theory courses.

5. We have essentially conflated the time vs. amplitude axes common to sound editors and the time vs. frequency axes of sonograms into one graph. We believe that the vertical representation of both pitch and vertical density will actually simplify graphing in a classroom situation. For greater precision, Helmuth employs two parallel graphs to avoid this potentially confusing problem.