A PRACTICE-BASED APPROACH TO USING ACOUSTICS AND TECHNOLOGY IN MUSICIANSHIP TRAINING

John Fariselli Young, Lorenzo Picinali, Dimitris Moraitis
Centre for Excellence in Performance Arts & Music, Technology and Innovation Research Centre
De Montfort University, Leicester LE1 9BH, UK
jyoung@dmu.ac.uk

ABSTRACT

Digital audio tools can be used to facilitate many aspects of traditional note-based music making, but one of the challenges they present lies in their potential to open up new opportunities for shaping and deconstructing sound in ways that are difficult to assimilate into traditional Western notation-based models of musical material. Developing an understanding of the musical use of these new materials may require an expanded view of the nature of musicianship. This paper presents reflections on an attempt to address this by teaching musicianship via principles of acoustics and psychoacoustics in the context of a music technology undergraduate degree.

1. BACKGROUND

Music technology programmes have grown rapidly in number and in size in the UK in recent years. The experience in the BA(Hons) Music, Technology and Innovation degree course at De Montfort University, Leicester, UK has been that students bring a varied range of musical backgrounds to the programme and that many have considerable experience in practical music making facilitated by technological tools such as sequencing and audio recording and processing. A detailed knowledge of music notation does not necessarily accompany these practical skills. Acoustics, by providing a foundation in the study of sound per se, has been considered a useful tool for developing students' aural awareness in a way that is relevant to the breadth of sonic material with which they wish to engage and to the requirements of a range of studio-based activities. Acoustics is taught in the second semester as part of the musicianship training students receive in year one. Whilst the pitch/duration paradigm of traditional notation offers useful points of reference, the concept of musicianship within the ethos of the course is based on broader principles. The programme itself is focused on the teaching of music through technology and, more specifically, the creation and study of music that can only be made with electroacoustic and digital technologies.

2. A DEFINITION OF MUSICIANSHIP

A general definition of musicianship that I would like to offer is the ability to discriminate, replicate and recognise features of sound and its organisation, and to act/function in musical situations in relation to what is heard and to what is imagined as a desirable sonic outcome. This applies not only to the obvious areas of composition and performance but also to analytical/musicological work and other critical listening situations such as studio engineering and sound diffusion. 'Traditional' elements of musicianship such as discrimination and retention of note-bound pitch, subdivisible rhythmic information and the understanding of phrase structures clearly offer models for transferable musical values, but inherent in the traditional approach is the assumption that musical material is essentially limited to what can be expressed in those terms (Wishart [12]). In reviewing the work of Gilles Gobeil, Bidlack [2] suggested that Gobeil's work is not that of a composer but of a sculptor, saying:

"Music is a temporal art. Fundamentally, the music of any style, in any historical period, serves to articulate the passage of time.
We humans perceive the passage of time through that wonderful invention we call rhythm, which is also a form of cyclic repetition.... Sculpture, in contradistinction to music, is about the articulation of space, light, and texture. Translate these domains into the sonic realm and the discussion revolves around space, reverberation, intensity, and timbre."

'Space, reverberation, intensity, and timbre' are, of course, features of all music, but they are rendered malleable in electroacoustic music, with technology permitting access to the world 'inside the note' and placing the effects of micro-editing under a musician's control. This projects new emphasis onto what can be drawn out of sound only with the assistance of technology: pitch becomes nested within spectral data, rhythm part of the wider effects of sound's morphological evolution, and phrase an element of segmentation. A composer working in a studio environment is responsible for the overall production values of a work and assumes many of the qualities of musicianship previously associated with performance, such as tuning and balance, while also embracing audio engineering issues such as equalization and spatial articulation. Development of an expanded concept of musicianship goes hand in hand with what might be considered the defining innovative aesthetic potentials of the music technology field. Rowe ([7], p. 3) argues a case for the extension of human musicianship through 'computer music programs that will recognise and reason about human musical concepts... and reason with and reinforce the basic nature of human musicianship'.
Yet the potential of technology to allow the active analysis, deconstruction and reassembly of sound in ways that transcend normal physical constraints poses new questions about the way we relate musically to these possibilities. In building a notion of musicianship around acoustical principles through the active application of technology, we might consider two core 'modes' in which musicianship operates: the descriptive and the functional. The descriptive mode concerns development of the ability to distinguish and understand the nature of an acoustical event, which may be facilitated by technological forms of analysis such as waveform and spectrogram displays. The functional mode concerns the ability to envisage creative possibilities, whether through descriptive analysis or through a composer's intuitive realisation of the potential of a sound object for musical development. The descriptive and the functional are therefore mediated by affective and psychoacoustical responses. Both dimensions benefit from suitable terminology for the description of sonic events as they are analysed.

In computer music, a conflict can arise between what is considered musically interesting in terms of the sonic outcomes of processes or compositional schemes and methods, and the interest gained from purely procedural systems where, for example, algorithmically driven mechanisms with radically unpredictable outcomes may be used to develop materials. The concern amongst some writers to find musical/perceptual ways of discussing electroacoustic/computer music alongside generative/procedural ones is part of this need to elaborate the distinctive contribution of the genre to the understanding of music, and hence of musicianship. Certainly from an educational perspective, the procedural dimension ('getting your hands dirty with sounds') must be a part of an expanded view of musicianship. While a detailed descriptive framework for the structure of sound and its musical affordances, such as Smalley's discussion of spectromorphology [9], is comprehensive and, in intent, open to interpretation, the first author has found that it is often perceived by undergraduates as a prescriptive (and therefore somewhat intimidating) imposition on listening.

3. CASE STUDY

The BA(Hons) in Music, Technology and Innovation at De Montfort University has been running as a single honours (major) course since 2001. The nature of the degree's content has been shaped by the desire to offer a programme which focuses on the innovative potentials of technology in music. This has meant that much of the degree content has been based around creative work integrated with a range of contextual and technical study. A corollary of this is that a core ethos of the programme has also been an emphasis on issues of musicality. As part of the development of the DMU programme's teaching of musicianship, a pedagogic research project was instigated within the 10-week spring term of the first-year (Freshman) 'Foundations of Music' module, with the support of the Centre for Excellence in Performance Arts. All readings, listening and practical exercises were listed on the module web page. The project had two main aims:

1. development of a structured pathway through teaching resources in acoustics and psychoacoustics, with an emphasis on developing a vocabulary for observed features and behaviours of sound, and

2. development of a set of practical and listening tasks to help draw students' attention to the functional properties of timbre and the potential for extension of these into creative activity (a sketch of one such analysis task is given below).
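As an indication of the kind of practical analysis task referred to in aim 2, the following is a minimal sketch of how a student might generate a spectrogram of a recorded sound in order to relate features heard in listening to a frequency-domain representation. It assumes Python with scipy and matplotlib available; the file name is purely illustrative and not part of the module materials.

# Minimal sketch: plotting a spectrogram so that features heard in a sound
# can be compared with a visual (frequency-domain) representation.
# Assumes scipy and matplotlib; 'bassoon_note.wav' is a hypothetical file name.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read('bassoon_note.wav')
if samples.ndim > 1:                     # mix stereo to mono
    samples = samples.mean(axis=1)

f, t, Sxx = spectrogram(samples, fs=rate, nperseg=2048, noverlap=1536)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading='gouraud')  # dB scale
plt.ylabel('Frequency (Hz)')
plt.xlabel('Time (s)')
plt.title('Spectrogram')
plt.show()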
In addition, several qualitative surveys of student listening were made, the responses from which are embedded within this discussion. The theme of timbre was chosen because of the way notions of pitch radiate from it, from the colouration of perceived tone to the potential to identify discrete frequency components within a timbre. (With backgrounds in music technology, most students appeared more comfortable with the concept of pitch as frequency than with notated pitch class. Yet most were active guitarists and, when presented with a recording of the Chopin A major Prelude and informed that the first chord was E major, many were able to follow and label the harmonic evolution of the piece in terms of tablature.) An understanding of frequency relationships within the harmonic series in turn branches out into the concept of intervallic ratios (the relationship to scale construction was taught at the end of the module), the nature of inharmonic spectra and noise, the spectral envelope, and processes of shaping and directing the evolution of sound with technology, such as morphing. The concept of pitch as the result of periodicity within a vibrating system can also be directed toward understanding some of the characteristics of digital signal processing methods, such as the pitched artefacts that result from the conventional granular brassage technique, or the noise artefacts resulting from waveset processing. The time constraint meant that little time could be spent on, for example, basic interval recognition drills. Instead, the study of timbre from various perspectives aimed to continually reinforce basic ideas. Two core texts were used, Cook [5] and Campbell and Greated [4], with additional resources drawn principally from Bregman and Ahad [3], Schaeffer [8] and Truax [10]. Students were introduced to key ideas using the core texts as references, encouraged to test elements of these ideas through simple devised experiments using audio processing tools, then directed to specific musical examples for analysis and discussion, and encouraged to experiment further with the concepts through self-directed creative exercises.

Initial sessions were devoted to consideration of the propagation and representation of sound, including the relative merits of time-domain and frequency-domain representation. In that respect, one significant musical example that students were asked to analyse was an extract from Alvin Lucier's Still and Moving Lines of Silence in Families of Hyperbolas [6], in which a single marimba note played against a pure tone, tuned with a one quarter cycle difference in frequency (in opposite channels), results in interference that is heard as a stereo phase shift. A simple experiment of this kind can be devised directly, as sketched below.
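The following is a minimal sketch of such a devised experiment, assuming Python with numpy and scipy: two pure tones, one per channel, detuned by a small amount so that their interference is heard as slow beating with an apparent spatial movement. The 440 Hz reference and the 0.25 Hz offset are illustrative assumptions, not the tunings used in the Lucier extract.

# Minimal sketch of a devised beating experiment: two pure tones, one per
# channel, detuned slightly so their interference is heard as slow beating.
# The 440 Hz / 440.25 Hz values are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

fs = 44100
dur = 20.0                                   # seconds
t = np.arange(int(fs * dur)) / fs

left = np.sin(2 * np.pi * 440.0 * t)         # reference tone
right = np.sin(2 * np.pi * 440.25 * t)       # detuned tone in the other channel

stereo = np.stack([left, right], axis=1)
stereo = (0.5 * stereo * 32767).astype(np.int16)
wavfile.write('beating_demo.wav', fs, stereo)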
Further sessions dealt with topics in the nature and structure of spectral content. Additive synthesis exercises, experimenting with weighted sums of pure tones, and subtractive synthesis exercises, such as filtering the first few even-numbered partials out of a saxophone sample in order to transform it into a clarinet-like timbre, or selectively removing partials from a guitar sample, allowed students to direct and evaluate their listening experience through technology. This led on to practical analysis of phenomena such as Shepard tones, Warren loops, the function of timbre in streaming, and the ear's perceptual thresholds. One of the advantages of using digital audio tools in this kind of educational context is that students can be engaged through practical analysis of sounds without knowing what will be found, as in the analysis of the bassoon timbre across its range, which led to the 'discovery' of formants, subsequently studied further through the typical model of the voice. Similarly, by taking an audio processing technique with which students were already familiar, time stretching, the significance of onset and decay transients was demonstrated through comparison of time-varying and constant processing, again before the theory of the concept was explained. Explanation of the concept of equal loudness and the Fletcher-Munson curves was facilitated by students taking an audiometric test, using sinusoids at different loudnesses and frequencies in a Max/MSP patch by Lorenzo Picinali, allowing them to observe their own frequency-amplitude curves. Use of the Schaeffer [8] text offered a useful perspective on the relationship between the procedural and perceptual aspects of electroacoustic techniques, particularly the effects of simple processing techniques on the role of onset transients and the phraseology of 'complex notes'.

3.1 Two specific listening examples

Visualisation of spectra is an important way for students to investigate what they are hearing in both simple and complex sounds. For example, the opening of Aphex Twin's Ventolin (wheeze mix) [1] was used in class as an analysis example, for its stark mix of sonorities, in order to investigate listening response. (A second listening example in this session was the opening of Alejandro Viñao's Chant d'ailleurs.) In a purely descriptive response one would expect a starting point to be the distinction between the prominent 4.1 kHz 'sine' tone (the spectrogram in fact shows second and third harmonics to be present, along with constant 15.7 and 16.3 kHz tones) and the noisy clustered material, leading to questions about the perception of that pure tone through the texture as it develops, a case of apparent continuity of a signal (Bregman [3], p. 52); the spectrogram also demonstrates that a node of noisy distortion clusters around 4.1 kHz as a centre frequency. With spectral filtering, the effect of removing that specific pitch can be evaluated and related to an understanding of the way the ear can be directed in juxtapositions and mixes of sonorities; a simple filtering experiment of this kind is sketched below. In an initial listening session (in week 7) to the first minute of this work, a number of student responses implied musical values within their descriptions:

* 'Timbral changes occur when another sound with the same harmonic content appears in certain sections making the initial sound seem inaudible'
* 'High pitched feedback, subtle timbre variation, sounds morph back to high-pitched feedback'.

A frequent response to the larger extract (the first minute) suggested that awareness of the phenomenon of streaming assisted their appreciation of the timbral and melodic layers in the work.
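As a minimal sketch of the spectral filtering experiment mentioned above, the following removes a narrow band centred on 4.1 kHz so that the extract can be compared with and without the prominent tone. It assumes Python with scipy; the file names and the notch bandwidth (Q) are illustrative assumptions.

# Minimal sketch: notch out a narrow band around 4.1 kHz so the mix can be
# auditioned with and without the prominent tone. File names and Q are assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, x = wavfile.read('ventolin_excerpt.wav')
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)                       # mono for simplicity

b, a = iirnotch(w0=4100.0, Q=30.0, fs=rate)  # narrow notch centred on 4.1 kHz
y = filtfilt(b, a, x)                        # zero-phase filtering

y = np.int16(y / np.max(np.abs(y)) * 32767)
wavfile.write('ventolin_no_4100.wav', rate, y)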
A further question in week 7 attempted to establish the extent to which students felt their vocabulary for describing the two musical extracts had improved through their previous six weeks of study. 64% of students reported some improvement, 29% were unsure and 7% felt no improvement, with the concepts of streaming, masking and spectral structure cited as specifically helpful.

Figure 1. Spectrogram of the opening of Aphex Twin's Ventolin.

Further responses to listening examples were collected in week 10, which qualitatively demonstrated some students using more sophisticated descriptions of sonic content than previously: for example, pointing to registral separation and the range of harmonic, inharmonic and noise components as significant distinguishing features within the texture of Venetian Snares's Yes love, my soul is black [11]. The perceived effects of registral and other sonic extremes in repertoire of this kind, such as micro-editing and micro-looping, may resist analysis in terms of traditional elements of musicianship training, but can be linked effectively to the acoustical and psychoacoustical descriptive vocabulary and triangulated through supporting practical experimentation.

Another example, used to demonstrate the creative potential of a complex spectrum whilst illustrating an acoustical principle, was drawn from an element in Arrivederci for ensemble and electroacoustic sounds by John Young. In this example, the first 23 partials in the spectrum of a bell were analysed, showing 140 Hz to be the lowest actual frequency component, but with three possible perceived fundamentals (x, y and z) detected using IRCAM's AudioSculpt fundamental estimation algorithm. Searching through the spectrum for approximations of harmonic relationships provides a way of explaining the potentially ambiguous perceived pitches in such inharmonic spectra; a minimal version of such a search is sketched below.
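The following sketch, assuming Python, illustrates one simple way of searching a measured inharmonic spectrum for approximate harmonic relationships: for a candidate fundamental, it lists the partials that lie within a tolerance of an integer multiple. The partial frequencies are taken from Table 1; the candidate fundamentals tested and the 3% tolerance are illustrative assumptions rather than values produced by AudioSculpt.

# Minimal sketch: for a candidate fundamental, list which measured partials
# lie close to an integer multiple of it. Partial frequencies are from Table 1;
# the candidates and the 3% tolerance are illustrative assumptions.
partials_hz = [140.0, 280.7, 323.7, 428.7, 538.8, 716.6, 749.0, 806.8,
               862.2, 888.7, 919.5, 1118.4, 1179.2, 1255.6, 1323.4]

def near_harmonics(fundamental, partials, tolerance=0.03):
    matches = []
    for f in partials:
        n = round(f / fundamental)           # nearest harmonic number
        if n >= 1 and abs(f - n * fundamental) <= tolerance * n * fundamental:
            matches.append((f, n))
    return matches

for candidate in (140.0, 66.6, 182.8):       # candidate fundamentals from Table 1
    print(candidate, near_harmonics(candidate, partials_hz))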
But this was taken a step further in moving toward creative exploration of the sound, by selectively filtering partials in order to 'retune' the spectrum. For example, taking partials 0, 1, 2, 3, 4, 11, 14, 18 and 21 yields a distinct C-sharp minor harmonic 'flavour'; partials 0, 1, 4, 6, 9, 10, 11, 14, 16, 18, 20 and 22 an F-sharp seventh flavour; and partials 7, 10, 12, 16 and 22 a G minor flavour (a sketch of how such a subset can be auditioned follows Table 1). This kind of exercise was supported in teaching by textbook examples, such as the analysis of the tubular bell timbre given in Campbell and Greated [4], and Jean-Claude Risset's example of an inharmonic spectrum shifted up an octave with a perceived drop in pitch due to the proximity of adjacent midrange partials. Through this sort of approach, students can be encouraged to deepen their own listening, explore the potentials of complex spectra and devise their own strategies for creative investigation based on core principles of timbral fusion.

Partial   Frequency (Hz)
x         518
y         66.6
z         182.8
0         140
1         280.7
2         323.7
3         428.7
4         538.8
5         716.6
6         749
7         806.8
8         862.2
9         888.7
10        919.5
11        1118.4
12        1179.2
13        1255.6
14        1323.4
15        1405.9
16        1460.2
17        1806.4
18        2188.7
19        2554.2
20        2930.3
21        3296.4
22        3673.7

Table 1. Bell spectrum from Arrivederci (x, y and z are the possible perceived fundamentals).
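As an indication of how this 'retuning' can be explored in practice, the following is a minimal sketch, assuming Python with numpy and scipy, that resynthesises only a selected subset of the partials listed in Table 1 (the 'C-sharp minor' subset mentioned above). The equal partial weights and the exponential decay are illustrative assumptions; the original exercise filtered the recorded bell rather than resynthesising it.

# Minimal sketch: audition a selected subset of the Table 1 partials by simple
# additive resynthesis. Amplitudes and decay are illustrative assumptions; the
# original exercise filtered the recorded bell rather than resynthesising it.
import numpy as np
from scipy.io import wavfile

table1 = [140.0, 280.7, 323.7, 428.7, 538.8, 716.6, 749.0, 806.8, 862.2,
          888.7, 919.5, 1118.4, 1179.2, 1255.6, 1323.4, 1405.9, 1460.2,
          1806.4, 2188.7, 2554.2, 2930.3, 3296.4, 3673.7]
selected = [0, 1, 2, 3, 4, 11, 14, 18, 21]   # the 'C-sharp minor' subset

fs, dur = 44100, 6.0
t = np.arange(int(fs * dur)) / fs
out = np.zeros_like(t)
for i in selected:
    out += np.exp(-1.5 * t) * np.sin(2 * np.pi * table1[i] * t) / len(selected)

wavfile.write('bell_subset.wav', fs, np.int16(out / np.max(np.abs(out)) * 32767))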
4. FINAL OBSERVATIONS

By mixing core concepts of acoustics, analysis of sound (including musical examples) and creative activity, one of our aims was to facilitate mechanisms by which students' ability to identify and compare features of sound could be enhanced. Encouraging creative play with sound aims to increase acuity of listening and awareness of the potentials and aesthetic implications of shaping and structuring sounds with technological tools, for which acoustical principles provide a structured framework. In a survey of student experience on the module, the following responses were gained to the question 'has this module helped you understand or given you creative musical ideas?': 64% responded 'yes'; 18% responded that they had developed listening skills without specifically fostering creative ideas; 11% were unsure; and 7% responded 'no'. Although creative activity was used as a learning method, students were not specifically instructed that they were being taught compositional methods. Of those who had developed creative ideas, responses included:

* 'it has given me musical ideas by manipulating sounds to morph and create new timbres, to use streaming, the psychoacoustics of sound in general has been very influential on musical ideas'
* 'especially with the use of scales, beating and Warren loops. I have already made pieces using this beating technique. I was previously using Warren Loops but was unaware of the fact of what they actually were'.
* 'it has helped me by teaching me proper EQ use, and spectral behaviour, which has vastly improved my mixing skills'.

The technological extension of the musical materials available to composers, sound designers and others working creatively with sound presents challenges to the notion of a generalised concept of musicianship. If we accept that one of the relatively tacit aims of traditional musicianship training is the ability to inwardly 'hear' through notation-bound concepts without the actual presence of sound, we should also recognise that this situation may be antithetical to the practice of musicians attracted to realising their ideas with and through technology. An evolving concept of musicianship training will necessarily engage with the practical exploration of sound that technology facilitates.

5. REFERENCES

[1] Aphex Twin. Ventolin. Warp Records, 1995.
[2] Bidlack, R. Review of Gilles Gobeil's La mécanique des ruptures. Computer Music Journal, 20 (4), 1996: 67-68.
[3] Bregman, A. and Ahad, P. Demonstrations of Auditory Scene Analysis. [Montréal]: Department of Psychology, McGill University, 1995.
[4] Campbell, M. and Greated, C. The Musician's Guide to Acoustics. Oxford: OUP, 2001.
[5] Cook, P. (ed.) Music, Cognition and Computerized Sound. Cambridge, MA/London: MIT Press, 1999.
[6] Lucier, A. Still and Moving Lines of Silence in Families of Hyperbolas. New York: Lovely Music, 2003.
[7] Rowe, R. Machine Musicianship. Cambridge, MA/London: MIT Press, 2001.
[8] Schaeffer, P. Solfège de l'objet sonore. Paris: INA-GRM, 1998.
[9] Smalley, D. 'Spectromorphology: Explaining Sound Shapes'. Organised Sound, 2 (2), 1997: 107-126.
[10] Truax, B. Handbook for Acoustic Ecology. [Vancouver]: Cambridge Street Records, 1999.
[11] Venetian Snares. Winter in the Belly of a Snake. Planet Mu, 2002.
[12] Wishart, T. (ed. Emmerson, S.). On Sonic Art. Amsterdam: Harwood, 1996.