MUSIQUE LAB 2: A THREE-LEVEL APPROACH FOR MUSIC EDUCATION AT SCHOOL

Vincent Puig, Fabrice Guedy, Michel Fingerhut, Fabienne Serriere, Jean Bresson, Olivier Zeller
IRCAM, 1 place Igor Stravinsky, 75004 Paris
puig@ircam.fr
http://musiquelab.ircam.fr

ABSTRACT

Musique Lab 2 is the follow-up development of a software suite first designed in 1999 with high school music teachers for interactive music manipulation based on MIDI, and currently used in more than 2000 classrooms. This second generation, intended both for high schools and conservatories, has been commissioned by the French Ministry of Education and the French Ministry of Culture. Musique Lab 2 can be approached through three pedagogical strategies, according to the teacher: an approach by way of the repertoire using simple annotation and synchronisation tools (ML-Annotation), an approach through interactive sound creation and production (ML-Audio), and an approach based on musical composition (ML-Maquette).

1. MUSIQUE LAB 1

Musique Lab 1 [4] is a set of six interactive applications for Mac OS X and Windows, freely downloadable by French high schools since September 2002; an international edition has been marketed since April 2003.

1.1. Pitch and dynamics

This application explores pitch and intensity as evolving continua and dynamic variations. The interface gives access, for up to four interactive voices, to many actions: drawing pitch and intensity curves, applying symmetrical processes to the curves, adjusting duration, timbre and mixing, and recording the results.

1.2. Polycycles

This application explores sets of cycles and polyrhythmic, repetitive and isorhythmic systems, as well as interval sets.

1.3. Clouds

This application builds textures or evolving frames by means of various granular-type processes (exclusively MIDI) and controls their dynamic evolution. It allows one to experiment with and understand the material/form relationship in the audio domain.
The results obtained are rich and diverse, and certainly atypical in the MIDI field.

1.4. Scales and Modes

This application allows one to discover scales and modes: old, contemporary, tempered, non-tempered, oriental or non-octaviating scales. For the first time, it offers non-specialists the possibility to understand and experiment with a universe usually considered inaccessible, thanks to a clear and user-friendly interface for visualising scales, modifying them, creating new ones, playing them with any MIDI instrument, and reading and recording MIDI files.

1.5. Rhythmic construction

This application helps one design rhythmic models on four voices. For each voice, besides the percussion timbre, the number of beats and the division of each beat can be modified. Two modes allow one to work either with a constant beat or with constant durations, opening two complementary approaches to rhythmic writing. The first step is to construct models based on criteria of tempo, beats, accentuation and phrase construction; the next is to order these models horizontally and vertically before applying phrasing curves or variations at different levels. Once realised, these phrases can be changed over time.

Figure 1. Pitch & Dynamics in Musique Lab 1
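As an illustration of the scales-and-modes principle in section 1.4, the sketch below (not Musique Lab's actual data model; names and representation are assumptions) encodes a scale as a cyclic list of steps in cents, so that non-octaviating scales, whose steps do not sum to 1200 cents, are handled exactly like tempered ones:

```python
# Illustrative sketch only -- not Musique Lab's internal representation.
# A scale is a cyclic list of steps in cents; if the steps sum to a
# value other than 1200, the scale repeats at a non-octave period
# (a "non-octaviating" scale).

def scale_degrees(steps_cents, root_midi, n):
    """Return the first n degrees of a scale as MIDI note numbers
    (fractional values allowed, for non-tempered scales)."""
    degrees = [float(root_midi)]
    cents = 0.0
    for k in range(1, n):
        cents += steps_cents[(k - 1) % len(steps_cents)]
        degrees.append(root_midi + cents / 100.0)
    return degrees

MAJOR = [200, 200, 100, 200, 200, 200, 100]   # sums to 1200: octaviating
FIFTHS = [200, 200, 300]                      # sums to 700: repeats at the fifth

print(scale_degrees(MAJOR, 60, 8))   # → [60.0, 62.0, 64.0, 65.0, 67.0, 69.0, 71.0, 72.0]
print(scale_degrees(FIFTHS, 60, 7))  # → [60.0, 62.0, 64.0, 67.0, 69.0, 71.0, 74.0]
```

Played on a MIDI instrument, the second scale never returns to the octave (72), which is exactly the kind of behaviour the application lets students hear.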

2. MUSIQUE LAB 2

[Figure 2 diagram: ML-Maquette (MIDI processing), ML-Audio (sound processing), ML-Annotation (interactive listening guides, synchronisation/publication), Signed Listening contents, DVD-ROM/Web radio/Web portals]
Figure 2. Musique Lab 2 modules

Musique Lab 2 development started in January 2004 in collaboration with teachers from both music conservatories and high schools, since it was supported by both the Ministry of Education (for high schools) and the Ministry of Culture (for music conservatories). This brought a first prerequisite: the need to propose interfaces with various levels of complexity (for instance, musical notation in conservatories and controllers for non-musicians in high schools). Teachers specified four pedagogical principles:

1. The need to start from a listening experience based on musical pieces from the repertoire, especially those studied for the French Baccalaureate (end of high school exam).
2. The ability to interact in real time with musical concepts like the ones used by composers in the repertoire.
3. A possible link with instrumental metaphors and constraints, either using adapted graphical interfaces or external controllers.
4. A possible connection between the signal domain and the symbolic domain.

The project includes the development of three software modules, the production of pedagogical scenarios starting from the repertoire and, finally, a program of workshops and tailor-made developments in different regions of France. Musique Lab 2 targets various elements at the heart of music education, reflected differently by the three modules currently being developed:

1. Musical annotation and synchronization module: scanned score following, multiple readings (ML-Annotation).
2. Recording module: visualization, analysis and real-time audio processing, especially for voice (ML-Audio).
3.
Writing module: symbolic manipulation in tonal space (generation of material, processes), musical analysis of data coming from the audio signal (electro-acoustic music, performance analysis) and of formal data (ML-Maquette).

With these tools, we have been working on a number of pedagogical scenarios, including Grisey and spectral music models, Bach and symmetry, and other scenarios on contemporary works such as VoiRex by Philippe Leroux.

2.1. Starting from the repertoire: ML-Annotation

ML-Annotation is an intuitive tool for seamlessly synchronizing an audio file with a multi-page image and annotating the resulting document with time-varying elements (graphics, text). In our case, we created a specialized interface for synchronizing audio with scores, by simply defining how and where in the scanned score a cursor must progress. The cursor is defined simply by clicking on the notes of the score while listening, and is editable afterwards with time code markers. ML-Annotation can also serve as a multimedia-authoring tool for building coursework using elements produced in the two other modules of Musique Lab 2. It was first developed by IRCAM's Hypermedia Studio and the Musical Practices Analysis research team in the context of a project called "Signed Listening"1.

1 Donin, N. "Towards Organized Listening: Some Aspects of the Signed Listening Project at Ircam", Organised Sound, volume 9, April 2004.
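The cursor mechanism just described (click on the notes while listening, then edit the time code markers) amounts to a piecewise-linear mapping from playback time to a position on the scanned page. The sketch below illustrates that principle only; the marker fields (time in seconds, page, horizontal position) and function names are assumptions, not ML-Annotation's actual code:

```python
import bisect

# Hypothetical markers recorded by clicking on the score during playback:
# (time_s, page, x) triples. Fields are invented for this sketch.
markers = [(0.0, 1, 40), (2.5, 1, 120), (5.0, 1, 310), (6.0, 2, 40)]

def cursor_position(t):
    """Return (page, x) for playback time t by interpolating linearly
    between markers on the same page and jumping at page turns."""
    times = [m[0] for m in markers]
    i = bisect.bisect_right(times, t) - 1
    i = max(0, min(i, len(markers) - 2))
    (t0, p0, x0), (t1, p1, x1) = markers[i], markers[i + 1]
    if p0 != p1:                      # page turn: snap, don't interpolate
        return (p1, x1) if t >= t1 else (p0, x0)
    frac = max(0.0, min(1.0, (t - t0) / (t1 - t0)))
    return (p0, x0 + frac * (x1 - x0))

print(cursor_position(1.25))  # → (1, 80.0): halfway between the first two markers
```

Editing a time code marker then simply means changing one tuple; the rest of the mapping follows automatically.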

Figures 3 and 4. First version of ML-Annotation, and annotation of Webern, Op. 5 (courtesy Nicolas Donin)

2.2. An approach through recording: ML-Audio

ML-Audio is dedicated to real-time interaction with sound. It allows students to record audio, display a sonogram, interact with spectral features, export audio analysis data in the SDIF1 format to the ML-Maquette module for symbolic manipulation, and much more. ML-Audio is a meta-interface developed in Max/MSP [5] that runs as a standalone application. Teachers can save and recall presets, audio treatments and functions. ML-Audio also accepts external MIDI controllers, web cams and VST plug-ins, and has internal sequencing functions. ML-Audio was conceived to be modular in nature; features and modules can be added simply by placing the new modules in a specified folder. This flexibility allows future technological advances in audio analysis and interaction at IRCAM to be included without reinstallation of the application or changes to the core interface. Audio created in ML-Audio can be exported simply as a .wav or .aiff audio file.

1 Sound Description Interchange Format (http://www.ircam.fr/sdif)

Figure 3. ML-Audio

2.3. Manipulation of music: ML-Maquette

ML-Maquette is an application dedicated to the creation and manipulation of musical material, based on the OpenMusic computer-aided composition environment [1] [2]. It is an innovative attempt to model tonality and to allow musical manipulations using high-level operators in the tonal and non-tonal fields, such as arpeggio modeling, harmonic path building and musical transposition. By entering notes in a simple score editor or from a keyboard, or by importing a MIDI file or SDIF data, the student can represent data in musical notation and then manipulate the musical structure by dragging it onto various operators. A settings window is then displayed for adjusting the operator's parameters. Next, the process is launched and the result is displayed on the workspace. This workspace has several features, among them a horizontal temporal space for sequencing musical events. The vertical dimension of this temporal space can also be used for interacting with the content (for instance, for transposition). In a later phase, ML-Maquette should provide a large library of musical operators allowing students to open musical pieces in the workspace and simulate the transformations used by composers, for instance in the case of a musical variation such as Bach's Chaconne.

Figure 5. ML-Maquette

3. PRODUCTION OF CONTENT

3.1. Musical analysis

Our first concern with Musique Lab 2 was to facilitate musical analysis of the work on the 2005-2006 baccalaureate curriculum: the Chaconne from J.S. Bach's Partita in D minor, transcribed by Busoni. This piece raises many transcription and arrangement challenges and serves as a rich example. Many composers and musicologists have transcribed or arranged the Chaconne, most notably Busoni's symphonic piano rendition, Brahms's version for piano left hand, Schumann's version with piano accompaniment, and Jose Miguel Moreno's transcription for lute, in which two voices are added, weaving lute chorales into Bach's score.

Another application of Musique Lab 2 is the work conducted on the analysis of Partiels by Gérard Grisey. This example takes advantage of communication between ML-Audio and ML-Maquette to capture spectral elements of the recording, extract pitch and harmonics, and manipulate them in the symbolic domain for a better understanding of Grisey's composition and of musique spectrale in general.

3.2. Production of Signed Listening documents

Several Signed Listening documents have been created, including a comparative listening of Bach's first prelude from the Well-Tempered Clavier, a Signed Listening by
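The ML-Audio to ML-Maquette path used for Partiels (capture spectral elements, extract pitch and harmonics, manipulate them symbolically) can be sketched as a conversion from partial frequencies to approximate notated pitches. This is a minimal illustration of the principle, not IRCAM's analysis chain; the fundamental of roughly 41.2 Hz for the low trombone E that opens Partiels is an assumption of this sketch:

```python
import math

def freq_to_midi(f):
    """Frequency in Hz to a (possibly fractional) MIDI note number,
    with A4 = 440 Hz = MIDI note 69."""
    return 69 + 12 * math.log2(f / 440.0)

def harmonic_spectrum(f0, n_partials):
    """First n partials of a harmonic spectrum on f0, rounded to the
    nearest quarter tone, as a symbolic module might display them."""
    return [round(freq_to_midi(k * f0) * 2) / 2
            for k in range(1, n_partials + 1)]

# Low trombone E (~41.2 Hz): the upper partials drift away from the
# tempered scale (the 7th partial lands a quarter tone below B).
print(harmonic_spectrum(41.2, 8))
# → [28.0, 40.0, 47.0, 52.0, 56.0, 59.0, 61.5, 64.0]
```

Once expressed as (fractional) MIDI pitches, such data can be handled by symbolic operators: transposition, filtering, chord building, and so on.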
Philippe Leroux of his own piece VoiRex, and Andrea Cera's Signed Listening of techno music, including Aphex Twin.

4. TARGETED APPLICATIONS

A variety of pedagogical metaphors were established according to the educational wishes expressed by the participants in the creation of Musique Lab 2. These metaphors represent potential coursework scenarios, each focused on a specific pedagogical goal. Musique Lab 2 allows these courses to be interactive and understandable. Some of the goals currently being developed are detailed below:

1. ML-Maquette allows one to represent tonal structures, to differentiate between notes that belong to a scale and notes that are foreign to it, to set out scales of notes and chords, and to define chord numbering and even cadences.

2. Musique Lab 2 also lets one become aware of musical structures and separate overlapping thematic material. The software can situate the user in a score while displaying detailed information about overlapping structures, whether rhythmic, harmonic, instrumental or thematic. These annotations can be local and/or global.

3. Rhythmic structures and meter can be easily viewed in Musique Lab 2. The software permits users at all levels to see and hear temporal structures as well as to affect and change them. It is thus possible to try various meters on the same fragment of a musical work, to perceive harmonic offsets and to understand the choices of the composer.

4. Musique Lab 2 allows one to visually represent the change over time of one or more parameters. Using different types of graphics, the trajectories of these elements are thus clarified. The element to follow during a listening session can be changed; one can follow, for example, the dynamics, the harmonic texture or the orchestration.

5. Musique Lab 2 is of significant help in explaining the relationship between composition and acoustics.
The software suite allows one to analyze audio and apply various treatments to the sound. This renders certain compositional techniques tangible and accessible, most notably techniques employed by composers in the 20th century that otherwise could only be explained theoretically.

6. It becomes possible in Musique Lab 2 to tighten the relationships between music and the fields of science: mathematics, life sciences and physical sciences. Examples of coursework have already been created around mathematical data, statistics, probability and music.

5. CONCLUSION

These first six metaphors helped guide the development of Musique Lab 2. Whether for listening, analysis, discovering repertoire, instrumental performance or composition, these tools permit the instructor to develop custom pedagogical coursework. Musique Lab 2 exists not only for advanced music students in a conservatory setting, but also for students at a rudimentary introductory level. The software suite can be used in music classes for non-musicians in grammar schools, junior high schools and high schools, as well as in conservatory classes teaching performance, music education, theory, composition and music history.

Musique Lab 2 is a flexible software suite designed for use by teachers, and renders coursework adaptable to various levels of musical expertise. Each manipulation and operation available in Musique Lab 2 allows for a progressive approach to learning. This group of tools can answer the needs of a wide variety of musical audiences and is easily handled by teachers preparing courses. In conclusion, Musique Lab 2 can be deployed in a multitude of situations [3], strengthening the relationships between different classes in the same school or organization, or between interacting students in a single classroom.

6. REFERENCES

[1] Agon, C. "OpenMusic : Un Langage Visuel pour la Composition Assistée par Ordinateur", PhD thesis, Université Paris VI, 1998.
[2] Assayag, G., Agon, C., Fineberg, J. and Hanappe, P. "An Object Oriented Visual Environment for Musical Composition", Proceedings of the ICMC, Thessaloniki, Greece, September 1997.

[3] Guédy, F. "Atelier des Feuillantines: Une expérience de pédagogie et de création musicale avec des enfants psychotiques", Infancia, Madrid, April 2005.

[4] Serra, Gerzso, Cahen, Jourdan, Idray, Serriere et al. Musique Lab for Windows and Mac OS X, IRCAM & French Ministry of Education, IRCAM CD-ROM, Paris, April 2005.

[5] Zicarelli, D. "An Extensible Real-Time Signal Processing Environment for Max", Proceedings of the ICMC, University of Michigan, Ann Arbor, 1998.