The Music Technology Program at McGill University

Gary P. Scavone and Marcelo M. Wanderley
Department of Theory, Faculty of Music, McGill University
gary.scavone@mcgill.ca, marcelo.wanderley@mcgill.ca
http://music.mcgill.ca/musictech/

Abstract

The Music Technology program at McGill University prepares students for commercial and academic opportunities in music and new media technologies with degrees at the undergraduate, master's, and doctoral levels. Areas of ongoing research include audio signal processing, music information retrieval, human-computer interface design and analysis, and computational acoustic modeling. Together with the rich music performance environment of McGill's Faculty of Music, the Music Technology program provides students with research and practical training experience in the rapidly evolving music industry.

1 Introduction

The Music Technology Program at McGill University offers a unique environment for students to explore musical creation, technology, and research within a Faculty of Music that boasts an acclaimed performance tradition and state-of-the-art facilities. The Faculty of Music at McGill University is recognized as one of the premier music schools in North America. Active programs in music composition, sound recording, music education, theory, and musicology, in addition to music technology and performance, make it a rich environment for students to explore and nurture their interests in music and related technologies. The thriving city of Montreal, Canada, provides further opportunity for cultural enrichment.

2 Personnel & Students

Since its founding in the 1990s by Bruce Pennycook, the Music Technology program at McGill has grown rapidly and now includes four full-time professors: Philippe Depalle, Marcelo Wanderley, Ichiro Fujinaga, and Gary Scavone. Stephen McAdams will fill an additional position in Fall 2004 as head of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). Additional staff include a postdoctoral researcher, Vincent Verfaille, a part-time system administrator, and three instructors: Andrew Brouse, Robin Davies, and Kojiro Umezaki. The current music technology student population includes 16 graduate students and about 30 undergraduate students in the Honours and Minor programs.

3 Teaching

McGill's Music Technology Program offers degree programs at the undergraduate, master's, and doctoral levels. The undergraduate programs emphasize practical training in the design, development, and use of software for audio and new media processing. The graduate programs focus on the development of research methodologies via a sequence of seminars.

3.1 Undergraduate

The undergraduate degree program combines professional-level music training with courses in computer music, new media, digital signal processing, computer science, human-computer interaction, psychoacoustics, and acoustics. Students can pursue two different degrees at the Bachelor level. The Bachelor of Music with Honours in Music Technology degree requires, in addition to the core music requirements, completion of courses covering the fundamentals of computer music and related technologies. Other required courses include acoustics, psychoacoustics, and programming taught in the physics and computer science departments.
The Minor in Music Technology program is available to students from any McGill undergraduate program who wish to graduate with a knowledge of new music technologies and the impact they are having on the music industry.

3.2 Graduate

The graduate programs are heavily based on technological and scientific research, with applications to music and sound. The Master of Arts degree in Music Technology is a thesis-option, research-based program. Instruction is provided via graduate seminars that are typically completed in the first year of study. Students then concentrate on thesis research during their second year. Applicants must have a bachelor's degree in a related technological field and demonstrate evidence of expertise in music technology.

The Ph.D. in Music Technology is a research-intensive program culminating in a thesis that demonstrates a significant contribution to the field. Applicants at the doctoral level must demonstrate strong skills in music technology, including previous research experience and the ability to write research reports. Ph.D. applicants are typically expected to have completed a master's degree in science or technology.

4 Research

Research projects are centered around, but not limited to: a) Sound Analysis and Synthesis; b) Sound Processing and Digital Audio Effects; c) Human-Computer Interaction; d) Gestural Control in Multiparametric Environments; e) Music Information Retrieval and Digital Libraries; f) Optical Music Recognition; g) Acoustic Modeling and Perception. The interrelationships between the faculty research interests are diagrammed and described below.

[Figure: diagram of the interrelated faculty research areas. Depalle: sound analysis, processing, and synthesis. Wanderley: human-computer interaction and gestural control of sound synthesis. Fujinaga: optical music recognition and music information retrieval. Scavone: acoustic modeling, sound synthesis, input device development, and DSP. Shared themes include musical timbre, interface design, and perception.]

* Philippe Depalle's research is focused on digital synthesis and processing of sound using an analysis and re-synthesis approach. The fundamental component of this work is the systematization of the "analysis/synthesis" point of view in the conception of computer music tools (a toy illustration of this viewpoint appears after this list). Applications include innovative research on non-linear oscillating or time-varying systems, room acoustics, psychoacoustics, knowledge for the design of audio processing, recording, and transmission systems and electronic musical instruments, and creative tools for composers and multimedia artists to process sounds.

* Ichiro Fujinaga is working to develop and evaluate practices, frameworks, and tools for the design and construction of worldwide distributed digital music archives and libraries. The research conducted within this project will address unique and imminent challenges posed by the digitization and dissemination of music media. The project consists of four major research programs: 1. optical music recognition using microfilms; 2. development and evaluation of digitization methods for the preservation of analogue recordings; 3. design of a workflow management system with automatic metadata extraction; and 4. formulation of interlibrary communication strategies.

* Marcelo Wanderley's research focuses on the analysis of performer-instrument interaction with applications to gestural control of sound synthesis. This work is pursued in two ways: 1. the study of a generic digital musical instrument (DMI), its constituent parts, and the suggestion of novel approaches to its design; and 2. the analysis of acoustic instrument performances with the aim of eventually finding cues to improve the design of current DMIs. Applications include the prototyping of novel gestural controllers and digital musical instruments.

* Gary Scavone is interested in wind instrument acoustic research, modeling, and sound synthesis with the goals of developing: 1. a theoretical understanding of the fundamental acoustic behavior of wind instrument systems; 2. computer-based mathematical models that implement these acoustic principles as accurately as possible; 3. efficient, real-time synthesis algorithms capable of producing convincing woodwind instrument sounds (a minimal waveguide sketch appears after this list); and 4. human-computer interfaces for use in controlling and interacting with real-time synthesis models. Recent synthesis developments have focused on aspects of woodwind instrument toneholes, conical air columns, vocal tract influences, and reed/mouthpiece interactions.
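To make the analysis/synthesis viewpoint concrete, the following is a minimal Python sketch, not a tool used by the lab: it analyzes a signal with an STFT, keeps the strongest spectral peaks in each frame, and re-synthesizes them as windowed sinusoids via overlap-add. The function name and all parameter values are illustrative assumptions.

    import numpy as np

    def sines_analysis_resynthesis(x, sr, n_fft=2048, hop=512, n_peaks=20):
        """Toy analysis/re-synthesis: keep the n_peaks strongest spectral
        peaks per STFT frame and rebuild each frame from sinusoids."""
        win = np.hanning(n_fft)
        t = np.arange(n_fft) / sr
        y = np.zeros(len(x))
        for start in range(0, len(x) - n_fft, hop):
            spec = np.fft.rfft(x[start:start + n_fft] * win)
            mags, phases = np.abs(spec), np.angle(spec)
            # Candidate partials: local maxima of the magnitude spectrum.
            peaks = [k for k in range(1, len(mags) - 1)
                     if mags[k - 1] < mags[k] > mags[k + 1]]
            peaks.sort(key=lambda k: mags[k], reverse=True)
            frame = np.zeros(n_fft)
            for k in peaks[:n_peaks]:
                freq = k * sr / n_fft              # bin index -> Hz
                amp = 2.0 * mags[k] / win.sum()    # rough peak amplitude
                frame += amp * np.cos(2.0 * np.pi * freq * t + phases[k])
            y[start:start + n_fft] += frame * win  # overlap-add
        return y

Once a sound is represented this way, transformations such as transposition, time stretching, or morphing become parameter manipulations applied between the analysis and re-synthesis stages, which is the point of the approach.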
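Similarly, the flavor of the real-time wind-instrument synthesis mentioned in Gary Scavone's paragraph can be suggested by a single-delay-line waveguide sketch, in the spirit of the clarinet model in the Synthesis ToolKit (STK) cited under Recent Publications. The coefficients below are illustrative assumptions, not values from the program's models.

    import numpy as np

    def clarinet_waveguide(freq=220.0, dur=1.0, sr=44100, breath=0.55, seed=0):
        """Minimal clarinet-like waveguide: one delay line for the folded
        bore round trip, a one-pole lowpass plus inversion for the bell
        reflection, and a memoryless 'reed table' nonlinearity."""
        rng = np.random.default_rng(seed)
        n = int(sr * dur)
        delay_len = max(2, int(round(0.5 * sr / freq)))  # half-wave round trip
        delay = np.zeros(delay_len)
        idx, lowpass = 0, 0.0
        out = np.zeros(n)
        for i in range(n):
            pressure = breath * (1.0 + 0.01 * rng.standard_normal())  # breath + noise
            bore = delay[idx]                       # wave returning from the bore
            lowpass = 0.7 * lowpass + 0.3 * bore    # bell losses (one-pole lowpass)
            p_diff = -0.95 * lowpass - pressure     # pressure difference across reed
            refl = np.clip(0.7 - 0.3 * p_diff, -1.0, 1.0)  # reed reflection coeff.
            delay[idx] = pressure + refl * p_diff   # wave re-entering the bore
            idx = (idx + 1) % delay_len
            out[i] = bore
        peak = np.max(np.abs(out))
        return out / peak if peak > 0 else out

A serious model would add tonehole junctions, fractional-delay tuning, and register control; the sketch only shows the core loop of delay line, lossy bell reflection, and reed nonlinearity.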
4.1 Current Projects

* Quantitative Analysis of Expressive Movements of Musicians: This project focuses on the correlation between physical and musical gestures, specifically on expressive movements. Such movements do not have a direct link to the generation of sound but are an integral part of musical performance. Expected results, in addition to a better understanding of music performance and the role of expressive movements, include the design of new computer-based musical instruments as well as hardware interfaces that take expressive movements into account (a minimal sketch of one such movement-audio measurement follows).
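As a hedged illustration only, and not the project's actual analysis pipeline, the Python sketch below computes a simple "quantity of motion" from motion-capture marker trajectories and correlates it with short-time audio energy; the marker array layout, frame rate, and RMS feature are all assumptions made for the example.

    import numpy as np

    def quantity_of_motion(markers, fps):
        """Frame-by-frame quantity of motion: summed speed of all markers.
        markers: array of shape (frames, n_markers, 3), positions in metres."""
        vel = np.diff(markers, axis=0) * fps               # m/s per marker
        return np.linalg.norm(vel, axis=2).sum(axis=1)

    def movement_audio_correlation(markers, fps, audio, sr):
        """Pearson correlation between quantity of motion and audio RMS,
        with the audio chopped into one analysis window per mocap frame."""
        qom = quantity_of_motion(markers, fps)
        hop = int(round(sr / fps))
        n = min(len(qom), len(audio) // hop)
        rms = np.sqrt(np.mean(audio[:n * hop].reshape(n, hop) ** 2, axis=1))
        q = (qom[:n] - qom[:n].mean()) / (qom[:n].std() + 1e-12)
        r = (rms - rms.mean()) / (rms.std() + 1e-12)
        return float(np.mean(q * r))

    # Example with synthetic data: 10 s of 3 markers at 100 fps.
    rng = np.random.default_rng(0)
    markers = np.cumsum(rng.standard_normal((1000, 3, 3)) * 1e-3, axis=0)
    audio = rng.standard_normal(int(44100 * 10.0))
    print(movement_audio_correlation(markers, 100, audio, 44100))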

* The McGill Audio Preservation Project (MAPP) is part of a larger research plan to develop frameworks and tools for creating distributed digital music libraries. Since phonograph records are quickly disappearing from circulation, it is extremely important that libraries and archives around the world start preserving these significant cultural artifacts of the 20th century and make them more accessible. However, no standard software or systems exist to easily digitize phonograph records and make them widely available. This research project investigates an efficient and economical workflow management system for the digitization of records.

* Controlled Gestural Audio Systems (ConGAS). The COST 287 (ConGAS) Action intends to contribute to the advancement and development of musical gesture data analysis, and to capture aspects connected to the control of digital sound and music processing. It gathers delegates from 13 countries (Belgium, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, Spain, Sweden, Switzerland, the UK, and Canada).

* ENACTIVE Interfaces. The Sound Processing and Control Lab (SPCL) is one of the 24 partners of this European Community funded project (a Sixth Framework Programme Network of Excellence). Gathering research centres from 11 countries (Belgium, Canada, France, Germany, Ireland, Italy, Spain, Sweden, Switzerland, UK, USA), the ENACTIVE Network is a four-year research project on the development of advanced interfaces for human-computer interaction (HCI), including applications to the creative arts and music.

4.2 Student Projects

This section summarizes recent graduate student research projects.

* Anne-Marie Burns (M.A.) is working on a project to retrieve gestural information, including finger position, of a guitar player using visual input (video images) as the main source.

* Wesley Hatch (M.A.) is investigating audio morphing using high-level features that enable one to interpolate between two sounds along certain well-defined dimensions, which may or may not be perceptually relevant.

* Paul Kolesnik (M.A.) is researching the movements of orchestral conductors and their applications within the framework of human-computer interaction. He is making use of artificial intelligence, and hidden Markov models in particular, to perform movement recognition and analysis (a toy HMM classification sketch appears below).

* Ian Knopke (Ph.D.) is completing his thesis work on a search engine for audio files on the web, which includes a high-performance web crawler that uses analyses of both web page text and audio data to increase the quality of audio classification. An interface similar to Google or AltaVista is provided.

* Catherine Lai (Ph.D.) is working on the McGill Audio Preservation Project (MAPP), digitizing a unique set of jazz recordings from the 1940s and 1950s in 78 rpm formats as well as a set of LP recordings from David Edelberg's Handel collection housed in the McGill Music Library.

* Cory McKay (M.A.) is researching automatic classification and clustering of music, including work on musical similarity and genre classification.

* Francois Thibault (M.A.) is working on adaptive processing of singing voice timbre, using high-quality harmonic plus noise analysis/synthesis systems to make transformations based on audio perception and signal content.

* Caroline Traube (Ph.D.) is completing her thesis work on indirect acquisition of instrumental gesture based on signal, physical, and perceptual information, and on timbral analogies between vowels and plucked string tones (a simplified version of the plucking-point idea is sketched below).
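Traube and Depalle's published method estimates a comb-filter delay by weighted least squares (see Recent Publications); the sketch below is a simplified stand-in, not their algorithm: it matches measured harmonic magnitudes against the ideal |sin(pi*k*p)| comb profile of a string plucked at relative position p. The function and the synthetic test are illustrative assumptions.

    import numpy as np

    def estimate_pluck_point(harmonic_mags,
                             candidates=np.linspace(0.05, 0.5, 200)):
        """Return the relative plucking position p whose comb profile
        |sin(pi*k*p)| best fits the harmonic magnitudes. Positions p and
        1 - p are equivalent, so only p <= 0.5 is searched."""
        mags = np.asarray(harmonic_mags, dtype=float)
        mags = mags / (mags.max() + 1e-12)
        k = np.arange(1, len(mags) + 1)
        best_p, best_err = candidates[0], np.inf
        for p in candidates:
            profile = np.abs(np.sin(np.pi * k * p))
            # Least-squares gain fitting the profile to the data.
            gain = (profile @ mags) / (profile @ profile + 1e-12)
            err = float(np.sum((mags - gain * profile) ** 2))
            if err < best_err:
                best_p, best_err = p, err
        return best_p

    # Synthetic check: a string "plucked" one fifth of the way along.
    k = np.arange(1, 21)
    mags = np.abs(np.sin(np.pi * k * 0.2))
    print(estimate_pluck_point(mags))   # approximately 0.2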
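Paul Kolesnik's project above applies hidden Markov models to conducting gestures. As a generic, hedged sketch (the observation coding, model parameters, and gesture names are invented for illustration), the scaled forward algorithm below scores a discrete observation sequence against per-gesture HMMs and picks the best match:

    import numpy as np

    def hmm_log_likelihood(obs, pi, A, B):
        """Scaled forward algorithm: log P(obs | model) for a discrete HMM.
        pi: (S,) initial probs; A: (S, S) transitions; B: (S, V) emissions."""
        alpha = pi * B[:, obs[0]]
        log_p = np.log(alpha.sum() + 1e-300)
        alpha /= alpha.sum() + 1e-300
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum() + 1e-300
            log_p += np.log(s)
            alpha /= s
        return log_p

    def classify_gesture(obs, models):
        """models: dict name -> (pi, A, B); return the likeliest gesture."""
        return max(models, key=lambda g: hmm_log_likelihood(obs, *models[g]))

    # Invented 2-state models over 3 observation symbols (e.g., quantized
    # baton directions): one favours holding a state, one favours alternating.
    pi = np.array([0.5, 0.5])
    A_hold = np.array([[0.9, 0.1], [0.1, 0.9]])
    A_swap = np.array([[0.1, 0.9], [0.9, 0.1]])
    B = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
    models = {"legato": (pi, A_hold, B), "staccato": (pi, A_swap, B)}
    print(classify_gesture([0, 2, 0, 2, 0, 2], models))  # -> "staccato"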
* Doug Van Nort (Ph.D.) is interested in real-time control of computer music and interactive instruments, with both research and artistic applications. His research goal is to develop methods for control (mapping strategies, control devices) for given synthesis techniques, informed by an aesthetically-based notion of what a new instrument can be.

* Philippe Zaborowski (M.A.) is working on a new interface for handhelds, called the "ThumbTec", with a wide variety of applications from text entry in cell phones to musical instruments.

5 Facilities & Resources

5.1 Research Laboratories

A number of research laboratories have been established, including the Sound Processing and Control Laboratory, the Distributed Digital Music Library Laboratory, the Computational Acoustic Modeling Laboratory, the Electronics Development Laboratory, and the Musical Input Devices and Human-Computer Interaction Laboratory. Workspaces are provided in each lab for graduate students and research assistants working on related projects. Detailed information about each laboratory's facilities is available from the Music Technology website. Funding for the research labs has come from grants from FQRNT (Fonds Québécois de Recherche sur la Nature et les Technologies), CFI (Canada Foundation for Innovation), FQRSC (Fonds Québécois de Recherche sur la Société et la Culture), and McGill University.

5.2 Computer/Teaching Laboratory

The Music Technology Computer Laboratory currently includes 16 computers running the Mac OS X operating system. A wide range of software applications is installed on the computers to support instruction, including Max/MSP, Matlab, CodeWarrior, Tassman, Flash, Final Cut Express, Sibelius, Acrobat, OpenOffice, Gimp, and SimpleSynth.

5.3 Faculty of Music Resources

Other facilities exist within the Faculty of Music that can be used by students, including the Digital Composition Studio, the studios of the Sound Recording Area, the Marvin Duchow Music Library, and, in the near future, the facilities of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). Collaborations with faculty in the departments of Electrical Engineering and Psychology also provide opportunities for research and pedagogical pursuits in various other parts of the McGill campus.

Recent Publications

Cook, P.R., and Scavone, G.P. (2003). The Synthesis ToolKit (STK) in C++. In Audio Anecdotes: A Cookbook of Audio Algorithms and Techniques, ed. K. Greenebaum. Natick, Massachusetts: A.K. Peters.

Fujinaga, I. (2004). Staff detection and removal. In Visual Perception of Music Notation, ed. S. George. Hershey, PA: Idea Group Inc.

Fujinaga, I., and Weiss, S. (2004). Music. In Blackwell Companion to Digital Humanities, eds. S. Schreibman, R. Siemens, and J. Unsworth. Oxford: Blackwell Publishing.

Hunt, A., Wanderley, M., and Paradis, M. (2003). The Importance of Parameter Mapping in Electronic Instrument Design. Journal of New Music Research 32(4): 429-440.

Kolesnik, P., and Wanderley, M. (2004). Recognition, Analysis and Performance with Expressive Conducting Gestures. In Proc. of the 2004 Int. Computer Music Conf.

McAdams, S., Depalle, P., and Clarke, E. (2004). Analysing Musical Sounds. Chapter 4 of Empirical Musicology: Aims, Methods, Prospects. Oxford: Oxford University Press, to be published in 2004.

McKay, C. (2004). Automatic genre classification as a study of the viability of high-level features for music classification. In Proc. of the 2004 Int. Computer Music Conf.

Riley, J., and Fujinaga, I. (2003). Recommended best practices for digital image capture of musical scores. OCLC Systems and Services 19(2): 62-69.

Scavone, G.P. (2003). The Pipe: Explorations in Breath Control. In Proc. of the NIME-03 Conf. on New Interfaces for Musical Expression, Montreal, Canada, pp. 15-18.

Scavone, G.P. (2003). Modeling vocal-tract influence in reed wind instruments. In Proc. of the 2003 Stockholm Musical Acoustics Conference, pp. 291-294.

Thibault, F., and Depalle, P. (2004). Adaptive Processing of Singing Voice Timbre. Canadian Conf. on Electrical and Computer Engineering (CCECE), Niagara Falls, Canada.

Tindale, A., Kapur, A., and Fujinaga, I. (2004). Towards timbre recognition of percussive sounds. In Proc. of the 2004 Int. Computer Music Conf.

Traube, C., and Depalle, P. (2003). Extraction of the excitation point location on a string using weighted least-square estimation of a comb filter delay. In Proc. of the Conf. on Digital Audio Effects (DAFx-03), London, UK, pp. 188-191.

Traube, C., and Depalle, P. (2004). Timbral analogies between vowels and plucked string tones. Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Montreal, Canada.

Traube, C., McCutcheon, P., and Depalle, P. (2004). Verbal descriptors for the timbre of the classical guitar. Conf. on Interdisciplinary Musicology (CIM'04), Graz, Austria.
Traube, C., and Depalle, P. (2004). Phonetic gestures underlying guitar timbre description. In Proc. of the Int. Conf. on Music Perception and Cognition (ICMPC'04), Evanston, IL, USA.

Van Nort, D., Wanderley, M.M., and Depalle, P. (2004). On the Choice of Mappings Based on Geometric Properties. In Proc. of the NIME-04 Conf. on New Interfaces for Musical Expression, Hamamatsu, Japan.

Vines, B., Wanderley, M., Nuzzo, R., Krumhansl, C., and Levitin, D. (2003). Performance Gestures of Musicians: What Structural and Emotional Information Do They Convey? In Gesture-Based Communication in Human-Computer Interaction, eds. A. Camurri and G. Volpe. Springer-Verlag, pp. 468-478.

Wanderley, M., and Depalle, P. (2004). Gestural Control of Sound Synthesis. Proc. of the IEEE 92(4), Special Issue on Engineering and Music: Supervisory Control and Auditory Communication, ed. G. Johannsen.

Zaborowski, P.S. (2004). ThumbTec: A New Handheld Input Device. In Proc. of the NIME-04 Conf. on New Interfaces for Musical Expression, Hamamatsu, Japan.

Zadel, M., Kosek, P.C., and Wanderley, M.M. (2004). An Inertial, Pliable Interface. In Proc. of the NIME-04 Conf. on New Interfaces for Musical Expression, Hamamatsu, Japan.