THE MUSIC TECHNOLOGY PROGRAM AT MCGILL UNIVERSITY

Gary P. Scavone and Nathan Whetsell
Schulich School of Music, McGill University
Montréal, Québec, Canada

ABSTRACT

The Music Technology program at McGill University prepares students for commercial and academic opportunities in music technology, offering degrees at the master's and doctoral levels as well as two minor programs at the undergraduate level. Areas of research include audio signal processing, music information retrieval, human-computer interface design and analysis, computational acoustic modeling, and music perception and cognition.

1. INTRODUCTION

The Music Technology program at McGill University affords students the opportunity to explore music creation, technology, and research in an environment that boasts an acclaimed performance tradition and state-of-the-art facilities. The Schulich School of Music is recognized as one of the premier music schools in North America. Active programs in Performance, Composition, Sound Recording, Music Education, Theory, and Musicology offer additional avenues for students to explore and develop their musical interests. In addition, the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) brings together researchers and their students from McGill University, l'Université de Montréal, and l'Université de Sherbrooke. For extracurricular activities, Montreal offers a thriving and varied music scene.

2. PERSONNEL & STUDENTS

There are five full-time professors of Music Technology: Philippe Depalle, Ichiro Fujinaga, Stephen McAdams, Gary Scavone, and Marcelo Wanderley. Additional staff includes a chief electronics technician, Darryl Cameron; adjunct professors Axel Mulder, Bruce Pennycook, and Marc-Pierre Verge; and an instructor, Kojiro Umezaki. The current student population includes five postdoctoral researchers, nineteen doctoral students, nine master's students, and about thirty undergraduates in the Minor programs.

3. DEGREE PROGRAMS

McGill's Music Technology program offers degrees at the master's and doctoral levels, as well as two undergraduate Minors. The undergraduate programs emphasize practical training in the design, development, and use of software for audio and new media processing. The graduate programs focus on research.

3.1. Undergraduate

The Musical Science and Technology Minor focuses on interdisciplinary topics in science and technology applied to music. The goal of the program is to help prepare students for commercial jobs in the audio technology sector and/or for subsequent graduate study and research. The program is designed for students who have a strong background in the sciences and prior experience with mathematics and computer programming. The goal of the Musical Applications of Technology Minor is to provide instruction in practical and creative applications of technology for musical purposes to a broad range of students from varied backgrounds.

3.2. Graduate

The Master of Arts degree in Music Technology is a two-year, thesis-based program. Instruction is provided via graduate seminars that are typically completed in the first year of study. Students then concentrate on thesis research during their second year. Applicants must have a bachelor's degree in a related technological field and demonstrate evidence of expertise in music technology. The PhD in Music Technology is a research-intensive program culminating in a thesis that demonstrates a significant contribution to the field. Applicants at the doctoral level must demonstrate strong skills in music technology, including previous research experience and the ability to write research reports. PhD applicants are typically expected to have completed a master's degree in science or technology.

4. RESEARCH

Research is loosely organized under the five full-time faculty members and their respective research labs. At the graduate level, all students are expected to pursue research projects for both the master's and PhD programs.

4.1. Faculty Research

* Philippe Depalle's research focuses on the digital synthesis and processing of sound using an analysis/resynthesis approach (a minimal sketch of the resynthesis idea follows this list). Applications include non-linear oscillating or time-varying systems, room acoustics, psychoacoustics, audio effects processing, recording, transmission systems and electronic musical instruments, and creative tools that allow composers and multimedia artists to process sound. Philippe co-directs the Sound Processing and Control Laboratory (SPCL) with Marcelo Wanderley.

* Ichiro Fujinaga is working to develop and evaluate practices, frameworks, and tools for the design and construction of worldwide distributed digital music archives and libraries. The research focuses on four general areas:
  - Optical music recognition using microfilms
  - Development and evaluation of digitization methods for the preservation of analogue recordings
  - Design of a workflow management system with automatic metadata extraction
  - Formulation of interlibrary communication strategies
  Ichiro directs the Distributed Digital Music Archives and Libraries Laboratory.

* Stephen McAdams' research goal is to understand how listeners mentally organize a complex musical scene into sources, events, sequences, and musical structures, using techniques from digital signal processing, mechanics, psychophysics, cognitive psychology, and cognitive neuroscience. Stephen directs the Music Perception and Cognition Laboratory and the Real-Time Multimodal Laboratory. He is also the director of CIRMMT.

* Gary Scavone is interested in acoustic research, modeling, and sound synthesis that leads to:
  - a theoretical understanding of the fundamental acoustic behavior of sounding objects;
  - computer-based mathematical models that implement the acoustic principles as accurately as possible;
  - efficient, real-time synthesis algorithms capable of producing convincing sounds;
  - human-computer interfaces for use in controlling and interacting with real-time synthesis models.
  Research projects are also directed toward the development of software tools to assist with the synthesis of sounds in real time. Gary directs the Computational Acoustic Modeling Laboratory.

* Marcelo Wanderley's research focuses on the analysis of performer-instrument interaction, with applications to the gestural control of sound synthesis. His main research interests include gestural control of sound synthesis, input device design and evaluation, sensor design and data acquisition, and human-computer interaction. He is the co-author (with Eduardo R. Miranda) of the textbook New Digital Musical Instruments: Control and Interaction Beyond the Keyboard (A-R Editions, 2006), the first comprehensive reference on this area. Marcelo directs the Input Devices and Music Interaction Laboratory and co-directs the SPCL with Philippe Depalle.
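To make the analysis/resynthesis approach mentioned in the first item above more concrete, the following is a minimal sketch of the resynthesis stage only: an additive oscillator bank driven by per-frame partial frequencies and amplitudes. It is illustrative rather than drawn from any SPCL software; the function name, array layout, and parameter values are assumptions.

```python
import numpy as np

def resynthesize(freqs, amps, frame_len, fs=44100.0):
    """Additive resynthesis: sum sinusoids whose per-partial frequencies
    and amplitudes are specified once per analysis frame.

    freqs, amps : arrays of shape (n_frames, n_partials), in Hz and
                  linear amplitude (assumed layout for this sketch)
    frame_len   : hop size in samples between analysis frames
    """
    n_frames, n_partials = freqs.shape
    out = np.zeros(n_frames * frame_len)
    phase = np.zeros(n_partials)  # running phase of each oscillator
    for m in range(n_frames):
        for n in range(frame_len):
            # advance each oscillator by the current frame's frequency
            phase += 2.0 * np.pi * freqs[m] / fs
            out[m * frame_len + n] = np.sum(amps[m] * np.sin(phase))
    return out

# Example: two steady partials (220 Hz and 660 Hz) for about one second
frames = 100
freqs = np.tile([220.0, 660.0], (frames, 1))
amps = np.tile([0.6, 0.3], (frames, 1))
signal = resynthesize(freqs, amps, frame_len=441)
```

Practical analysis/resynthesis systems additionally estimate the partial parameters from a short-time analysis of the input and interpolate them between frames; those stages are omitted here.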
4.2. Student Research and Projects

* John Ashley Burgoyne (PhD) is starting his dissertation on statistical sequence models for music information retrieval (MIR). Using three case studies - optical music recognition, audio chord recognition, and harmonic analysis of MIDI scores - he is seeking to parameterize the space of labeling problems that tend to arise in MIR, to help researchers who may lack a strong background in statistics choose more appropriate models for their tasks than the standard hidden Markov model.

* Avrum Hollinger (MA) is developing musical interfaces that can be used inside magnetic resonance imaging (MRI) scanners. His piano controller lets neuroscientists studying the motor learning of musical tasks scan a subject's brain while synchronizing the scanner, auditory and visual stimuli, and auditory feedback with the onset, offset, and velocity of the piano keys.

* Corey Kereliuk (MA) is examining techniques to improve parameter estimates for sinusoidal models of audio. The Wigner-Ville distribution is used to estimate the instantaneous frequency of non-stationary signals, and this estimate is used in conjunction with the Viterbi algorithm to score state transitions of a hidden Markov model of phase trajectories.

* Moonseok Kim (PhD) is developing real-time synthesis algorithms for bowed-string instruments, focusing on the modeling of their body response using 1D, 2D, and 3D digital waveguides (a minimal waveguide sketch follows this list).

* Denis Lebel (MA) is working on an efficient model to synthesize the sound of breaking glass. Control of the synthesis model uses time and frequency information obtained through analysis of spectrograms of sound samples, as well as stochastic parameters to model the random nature of breaking glass.

* Antoine Lefebvre (PhD) is working on the design of a saxophone with improved acoustical and mechanical characteristics, accomplished with the aid of composite materials, physical modeling, impedance measurements, and collaborations with professional musicians. The work will entail the progressive design and construction of a new instrument in collaboration with industry.

* Beinan Li (PhD) is investigating the optical extraction of audio from stereo phonograph records, i.e., scanning LPs as very high-resolution images, extracting the groove information from the images, and finally restoring the audio information from the groove model.

* Mark Marshall (PhD) is using techniques from the field of human-computer interaction to examine the suitability of classes of sensors for the control of specific parameters in computer music interfaces, and to develop systems for the creation of vibrotactile feedback in such interfaces.

* Joseph Malloch (PhD) is focusing on the development and refinement of new digital musical instruments (DMIs), and on issues and approaches involved in mapping control to sound. His T-Stick controller was presented at NIME 2007 in New York and at Wired NextFest in Los Angeles. Malloch has developed software for "plug and play" Open Sound Control routing and mapping of DMIs in collaboration with Stephen Sinclair.

* Cory McKay (PhD) is developing jMIR, an open-source software suite for use in MIR research. It can be used to study music in audio and symbolic formats, to mine cultural information from the internet, and to manage music collections. jMIR includes software for extracting features, applying machine learning algorithms, and analyzing metadata.

* Bertrand Scherrer (PhD) is working on the indirect acquisition of instrumental gestures, with particular emphasis on the classical guitar. He is extending previous work by Caroline Traube to include more performance gestures and to use different spectral analysis tools to acquire harmonic flute fingerings.

* Andrey da Silva (PhD) is conducting a numerical study of single-reed woodwind instruments, focusing on aeroacoustic phenomena, i.e., the phenomena associated with the interaction between the acoustic and flow fields.

* Stephen Sinclair (PhD) has produced a software package called Dimple that allows control of haptically-enabled rigid-body simulations through the Open Sound Control protocol. This enables the use of force-feedback devices as controllers for sound synthesis in dataflow programming environments such as Max/MSP and Pure Data.

* Douglas Van Nort (PhD) is researching theoretical and practical mapping strategies for the control of sound processing, with an interest in the geometric and topological structure of the parameters of musical performance systems. He is also designing an adaptive analysis/synthesis framework, using a state-space representation for texture and timbre modifications.

* Shi Yong (MA) is investigating the modeling and measurement of the radiation directivity of wind instruments.
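As a concrete illustration of the digital waveguide techniques referenced in Moonseok Kim's entry above (and in Gary Scavone's faculty description), the following is a minimal one-dimensional waveguide string in the Karplus-Strong style: a delay line of roughly one period re-fed through a simple loss filter. It is a toy sketch with assumed parameter names and values, not the body-response models described above, which use considerably more elaborate 1D, 2D, and 3D structures.

```python
import numpy as np

def waveguide_string(f0=220.0, fs=44100, dur=1.0, loss=0.996):
    """Minimal 1-D digital waveguide (Karplus-Strong style)."""
    N = int(round(fs / f0))                 # delay-line length in samples
    line = np.random.uniform(-1.0, 1.0, N)  # initial "pluck" excitation
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        out[n] = line[0]
        # averaging two adjacent samples acts as a mild lowpass,
        # modeling frequency-dependent losses at the terminations
        fed_back = loss * 0.5 * (line[0] + line[1])
        line = np.roll(line, -1)
        line[-1] = fed_back
    return out

# Example: one second of a plucked-string-like tone near 220 Hz
tone = waveguide_string()
```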
5. FACILITIES & RESOURCES

5.1. Research Laboratories

The Music Technology program's six research laboratories provide workspaces for graduate students and research assistants. Detailed information about each laboratory is available from the Music Technology website. Grant funding for the research labs has come from the Canada Foundation for Innovation, the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, le Fonds Québécois de la Recherche sur la Nature et les Technologies, le Fonds Québécois de la Recherche sur la Société et la Culture, CIRMMT, and McGill University.

5.2. Computer/Teaching Laboratory

The Music Technology Computer Laboratory currently includes twelve Intel-based iMac computers. A wide range of software applications is installed to support instruction, including Max/MSP, MATLAB, Logic Express, Tassman, Finale, and SmartMusic.

5.3. Other Resources

CIRMMT's state-of-the-art research labs, the Digital Composition Studios, the studios of the Sound Recording program, and the Marvin Duchow Music Library offer additional resources for students. Active research collaborations are ongoing with faculty in other units, including Medicine, Neuroscience, Computer Science, Electrical and Mechanical Engineering, and Psychology.

6. SELECTED RECENT PUBLICATIONS

Burgoyne, J. A. and S. McAdams. "A meta-analysis of timbre perception using nonlinear extensions to CLASCAL," R. Kronland-Martinet, S. Ystad and K. Jensen, Eds., Sense of Sounds, Lecture Notes in Computer Science, Springer, Berlin, to be published.

Burgoyne, J. A. and S. McAdams. "Non-linear scaling techniques for uncovering the perceptual dimensions of timbre," Proc. Int. Computer Music Conf., Copenhagen, Denmark, 2007, 73-6.

Burgoyne, J. A., L. Pugin, C. Kereliuk and I. Fujinaga. "A cross-validated study of modelling strategies for automatic chord recognition in audio," Proc. Int. Conf. on Music Information Retrieval, Vienna, Austria, 2007.

Caclin, A., M.-H. Giard, B. Smith and S. McAdams. "Interactive processing of timbre dimensions: a Garner interference study," Brain Research, 1138, 159-70, Mar. 2007.

Caclin, A., S. McAdams, B. Smith and M.-H. Giard. "Interactive processing of timbre dimensions: an exploration with event-related potentials," J. Cogn. Neurosci., 20(1):49-64, 2008.

Dalitz, C., M. Droettboom, B. Czerwinski and I. Fujinaga. "A comparative study of staff removal algorithms," IEEE Trans. Pattern Anal. Mach. Intell., 13(2).

Gingras, B., S. McAdams and P. Schubert. "Effects of musical texture, performer's preparation, interpretative goals, and musical competence on error patterns in organ performance," A. Williamon and D. Coimbra, Eds., Proc. Int. Sym. on Performance Science, Porto, Portugal, Nov. 2007, 259-64.

Giordano, B. L., S. McAdams and J. McDonnell. "Acoustical and conceptual information for the perception of animate and inanimate sound sources," Proc. 13th Meeting of the Int. Conf. on Auditory Display, Montreal, QC, Canada, 2007.

Hollinger, A., V. Penhune, R. Zatorre, C. Steele and M. Wanderley. "fMRI-compatible electronic controllers," Proc. 7th Int. Conf. on New Interfaces for Musical Expression, New York, NY, USA, 2007, 246-9.

Kereliuk, C., B. Scherrer, V. Verfaille, P. Depalle and M. Wanderley. "Indirect acquisition of fingerings of harmonic notes on the transverse flute," Proc. Int. Computer Music Conf., Copenhagen, Denmark, 2007.

Lagrange, M., N. Whetsell and P. Depalle. "On the control of the phase of resonant filters with applications to percussive sound modelling," Proc. 11th Int. Conf. Digital Audio Effects, Espoo, Finland, 2008.

Lai, C., I. Fujinaga, D. Descheneau, M. Frishkopf, J. Riley, J. Hafner and B. McMillan. "Metadata infrastructure of sound recordings," Proc. Int. Conf. on Music Information Retrieval, Vienna, Austria, 2007, 157-8.

Lefebvre, A., G. P. Scavone, J. Abel and A. Buckiewicz-Smith. "A comparison of impedance measurements using one and two microphones," Proc. Int. Sym. Musical Acoustics, Barcelona, Spain, 2007.

Lemaitre, G., P. Susini, S. Winsberg and S. McAdams. "The sound quality of car horns: A psychoacoustical study of timbre," Acta Acustica, 93(3):457-68, May 2007.

Li, B., S. de Leon and I. Fujinaga. "Alternative digitization approach for stereo phonograph records using optical audio reconstruction," Proc. Int. Conf. on Music Information Retrieval, Vienna, Austria, 2007, 165-6.

Malloch, J. and M. Wanderley. "The T-Stick: From musical interface to musical instrument," Proc. 7th Int. Conf. on New Interfaces for Musical Expression, New York, NY, USA, 2007, 66-9.

Malloch, J., S. Sinclair and M. Wanderley. "A network-based framework for collaborative development and performance of digital musical instruments," to appear in Proc. Computer Music Modeling and Retrieval 2007 Conf., Springer, Berlin, 2008.

Marentakis, G., N. Peters and S. McAdams. "DJ Spat: Spatialized interactions for DJs," Proc. Int. Computer Music Conf., Copenhagen, Denmark, 2007.

Marshall, M. T., J. Malloch and M. Wanderley. "Non-conscious control of sound spatialization," Proc. 4th Int. Conf. Enactive Interfaces, Grenoble, France, 2007, 377-80.

McKay, C. and I. Fujinaga. "jWebMiner: A web-based feature extractor," Proc. Int. Conf. on Music Information Retrieval, Vienna, Austria, 2007, 113-4.

McKay, C. and I. Fujinaga. "Style-independent computer-assisted exploratory analysis of large music collections," J. Interdisciplinary Music Studies, Spring 2007, 63-85.

Peters, N., M. Evans and E. Britton. "TrakHue - Intuitive gestural control of live electronics," Proc. Int. Computer Music Conf., Copenhagen, Denmark, 2007, 500-3.

Pugin, L., J. A. Burgoyne and I. Fujinaga. "Goal-directed evaluation for the improvement of optical music recognition of early music prints," Proc. Joint Conf. on Digital Libraries, Vancouver, BC, Canada, 2007, 303-4.

Pugin, L., J. A. Burgoyne and I. Fujinaga. "MAP adaptation to improve optical music recognition of early music documents using hidden Markov models," Proc. Int. Conf. on Music Information Retrieval, Vienna, Austria, 2007, 513-6.

Rogers, S. E. and D. J. Levitin. "Short-term memory for musical intervals: Cognitive differences for consonant and dissonant pure-tone dyads," Proc. AES 123rd Convention, New York, NY, USA, 2007.

Scavone, G. P., A. Lefebvre and A. R. da Silva. "Measurement of vocal-tract influence during saxophone performance," J. Acoust. Soc. Am., 123(4), Apr. 2008.

da Silva, A. R. and G. P. Scavone. "Coupling lattice Boltzmann models to digital waveguides for wind instrument simulations," Proc. Int. Sym. Musical Acoustics, Barcelona, Spain, 2007.

da Silva, A. R. and G. P. Scavone. "Lattice Boltzmann simulations of the acoustical radiation from waveguides," J. Phys. A: Math. Theor., 40, 397-408, 2007.

da Silva, A. R., G. P. Scavone and M. van Walstijn. "Numerical simulations of fluid-structure interactions in single-reed mouthpieces," J. Acoust. Soc. Am., 122(3):1798-810, Sep. 2007.

Susini, P., S. McAdams and B. Smith. "Loudness asymmetries for tones with increasing and decreasing levels using continuous and global ratings," Acta Acustica, 93(4), 623-31, Jul. 2007.

Tardieu, J., P. Susini, F. Poisson, P. Lazareff, M. Mzali and S. McAdams. "Perceptual study of soundscapes in train stations," Applied Acoustics, to be published.

Van Nort, D. and M. Wanderley. "Control strategies for navigation of complex sonic spaces," Proc. 7th Int. Conf. on New Interfaces for Musical Expression, New York, NY, USA, 2007, 379-82.

Van Nort, D., D. Gauthier, S. X. Wei and M. Wanderley. "Extraction of gestural meaning from a fabric-based controller," Proc. Int. Computer Music Conf., Copenhagen, Denmark, 2007.