CCRMA STUDIO REPORT

Carr Wilkerson / Sasha Leitman / Fernando Lopez-Lezcano
CCRMA, Stanford University
email: carrlane,sleitman,nando @

1. INTRODUCTION

The Stanford Center for Computer Research in Music and Acoustics (CCRMA) is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool. While the CCRMA community settles into the new Knoll, a rich mix of visiting scholars, faculty, professionals, and talented students interacts daily. The following report attempts to capture CCRMA's current creative and research environment.

2. PEOPLE

2.1. Movements

Ge Wang arrived from Princeton University last Fall to take up his post as Assistant Professor. Michael Gurevich completed his post-doctoral stay at CCRMA and has moved to Belfast to take a position as a Lecturer at the School of Music and Sonic Arts, Queen's University. Congratulations to Chryssie Nanou on being elected an "At Large" Board Member of the ICMA.

2.2. Visiting Scholars

Tom Rossing, Visiting Professor of Physics, organized a special session of the Acoustical Society of America Conference in New Orleans honoring Max Mathews, including a special panel on the History of Computer Music with Max Mathews, John and Maureen Chowning, Richard Moore, David Wessel, Chris Chafe, and Julius Smith. He maintains an active schedule of teaching, lecturing, and publishing (see Books below).

Henri Penttinen, a visiting researcher from the Helsinki University of Technology, Department of Signal Processing and Acoustics, spent the Autumn and Winter terms at CCRMA working on mobile phone synthesis and ensemble performance with his eBottle.

Jonathan Middleton, a visiting researcher and composer, comes from Eastern Washington University, where he is an Assistant Professor of Music Theory and Composition.
He has come to explore interdisciplinary composition collaborations, hoping to design a music informatics curriculum for EWU.

Mario Mora, a visiting composer from the Arts Faculty of the University of Chile, spent the Winter working on pieces culminating in a concert of electroacoustic music performed at CCRMA in December.

Alain Renaud, a visiting researcher from the School of Music and Sonic Arts, Queen's University Belfast, spent last summer here as part of the strong collaboration between SARC and CCRMA on network performance.

3. COMPOSITIONS

Chris Chafe: Cefiru (2007) for celleto with live DSP built from a bank of phasor filters designed by Max Mathews. Tomato Quintet (2007), an installation: live sonification of ripening tomatoes with CO2, temperature, and light sensors. Tomato Music (2007), 8-channel computer-generated music from the Tomato Quintet installation. Congruence (2007), a 48-channel remix of the e=m2c concert with Roscoe Mitchell and Roberto Morales.

Jonathan Berger: Jiyeh, for 8-channel digital audio, and its companion piece for violin and orchestra. It is based on time-lapse satellite photographs of a war-caused environmental disaster and was performed in California and in Banff, Canada. His CD Miracles and Mud was released in the Naxos American Masters series.

Fernando Lopez-Lezcano: El Dinosaurio Habla, a piece for an analog synthesizer ("El Dinosaurio") that Fernando built from scratch in 1980/1981. After recently building a MIDI-to-CV interface for it, he used it live for the first time in conjunction with a Linux computer running SooperLooper and Ardour to build and change sonic textures in real time.

Juan-Pablo Caceres: Give a man an orchestra and he doesn't become us (2007), performed in San Francisco. He also presented a visual piece, orquesta, at SEAMUS 2007 (in collaboration with Carlos Costa).

Bruno Ruviaro: Anomia (2007) for chamber orchestra deals with the use of borrowed fragments from existing music in the composition of new material.
Malleable (2007): the first version was presented at the CCRMA Modulations concert in San Francisco. It works with the degree of recognizability and "malleability" of audio fragments, which are reintegrated into new sonic structures. Connected to these two works, he is writing an essay (called Intellectual "Improperty") about musical borrowing (sampling, etc.) in both the acoustic and electronic domains.

Per Bloland: FeXIV (Iron Fourteen) (2007, commissioned by Michael Straus) for solo saxophone and electronics with video.

Henri Penttinen: Dialogue (2007), a dialog between a violin (Elise MacMillan) and a saxophone (Adnan Marquez). Structure and backdrop are provided by an 8-string electric bass/guitar (Chris Warren) and his eBottle.

Mario Mora: NUD for flute and tape, SAX for alto saxophone and tape, and CaLMA for piano, live electronics, and computer-generated image.

4. RESEARCH

4.1. Signal Processing

Jonathan Berger and Chris Chafe collaborate on fMRI studies of music cognition with Dan Levitin, Vinod Menon, and S. Devarajan, as well as with numerous researchers in the environmental and biological sciences on auditory display of data.

Julius Smith has been focusing primarily on polishing and extending the four books in his music signal processing series, with recent emphasis on developing software reference implementations, primarily in the Faust language. Some of this material has been adapted for software laboratory modules in the RealSimple project at CCRMA.

Nelson Lee has been measuring and modeling the coupling of a single vibrating string in two orthogonal planes with the goal of high-fidelity acoustic guitar synthesis.

Matt Wright works toward his thesis defense by examining a musical event's Perceptual Attack Time (PAT), that is, its perceived moment of rhythmic placement. Representing PAT with probability density functions provides a new perspective on the long-standing problem of predicting PAT directly from acoustic signals.

Marina Bosi is looking at applications of the integer Modified Discrete Cosine Transform (MDCT) to lossless coding, multichannel audio coding with the Karhunen-Loève Transform, and advanced entropy coding schemes.

Song Hui Chon studies "sound annoyance" using six stimuli. As expected, sound annoyance was a function of loudness level; subjects also perceived a stimulus with higher bandwidth as more annoying, even at the same loudness level. She is also looking at the relationship between sound annoyance and consonance.
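The probability-density view of PAT can be sketched numerically. In the toy below, a listener's uncertainty about an event's attack time is modeled as a density over time; the Gaussian shape, time grid, and millisecond values are illustrative assumptions, not actual experimental data:

```python
import numpy as np

# Represent a Perceptual Attack Time (PAT) as a probability density over
# time instead of a single instant. Shape and values are assumptions.
t = np.linspace(0.0, 100.0, 1001)           # time axis in milliseconds
dt = t[1] - t[0]

onset_ms, spread_ms = 50.0, 8.0             # hypothetical listener judgment
p = np.exp(-0.5 * ((t - onset_ms) / spread_ms) ** 2)
p /= p.sum() * dt                           # normalize to unit area

pat_estimate = (t * p).sum() * dt           # point estimate: the density's mean
pat_spread = np.sqrt(((t - pat_estimate) ** 2 * p).sum() * dt)
```

A point estimate falls out as the density's mean, while the spread quantifies rhythmic ambiguity; densities for different events can then be compared or combined rather than forcing each onset to a single instant.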
Chris Chafe and Stefania Serafin collaborate with Juraj Kojs on cyberinstruments via physical modeling synthesis and their compositional applications, work that was published in Leonardo Music Journal, 2007.

With Patty Huang and Julius Smith, Jonathan Abel developed methods for implementing interior losses and scattering in waveguide mesh reverberators. These methods allow a new class of closed waveguide mesh structures.

Gautham Mysore has been doing research on music dereverberation.

Jonathan Abel and Henri Penttinen measured the highly inharmonic response of a "Slinky" and created a physical model of it based on waveguide modeling.

Guillermo Garcia has been working on restoration of musical signals using source models, developing efficient restoration methods that incorporate a priori knowledge about the sources present in the signal mixture.

Juhan Nam is working on signal processing applied to 3-D sound. Working with measured HRTFs, he is extracting acoustic features and applying filter design methods to obtain efficient models.

Jonathan Abel and Patty Huang developed a measure of reverberation echo density based on the percentage of reverberation impulse response samples lying outside the local standard deviation.

Hiroko Terasawa, Malcolm Slaney, and Jonathan Berger propose a new hybrid model of timbre perception in collaboration with Patty Huang and Jonathan Abel. This model integrates two complementary component models, one of color and the other of texture.

David Yeh has been working on digital, real-time guitar distortion effects based on existing analog effects circuits. The techniques involve understanding the intent of the original design to inform a digital implementation.

With Tamara Smyth, Jonathan Abel has been studying reed instrument measurement and synthesis.

Ryan Cassidy is working on auditory signal processing for music listening for the hearing impaired.
His work focuses on efficient loudness modeling and control, accurate simulation of impaired listening situations, and novel auditory modeling methods.

Jonathan Abel has also been working with Gary Scavone on how the reflection function of a mouthpiece or bell can be estimated by comparing impulse response measurements of a rigidly terminated tube.

4.2. New Media and Musical Systems

Jonathan Abel, Patty Huang, Miriam Kolar, Julius Smith, and John Chowning collaborate with Professor John Rick of the Stanford University Department of Archeology on a multidisciplinary project: Acoustics of the Underground Galleries of Ancient Chavín de Huántar, Peru. The importance of site acoustics is suggested by distinctive architectural features, notably an extensive network of underground galleries used in part for ritual purposes. They focus on measuring, modeling, and reconstructing the original site acoustics at Chavín to understand the implications of auditory experiences within the galleries as related to the site's role in developing religious authority.

The Stanford Laptop Orchestra is to become a new full-scale ensemble of computer-mediated meta-instruments. Its founder, Ge Wang, is currently leading the construction phase of the ensemble as well as organizing the curriculum, to culminate in a new course appearing at CCRMA soon. The project is based on Ge's work with the Princeton Laptop Orchestra.

Woon Seung Yeo, now an Assistant Professor at the Graduate School of Culture Technology at KAIST, has been working on a multimedia environment for direct real-time sound synthesis from images based on the method of raster scanning, combining scanning variables (such as sample rate and probe size) with image processing filters.

Juan-Pablo Caceres is currently working on machine learning techniques for automatic sound design, implemented using a hybrid of k-means clustering, decision trees, and gradient descent in order to achieve real-time performance. He is also in the early stages of a project involving music and sound prediction for applications in network performance.

John Chowning has been working on an interactive score program called ScorePlan in Max/MSP, which provides markers and transport for scores embedded in patches. It has been designed for his piece Voices V2.

Composer Jonathan Middleton and acoustician Henri Penttinen have collaborated to create a mobile phone composition based on the circular shapes of a rotary harmonograph.

4.3. Networking and Music

The SoundWIRE research group (Chris Chafe, Ge Wang, and Juan-Pablo Caceres) uses Internet networks as an extension of computer music performance, composition, and research. SoundWIRE is both a technology and a collective. Stable technology allows relatively painless regular rehearsing and concertizing at a range of sites worldwide. The group has been developing and testing JackTrip, used for multi-machine jam sessions over Internet2. New tools include Jmess, a utility to save and load audio connections in Jack, and JackPeek, a real-time network-controlled audio visualization program. Further, DIRAC (Distributed Internet Reverberation for Acoustical Collaboration) addresses a method for creating shared acoustical spaces by "echo construction." With NSF funding in hand, SoundWIRE begins a project to implement a Quality of Service (QoS) evaluation system using physical models on the network. Concerts include the "100 Meeting Places" 4-way concert (New York/Chicago/Santa Cruz/Stanford, March 2007), with live audiences in all four locations.
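The idea of evaluating network quality with physical models can be caricatured in a few lines: treat the measured round-trip time as the delay line of a plucked-string feedback loop, so that the resulting pitch and its stability audibly reflect latency and jitter. The sketch below is a purely local toy simulation with assumed parameters, not the SoundWIRE implementation:

```python
import numpy as np

def network_string(rtt_ms, fs=44100, dur=0.5, loss=0.995):
    """Toy sonification of network QoS: the feedback delay stands in for
    the round-trip time, so pitch = fs / delay reflects latency.
    All parameter values are illustrative assumptions."""
    N = max(2, int(fs * rtt_ms / 1000.0))    # round trip in samples
    buf = np.random.default_rng(1).uniform(-1, 1, N)  # "pluck" the loop
    out = np.empty(int(fs * dur))
    for i in range(out.size):
        j, k = i % N, (i + 1) % N
        buf[j] = loss * 0.5 * (buf[j] + buf[k])  # lowpass + loss per trip
        out[i] = buf[j]
    return out, fs / N                       # audio and latency-determined pitch

audio, pitch = network_string(rtt_ms=20.0)   # a 20 ms round trip -> 50 Hz tone
```

In a real deployment the delay would be the network itself rather than a buffer, and dropped or jittered packets would detune or roughen the tone, which is what makes the sonification informative.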
The project has also been developing long-term musical collaborations using network technologies, taking part in a two-quarter sequence (Fall 2006, Winter 2007) of weekly rehearsals with Pauline Oliveros' improvisation class at RPI (Troy, New York), and later also with Cynthia Payne's ensemble at UCSC. More connections are planned.

Juan-Pablo Caceres, in collaboration with Alain Renaud at SARC, Queen's University, initiated a collective to explore the musical potential of high-speed networks as a real-time performance medium. This collective has been implementing network reverberation/feedback delay and visual synchronization.

4.4. Controllers

Edgar Berdahl has been researching applications of control theory to computer music. He has been using haptic interfaces to control the gestures that musicians make with their instruments. Conversely, he has been making the acoustics of a vibrating string programmable by closing a feedback loop around a string sensor and string actuator. Rob Hamilton is using an electric guitar with controllable acoustics for some of his compositions.

Together with the SPCA's Francis Metcalf, Michael Gurevich is investigating technologies to support the training of dogs to assist the hearing impaired. This research entails the human factors concerns of a trainer working in a time-critical situation, along with developing a remote interface for spatialized sound playback. A prototype application is currently being developed using a mobile phone as the controller.

Nick Bryan, Steinunn Arnardottir, and Hayden Bursk are working on a 3D tangible step sequencer, CUBEATS. The sequencer consists of clear plastic cubes and a grid of sensing vertical rods. Anyone can build, manipulate, and perform musical patterns by stacking up to three cubes per rod at any position along the grid.

Edgar Berdahl is working with Bill Verplank on fundamental and complex haptic control algorithms.
They present two practical and effective haptic musical instruments: the haptic drum and the cellomobo (Colin Oldham).

Sasha Leitman has continued her work with interactive sound art. In collaboration with Stanford Dance and Drama professor Aleta Hayes, she created a series of acoustic and computer-generated real-time musical instruments as set pieces for the production of Califia. In addition, she and Jen Carlile have continued to work on Wheel to Reel, a set of interactive sound sculptures that uses abandoned bicycle parts to play old tape reels.

4.5. Computing

Planet CCRMA: Fernando Lopez-Lezcano has been keeping up to date with the most recent versions of sound, music, and MIDI Open Source software for the Linux platform. Planet CCRMA currently runs on the latest Fedora Core releases. A current collaboration with IRCAM researcher Arnaud Gomes-do-Vale seeks to port Planet CCRMA to CentOS.

ChucK: Ge Wang now leads the ChucK programming language project from Stanford. He is closely collaborating with Rebecca Fiebrink on analysis and learning frameworks for and within ChucK, and with Rebecca, Perry Cook, and Dan Trueman on developing ChucK features.

After 10 years of work on Snd, Bill Schottstaedt has declared the editor complete and has returned to nonlinear sound synthesis. The main result so far has been a set of about 40 new generators and a couple hundred animal and machine sounds.

Music, Computing, and Design (M:C:D): a new research group looking at the design and development of software systems of all sizes, programming languages for computer music synthesis and analysis, software interfaces and interaction paradigms for composition, performance, and education, music information retrieval, and computer-mediated ensembles. Ge Wang: Instigator.
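Nonlinear synthesis covers any generator whose output is not a linear function of its input; the best-known example from CCRMA's own history is Chowning-style frequency modulation, where one sinusoid modulates the phase of another to produce a rich spectrum from two oscillators. A minimal sketch (parameter values are illustrative, and this is not one of Schottstaedt's actual generators):

```python
import numpy as np

def fm_tone(fc=220.0, fm=220.0, index=2.0, fs=44100, dur=0.5):
    """Classic FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    fc = carrier frequency, fm = modulator frequency, index = modulation
    depth, which controls spectral richness. Values are illustrative."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

y = fm_tone()   # integer fc:fm ratio -> harmonic spectrum
```

With an integer carrier-to-modulator ratio the sidebands land on harmonics of the carrier; inharmonic ratios yield bell- and machine-like timbres, which is one reason nonlinear generators are a compact route to complex natural sounds.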

CCRMA has added nine smaller and faster fanless workstations designed by Fernando Lopez-Lezcano, bringing the total to 49.

5. CONCERTS AND EVENTS

CCRMA's ongoing concert series, organized by Chryssie Nanou, presents performances by an array of pioneers of electronic music as well as established and emerging composers. Recent guests have included Jean-Claude Risset, Hans Tutschku, Mark Applebaum, and Graeme Jennings, along with a series of concerts embracing emerging technologies and musical paradigms such as the laptop orchestra, the mobile phone ensemble, and multi-site networked performance collaborations.

Max Mathews, as part of his 80th birthday celebration, performed the Henry Cowell piece Rhythmicana (1931) with the Stanford Orchestra, Jindong Cai conducting. Assisted by Leland Smith, Max interpreted the score by realizing the Rhythmicon using custom-designed resonant filters and his Radio Baton. The piece was recorded at Skywalker Sound.

The Mobile Phone Orchestra of CCRMA performed in January on the CCRMA stage. The Mobile Phone Orchestra harnesses the phones' keyboards, built-in accelerometers, and cameras, turning phones and laptops into powerful musical control interfaces for interactive musical works. Pieces for the concert were composed by Jonathan Middleton, Greg Schiemer, Ge Wang, Mario Mora, and Henri Penttinen.

In November, CCRMA hosted a night of experimental electronic music at Cellspace in San Francisco with performances by Ge Wang, Juan-Pablo Caceres, Fernando Lopez-Lezcano, Luke Dahl, Michael Gurevich, Bruno Ruviaro, and Les Maladies Sexualment Transmissible (MST's). Performances were enhanced with projections and DJ sets by Steinunn Arnardottir and Jeff Cooper (Noggin).

6. WORKSHOPS, CLASSES AND GATHERINGS

6.1.
Workshops

This year three of our workshops were held at the Knoll: Digital Signal Processing with Perry Cook and Xavier Serra, Perceptual Audio Coding with Marina Bosi and Richard Goldberg, and Signal Processing Techniques for Digital Audio Effects with Jonathan Abel and Dave Berners. Other workshops were exported. Fernando Lopez-Lezcano and Juan-Pablo Caceres taught a two-week summer workshop on computer music and signal processing in Guanajuato, Mexico. We hosted three workshops at Dongguk University in Seoul, Korea: Introduction to Sound Synthesis by Jun Kim, Introduction to Audio Digital Signal Processing for Musicians by Woon Seung Yeo, and Music Information Retrieval by Kyogu Lee. Three workshops were held at Republic Polytechnic of Singapore: Introduction to Sound Synthesis and Audio DSP for Musicians with Woon Seung Yeo, Improvisation and Experimentation with Mark Applebaum, and Physical Interaction Design for Music with Michael Gurevich and Carr Wilkerson. CCRMA hosted a Mobile Phone Programming Workshop in November 2007, with guest facilitators Georg Essl from Deutsche Telekom Laboratories, Germany, and Jarno Seppänen from Nokia.

6.2. Classes

A class on the Music and Acoustics of Ancient and Contemporary Greece, hosted at the Aristotle University of Thessaloniki, Greece, was taught by Chris Chafe assisted by Chryssie Nanou as part of the Bing Overseas Studies Program Seminars at Stanford. The course, for 15 undergraduate Stanford students, was an exploration of musical archeology; the environmental soundscapes of the countryside, the urban environment, and the undersea world; and an introduction to contemporary Greek composition and performance. Several new courses are planned for this year, initiated by CCRMA's newest faculty member, Ge Wang.
Keep an eye out for these titles: S, M, L, XL: Designing and Implementing Software Systems for Computer Music; Laptop Orchestra; Music and Computing (computational applications, algorithms, and systems for analysis and synthesis); and Music, Computing, and Design.

6.3. Gatherings

Among the active groups are the DSP Group, the Stanford AES Student Section, and the CCRMA User Group. Weekly, we host the CCRMA Colloquium series, where various experts speak on relevant topics. Recent speakers include Judith Shatin, Hans Tutschku, Dan Overholt, and David Merrill. We have also hosted several meetups sponsored by the Bay Area Computer Music Technology Group.

7. BOOKS

Thomas Rossing, ed., Springer Handbook of Acoustics. Springer Press, 2007. One of the newest handbooks in the Springer Handbook series, its 28 chapters describe some of the latest research in acoustics along with in-depth reviews of both fundamental theory and applications.

8. CONCLUSION

Traditional CCRMA creative and research foci like digital signal processing and composition remain very active, while newer disciplines like network telematics and mobile and laptop orchestra performance are gaining traction. The workshop program is undergoing some revision (look for the schedule soon) but will remain part of CCRMA's offerings. CCRMA looks to be as vibrant as ever. Come by and see us!