MEDIALOGY AT AALBORG UNIVERSITY COPENHAGEN

Rolf Nordahl, Stefania Serafin, Amalia De Goetzen, Juraj Kojs
Medialogy, Aalborg University Copenhagen
{rnm, sts, adgo, juko}

ABSTRACT

In this paper we describe recent developments in the Medialogy education at Aalborg University Copenhagen, focusing on the role of sound in interactive new media. Furthermore, we describe new study programmes developed within Medialogy over the last year in which sound plays an important role.

1. INTRODUCTION

In 2002 the study of Medialogy was established at Aalborg University. The programme was initiated because Denmark needed further study options within the broad field of multimedia design, aimed in particular at graduates of the short-cycle Multimedia Design programme. Music technology was a cornerstone of the Medialogy concept from the start. However, 2004 was a milestone year for Medialogy, both in its overall strategy and in the strengthening of its music technology component. This happened through several developments: the intake of students from short-cycle IT programmes, a revision of the undergraduate programme, the launch of the M.Sc. programme, and the geographical expansion of Medialogy, which Aalborg University now offers in three Danish cities: Copenhagen (the capital), Esbjerg and, planned for 2005, Aalborg itself.

Medialogy has been formulated to enable students to build their own bridges between current technologies and creative applications, thereby breaking down conventional barriers between the humanities and the sciences. The programme is founded on the belief that real-life creative teams need people with a strong intuitive understanding of technology.

2. FACULTY

The Copenhagen division of Aalborg University was established during the winter of 2003-04. Both existing and newly hired faculty members have contributed to its start.

Stefania Serafin's research interests focus on sound models for interactive media [3]. She is currently extending the research pursued during her PhD studies at CCRMA, Stanford University, on physical models of friction-driven musical instruments to simulated multimodal environments in which physically based sound plays an important role. Together with Federico Fontana from the University of Verona, Italy, she is also investigating interactive physically modelled spaces, such as 3D waveguide meshes simulated in the CAVE system of the Aalborg VR lab.

Amalia De Goetzen's research focuses on sound synthesis driven by expressive gestures [4]. She is currently investigating how HCI laws can be applied to the analysis of performers' gestures. In particular, she is interested in the central role of multimodality and multi-sensory communication in the design of next-generation interfaces.

Juraj Kojs is a composer interested in soundscapes positioned at the border of hearing, geometric patterns in music, interactive composition, composing with physical models, and the music of his native Slovakia. His works have been performed in Chile, Cuba, Denmark, Great Britain, Italy, Slovakia and the US. In the academic year 2004/2005 he taught classes in computer music, interactive performance and augmented sonic spaces.

3. RESEARCH

3.1. Sound synthesis for interactive media

Research in Medialogy focuses on designing new sound models for interactive media such as installations, computer games and virtual reality systems.
Interactive sound models have been studied for a long time in the computer music community, and the same approach can also be used to simulate everyday sounds. Such models are interesting both for realistic real-time simulations and for using computers to extend the sonic possibilities offered by the real world.
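As a purely illustrative example of this kind of physically inspired model, the sketch below implements a basic Karplus-Strong plucked string in Python, i.e. a one-dimensional digital waveguide. It is only a minimal stand-in for the friction models and 3D waveguide meshes investigated in Medialogy, and every parameter value is a placeholder.

    # Minimal Karplus-Strong string: a noise burst circulating in a damped
    # delay-line loop. Illustrative only; not one of the Medialogy models.
    import numpy as np

    def pluck_string(freq=220.0, sr=44100, duration=1.0, damping=0.996):
        delay = int(sr / freq)                  # loop length sets the pitch
        line = np.random.uniform(-1, 1, delay)  # noise burst acts as the pluck
        out = np.zeros(int(sr * duration))
        for n in range(len(out)):
            out[n] = line[n % delay]
            # two-point average models the string losses (a gentle low-pass)
            line[n % delay] = damping * 0.5 * (line[n % delay] + line[(n + 1) % delay])
        return out

    tone = pluck_string(220.0)  # e.g. write to a WAV file or send to the audio device

Longer delay lines give lower pitches and a smaller damping factor shortens the decay, which is the kind of direct physical-parameter control that makes such models attractive for interactive media.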

3.2. Multimodal systems

We approach the problem of "making sound with gestures" by drawing on psychological theories of expressiveness, focusing in particular on applications dealing with inter-modality and mixed reality environments. HCI design can benefit from this approach because of the quantitative methods that can be applied to measure expressiveness [2].

Interfaces can be used to convey expressiveness, an additional layer of information that supports interaction with the machine. This information can be encoded as spatio-temporal schemes, and the role of multimodality and multi-sensory communication will be central in the design of next-generation interfaces. As a consequence, and as suggested by Gestalt theory, non-speech communication will play an important role in the information stream established between machines and users.
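One of the quantitative HCI laws referred to above is Fitts' law, which we have also examined from an audio perspective [4]. The sketch below shows the standard Shannon formulation of the law; the intercept and slope values are placeholders for illustration, not coefficients fitted in the cited study.

    # Shannon formulation of Fitts' law: MT = a + b * log2(D / W + 1).
    # The coefficients a and b below are placeholder values.
    import math

    def index_of_difficulty(distance, width):
        return math.log2(distance / width + 1.0)  # in bits

    def predicted_movement_time(distance, width, a=0.05, b=0.12):
        return a + b * index_of_difficulty(distance, width)  # in seconds

    # e.g. reaching a 2 cm wide target located 30 cm away:
    print(predicted_movement_time(distance=30.0, width=2.0))
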
3.3. Sound design to enhance presence

Virtual environments offer new affordances for multimodal experiences and new roles for sound. However, most research has addressed the visual modality and the technologies supporting this sense. We are investigating how sound and images interact in virtual environments, and conclusions from the project show that new approaches to sound design for virtual environments can be applied successfully. In a series of experiments, 18 participants were exposed to a virtual environment containing such new techniques for implementing sound. Through these experiments we gained an understanding of the degree to which multimodality works within virtual environments. The project found that illusions such as the ventriloquism effect and visual dominance also exist within virtual environments, and that merging information from the auditory and visual modalities can improve memory performance.

Figure 1. The Prague viewpoint used in the experiments on sound design for multimodal experiences and presence research.

A paper [5] on the relationship between sound in movies and sound in VR has been accepted at the School of Sound workshop, an international venue taking place in London in April 2005, where professionals from the movie industry discuss their experiences with sound design.

3.4. Interactive installations

The cyber angel [6] is an installation featured at the Qi and Complexity conference in Beijing, November 2004. The idea behind the installation is to create an interpretation of the movie "Der Himmel über Berlin" by Wim Wenders, in which an angel is able to read people's thoughts.

In this installation, the angel shown in Figure 2 is augmented with a camera connected to a laptop running a motion tracking algorithm, and with loudspeakers hidden inside the wings which deliver the sampled thoughts of the people passing in front of the angel. The main idea of the installation is that the person wearing the wings perceives that they are listening to the thoughts of the people passing by. The technical part of the project consists of tracking the motion, distance and position of the people seen by the angel and varying the sonic characteristics of the sampled thoughts (amplitude and frequency content cues) in such a way that the person personifying the angel feels as if he or she is listening to what the passers-by are thinking.

Figure 2. The cyber angel, an interactive installation based on Wim Wenders' movie "Der Himmel über Berlin".
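A minimal sketch of such a distance-to-sound mapping is given below; the function name and parameter ranges are illustrative assumptions, not the installation's actual code. The closer a tracked person is to the angel, the louder and brighter the played-back thought becomes.

    # Illustrative mapping from tracked distance to playback parameters
    # (amplitude and a low-pass cutoff as a simple "frequency content" cue).
    # Names and ranges are assumptions, not the installation's actual patch.
    def thought_parameters(distance_m, max_distance_m=5.0):
        proximity = max(0.0, min(1.0, 1.0 - distance_m / max_distance_m))
        amplitude = proximity                   # louder when close to the angel
        cutoff_hz = 500.0 + proximity * 7500.0  # duller when far, brighter when near
        return amplitude, cutoff_hz

    print(thought_parameters(1.5))  # a passer-by 1.5 m in front of the angel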

4. TEACHING

Medialogy currently enrols students holding a short-cycle degree in Designer of New Media or an equivalent qualification. Approximately 85% of the students are graduates of the nationwide Multimedia Designer programme. Through their studies in multimedia design, these students have been exposed to areas within design and HCI, have a keen interest in the field, and appear well versed in applying current HCI methodologies and concepts.

The students arrive with a firm background in designing for new media and have previously been trained in designing for end users, focusing to a large degree on the aesthetic aspects and properties of using applications and technology rather than on solving more traditional engineering problems in creating applications. To engage the students' interest, problems and projects initially diverge from the usual task-centric formulation towards an approach centred on the user and on the technical problems encountered; initial project ideas are formulated so that they involve designing methods and technologies for communication and interaction that are aesthetically pleasing and interesting, as described in [1]. However, through working with the projects students acquire knowledge and understanding of topics within areas such as music technology, computer vision, software engineering and human perception, which gives them a deeper technological understanding.

4.1. Bachelor education

Several sound-related classes, such as auditory perception, sound synthesis, and sound for games and animation, are taught at the Bachelor level. The main goal of the Bachelor programme is to give students a broad knowledge of new media technology, so sound is explored mostly in its relationship to other media rather than as a form of communication in its own right.

4.2. Master education

Since September 2004, graduates of the Medialogy Bachelor programme, as well as other qualified students, can pursue their studies in interactive media through a Master programme. Concerning sound, students enrolled in the Medialogy Master focus on the role of sound in multimodal interfaces, advanced sound synthesis for interactive media and augmented sonic spaces, together with ubiquitous computing, collaborative learning environments, handheld devices, interactive toys, sensor technologies, virtual reality systems and motion capture systems. Students work at the VR lab in Aalborg, one of the largest VR installations in Europe. In order to design interactive sonic spaces, students work with an 8-channel surround sound system in which each loudspeaker is located at one of the vertices of the CAVE.

5. STUDENT PROJECTS

5.1. Connecting people at a train station

Typical Medialogy projects combine motion tracking with interactive soundscape design, usually implemented in Max/MSP and Jitter. As an example, a project called "Connecting strangers at a train station" consisted of designing a virtual instrument, in the form of a performance space placed at a train station in Denmark, that establishes communicative connections between strangers by letting users of the system create soundscapes together across the rails.

Figure 3. A description of the installation which connects strangers at a train station.

The motion of passengers is detected using motion tracking algorithms, and this motion stimulates strangers to interact through non-speech sound. By making a discreet, interactive system that does not require physical contact with hardware, we hope to keep the system compatible with normal social behaviour patterns in public space.

5.2. The Lego composer

The Lego Composer is a loop-based sequencer system with a tangible interface. It offers an easy way of playing with music without any prior knowledge of notes, composition, synthesis or harmony. It is a straightforward and simple approach to composing music, using simple tangible objects, Lego blocks, that represent music loops in different variations.

Figure 4. The Lego Composer.
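A highly simplified sketch of how such a tangible loop sequencer could be organized is shown below; the block identifiers, loop file names and detection input are hypothetical and only serve to illustrate the mapping from blocks to loops.

    # Hypothetical mapping from detected Lego blocks to sample loops.
    # Block names and file names are illustrative, not the actual system.
    BLOCK_TO_LOOP = {
        "red_2x4":    "drums.wav",
        "blue_2x4":   "bass.wav",
        "yellow_2x2": "melody.wav",
    }

    def active_loops(detected_blocks):
        # The loops to be mixed and played in sync for the current bar.
        return [BLOCK_TO_LOOP[b] for b in detected_blocks if b in BLOCK_TO_LOOP]

    # e.g. the tangible interface reports two blocks currently on the board:
    print(active_loops(["red_2x4", "yellow_2x2"]))

Placing or removing a block then simply adds or removes the corresponding loop from the mix, which is what makes the interface approachable without any knowledge of notation or harmony.
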
6. CONCLUSIONS

Medialogy at Aalborg University Copenhagen started less than one year ago, but students and staff members are already active in several international and national conferences, among which NIME 2005, Qi and Complexity 2004, School of Sound 2005, Danish HCI 2004, Presence, EUSIPCO 2005 and Forum Acusticum 2005. Medialogy researchers are also involved in several European networks, such as the Presence consortium.

Furthermore, Juraj Kojs has had pieces performed in Chile, Cuba, Denmark, Great Britain, Italy, Holland, Slovakia and the US. Medialogy is also organizing the Sound in Interactive Media Workshop, which will take place in Copenhagen in May 2005.

7. ACKNOWLEDGEMENTS

The authors would like to thank the Benogo team, Erik Granum, Niels Bottcher, Ole Gregersen, Jakob Olsen, Lars Pellarin, David Rasmussen, Lars Holmquist, Henning Dons Andersen, Philip Jerving, Katja Mohr and Michel Guglielmi.

8. REFERENCES

[1] Nordahl, R. and Serafin, S. "Medialogy: a bridge between technology and creativity based on the Aalborg model", Proceedings of ICMC 2004.

[2] Serafin, S. "Physical synthesis of bowed string instruments", in K. Greenebaum (ed.), Audio Anecdotes, A K Peters, 2005.

[3] Serafin, S., De Goetzen, A., Kojs, J. and Nordahl, R. "Multimodal interaction in physics-based sound synthesis", Proc. Danish HCI Workshop, 2004.

[4] De Goetzen, A., Kojs, J., Serafin, S. and Nordahl, R. "Fitts' law from an audio perspective", Proc. Danish HCI Workshop, 2004.

[5] Nordahl, R. "Future Roles of Film Sound Design in VR Applications", The School of Sound, 2005.

[6] Guglielmi, M. and Serafin, S. "Cyberangel", in R. Ascott (ed.), Qi and Complexity, Beijing, China, 2004.