After the first year of Rencon

Rumi Hiraga*, Roberto Bresin†, Keiji Hirata‡, Haruhiro Katayose**
* Bunkyo University, † KTH, ‡ NTT, ** Kwansei Gakuin University and PRESTO/JST
email: rhiraga@

Abstract

Rencon, the CONtest for performance RENdering systems, started in 2002. Because there has been no method for evaluating systems whose output is interpreted subjectively, and because performance rendering is a research area not only of computer science but also of musicology, psychology, and cognition, Rencon has two roles: (1) pursuing evaluation methods for systems whose output involves subjective issues, and (2) providing a forum for researchers in the several fields related to performance rendering. Rencon was held twice in 2002, as workshops combining technical presentations with musical contests. In this paper, we describe the two Rencon workshops, analyze the results of their musical contests, and discuss the practical problems we faced, especially from the point of view of a common meeting ground. Although Rencon is not yet large, we conclude that it made a good start in diffusing performance rendering research and in drawing people's attention to computer music.

1 Introduction

A performance rendering system generates an expressive performance of a musical piece [1][5][9][12][14][16][18]. For a system whose result is appreciated subjectively, it is difficult to define measurable items for evaluation. Thus there has been no way for people to understand a performance rendering system literally through a "performance evaluation." If performance rendering systems could be evaluated from both technical and musical points of view, we would immediately understand the excellence of a system. Performance rendering research involves researchers and specialists from several fields: those in computer science, musicology, psychology, and cognition all take part in it. Rencon [15], the CONtest for performance RENdering systems, started in 2002.
Given the lack of evaluation methods and the interdisciplinary nature of performance rendering, we need (1) evaluation methods for systems whose output is considered to include subjective issues, and (2) a forum for performance rendering research. In order for Rencon to take on these roles, it is run in a workshop style with both technical presentations and musical evaluation through a contest. In 2002, we held two Rencon workshops: the first on July 6 as a satellite workshop of ICAD 2002 (International Conference on Auditory Display) in Kyoto, Japan, and the second on September 28 as a special event of FIT 2002 (Forum on Information Technology) in Tokyo. Hereafter we call them ICAD-Rencon and FIT-Rencon, respectively. After the two Rencon workshops, we noticed several practical problems, especially from the point of view of a common meeting ground. Last year at ICMC, we bravely proclaimed that a performance rendering system will win the Chopin contest within half a century [3] by passing through Rencon. Although some geniuses, referring to the human spirit, have foreseen what computers would be able to do from the beginning to the middle of the twenty-first century [11][17], nobody has predicted the championship of a computer system at a musical contest. Though challenging, the pursuit of this proclamation will bring fruitful discoveries to computer music research. In Section 2, we classify performance rendering systems into three types. In Section 3, we describe the first and second Rencon. In Section 4, we examine the contest results of the second Rencon and describe problems that arose in the two workshops. Finally, in Section 5, we summarize the two Rencon workshops of 2002.

2 Rendering Systems

Performance rendering consists of three stages: (1) preprocessing, where music analysis or performance learning occurs, (2) performance rendering, and (3) post-processing, where the expression of the rendered performance is modified manually [3].
Using the degree of human intervention in each of the above stages, we categorize systems into three types: (1) manual rendering, (2) assistance type, and (3) autonomous type. Manual rendering generates an expressive performance by hand using sequencer software that refers to musical sheets, especially in the case of classical music. This corresponds to manual intervention in the third stage of performance rendering. Both the assistance type and the autonomous type are research software systems. While assistance-type systems provide users with better usability and the ability to use more musical information than sequencer software, performance expression or clues to expression are given by humans, not automatically. The ultimate style of the autonomous type has the ability to learn example performances and analyze music automatically, generating an expressive performance based on its individual technique.

  Type         Preprocessing   Rendering Engine   Post Processing
  Manual       -               -                  O
  Assistance   O               O                  x
  Autonomous   O               x                  x

  -: the type does not include this stage; O: the type includes manual
  intervention in this stage; x: the type should not include manual
  intervention in this stage.

Table 1: Manual intervention in three types of performance rendering (current)

Table 1 shows the current possible manual intervention (indicated by "O") during the process of each type of performance rendering system. Since no one has succeeded in the complete and satisfying automation of music analysis such as GTTM (Generative Theory of Tonal Music), even systems of the autonomous type are given information manually or in an ad hoc way. In both the assistance and autonomous types, fine tuning of each note in the third stage is not expected (it is prohibited).

3 Rencons in 2002

3.1 As an ICAD 2002 satellite workshop

The first Rencon was held as a satellite workshop of ICAD 2002 (International Conference on Auditory Display) in Kyoto, Japan [7][10]. In the whole-day workshop, there were eight technical presentations, a general discussion on a common basis for performance rendering contests, and a listening comparison accompanied by public voting. Presentations covered "perception and theory," "methodology and architecture," and "system and application." As this was the first workshop with technical presentations and a listening comparison, there was no restriction on music entries; music of any genre and by any composer was accepted. Six performances (one manually rendered and five automatically rendered by systems) were played on an acoustic grand piano with a MIDI controller (called MIDI Bar). Since there was no objective way to evaluate music performance, the vote was based simply on whether a listener liked or did not care for a performance.
The first prize went to Hashida's manually rendered piece "Nina," a piano solo used in a Japanese animation movie, which received a "like" evaluation from 79% of the thirty-five listeners. Among the system-rendered performances, "Letter 48" by Bellman, rendered by Director Musices (DM), took the first prize ("like" by 71%). A problem caused by an idiosyncrasy of the acoustic piano left many performances distorted.

3.2 As a special event of FIT

The second Rencon was held on September 28 as a special event of FIT 2002 (Forum on Information Technology)1 in Tokyo [2][4]. While ICAD-Rencon was a paid workshop, FIT-Rencon was open and free, playing the role of enlightening people about computer music, performance rendering, and Rencon. Its half-day workshop program was as follows:

1. Introduction: Rencon's purpose and significance.
2. Explanation of each system, listening comparison, and voting.
3. Panel discussion: on MIDI performance, musical analysis and performance, and performance rendering systems.
4. Lecture: Rencon in the future.

Altogether sixty-three people voted, despite its being the last day of FIT on a rainy Saturday afternoon. The winner was a system of the assistance type. We also introduced musical judgment by a specialist. He listened to the performances before FIT-Rencon and evaluated them with respect to musical structure and its effect on performance. Musical pieces were restricted to those composed either by W. A. Mozart or F. Chopin. Three of the ten pieces were by Mozart, and four of the rest were Etude Op. 10, No. 3, though there was no compulsory piece. Although each system is developed in a different environment (different hardware, OS, programming language, sound generator, musical score, and reference performances), it is not desirable to use different sound generators at the listening comparison, in order to concentrate on listening to "what is rendered," not "how a piece is performed." This is a big difference between actual piano contests for humans and a contest for music systems.
At FIT-Rencon, we found it was not possible to use a single sound generator, because some systems were tuned to specific generators and some performances did not sound good on a generator of a different type from the one they usually use. Therefore, each music entrant chose a sound generator to use at a rehearsal.

4 Analysis

4.1 Results of the music evaluation at FIT-Rencon

At FIT-Rencon, listeners answered two questions for each performance: (1) how much they liked the performance, and (2) whether they thought the performance was natural. They voted with points (from 1 (worst) to 5 (best)) for the ten performances. We ignored the listeners' musical experience and knowledge in deriving the results. The answers were regarded as subjective and intuitive, and treated evenly.

1 FIT is the biggest forum on information technologies in Japan.
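One simple way to turn such 1-to-5 point votes into a ranking is to average each performance's points and rank by the mean, with equal means sharing a rank. The sketch below is illustrative only; the performance names and vote counts are made up, not the actual FIT-Rencon data, and the paper does not specify its exact aggregation rule:

```python
# Aggregate 1-5 point votes into a ranking (illustrative data, not FIT-Rencon's).
votes = {
    "Performance A": [5, 4, 5, 3, 4],
    "Performance B": [3, 3, 4, 2, 3],
    "Performance C": [5, 4, 5, 3, 4],  # same mean as A, so the two tie
}

# Mean points per performance.
means = {name: sum(v) / len(v) for name, v in votes.items()}

# Standard competition ranking: rank = 1 + number of strictly better means.
ranking = {
    name: 1 + sum(m > means[name] for m in means.values())
    for name in means
}

for name in sorted(ranking, key=ranking.get):
    print(name, ranking[name], round(means[name], 2))
```

With this rule, tied performances (as with DM and (Manual) at FIT-Rencon) receive the same rank and the next rank is skipped.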

                                           Ranking
  System Name         Type         FIT-Rencon*   1   2   3   4   5*  6*   7   8
  Muse                Assistance        1        1   1   1   1   1   1    2   1
  (Machine Learning)  Autonomous        2        3   2   2   3   2   3    3   3
  MIS                 Autonomous        3        2   5   5   2   3   2    1   2
  CiP2                Assistance        4        5   3   6   5   4   5    5   5
  Yutaka              Assistance        5        4   4   3   4   6   8    8   6
  DM                  Autonomous        6        7   7   4   7   5   4    6   7
  (Manual)            Manual            6        8   6   7   6   8   7    7   4
  CiP1                Assistance        8        6   8   8   8   7   6    4   8
  Kagurame            Autonomous       10        9   9  10  10   9   9    9   9
  HHH                 Autonomous        9       10  10   9   9  10  10   10  10

Muse, (Machine Learning), and DM played Mozart; the others played Chopin. Yutaka, (Manual), Kagurame, and HHH played Etude Op. 10, No. 3. At FIT-Rencon and in comparisons 5* and 6*, the audiences were given explanations of the systems before voting, while no explanation was given in the other six.

Table 2: Results of the listening comparisons

After FIT-Rencon, we conducted the listening comparison with eight further groups. Altogether 305 people voted. Unlike at FIT-Rencon, the systems were explained to only two of the groups (102 people). Table 2 shows the results of the listening comparisons at FIT-Rencon and by the eight groups. Since Color in Piano (CiP) provides a user-friendly interface on a MIDI keyboard for rendering expressive performances, one of the two performances by CiP was made when a user first started to use the system (CiP1), and the other after ten minutes of practice (CiP2). In the table, system names are abbreviated: HHH for Ha-Hi-Hun, MIS for Music Interpretation System, and DM for Director Musices. Brief descriptions of each system by its researchers, with references, are given in [4]. Entries that have no system name are parenthesized. We can categorize the ten performances into three groups by their rankings: the top group consists of Muse, (Machine Learning), and MIS; the bottom group consists of CiP1, Kagurame, and HHH; and the middle group consists of the other four. A music specialist's evaluation agreed on the top group, though he added (Manual) to it. From the results, we observe the following.
* The results show a certain tendency regardless of the constituency of the voters.
* Although the listeners did not necessarily have musical training, their understanding of the performances was similar to that of the music specialist.
* Explaining the systems does not have much effect on the evaluation.
* Since some systems of the autonomous type were rated higher than systems of the other two types, there is some possibility of a system-rendered performance winning at a real piano contest.

A closer investigation into the relationship between the voters' musical experience and their evaluations would yield interesting findings. Because the order of the performances may affect the evaluation, we should also consider how to present performances to audiences.

4.2 Problems

All of the concrete problems fall under one larger problem: getting more researchers to participate in Rencon. This is a very important, realistic issue for Rencon's survival. To get more researchers involved in "performance rendering," the genres and instruments of a contest could be expanded. For instance, the genre need not be limited to classical music, but could also be open to pop and jazz. In terms of instruments, the violin, saxophone, and percussion could be taken into account. Furthermore, we should provide useful tools to facilitate developing performance rendering systems, along with benchmark data serving as questions and "correct" answers. The following are some of the concrete problems and our current solutions to them.

Sound source. Because music entrants render their performances in their own environments, it is important to notify them in advance of the official sound generator to be used at Rencon, or at least to disclose the velocity curve of that sound generator versus the MIDI velocity value.

Data preparation. So far, music entrants have prepared the reference performance data for rendering and digitized the musical sheets by themselves.
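One hypothetical shape for such shared note-level data, pairing each score note with its performed counterpart so that a system can read off timing deviations directly, is sketched below. All element and attribute names here are illustrative assumptions, not the actual format proposed by the Rencon organizers:

```python
# Parse a hypothetical score/performance correspondence file (element and
# attribute names are invented for illustration) and compute each note's
# Note-On timing deviation from the score.
import xml.etree.ElementTree as ET

DATA = """
<piece title="Example piece">
  <note id="n1" pitch="E4" score_onset="0.00" perf_onset="0.02" velocity="64"/>
  <note id="n2" pitch="B3" score_onset="0.50" perf_onset="0.55" velocity="58"/>
</piece>
"""

root = ET.fromstring(DATA)
deviations = {
    note.get("id"): float(note.get("perf_onset")) - float(note.get("score_onset"))
    for note in root.iter("note")
}
print(deviations)
```

Shipping data in a form like this alongside a compulsory piece would spare entrants from digitizing scores and aligning performances themselves.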
When Rencon designates a compulsory piece, it is desirable that data related to the piece also be provided, so that each system has less of a burden in preparing data. We plan to provide both performance and score data by

XML [6][13]. This information offers the correspondence between a note in the score and the performed note. Even the deviation of Note-On timing from the score, and other information, may be included in the future. Such data preparation will also increase the number of researchers participating in Rencon, because they can concentrate on their systems rather than on data formats or data collection.

Compulsory music. For a common evaluation ground, a compulsory piece is necessary. At the same time, since it places a strong restriction on each performance rendering system, a compulsory piece may decrease the number of Rencon entrants.

5 Concluding Remarks

We have described a classification of performance rendering systems and the Rencon workshops held in 2002, with their analysis and problems. As for evaluating a system in a musical contest along with its technical paper, no clear relationship between the music and the paper has emerged, while the consistency between the public voting and the judgment of a music specialist suggests that public voting at a musical contest is meaningful. Since researchers from various fields gather at Rencon, we think it has fulfilled its role as a forum. After the two Rencons, we received inquiries from both researchers and business people. Though small, Rencon has taken the first step toward its roles. Kurzweil's father, a famous musician, had difficulty whenever he wanted to hear his own symphony, because he had to hire an orchestra to play it, which cost time and money. Another composer mentioned something similar: when he composed a ballet suite and wanted to present the music with his expressive intention to the ballet dancers, he wished he had had an automatic rendering system. Thus, performance rendering is not only an interesting research object but also a convenient tool for musicians. Furthermore, if computer music research puts more focus on musical expression, machine-generated performance can provide new business opportunities.
Though Rencon still has little publicity and less public acceptance compared with other competitive events for computer systems, we have heard many warm words of encouragement. In 2003, the third Rencon was held in August as a workshop of IJCAI (International Joint Conference on Artificial Intelligence) 2003 in Mexico. GigaPiano by Tascam was used as the official sound source there [8].

Acknowledgement

We express our great appreciation to K. Noike and M. Hashida for their efforts for Rencon. Rencon is funded by the Kayamori Foundation and the Japan Science and Technology Corporation.

References

[1] Bresin, R. and Friberg, A.: Emotional coloring of computer-controlled music performances, Computer Music Journal, 24(4), pp. 44-63, 2000.
[2] Hashida, M., Noike, K., Hiraga, R., Hirata, K., and Katayose, H.: A Report on FIT 2002 Rencon Workshop, 2002-MUS-48, pp. 35-39, 2002.
[3] Hiraga, R., Hashida, M., Hirata, K., Katayose, H., and Noike, K.: RENCON: Toward a New Evaluation Method for Performance Rendering Systems, Proc. of ICMC 2002, 2002.
[4] Hiraga, R., Hirata, K., and Katayose, H.: The Second Rencon: Performance Contest, Panel Discussion, and the Future, Proc. of FIT, pp. 116-119, 2002.
[5] Hirata, K., Hiraga, R., and Aoyagi, T.: Next generation performance rendering - exploiting controllability, Proc. of ICMC 2000, pp. 360-363, 2000.
[6] Hirata, K., Noike, K., and Katayose, H.: A Proposal for a Performance Data Format, IJCAI 2003 Workshop on Methods for Automatic Music Performance and their Applications in a Public Rendering Contest, 2003.
[7] ICAD 2002 Rencon Workshop Proceedings, with a CD-ROM of performances, 2002.
[8] Ikebuchi, T. and Katayose, H.: On Rencon Environment - Musical Instrument -, 2003-MUS-50, 2003.
[9] Ishikawa, O., Aono, Y., Katayose, H., and Inokuchi, S.: Extraction of Musical Performance Rules Using a Modified Algorithm of Multiple Regression Analysis, Proc. of ICMC 2000, pp. 348-351, 2000.
[10] Katayose, H., Hiraga, R., Hirata, K., Noike, K., and Hashida, M.: Report of ICAD-Rencon, 2002-MUS-47, pp. 79-83, 2002.
[11] Kurzweil, R.: The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Penguin, 1998.
[12] Mantaras, R. L. and Arcos, J. L.: The synthesis of expressive music: A challenging CBR application, Lecture Notes in Artificial Intelligence 2080, pp. 16-26, Springer Verlag, 2001.
[13] Noike, K., Hirata, K., and Katayose, H.: A Report of the Rencon-Kit, the 1st Release, 2002-MUS-50, 2002.
[14] Oshima, C., Miyagawa, Y., Nishimoto, K., and Shirosaki, T.: Two-step Input Method for Supporting Composition of MIDI Sequence Data, Proc. 1st Int'l Workshop on Entertainment Computing, pp. 253-260, 2002.
[15] Rencon HP:
[16] Suzuki, T., Tokunaga, T., and Tanaka, H.: A Case Based Approach to the Generation of Musical Expression, Proc. of IJCAI 1999, pp. 642-648, 1999.
[17] Turing, A. M.: Computing Machinery and Intelligence, Mind, 59(236), pp. 433-460, 1950.
[18] Widmer, G.: Machine Discoveries: A Few Simple, Robust Local Expression Principles, Journal of New Music Research, 31(1), pp. 37-50, 2000.