A musician's approach to physical modelling

Mark Pearson and David M. Howard
Department of Electronics, University of York, York, YO1 5DD, England.
16th May 1995

ABSTRACT: This paper describes a software system for sound synthesis by physical modelling. The system is based on a new synthesis engine, the result of two years' research, and provides an orchestra language for describing instruments and a score language for describing complex time-varying excitations. The instruments are true physical models, allowing multiple excitations and virtual pickups, and giving access to the physical parameters at any point on an instrument. A working implementation written in C++ has been used extensively to produce a variety of high quality timbres.

Introduction

Currently the main general-purpose techniques for sound synthesis by physical modelling are modal synthesis [Adrien, J.M. 1991] and waveguide filters [Smith, J. 1987; Smith, J. 1992]. The synthesis model used in this system differs from both of the above in that it takes its inspiration from cellular automata modelling techniques. The main goals of the research are to:

* synthesise sounds with a convincing acoustic quality;
* provide a consistent, open-ended language for the description and excitation of instruments;
* create instruments which (a) are truly dynamic and responsive to multiple excitations, and (b) allow pickups to be placed at any locations simultaneously;
* minimise the number of parameters which the user has to control to synthesise sounds, without limiting the range of possible instruments and timbres;
* and finally, make the model as efficient as possible without compromising the goals given above.

The synthesis model

The material out of which instruments are constructed is a cellular model which has been developed to exhibit the wave phenomena observed in the physical world (reflection, refraction and diffraction). Although not based on any specific physical material, the model allows the construction of an infinite variety of instruments using just three basic principles:

* The shape and size of each piece of material.
* The way in which regions of the material are damped, and which regions are fixed in one position or free to move.
* How different pieces of material are glued and joined together.
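For illustration only, such a cellular material can be pictured as a grid of cells in which each cell's displacement is updated from its four neighbours, with per-cell damping and an optional "locked" flag. The C++ below is a minimal sketch of that picture, not the actual synthesis engine; the Sheet type, update rule, damping value, pickup position and all other names and constants are assumptions made purely for this example.

#include <cstdio>
#include <vector>

struct Sheet {
    int w, h;
    std::vector<double> prev, cur, next;    // displacement at t-1, t, t+1
    std::vector<double> damp;               // per-cell damping (1.0 = undamped)
    std::vector<char>   locked;             // locked cells are held still

    Sheet(int w_, int h_)
        : w(w_), h(h_),
          prev(w_ * h_, 0.0), cur(w_ * h_, 0.0), next(w_ * h_, 0.0),
          damp(w_ * h_, 0.9995), locked(w_ * h_, 0) {}

    int idx(int x, int y) const { return y * w + x; }

    // One time step: each interior cell is updated from its four neighbours.
    // Cells on the rim are never updated, so they behave like a locked perimeter.
    void step(double c2 = 0.25)             // c2 <= 0.5 keeps this scheme stable
    {
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                int i = idx(x, y);
                if (locked[i]) { next[i] = 0.0; continue; }
                double lap = cur[idx(x - 1, y)] + cur[idx(x + 1, y)]
                           + cur[idx(x, y - 1)] + cur[idx(x, y + 1)]
                           - 4.0 * cur[i];
                next[i] = damp[i] * (2.0 * cur[i] - prev[i] + c2 * lap);
            }
        prev.swap(cur);
        cur.swap(next);
    }
};

int main()
{
    Sheet r(40, 30);
    r.cur[r.idx(12, 10)] += 1.0;            // a single impulse, as in figure 1
    for (int n = 0; n < 44100; ++n) {
        r.step();
        std::printf("%f\n", r.cur[r.idx(30, 20)]);   // a virtual pickup at one cell
    }
    return 0;
}

In such a sketch, locking individual cells and varying the damping over regions correspond directly to the second of the three principles above, while the first principle is expressed by the overall grid dimensions and shape.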

Figure 1: Two pieces of cellular acoustic material after a single impulse has been applied to both.

The script language

The script language consists of two parts: an orchestra language containing declarations of instruments in terms of their shape, fundamental frequency and overall decay time; and a score language for describing when, where and how to excite them. By default every cell is free to move and damping (determined by the decay time) is uniform across the material. However, the orchestra allows regions of local damping to be specified and individual cells or groups of cells to be locked (held still). It also allows different acoustic components to be coupled together. The example below was used to generate the results shown in figure 1.

Rectangle r: 300 Hz, 350 Hz, 10 secs; lockcorners; ...
Circle c: 350 Hz, 10 secs; lockperimeter; ...

Score 1 secs:
    At start for 0.2 msecs:
        r(0.3,0.4).applyforce: 1.0;    // r at position x=0.3, y=0.4
        c(0.2,0.3).applyforce: 1.0;    // left=0.0, right=1.0
    ...                                // bottom=0.0, top=1.0
...                                    // '...' means end of block of instructions

The score notation provides control structures for scheduling events in time. In the next example the comments in angle brackets show when the instructions in each block would be executed; in reality these comments would be replaced by arbitrary calculations, nested control structures, excitations to instruments, screen output, etc.

Score 10 secs:
    From 0 secs to 5 secs:    <from 0 secs to 5 secs> ...
    Before 5 secs:
        At start:             <once at 5 secs> ...
        From start to end:    <from 0 secs to 5 secs> ...
    After 6 secs:             <from 6 secs to 10 secs> ...
    ControlRate 100:          <on every 100th sample> ...
    Every 0.5 secs:           <every 0.5 secs> ...

There are two ways to couple components together: join, which couples two pieces of material edge-on so that they act as one continuous piece; and glue, which physically connects regions of components together. Graphical output from instruments is available within the system in the form of animations. These have proved to be an invaluable aid in understanding and debugging the sound output.
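The paper does not describe how the timing constructs shown above are resolved. One plausible reading is that each construct defines an active window and a firing period which are tested against a per-sample clock. The C++ below is a hypothetical sketch of that reading only; the Event structure, the sample rate and all names are assumptions, not the system's actual scheduler.

#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

struct Event {
    double start, end;                      // active window in seconds
    double period;                          // 0 = act on every sample in the window
    std::function<void(double)> action;     // called with the current time
};

int main()
{
    const double sr = 44100.0;              // assumed sample rate
    const double scoreLen = 10.0;           // "Score 10 secs"
    std::vector<Event> events = {
        // From 0 secs to 5 secs: ...
        { 0.0, 5.0, 0.0, [](double) { /* runs on every sample in [0, 5) */ } },
        // After 6 secs: ...
        { 6.0, scoreLen, 0.0, [](double) { /* runs from 6 secs to the end */ } },
        // ControlRate 100: run on every 100th sample
        { 0.0, scoreLen, 100.0 / sr, [](double) { /* control-rate updates */ } },
        // Every 0.5 secs: ...
        { 0.0, scoreLen, 0.5, [](double t) { std::printf("tick at %.2f s\n", t); } },
    };

    const long nSamples = static_cast<long>(scoreLen * sr);
    for (long n = 0; n < nSamples; ++n) {
        double t = n / sr;
        for (const Event& e : events) {
            if (t < e.start || t >= e.end)
                continue;
            if (e.period == 0.0) { e.action(t); continue; }
            // fire whenever a period boundary falls within the current sample
            if (std::fmod(t - e.start, e.period) < 1.0 / sr)
                e.action(t);
        }
        // ... advance the synthesis engine by one sample here ...
    }
    return 0;
}

Under this reading, nested control structures would simply narrow the window and period inherited from the enclosing block.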

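The glue coupling described above can, in the simplest case, be pictured as a spring connecting a cell of one component to a cell of another, so that energy leaks between the two pieces. The following self-contained C++ sketch illustrates that picture using two 1-D strands of damped cells; it is illustrative only, and the Strand type, stiffness value, update rule and cell positions are assumptions rather than the system's actual coupling mechanism.

#include <algorithm>
#include <cstdio>
#include <vector>

// A damped 1-D strand of cells: a minimal stand-in for a piece of material.
struct Strand {
    std::vector<double> prev, cur, next;
    explicit Strand(int n) : prev(n, 0.0), cur(n, 0.0), next(n, 0.0) {}

    // One time step; "extra" carries any externally applied force per cell.
    void step(double c2, double damp, const std::vector<double>& extra)
    {
        for (std::size_t i = 1; i + 1 < cur.size(); ++i)
            next[i] = damp * (2.0 * cur[i] - prev[i]
                              + c2 * (cur[i - 1] - 2.0 * cur[i] + cur[i + 1])
                              + extra[i]);
        prev.swap(cur);
        cur.swap(next);
    }
};

int main()
{
    Strand a(100), b(60);
    a.cur[30] = 1.0;                         // excite a only
    const double k = 0.05;                   // assumed "glue" stiffness
    std::vector<double> fa(100, 0.0), fb(60, 0.0);

    for (int n = 0; n < 44100; ++n) {
        // glue: a spring between cell 80 of a and cell 10 of b
        double f = k * (b.cur[10] - a.cur[80]);
        std::fill(fa.begin(), fa.end(), 0.0);
        std::fill(fb.begin(), fb.end(), 0.0);
        fa[80] = f;
        fb[10] = -f;
        a.step(0.45, 0.9995, fa);
        b.step(0.45, 0.9995, fb);
        if (n % 100 == 0)                    // crude pickup on b: it only sounds
            std::printf("%f\n", b.cur[30]);  // because energy arrives through the glue
    }
    return 0;
}

Join could be pictured similarly, except that the two strands would share cells along a common boundary rather than exchanging a spring force.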
Excitations

Predefined excitations such as plucking, hitting and bowing [Woodhouse, J. 1992] are provided. New excitations can be described in terms of relationships between the instruments' physical parameters (position, velocity, force). These are accessed using notation like:

r(1/3,1/2).velocity    // the velocity of r at x=0.333, y=0.5
r(left,0.5).position   // the position of r at x=0, y=0.5
c(centre).force        // the force acting upon c at x=0.5, y=0.5

Conclusions and further work

The system is currently fully functional and is useful in its present form, but suffers from one major problem: the speed of execution. This is currently in the range of several tens to several thousands of times real time when running on a Silicon Graphics Indy workstation. This may seem rather extreme, but it must be remembered that speed is sometimes gained at the expense of unacceptably compromising the original concept. This research project has focused on the aesthetic quality of the sounds, the ease with which the system can be used in practice, and the flexibility of the synthesis model. If a system satisfies these criteria, which we believe this system has the potential to do, then waiting for the necessary hardware to make the system faster is well justified. The system could be developed in the following ways:

* By implementing the synthesis engine on custom-designed parallel processing hardware.
* By providing a graphical interface for the development of instruments.
* By using physical controllers for live performance, once the system is capable of real-time operation.
* The system is conceptually very similar to the MOSAIC system [Morrison, J. & Adrien, J.M. 1993], although the synthesis engine is entirely different. It might be possible to integrate the system with MOSAIC, since both synthesis engines are implemented in C++, allowing the advantages of both models to be used through one consistent interface.

References

[Adrien, J.M. 1991] The missing link: Modal synthesis. In Representations of Musical Signals. MIT Press, Cambridge, Massachusetts, 1991.
[Morrison, J. & Adrien, J.M. 1993] MOSAIC: A framework for modal synthesis. Computer Music Journal, 17(1):45-56, 1993.
[Smith, J. 1987] Waveguide filter tutorial. In Proceedings of the International Computer Music Conference, pages 9-16, 1987.
[Smith, J. 1992] Physical modeling using digital waveguides. Computer Music Journal, 16(4):74-91, 1992.
[Woodhouse, J. 1992] Physical modeling of bowed strings. Computer Music Journal, 16(4):43-56, 1992.