# NOVEL PERCUSSIVE INSTRUMENT DESIGN – CONVERTING MATHEMATICAL FORMULAE INTO ENGAGING MUSICAL INSTRUMENTS


Katarzyna Chuchacz, Roger Woods and Sile O'Modhrain
Sonic Arts Research Centre, Queen's University Belfast, Cloreen Park, Belfast, Northern Ireland, BT7 1NN
+44 (0)28 9097 4641

ABSTRACT

The paper gives insights into the development of new types of electronic percussion instruments, both in terms of the design of the underlying hardware and the identification of hardware implementation parameters, and in terms of the development of novel interfaces; this allows the performer to control these parameters in the context of extended expressive performance. Improvements in silicon technology provide an opportunity to realise highly realistic electronic models of instruments such as gongs, timpani and drums operating in real-time; this provides a new and exciting platform, as it opens up a number of new parameters for the performer to access. However, interaction with this new technology then becomes a major area of research in itself, as it is possible to drive these models in ways unachievable for their acoustic equivalents, and to produce sounds previously unheard. The paper describes the types of parameters available in this percussive instrument and discusses some of the challenges in developing new types of interfaces.

1. INTRODUCTION

With classical musical instruments, the interfaces have been developed over the years and have been largely governed by the physics and mechanics of the underlying sound-producing mechanism. With a novel electronic musical instrument, however, the interface is not a simple matter and there are a number of aspects that need to be considered. Moreover, it is not just the parameters that can be changed in the model that will determine the interface; the instrument designer must develop interfaces which challenge the performer in their playing of the instrument.
This means that there are several different perspectives that the designer must consider. In the world of acoustic instruments, the acoustic system that produces the instrument's sound has a certain number of constraints resulting from its physicality. Improvements made by designers change the physicality of the instrument and of the sound production mechanism itself, and through this the resulting sound produced by the system. The motivation here is to make the instrument more responsive and/or engaging for a potential performer, making it sensitive enough to capture the detailed nuances of the instrumentalist's gesture.

The situation is different in the process of novel electronic instrument design. Firstly, there is no strict relationship between the sound production engine and the process of sound excitation, which provides the instrument designer with considerable freedom. However, this has its downside, as it requires designers to be quite discerning. The design process thus comprises not only the development and implementation of the sound production algorithm, but also the interface to the system and, more critically, the mapping between the two. This makes it crucial to have very specific goals at every stage of the design process. Obviously, the motivation for developing an instrument that is highly responsive and sensitive to fine-grained nuances of the performer's gesture is the same as for acoustic instruments; the difference lies in the methods, and their variety, used to achieve this goal. Secondly, there is an aspect that is never an issue when designing acoustic instruments: the need to decide how far to go in enhancing, or emulating, an existing acoustic instrument, and how far to move away from that idea by creating a device that produces sound that could never be achieved by the equivalent acoustic system.
In this paper, aspects of the design process of an electronic musical instrument are presented and illustrated using the specific example of a novel electronic percussion synthesiser being designed at SARC. The instrument is built upon a physical-modelling-based sound synthesis algorithm, with a complex physical model of a vibrating plate as the starting point. As the reproduction of high-quality, realistic sound is computationally complex, a Field Programmable Gate Array (FPGA) realisation has been developed; this not only enables real-time operation but also provides a high degree of control and flexibility. An interface that aims to make efficient use of all the subtle, nuanced gestures of the instrumentalist is being designed. The crucial point of our design methodology is the exciting opportunity of connecting a professional player to the sound world of the real-time model, through its parameter space, before the actual instrument controller is designed. The paper presents the aspects of the design process that resulted in the first instrument set up for such experiments.

2. THE INSTRUMENT

2.1. Sound synthesis algorithm

The instrument's sound synthesis algorithm is based on a classical Kirchhoff plate model [1, 2], described by partial differential equations (PDEs). The Finite Difference (FD) technique has been chosen as the most direct way to solve the PDEs numerically [3, 4]. The algorithm discretises time and space to transform the PDEs into difference equations that can then be implemented digitally. The result is a grid of discrete points that approximates the transverse plate deflection in both time and space. The value of each grid point is updated from the values of its neighbouring points calculated in the previous iteration steps, together with its excitation value (if any), as described by equation (1):

u_{i,j}^{n+1} = Σ_{|k|+|l|≤2} α_{k,l} u_{i+k,j+l}^{n} + Σ_{|k|+|l|≤1} β_{k,l} u_{i+k,j+l}^{n−1} + Δt² f_{i,j}    (1)

The coefficients α_{k,l} and β_{k,l} involve most of the algorithm's control parameters, namely: plate stiffness κ, linear damping σ, frequency-dependent damping b₁, distance between the grid points Δx, and time step Δt (i.e. sampling frequency) [3]. Apart from those, we have the plate grid size and the excitation value.

Figure 1. Update point within the grid.

Figure 1 shows a grid fragment with the sample update point and its significant neighbours in two consecutive iteration steps. The plate stability condition, which constrains the space of available parameters, can be obtained by spectral or von Neumann techniques [5] and is given by equation (2):

Δx² ≥ 2b₁Δt + c²Δt² + √((2b₁Δt + c²Δt²)² + 16κ²Δt²)    (2)

2.2. Hardware implementation

FD schemes have high computational requirements that prevent them from being implemented in real-time on a single computer. For illustration: a MatLab model running on a P4 Centrino 1.6 GHz PC with 512 MB RAM takes over 35 minutes to produce 1 second of sound for a 100×100 square grid. The algorithm's highly concurrent nature suggests a dedicated hardware solution such as an FPGA, which is highly programmable and possesses built-in, dedicated signal processing units that allow high-performance DSP implementation. Additionally, the high memory access bandwidth allows several efficient strategies to be applied. Former work [6] has shown that it is possible to perform the calculations for sound synthesis faster than real-time. FPGAs are also desirable in terms of the project specification, as they allow interfacing to a wide range of sensors.

2.3. Real-time model

The real-time synthesis model has been implemented on a commercial hardware platform (the Xilinx XUPV2P board) containing a Xilinx Virtex II Pro FPGA. The algorithm is implemented as a network of 10 processing elements (PEs) performing calculations simultaneously, so that the values of all grid points are updated in every iteration step. Each PE is assigned a sub-domain of 1000 grid points and performs 0.8 billion operations per second, allowing a 100×100 grid to be implemented. The PE network controller communicates with a hardware interface component, receiving the set of parameters and the excitation value, and outputting the synthesis results. It is implemented in such a way that the excitation signal triggers the sound synthesis calculation, so whenever it changes a new set of parameters can be accepted. This effectively means that the parameters can be changed at each iteration step. The hardware implementation allowed us to open up the sound synthesis parameter space in real-time, to which we are providing access. The set of accessible parameters for our plate model is presented in Table 1.

| grid resolution | plate stiffness | freq. dep. damping | linear damping | grid excitation |
|---|---|---|---|---|
| Nx × Ny | κ | b₁ | σ | f |

Table 1. Sound synthesis parameters accessible in real-time.

Grid point excitation can be applied as a single value or as a function over a sub-domain of the grid points. Grid resolution is the number of grid points along each co-ordinate, i.e. Nx × Ny (with a maximum of 100 × 100). The FD scheme combines all these parameters into mathematical formulae which result in 5 abstract coefficients controlling the computation hardware. This forms the bottom layer of the instrument parameter mapping structure presented in Figure 2.
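As a concrete illustration of an FD plate update of this kind, the following Python sketch advances the lossless special case (b₁ = σ = c = 0), where the scheme collapses to a two-step update with the single coefficient μ = κΔt/Δx²; the full damped coefficients α, β are given in [3] and are not reproduced here. Grid size, time step and the choice μ = 0.2 are illustrative.

```python
import numpy as np

def laplacian(u, dx):
    # 5-point discrete Laplacian; zero padding holds the plate edge at rest
    p = np.pad(u, 1)
    return (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * u) / dx**2

def plate_step(u, u_prev, kappa, dt, dx, f=0.0):
    # one time step of u_tt = -kappa^2 * biharmonic(u) + f; the biharmonic
    # (Laplacian applied twice) yields the |k|+|l| <= 2 stencil of scheme (1)
    biharm = laplacian(laplacian(u, dx), dx)
    return 2.0 * u - u_prev - (kappa * dt) ** 2 * biharm + dt**2 * f

# strike the plate and iterate: mu = kappa*dt/dx^2 = 0.2 stays inside the
# lossless stability bound mu <= 0.25
N, dt, dx = 100, 1.0 / 44100, 0.01
kappa = 0.2 * dx**2 / dt
u = np.zeros((N, N))
u_prev = np.zeros((N, N))
u[N // 2, N // 2] = 1e-3            # impulse excitation at the centre
for _ in range(100):
    u, u_prev = plate_step(u, u_prev, kappa, dt, dx), u
```

Every grid point in `plate_step` depends only on values from previous iterations, which is the concurrency that the FPGA network of processing elements exploits.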

Figure 2. Instrument parameter mapping structure.

2.4. The interface design

A sound synthesis algorithm hardware implementation that works in real-time, and that can be driven and read in a number of ways, makes it possible to derive a highly flexible instrument in which many parameters are fully open. This allows the exciting opportunity of connecting the instrumentalist to the entire sound world of the model through the parameter space, before the instrument's controller itself is actually defined. On this basis, a strategy has been developed for the instrument interface design, best illustrated by the diagram in Figure 3.

Figure 3. Interface design approach.

From our perspective, the crucial point is the stage at which we connect a skilled percussion instrumentalist with the physical model's hardware implementation, giving him or her, through the space of physical parameters, full control over the model of a vibrating plate in real-time. This stage then forms the foundation for the whole interface design process. This approach has not been previously explored for physically-based modelling. Thus, a critical aspect of the design has been to optimise the hardware implementation strategy in such a way that the parameter interface is fully available, allowing a player maximum flexibility and extensive real-time control of the implemented physical model. During the development of the implementation, a number of priorities became clear for novel instruments, which we relate to the specific example given below.

3. IMPLICATIONS OF THE HARDWARE IMPLEMENTATION

3.1. Compromise

The hardware implementation strategy of using a time-domain FPGA implementation meant that the parameters listed in Table 1 could now be changed in real-time. The specific nature of the algorithm imposes a stability condition (equation 2) that has to be met in the parameter space calculations. In its original form, the condition determines the dependence of the minimal value of the grid spacing Δx on, inter alia, the stiffness parameter κ. From the hardware implementation perspective, the square root in the condition, although possible to implement, is costly in time and resources. Our solution was to reverse this relation and make the maximal value of κ dependent on Δx (additionally folding κ into the parameter μ, which is then used in place of κ in all the remaining calculations). As seen in equations (3) and (4), this simplifies the computation to a few multiplications and additions:

μ² ≤ 0.0625 − 0.25 b₁Δt/Δx² − 0.125 c²Δt²/Δx²    (3)

μ = κΔt/Δx²    (4)

The argument against was that we lose the flexibility of κ as a stiffness parameter of the sound synthesis algorithm. However, keeping in mind that the priority is a real-time implementation, the second option is preferred. Moreover, with proper parameter scaling, the same level of flexibility is achieved anyway. This shows that even though a solution may not seem ideal from the algorithmic point of view, a compromise has to be made if one is aiming to meet the requirements of the project.

3.2. Leaving options open

Another issue to be addressed during the decision-making process was the κ parameter (strictly, Δx in our case) and its real-time changeability. For a real-time hardware implementation, making the parameters changeable in real-time is not an issue; perceptually, though, it is hard to imagine a percussive instrument that could change its stiffness while it is being played!
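The reversal of the stability relation described in Section 3.1 can be sketched in a few lines of Python (a sketch under our reading of equations (3) and (4); function and parameter names are ours, and setting b₁ = c = 0 recovers the lossless bound μ ≤ 0.25). The same cheap check applies whenever a parameter such as κ or Δx is changed on the fly:

```python
def max_stiffness(dt, dx, b1=0.0, c=0.0):
    # reversed relation: the largest kappa a given grid spacing dx admits.
    # The square root runs once, offline; it never reaches the FPGA.
    bound = 0.0625 - 0.25 * b1 * dt / dx**2 - 0.125 * c**2 * dt**2 / dx**2
    mu_max = bound ** 0.5
    return mu_max * dx**2 / dt

def is_stable(kappa, dt, dx, b1=0.0, c=0.0):
    # condition (3) checked on mu^2, using only multiplies and adds --
    # the form that maps cheaply onto the FPGA
    mu = kappa * dt / dx**2                      # equation (4)
    return mu * mu <= 0.0625 - 0.25 * b1 * dt / dx**2 - 0.125 * c**2 * dt**2 / dx**2
```

Comparing μ² against the bound, rather than Δx against a square root, is what reduces the hardware cost to a few multiplications and additions.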
Whilst it could be argued that 'it just might not sound good', allowing such parameter variation is not unreasonable, particularly as the main aim of the design approach is experimental; it is therefore important to leave as much freedom as possible in terms of sound control to the instrumentalist. The argument is then to

leave as many options open as possible, so as not to have to come back and redesign the interface implementation. This approach to interface design has a number of advantages: it saves time, and it can yield very interesting sound effects that would be lost if the implementation were constrained without verification. In other words, the aim is to preserve as much freedom as possible in the controllable parameters at the implementation stage, rather than sacrificing it, as is sometimes done, in order to achieve an efficient hardware implementation. Thus, it is important not to constrain the algorithm implementation on the basis of unverified presumptions.

3.3. A happy medium

The final issue that we would like to introduce is the necessity of striking a happy medium between aspirations and capabilities. On the one hand, the designer should not give up on ideas that could improve the final sound or the instrument's flexibility just because, at first, they seem inapplicable in a certain environment. On the other hand, it also means avoiding solutions that could improve the result at some point but are, at the same time, quite time-consuming and may severely affect the instrument's performance in some other way. For example, such a decision was made in a later phase of our instrument design process, when a fixed-point, rather than floating-point, implementation was chosen. Fixed-point offers considerable speed and area advantages. Our experiments showed, however, that this choice significantly restricts the instrument's dynamic range. With a standard excitation characteristic this is not very noticeable and can be overcome. However, use of the implemented plate in an extended mode, where it could potentially be excited with any sound track, is out of reach for this fixed-point implementation. This would suggest moving towards floating-point, although its performance would not be acceptable: we would lose real-time operation.
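The dynamic-range limitation of fixed-point arithmetic can be illustrated numerically. The sketch below (the Q1.15-style word format is our assumption; the paper does not state the word length actually used) rounds samples to a signed fixed-point grid: full-scale values survive exactly, while values below the quantisation step of 2⁻¹⁵ collapse to silence.

```python
import numpy as np

def to_fixed(x, frac_bits=15):
    # round to a signed fixed-point grid with resolution 2**-frac_bits
    scale = float(1 << frac_bits)
    return np.round(np.asarray(x) * scale) / scale

loud = to_fixed(0.5)                  # on the grid: survives exactly
quiet = to_fixed(np.full(8, 1e-6))    # far below the step: rounds to zero
```

A floating-point format keeps its relative resolution at low amplitudes, which is why the extended mode discussed above would favour it despite the performance cost.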
Another example is our aim to output the resulting sample stream through two audio channels by moving the read-out points circularly over the plate surface, in order to produce a richer sound texture. Ideally, the circles should have different radii and the motion should be slow relative to audio rates. According to Stefan Bilbao [7], this makes a significant difference, leading to a very pleasant phasing effect that is much more natural than a simple static output. To achieve this on the rectangular grid we are using, bilinear interpolation has to be applied: at each output time, the coordinates of the current point on the circle are found and its value interpolated from the surrounding grid values. This involves solving the circle equation, which is not straightforward on FPGAs. Our solution, however, is simple: instead of implementing the circle equation on the FPGA, we pre-compute the coordinates of the points on the circle and store them in memory. We lose the flexibility to choose the circular motion rate in real-time, but we can store a few circle settings, and we do not sacrifice real-time performance. These two examples illustrate that at some point in every design process, we need to weigh the costs and advantages of a particular approach.

4. CONCLUSION

In this paper, a brief description of a project aiming at the creation of a novel percussion instrument, and of its design methodology, has been presented. Emphasis is placed on the process of optimising the hardware implementation strategy, particularly in terms of the low-level parameter space implementation. Our conclusion is that when dealing with a computationally complex musical synthesis algorithm in which many parameters are fully open, the simple rules we follow enable us to build an engaging instrument out of complex mathematical formulae.

5.
ACKNOWLEDGEMENTS

The authors would like to thank Dr Stefan Bilbao, the author of the plate sound synthesis algorithm, for his invaluable help and advice in the realisation of its hardware implementation.

6. REFERENCES

[1] Graff, K. Wave Motion in Elastic Solids. Dover, New York, USA, 1975.
[2] Szilard, R. Theory and Analysis of Plates. Prentice-Hall, Englewood Cliffs, New Jersey, 1974.
[3] Bilbao, S. "A Finite Difference Plate Model", Proc. of Int'l Computer Music Conf. (ICMC 2005), Barcelona, Spain, 2005.
[4] Bilbao, S. "Sound Synthesis for Nonlinear Plates", Proc. of 8th Int'l Conf. on Digital Audio Effects (DAFx'05), Madrid, Spain, 2005.
[5] Strikwerda, J. Finite Difference Schemes and Partial Differential Equations. Wadsworth and Brooks/Cole, Pacific Grove, California, 1989.
[6] Motuk, E., Woods, R. and Bilbao, S. "FPGA-Based Hardware for Physical Modelling Sound Synthesis by Finite Difference Schemes", Proc. of Int'l Conf. on Field-Programmable Technology, Singapore, Dec. 2005.
[7] Bilbao, S., Arcas, K. and Chaigne, A. "A Physical Model of Plate Reverberation", Proc. of IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, Toulouse, France, May 2006, Vol. 5, pp. 165-168.