A HAND DRUMMING DATASET FOR PHYSICAL MODELING

Randy Jones (rj@csc.uvic.ca), Department of Computer Science, University of Victoria
Mathieu Lagrange (lagrange@csc.uvic.ca), Department of Computer Science, University of Victoria
Andrew Schloss (aschloss@finearts.uvic.ca), Department of Music, University of Victoria

ABSTRACT

Physical modeling is a proven technique for creating sounds with rich expressive potential, but the state of the art in control does not offer access to the whole of this potential. New developments in modeling algorithms are typically presented with single-point, idealized excitations where more complex ones would add vitality to the sounds produced. The 2D waveguide mesh, in particular, can be excited simultaneously at multiple points on a surface, as a physical drum is by a hand. The authors present a synthesis system in which this control has been implemented using a 2D pressure sensor, resulting in sounds that capture some of the salient qualities of hand drumming. A dataset of 2D force measurements from various hand drumming techniques is presented, to be used by researchers in physical modeling synthesis.

1. INTRODUCTION

Physical modeling has proven to be a good way of synthesizing sounds with natural qualities. Simulating physical interactions, even in quite imprecise ways, can give rise to acoustic phenomena which, as listeners, we recognize from nature. A model may reproduce sonic phenomena we would call natural without sounding like any particular instrument or object. This musically useful aspect of physical modeling has been called 'plausibility' [2].

One type of model that can generate plausible sounds is the 2D waveguide mesh, first described by Van Duyne and Smith [10] and refined by others [3, 4, 1]. The mesh algorithm in its simplest form simulates an idealized membrane that can be thought of as either a drum head, clamped at the edges, or a magically supported vibrating plate with free edges. Either of these edge conditions gives rise to a rich, physically plausible collection of inharmonic partials.

In live performance, plausibility depends not only on the synthesis method used, but on the richness of its control. Intimate control over sounding objects is central to musical expression. Julius O. Smith has expressed this elegantly by stating that "a musical instrument should be 'alive' in the hands of the performer" [8]. Our own work linking 2D sensor hardware and a membrane physical model has been done with the aim of providing as much as possible of the liveness that real-world instruments generate so naturally. We feel that with this system we have reproduced some of the salient qualities of hand drumming in a pleasingly plausible way. In order to share the fruits of our efforts with other developers of physical models, not all of whom may have access to 2D force sensors, we have recorded a collection of different hand drumming techniques.

Our dataset may offer several benefits to the developers of physical models. It can be a bridge between the realtime and nonrealtime worlds, allowing live excitations to be used in an environment such as Matlab. Using 2D excitations may prove to elicit more musically useful sounds from a given modeling scheme. Finally, a consistent dataset can be a testbed for the development of algorithms, allowing a basis for comparison of the work of different groups.

2. IMPLEMENTATION

Our synthesis system combines a 2D pressure sensor with our own implementations of the 2D waveguide mesh algorithm.
The sensor, a Tactex MTC Express, detects pressure applied to a grid of 72 sensors under a foam pad, about 20 by 15 cm in size. In the context of mappings, Wessel [12] discusses the difficulty of using the many degrees of freedom offered by the MTC Express. Likewise, the 2D waveguide mesh has a huge number of degrees of freedom. At each of hundreds of points on the mesh, a signal can be added and local properties of the mesh such as damping and tension can be changed.

In previous work on the waveguide mesh of which we are aware, the model has been excited only at single points, whether directly at one mesh junction or deinterpolated as discussed by Välimäki [9]. Live control over damping has been applied to parameters that affect the model globally. A 2D pressure sensor, however, can be connected to the model over the entire surface simultaneously, offering a more intimate degree of control.

This control flow in our system is shown in Figure 1. A 2D force matrix is created by interpolating data from the pressure sensor. This matrix is considered to be a continuous 2D field, sampled in time at the audio signal rate and in space at the dimensions of the physical model. We apply the field to both excitation and damping at each junction of the waveguide mesh. Using the force matrix allows all of the data from the 2D pressure sensor to affect our sound model in a meaningful way.

Figure 1. Controlling the waveguide mesh using a 2D force matrix.
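
As an illustration of this control flow (and not the authors' Max/MSP/Jitter or Matlab code), the short Python/NumPy sketch below shows one way a sensor frame could be interpolated up to the mesh dimensions and applied to per-junction excitation and damping. The grid shapes, gain constants, noise threshold and function names are assumptions made for the example only.

import numpy as np
from scipy.ndimage import zoom

SENSOR_SHAPE = (8, 9)    # assumed layout of the 72-sensel grid (8 x 9 = 72)
MESH_SHAPE = (32, 48)    # assumed mesh dimensions
NOISE_FLOOR = 0.02       # assumed threshold; sensor noise below it is clipped

def force_matrix(sensor_frame):
    """Clip sensor noise, then interpolate the frame up to the mesh dimensions."""
    frame = np.where(sensor_frame < NOISE_FLOOR, 0.0, sensor_frame)
    factors = (MESH_SHAPE[0] / SENSOR_SHAPE[0], MESH_SHAPE[1] / SENSOR_SHAPE[1])
    return zoom(frame, factors, order=1)    # bilinear interpolation in space

def apply_force(v, force, exc_gain=0.05, damp_gain=0.3):
    """Use the force field to excite and locally damp the junction velocities v."""
    v = v + exc_gain * force                       # excitation at every junction
    loss = np.clip(damp_gain * force, 0.0, 1.0)    # more pressure, more local loss
    return v * (1.0 - loss)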

Qualitative results of this approach are discussed in detail elsewhere [5].

Our hand drumming dataset consists of six short sequences recorded as matrices from the pressure sensor. The Max/MSP/Jitter environment was used to do the data collection and processing. Sequences were captured from the sensor at a sampling rate of 100 Hz. These were written out as Jitter matrix (.jxf) files for use within Jitter, and as XML files for use in other environments. A collection of Matlab programs was written to read the XML data and excite a canonical 2D waveguide mesh implementation, producing sounds offline. These Matlab programs and the drumming dataset itself are available online [7].

In our mesh implementation, a scalar value at each junction represents the velocity of the membrane at that point. We use an 8-connected interpolated rectangular mesh, as described by Savioja and Välimäki [9]. While Fontana and Rocchesso [3] have shown that a triangular mesh geometry has desirable properties, including better wave dispersion characteristics, we chose a rectangular mesh geometry because its calculation can be approached as a standard 3x3 convolution. This allowed the standard convolution operator to be used in Matlab, an efficiency gain that, while not crucial for offline processing, is welcome. One special case of this convolution, in which each edge kernel value is 1.0, results in the simple 4-connected mesh first discussed by Van Duyne and Smith [10]. This simpler structure would offer more opportunities for optimization, but we found it to sound far less natural than the interpolated kernel.

The background noise from the sensor will generate a steady rumbling from the model if left unattenuated. In the interests of providing the rawest possible useful data, this noise was left in our dataset. Our synthesis implementation accounts for the noise by simply clipping values below a certain threshold to zero.
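
To make the convolution formulation concrete, the following sketch performs one mesh time step as a 3x3 convolution. It is a minimal sketch in Python/NumPy rather than the Matlab programs described above; the leapfrog form and the normalization by the kernel sum are assumptions consistent with the standard 4-connected mesh of [10]. The kernel shown is that special case; the interpolated 8-connected kernel of [9] would supply nonzero corner and center weights.

import numpy as np
from scipy.signal import convolve2d

# 4-connected special case: edge kernel values of 1.0, corners and center zero.
# An interpolated 8-connected kernel would place weights there as well.
K = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

def mesh_step(v_now, v_prev, kernel=K):
    """One time step of junction velocities on a rectangular waveguide mesh.
    Clamped (drum-head) edges are approximated here by zero padding."""
    neighbor_sum = convolve2d(v_now, kernel, mode='same', boundary='fill', fillvalue=0.0)
    return (2.0 / kernel.sum()) * neighbor_sum - v_prev

# Example: excite one junction with an impulse and run the mesh for 100 steps;
# reading one junction over time would give the (lossless) mesh response.
v_now, v_prev = np.zeros((32, 48)), np.zeros((32, 48))
v_now[16, 24] = 1.0
for _ in range(100):
    v_now, v_prev = mesh_step(v_now, v_prev), v_now
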

3. HAND DRUMMING TECHNIQUES

In hand drumming, a great variety of expressive techniques exist which arise from a trained performer's control over the timing, force and areas of contact with the drum. For example, a survey of strokes and damping mechanisms in tabla drumming is given by Kapur et al. [6]. The complex interactions between hand, drum head and drum allow for a handful of very distinct-sounding types of strokes, and a vast expressive space between them.

One example of a particular type of stroke we have included in our dataset is the pitch-bending Ga stroke, as played on the Bayan, the larger of the two tabla drums. In the Ga stroke, the tabla player first excites the drum head with a tap of the middle and index fingers. The heel of the hand then raises the pitch of the stroke by moving across the drum head. We have recorded this type of stroke in three speed variations. A sequence of frames from our recording is shown in Figure 2. The initial tap is visible at the left side of the drum at 0.25 seconds, followed by the larger area of damping from the heel of the hand moving from right to left.

Figure 2. Pressure data from a slow pitch-bending Ga stroke (frames at 0.25 s intervals, from t = 0.0 s to t = 1.75 s).

We chose the other five drumming examples in an attempt to gather a wide range of different techniques. Another criterion for including a technique was whether our sensor could differentiate it from other ones. For example, two conga techniques, a "slap" and a "muff", were played. One important difference between these is the spread fingers of the slap, which are held close together in the "muff." On examining the data, however, it was clear that our spatial resolution was not sufficient to distinguish between the two. Temporal resolution was also a factor in our choices: interesting possibilities such as very quick taps had to be discarded because the sensor does not integrate input over its sampling period and thus may miss a quick tap completely. The techniques we kept to form our dataset are as follows:

1. open: maximal palm contact
2. slap: open or damped
3. edge hits
4. heel-toe: as in conga drum techniques
5. Ga stroke: as in tabla drum
6. tension modulation: one hand provides pressure, the other taps

Each technique is recorded with at least three variations. Simple strokes were recorded in groups of three different dynamic levels (mp, mf and f), each consisting of hits at the drum's left, top, right, bottom and center. More complicated techniques such as the Ga stroke were recorded with fewer variations, in either dynamics or speed. More detailed descriptions and notations of the techniques played are given with the data files themselves.

A note on the overall positioning of the excitations is in order. We expect that one will often want to simulate a round drum head or membrane, in which case excitations in the corners of the rectangular sensor will not be useful. We tried, therefore, to make sure that the most salient activity was taking place in a circle inscribed within the sensor's rectangular border.
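
Since the recordings are sampled at 100 Hz while the physical model runs at an audio rate (Section 2), driving an offline simulation from them requires interpolation in time. The sketch below shows one simple approach, linear interpolation between successive frames; it is Python/NumPy rather than the Matlab programs that accompany the dataset, the array layout and rates are assumptions, and the dataset's XML schema is not reproduced here.

import numpy as np

SENSOR_RATE = 100     # Hz, rate of the recorded frames
MODEL_RATE = 44100    # Hz, assumed audio rate of the physical model

def upsample_frames(frames, sensor_rate=SENSOR_RATE, model_rate=MODEL_RATE):
    """frames: array of shape (T, rows, cols) at sensor_rate. Returns one frame
    per model sample, crossfading linearly between neighboring sensor frames."""
    step = model_rate / sensor_rate
    n_out = int((len(frames) - 1) * step)
    out = np.empty((n_out,) + frames.shape[1:])
    for k in range(n_out):
        pos = k / step              # fractional position in sensor-frame time
        i = int(pos)
        t = pos - i
        out[k] = (1.0 - t) * frames[i] + t * frames[i + 1]
    return out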

4. PRESENT GOALS

We look forward to using the dataset to further our own work in synthesis. The addition of a resonant shell and air loading to the drum, as well as nonlinear filtering and excitation of the membrane, are topics we are exploring. Though the sounds we are generating are satisfying, the sampling rate of our existing sensor is by no means sufficient to capture the nuances of hand drumming. In an ideal scenario, the excitations would be captured at the sampling rate of the physical model: 44.1 kHz or higher. At these rates, a whole new set of phenomena, including friction and other micro-interactions with the drum head, could be explored as excitation. FPGA (field-programmable gate array) development hardware, as used by Wessel et al. [11], is a promising tool for dealing with the high bandwidth required to sample the whole surface at audio rates.

5. ACKNOWLEDGEMENTS

Thanks are due to George Tzanetakis for crucial direction. Financial support for this project was provided by NSERC and SSHRC Canada.

6. REFERENCES

[1] M. Aird, J. Laird, and J. Ffitch. Modeling a drum by interfacing 2D and 3D waveguide meshes. In Proc. Int. Computer Music Conf. (ICMC-00), Berlin, Germany, 2000.

[2] N. Castagne and C. Cadoz. Ten Criteria for Evaluating Physical Modeling Schemes for Music Creation. In Proc. Int. Conference on Digital Audio Effects, 2003.

[3] F. Fontana and D. Rocchesso. A new formulation of the 2D-waveguide mesh for percussion instruments. In Proceedings of the XI Colloquium on Musical Informatics, Bologna, Italy, pages 27-30, 1995.

[4] F. Fontana, L. Savioja, and V. Välimäki. A modified rectangular waveguide mesh structure with interpolated input and output points. In Proc. Int. Computer Music Conf., La Habana, Cuba, pages 87-90, 2001.

[5] R. Jones and W.A. Schloss. Controlling a Physical Model with a 2D Force Matrix. In New Interfaces for Musical Expression (NIME), pages 27-30, 2007.

[6] A. Kapur, P. Davidson, P.R. Cook, P. Driessen, and W.A. Schloss. Digitizing North Indian Performance. In Proc. Int. Computer Music Conf., 2004.

[7] A. Schloss, R. Jones, and M. Lagrange. Retrieved May 7, 2007. [Online]. Files at http://people.finearts.uvic.ca/raschloss/publications/icmc07drumming.html.

[8] J.O. Smith. Physical modeling synthesis update. Computer Music Journal, 20(2):44-56, 1996.

[9] V. Välimäki and L. Savioja. Interpolated and warped 2-D digital waveguide mesh algorithms. In COST G-6 Conference on Digital Audio Effects, pages 7-9, 2000.

[10] S.A. Van Duyne and J.O. Smith. Physical modeling with the 2-D digital waveguide mesh. In Proc. Int. Computer Music Conf., pages 40-47, 1993.

[11] D. Wessel, R. Avizienis, and A. Freed.
A Force Sensitive Multi-touch Array Supporting Multiple 2-D Musical Control Structures. In New Interfaces for Musical Expression (NIME), pages 41-45, 2007.

[12] D. Wessel, M. Wright, and J. Schott. Intimate musical control of computers with a variety of controllers and gesture mapping metaphors. In New Interfaces for Musical Expression (NIME), pages 1-3, National University of Singapore, Singapore, 2002.