GAMELUNCH: A PHYSICS-BASED SONIC DINING TABLE
Stefano Delle Monache
University of Verona, VIPS (Vision, Image Processing and Sound group), Verona, Italy
firstname.lastname@example.org

Pietro Polotti
Music Conservatory of Como, Electronic Music School, Como, Italy
& VIPS, University of Verona, Italy
email@example.com

Stefano Papetti
University of Verona, VIPS, Verona, Italy
firstname.lastname@example.org

Davide Rocchesso
IUAV, Department of Art and Industrial Design, Venice, Italy
email@example.com

Abstract

The Gamelunch (1) is a sonically augmented dining table. This work aims at exploiting the power and flexibility of physically-based sound models to investigate the closed loop between interaction, sound and emotion. Energetic consistency (or better, inconsistency) throughout the loop "gesture - (inter)action - sound - perceived sound - information/emotion - gesture" is examined. Continuous interaction gestures are captured by means of contact microphones and various force transducers, providing data that are coherently mapped onto physically-based sound synthesis algorithms. While performing usual dining movements, the user encounters contradictory and unexpected sound feedback, thus experiencing - a contrario - the importance of sound in everyday-life acts.

1. INTRODUCTION

Bearing in mind that human interaction with the real world is naturally continuous, we assert that continuous sound feedback can emphasize or, vice versa, contradict causality. Exploiting various sensors and physically-based, dynamically synthesized sounds, we designed an augmented dining table with the intent of raising a number of quantitative and conceptual questions about mapping strategies for coupling a) continuous human gestures, b) sensors and c) control of sound synthesis parameters.
Dourish's definition of embodiment as the transition from the realm of ideas to the realm of everyday experience includes not only the physical embodiment of objects, but also embodied actions like speech and gesture. Also, in accordance with Varela, Thompson and Rosch, sensory and motor processes, perception and action, are fundamentally inseparable in lived cognition: 1) cognition depends upon the kind of experience that comes from having a body with various sensorimotor abilities; 2) these individual abilities are embedded in biological, psychological, and cultural contexts. The enactive approach consists of two points: 1) perception consists in perceptually guided action and 2) cognitive structures emerge from the recurrent sensorimotor patterns that enable action to be perceptually guided. Cadoz's operational typology is very helpful in fixing such energetic physical consistency in the case of action and sound. Complementarily, the approach developed within the EU project SOb is based on the modeling of sound sources in terms of their physical behavior, thus providing a natural mapping between human gestures and the control parameters of the sound model. This way, sound design follows a precise decomposition of the events occurring in the action. In terms of sound modeling (perception analysis, cartoonification, control of physically meaningful parameters), the main identity of sounds can be defined and reproduced by means of a set of basic physical models for interaction. Starting from these reference points, the Gamelunch aims at forcing the natural flow of the enactive process in order to show, by means of paradox (a sort of proof per absurdum), how crucial this flow is.

(1) This work was supported by the EU project CLOSED (Closing the Loop Of Sound Evaluation and Design), FP6-NEST-29085, path "Measuring the Impossible". http://closed.ircam.fr/

2. FROM SOUNDING OBJECTS TO SOUND OBJECTS AND BACK

"I heard (ouïr) you despite myself although I did not listen (écouter) at the door, but I didn't understand (comprendre) what I heard (entendre)."

As a perceptual activity, the act of listening is particularly abstract and elusive, and, in the confrontation between the "object of perception" and the "activity of the perceiving consciousness", this makes the dualisms Abstract/Concrete and Objective/Subjective even stronger. Sound, sensation and interaction form a complex chain that includes internal loops and feedbacks, not exempt from unresolved colliding and frictional points. In this view, the "Schaefferian" sound object still remains a fundamental reference for the whole theory and practice of sound design. Schaeffer's theory can be seen as a kind of Odyssey that goes from the detachment of the sound from its source, through the awareness of our perceptual activity, back to the (synthesized) source, i.e. to the sounding object.
In this regard, physically-based sound synthesis becomes very helpful, as it provides a new virtual lutherie in terms of a transition from the sound object towards the sounding object, through a simplified (cartoonified) causal-everyday listening. Dealing with sound effectiveness in an interactive context, the main elements that need to be considered are cartoonification, augmented intelligibility and direct manipulation features. The reaction of the listener to a sound reflects his implicit (aesthetic and emotional) evaluation of the acceptability of that sound in a particular interactive context. Emotional sound qualities introduce a third aspect of the sound-sensation-interaction chain: that in which causal listening moves towards - and coexists with - cultural and semantic aspects.

3. PHYSICAL MODELS IN CONTINUOUS (INTER)ACTIONS

Starting with the SOb project, several physically-based sound models for interaction have been developed and implemented. These are currently being further developed as part of the CLOSED project. Most of the provided sound algorithms comply with the modular structure "resonator-interactor-resonator", hence representing the interaction between two resonating objects. Resonators are modeled through the modal and the digital waveguide techniques. The first describes a vibrating object by means of its resonating modes (i.e. their frequency, decay time and gain). The second simulates the propagation of waves along an elastic medium (e.g., in the 1D case, an ideal string). Thanks to the modularity of the adopted framework, it is possible to interconnect any pair of resonators through complex (non-linear) interaction models. So far, impact and friction models have been implemented. These have been recognized as the basic interaction events underlying many complex processes: e.g.
rolling, bouncing and crumpling have been implemented as complex temporal patterns superimposed on the impact model, while the friction model is exploited to simulate braking, rubbing and squeaking sounds. As a result of the physical coherence of the models, it is straightforward a) to map their control parameters to continuous physical interaction, and b) to describe resonators and interactors by means of their physical-geometric properties. For example, when simulating a struck string, one can set the length, mass and tension of the string (waveguide resonator), the mass of the hammer (modal resonator), the shape and stiffness of the hammer felt (impact interactor), the strike position along the string, the force and velocity applied to the hammer, and so on. Further, a bubble and an airflow sound model have been implemented. The first makes use of the modal paradigm, modeling a bubble as a time-varying (collapsing) resonating cavity, and mapping its radius to both the initial pitch and the pitch slope. The bubble model serves as a basis for e.g. burbling, dripping, pouring and frying models. Finally, the airflow model makes use of the waveguide paradigm, and allows the simulation of e.g. an airflow through a tube (as in a vacuum cleaner) or a swoosh.

4. THE GAMELUNCH, ENACTION AND PHYSICAL MODELS

Sound, emotions, action, sound, emotions, action, sound...

4.1. Concept

"It's not only what we hear that tells us what we know, but it's also what we know that tells us what we hear."

A prototype of the sonic table (Fig. 1) was presented as a final project during the workshop "Acoustic Display and Sound Design / Sound in Interaction, Winter 2006-7", organized by Karmen Franinovic, Yon Visell and Daniel Hug at the HGKZ (Hochschule für Gestaltung und Kunst Zürich), as part of the CLOSED EU project activities.
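As a rough illustration of the modal paradigm described in Section 3, the sketch below renders the impulse response of a modal resonator as a sum of exponentially decaying sinusoids, each defined by the three parameters named above (frequency, decay time, gain). This is only a minimal, self-contained approximation: the modal data are made up for the example and do not come from the actual SOb/CLOSED implementations, which run as Max/MSP externals.

```python
import math

def modal_impact(freqs, decays, gains, dur=1.0, sr=44100):
    """Impulse response of a modal resonator: each mode is an
    exponentially decaying sinusoid (frequency in Hz, decay time
    as T60 in seconds, linear gain)."""
    n = int(dur * sr)
    out = [0.0] * n
    for f, t60, g in zip(freqs, decays, gains):
        damp = math.log(1000) / t60  # amplitude reaches -60 dB after t60 s
        for i in range(n):
            t = i / sr
            out[i] += g * math.exp(-damp * t) * math.sin(2 * math.pi * f * t)
    return out

# Hypothetical modal data for a small struck object (three modes):
# higher modes decay faster and are quieter, as in most solid objects.
samples = modal_impact([440.0, 1130.0, 2190.0], [0.8, 0.4, 0.2], [1.0, 0.5, 0.25])
```

In an interactive setting, an impact interactor would excite these modes with a force signal derived from the gesture rather than a unit impulse, but the resonator side reduces to exactly this kind of mode bank.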
Figure 1: The Gamelunch setup

The leitmotiv of this new version of the Gamelunch is to let people experience the relevance of continuous sound feedback in everyday-life acts, such as sitting at a table and having lunch. Exploiting the principle of contradiction, we aim at setting up a self-explanatory experience with respect to the role played by sounds in simple acts such as cutting, piercing, drinking, pouring, grasping, stirring, mixing, and so on. To this end, we set up a sonic feedback system which systematically misleads these acts. We primarily focused our attention on possible interactions and expected everyday sounds, particularly through the analysis of those "natural" embodied actions which are directly available to the user without any apprenticeship. Secondly, we categorized gestures according to their accompanying sounds and the behavior of their sources. Finally, such categorization has been "circuit-bent" for the purpose of the work. As it deals with embodied sound interaction, the Gamelunch differs from other table-based interactive sonic devices. Most existing tabletop-like tangible interfaces, such as the reacTable and related devices, act as controllers for mainly musical purposes, thus focusing on and questioning expression issues. As a further example, the "Table
Recorder" is a sonically augmented table whose concept and realization cleverly couple interaction and everyday sounding objects (such as glasses, cans, dishes and so on).

4.2. Interactions and everyday sound feedback

The Gamelunch aims at showing the relevance of the "sounding object" approach in contexts of continuous interaction, consistently with other works of the VIPS group. In order to highlight perceptual/cognitive contradictions, we provided direct and continuous manipulation of physically-based sound models. A set of sound-enhanced objects whose perceptual sound feedback has been "circuit-bent" is presented hereafter:

- the decanter: continuous friction/braking sounds alter the feedback so that it is perceived as a force opposing the one naturally applied in pouring-liquid actions;
- the roast-beef knife: the right alignment on a plane perpendicular to a cut of beef gives back a vibrato effect whose frequency and amplitude are inversely proportional to the human gesture;
- the glass: contradiction arises from the concurrent percept of both material and action. A refined material such as crystal is sonified with lo-fi, heavy solid sounds. Further, a dual-axis accelerometer and a gyroscope feed a pouring-liquid sound model;
- the salad bowl: by means of light sensors, the interaction between the fork and the bowl is bent by continuous dripping and boiling sounds. The percept moves towards a sensation of softness and inconsistency;
- the cutlery: the selection of the provided cutlery (spoon vs. fork/knife) triggers two different configurations, namely the "soup" configuration and the "steak" configuration (see below);
- the "soup" configuration (interaction with the spoon-instrument): naturalness is bent by increasing the density of the liquid sounds fed back. The spoon becomes heavy, while the action becomes hard and fatiguing;
- the "steak" configuration (interaction with fork and knife): by means of pressure sensors, cutting and slicing interactions are addressed and guided by non-dense liquid sounds.

In order to detect the inclination of the decanter in pouring actions we used a Nintendo Wii remote controller. The Wiimote is connected via Bluetooth to a Max/MSP patch. The rotational orientation sensed by the Wiimote controls a friction sound model: in the vertical position no sound is triggered, while inclination with respect to the horizontal plane activates the friction sound synthesis (the greater the inclination, the greater the acceleration of the friction/braking sounds). As a result, the continuous sound feedback describing the state of the pouring action produces the perception of a force opposing the human action, and thus a contradictory percept with respect to the flow of the liquid and the lightening of the decanter.

4.3. The table

The interaction with the table exploits the natural propagation of acoustic waves. A primary issue was the design of the wooden table, in order to properly map the relevant areas. We decided to cut out sections corresponding to the positions of the dish, of the cutlery (on the left and right sides of the dish), of the glass and of the decanter (see Fig. 2). These sections were then reassembled into the table, laid on wooden brackets attached to the bottom side of the table, and covered with foam. The foam is intended to acoustically isolate the actions performed on each area.

Figure 2: Details and mapped areas of the table

The table scenario uses embedded contact microphones placed under each mapped area (Fig. 3) in order to capture sound pressure signals and track their amplitude. The audio signals are sent to the computer and converted into control signals for the physically-based sound models using custom-designed Max/MSP patches. Sound is output by two loudspeakers placed under the table.
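The amplitude-threshold scheme described next (an absolute gate over a 35 ms window for impacts, a 300 ms average for frictions) can be sketched as follows. This is an illustrative reconstruction, not the actual Max/MSP patch logic: the envelope rate, threshold values and event format are made-up placeholders; only the two window lengths come from the text.

```python
def detect_events(env, sr=1000, impact_thresh=0.5, friction_thresh=0.1):
    """Classify a contact-microphone amplitude envelope into events.
    An impact is gated when a sample within a 35 ms window exceeds an
    absolute threshold; friction is reported when the average level
    over a 300 ms window stays above a lower threshold."""
    imp_w = int(0.035 * sr)  # 35 ms gate for impacts
    fri_w = int(0.300 * sr)  # 300 ms average for frictions
    events = []
    for i in range(0, len(env) - fri_w + 1, fri_w):
        window = env[i:i + fri_w]
        if max(window[:imp_w]) > impact_thresh:
            events.append(("impact", i / sr))
        elif sum(window) / fri_w > friction_thresh:
            events.append(("friction", i / sr))
    return events

# Synthetic envelope: a sharp tap followed by sustained rubbing.
env = [0.0] * 1000
env[10] = 0.9                 # brief high peak -> impact
for j in range(300, 900):
    env[j] = 0.2              # sustained low level -> friction
print(detect_events(env))     # -> [('impact', 0.0), ('friction', 0.3), ('friction', 0.6)]
```

In the installation, the detected events would then trigger and drive the impact and friction synthesis models instead of being printed.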
By means of the contact microphones we mapped gestures, testing several absolute thresholds for the impact models and average thresholds for the friction ones. For impacts, an absolute amplitude threshold evaluated over a 35 ms window acts as a gate for detection in the Max/MSP synthesis patch; for relevant friction detection we instead average the signal over a 300 ms window.

Figure 3: TAI, dish section with contact microphone

5. CONCLUSIONS AND FUTURE PERSPECTIVES

Where sound interaction and physical models are concerned with everyday acts and objects, perceived sound plays a significant role in the gesture-interaction-perception-emotion chain. With its set of immediate and natural gestures and actions, the dining scenario provides a fertile context for the investigation
of continuous sound feedback and emotional response. Here, a sonically augmented dining table and its accessories have been presented. A primary investigation is carried out by exploiting the principle of contradiction, in such a way that coherent and consistent physically-based synthesized sounds "mislead" their corresponding acts. The Gamelunch represents the first stage in an iterative sound design process, whose further developments will include a set of formal evaluation tests, following both the per absurdum approach adopted here and a "straight" gesture mapping. Results are expected in terms of improved gesture mappings and the design of new, proper sets of sound families based on physical models. Also, embedded sound diffusion systems will be taken into consideration in order to improve the immersiveness and embodiment characteristics of the Gamelunch dining environment. The Gamelunch is one of the workbenches adopted in the EU project CLOSED.

6. ACKNOWLEDGMENTS

The early Gamelunch prototype (HGKZ workshop) was designed and developed by Stefano Delle Monache and Stefano Fumagalli (Music Conservatory of Como) under the supervision of Pietro Polotti and Davide Rocchesso. The first version of the wooden table was designed and realized by Simone Lueling (HGKZ). The physically-based sound models inherited from the SOb project were made available as Max/MSP externals by Stefano Papetti.

7. REFERENCES

P. Dourish. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA, USA, 2001.
F. J. Varela, E. Thompson, and E. Rosch. The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, MA, USA, 1991.
C. Cadoz. Le geste, canal de communication homme/machine. La communication instrumentale. Technique et science de l'information, vol. 13(1):31-61, 1994.
D. Rocchesso, R. Bresin, and M. Fernström. Sounding Objects. IEEE Multimedia, vol. 10(2):42-52, April 2003.
D. Rocchesso and F. Fontana, editors.
The Sounding Object. Mondo Estremo, 2003. http://www.soundobject.org/
M. Chion. Guide des Objets Sonores, Pierre Schaeffer et la recherche musicale. Buchet/Chastel, Paris, 1983. Transl. by J. Dack and C. North, 1995. http://www.ears.dmu.ac.uk/spip.php?rubrique203
P. Schaeffer. Traité des objets musicaux. Le Seuil, Paris, 1977.
P. Schaeffer, G. Reibel, and B. Ferreyra. Solfège de l'objet sonore. Book with 3 CDs. Ina-GRM, Paris, 1998.
J. M. Adrien. The missing link: Modal synthesis. In G. De Poli, A. Piccialli, and C. Roads, editors, Representations of Musical Signals, pp. 269-297. MIT Press, Cambridge, MA, 1991.
J. O. Smith III. Principles of digital waveguide models of musical instruments. In M. Kahrs and K. Brandenburg, editors, Applications of DSP to Audio and Acoustics, pp. 417-466. Kluwer Academic Publishers, 2002.
D. W. Marhefka and D. E. Orin. A compliant contact model with nonlinear damping for simulation of robotic systems. IEEE Trans. on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 29(6):566-572, November 1999.
P. Dupont, V. Hayward, B. Armstrong, and F. Altpeter. Single State Elasto-Plastic Friction Models. IEEE Trans. on Automatic Control, vol. 47(5):787-792, June 2002.
K. van den Doel. Physically-based Models for Liquid Sounds. ACM Transactions on Applied Perception, vol. 2(4):534-546, October 2005.
Hochschule für Gestaltung und Kunst Zürich. Interaction Design Program: Acoustic Display and Sound Design / Sound in Interaction, Winter 2006-7. http://sonic.wikispaces.com
K. Franinovic, D. Hug, and Y. Visell. Sound Embodied: Explorations of Sonic Interaction Design for Everyday Objects in a Workshop Setting. Proceedings of the International Conference on Auditory Display (ICAD), Montreal, Canada, 2007.
S. Jordà, M. Kaltenbrunner, G. Geiger, and R. Bencina. The reacTable. Proceedings of the International Computer Music Conference (ICMC), Barcelona, Spain, 2005. http://tg.upf.edu/reactable/?related
F. Gmeiner.
The Table Recorder: Instrument for everyday life's patterns. http://www.fregment.com/table
A. De Götzen and D. Rocchesso. "Peek-a-book: playing with an interactive book". Proceedings of the International Conference on Auditory Display (ICAD), London, UK, 2006, pp. 19-23.
A. Crevoisier and P. Polotti. A New Musical Interface Using Acoustic Tap Tracking. Mosart Workshop on Current Research Directions in Computer Music, Barcelona, Nov. 15-17, 2001. http://www.iua.upf.es/mtg/mosart/papers/p26.pdf
A. Crevoisier and P. Polotti. "Tangible Acoustic Interfaces and their Applications for the Design of New Musical Instruments". Proceedings of the New Interfaces for Musical Expression (NIME) Conference, Vancouver, Canada, May 26-28, 2005.