Haptic Control of Multistate Generative Music Systems
Bret Battey
Music, Technology and Innovation
Research Centre,
De Montfort University
bbattey@dmu.ac.uk
Marinos Giannoukakis
Music, Technology and Innovation
Research Centre,
De Montfort University
gaidaro2@gmail.com
Lorenzo Picinali
Dyson School of
Design Engineering,
Imperial College London
l.picinali@imperial.ac.uk
ABSTRACT
Force-feedback controllers have been considered as a solution to the lack of sonically coupled physical feedback in
digital-music interfaces, with researchers focusing on instrument-like models of interaction. However, there has
been little research on applying force-feedback interfaces to the control of real-time generative-music systems. This paper proposes that haptic interfaces could enable performers to have a more fully embodied engagement
with such systems, increasing expressive control and enabling new compositional and performance potentials. A
proof-of-concept project is described, which entailed development of a core software toolkit and implementation of a
series of test cases.
1. INTRODUCTION
Most digital-music interfaces do not provide 'ergotic' coupling [1], an important means for informing effective and
affective shaping of music for the performer. Researchers
have investigated force-feedback interfaces as a means for
providing such coupling, focusing on emulation of instrumental interaction models such as piano actions, violin bowing or string plucking, often applied to physical modelling
synthesis, such as in [2-5]. Other indicative approaches consider how force feedback could improve the accuracy of control of synthetic sound, such as pitch-selection accuracy on a continuous-pitch controller [6].
Yet digital-music performers are often not triggering individual events as a traditional instrumentalist would. Instead, they
are controlling patterns and behaviours generated by computer algorithms (which in this paper will be referred to as
generative systems). These algorithms could range from
simple arpeggiation procedures to artificial-intelligence-driven responses. Compared with the standard digital musical
instrument model, there are additional layers of technical
mediation and abstraction between the actions of the musician and the sounds generated.
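As a purely illustrative sketch (not the toolkit developed for this project; all names are hypothetical), the following Python fragment shows this shift of control level: the performer only steers parameters of a simple arpeggiation procedure, while the notes themselves are produced by the algorithm.

import itertools

def arpeggio_stream(chord, rate, octaves):
    """Endlessly cycle an arpeggio over `chord`, spread across `octaves`.

    The performer never triggers these notes individually; they only
    steer the parameters (chord, rate, octave span) while the pattern
    itself is generated algorithmically.
    """
    pitches = [p + 12 * o for o in range(octaves) for p in chord]
    for pitch in itertools.cycle(pitches):
        yield pitch, 1.0 / rate  # (MIDI note number, inter-onset interval in seconds)

# Example: a C-minor arpeggio, eight notes per second, across two octaves.
stream = arpeggio_stream(chord=[60, 63, 67], rate=8.0, octaves=2)
for _ in range(6):
    print(next(stream))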
Copyright: © 2015 Battey et al. This is an open-access article distributed
under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original author and source are credited.
2. HYPOTHESES
The authors hypothesize that it is possible for a haptic interface to productively raise the performer's level of embodied,
non-conceptual engagement with generative-music systems.
'Productively' is here defined as increasing the speed, accuracy, or expressiveness of music control and/or opening up new musical potentials.
At the simplest level, it seems reasonable to extrapolate
from existing research [7] and predict that force feedback,
employed to control parameters of a generative system, will
likely enable musicians to increase their speed of learning and accuracy of control for those parameters, at least to some degree.
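For example (a hypothetical sketch, not the implementation developed for this project), such parameter-level force feedback might render a spring force that pulls the controller handle toward the nearest musically salient parameter value, while the handle position itself drives the generative parameter:

def detent_force(position, detents, stiffness=2.0):
    """Spring-like force pulling a one-dimensional handle toward the nearest detent.

    `position` is the normalised handle position (0..1); `detents` are the
    parameter values at which the generative system changes behaviour in a
    musically salient way. The returned force is sent to the haptic device.
    """
    nearest = min(detents, key=lambda d: abs(d - position))
    return -stiffness * (position - nearest)

# Example: detents at the values that switch an arpeggio between one, two and three octaves.
print(detent_force(0.4, detents=[0.0, 0.33, 0.66, 1.0]))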
However, the authors propose that particular gains can
arise when there is a coherent perceptual parallel between
haptic qualities of the interface and resultant subjective sonic qualities. Performers could simultaneously feel and shape
the subjective-expressive tension of a musical texture, or
aspects of that texture, for example. Further, performers
could push or pull the system to other states, enabling musically useful linkage of physical tension and release to musical macrostructure. More speculatively, a generative music
system and its haptic interface could be seen to embody a
set of musical potentials and constraints through its system
of virtual physics.
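One way to picture such a system of virtual physics (again a hypothetical sketch rather than the authors' toolkit) is as a potential function whose wells are the stable states of the generative system; the force rendered to the device is the negative gradient of that potential, so displacement away from a well is felt directly as rising tension until the performer crosses the barrier into a neighbouring state:

import math

WELLS = [0.2, 0.5, 0.8]  # handle positions of three stable musical states

def potential(x, wells=WELLS, depth=1.0, width=0.15):
    """Sum of Gaussian wells; each well is a stable state of the generative system."""
    return -sum(depth * math.exp(-((x - w) ** 2) / (2 * width ** 2)) for w in wells)

def haptic_force(x, eps=1e-4):
    """Force rendered to the device: negative numerical gradient of the potential."""
    return -(potential(x + eps) - potential(x - eps)) / (2 * eps)

def current_state(x, wells=WELLS):
    """The generative system adopts the behaviour associated with the nearest well."""
    return min(range(len(wells)), key=lambda i: abs(x - wells[i]))

# Pushing the handle away from state 0 towards state 1: the restoring force is the felt tension.
print(haptic_force(0.3), current_state(0.3))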
The last two hypotheses have been inspired in part by
Lerdahl's concept of tonal pitch space [8] and Steve Larson's proposal that "we not only speak about music as if it
were shaped by musical analogs of physical gravity, magnetism, and inertia, but we also actually experience it in
terms of 'musical forces'" [9]. One could imagine a performer engaging with a force-feedback system with gravity
fields modelled on (for example) hierarchical models of
tension in harmonic progressions - potentially pushing the
system through a plane of resistance into a new key, whereupon the system reorders the haptic topology. Gesture studies of Indian khyal classical vocalists also provide
a provocative model, revealing that some performers conceive of the performance of a raag as a type of path through
a "flexible but stable topology that singers explore through
both melodic and gestural action" [10] - suggesting a strong
conceptual link with the potentials of force-feedback interfaces.
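To make the tonal-gravity idea concrete (purely as an illustrative sketch; this is not an implementation of Lerdahl's or Larson's models, and the stability weights are placeholders), the haptic wells could be positioned at the pitch classes of the current key, weighted by a crude tonal hierarchy, and rebuilt around a new tonic once the performer pushes past the resistance threshold:

STABILITY = {0: 1.0, 7: 0.8, 4: 0.6, 5: 0.4, 2: 0.3}  # placeholder tonal-hierarchy weights

def key_topology(tonic):
    """Map each weighted scale degree of `tonic`'s key to a (handle position, well depth) pair."""
    return {(tonic + interval) % 12: (interval / 12.0, depth)
            for interval, depth in STABILITY.items()}

def maybe_modulate(tonic, applied_force, threshold=3.0, step=7):
    """If the performer pushes past the resistance plane, move up a fifth and
    return the reordered haptic topology for the new key."""
    if abs(applied_force) > threshold:
        tonic = (tonic + step) % 12
    return tonic, key_topology(tonic)

# A strong push from C major crosses the barrier into G major and re-centres the wells.
print(maybe_modulate(tonic=0, applied_force=3.5))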
The primary aim of the research described here was to establish proof-of-concept of the general claim through a series of test cases.