A Cognitive Model in Design of Musical Interfaces
Anna Sofie Christiansen
University of Copenhagen
anna@cnmat.berkeley.edu
Abstract
Computer music interfaces present the performer/user with new options for controlling computer-generated
sounds. The design of real-time music interfaces, or electronic instruments, has often been carried out as a
secondary feature, subordinated to the capabilities of the sound-processing software or the sound-producing system.
In this paper I suggest a model for instrumental interfaces, informed by research in virtual reality,
cognition and dynamic theory, to be applied in the design of musical interfaces. The aim is to give the performer
sound control differentiated enough that he can rely on the expressive means, enhancing the
performer's options for an individual and intuitive interpretation.
1 Background
In interactive computer music the field of sound
synthesis has often been given preference to that of
sound control. The consequences for the performance
of interactive computer music might be more severe
than we think. Standard interactive systems based on
MIDI are, as yet, far from allowing us to take full
advantage of sophisticated synthesis techniques for
real-time purposes.
The crucial point in the design of interfaces is the
mapping of human actions onto the domain
of computer generated response. In this paper I will
take my point of departure in the direct gestural
modeling of sound. Human interaction with
traditional instruments shows that musical
expression is performed across several parameters of
sound [see, e.g., Rowe 1993]. Traditionally,
parameters such as dynamics, duration and pitch
have been computer-controllable, but expressiveness
requires also taking advantage of, e.g., differentiated
timbral variations within the single tone.
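As an illustrative sketch (not from the paper), the contrast can be made concrete: a conventional MIDI note fixes pitch, velocity and duration at note-on, whereas expressive timbral control requires a continuous stream of values within the single tone. The function names and the cutoff range below are hypothetical, chosen only to illustrate the distinction.

```python
# Hypothetical sketch: discrete note parameters vs. continuous
# timbral control within one tone. Names and ranges are illustrative.

def midi_note(pitch, velocity, duration):
    """A conventional MIDI note: all parameters fixed at onset."""
    return {"pitch": pitch, "velocity": velocity, "duration": duration}

def timbral_trajectory(gesture_samples, lo=200.0, hi=8000.0):
    """Map normalized gesture samples (0..1), taken during one
    sustained tone, onto a continuously varying filter cutoff (Hz)."""
    return [lo + g * (hi - lo) for g in gesture_samples]

note = midi_note(60, 100, 1.0)                  # one event, three numbers
cutoffs = timbral_trajectory([0.0, 0.5, 1.0])   # many values within the tone
```

The point of the sketch is that the second function consumes an open-ended stream of gesture data, which a note-event representation alone cannot carry.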
Norman and Laurel have described the importance of
relying on inter-human concepts to create a
computational metaphor in the design of
the link between action and response in
human/computer interaction. Thus, the mapping
could be conceptualized as a propositional
communication between human beings, the user and
the designer [Bødker 1991]. The task to be
accomplished in the communication can be
considered as how to make explicit "that which has
been left implicit" [Schank & Abelson 1977]. The
prevailing model of interfaces has relied on the
assumption that coordination between the human
gesture and the computer's response can be
represented as a solely conceptual model. The nature
of interaction has thus focused on mental tasks,
entirely ignoring the differentiated sensing of the
human body. [Johnson 1987] emphasized the
significance of a non-propositional component in
human cognition, a component closely connected to
the human body.
2 Representing Direct Physical Action
In the following I will outline the representational
level of interfaces. The act of playing music on an
instrument involves a combination of cognitive as
well as physical actions. The act of sound
production is thus represented to the performer in
several domains, involving notation, style and
physical action. The interface involves the domain
of physical action in that it constitutes:
" A physical representation to a performer of
agencies that enables him to act upon a soundgenerating computer application.
The performer's actions are captured by a sensing
mechanism, and the software maps the performer's
physical and/or sonic gestures onto events
generated by the computer. The correspondence
between the performer's physical actions and the
resulting sound thus requires a conceptualization
between cause and effect:
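The sensing-mapping-response chain described above might be sketched as follows. This is a hypothetical illustration, not the author's implementation; the normalization range, parameter names and mapping coefficients are all assumptions. It shows one sensed gesture driving several sound parameters at once, so that cause (the physical action) and effect (the sound) remain perceptibly linked.

```python
# Hypothetical sketch of the sensing -> mapping -> response chain.
# All names, ranges and coefficients are illustrative assumptions.

def sense(raw):
    """Sensing mechanism: normalize a raw 0..127 sensor reading to 0..1."""
    return max(0.0, min(1.0, raw / 127.0))

def map_gesture(g):
    """Map one sensed gesture onto several sound parameters at once,
    so a single physical cause yields a coordinated sonic effect."""
    return {
        "amplitude": g,                  # louder with greater effort
        "brightness": 0.2 + 0.8 * g,     # timbre co-varies with dynamics
        "vibrato_rate": 4.0 + 2.0 * g,   # Hz, widens with intensity
    }

params = map_gesture(sense(64))
```

Mapping one gesture to several parameters in this way is one reading of the paper's point that expression is performed across several parameters of sound simultaneously.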
ICMC Proceedings 1996