Users interact with a visual representation of the string
as well as its equation of motion. Gestural input plays
the string, driving an associated plucking model whose
equation of motion is also available. Through playing the
string and observing and changing the associated equations, users can build intuition about how the models work.
Models may be connected to one another and to external
sources such as microphones. Analysis tools allow closer
investigation of the behavior of the strings and of the variables in the associated equations.
Nothing in this technique prevents it from being used with other physical models such as bars, plates, tubes, or membranes. We have selected the string as a well-understood, relatively straightforward starting point.
In the remainder of this paper, we discuss several aspects of the Visible String model. The discussion is in the
context of our proof of concept implementation, which
runs in real time on standard Macintosh hardware.
2. BACKGROUND
As mentioned above, the visual Max/MSP and PD languages provide many appealing ways to interact directly
with unit generator sound models. The Reactable [15] uses
similar kinds of objects but in a different, more concretely
graphical context.
The Alternate Reality Kit [9] provided users with direct ways to interact with physically simulated objects, including ways to modify the laws of physics on which the
simulation was based. However, the ARK did not include
the kinds of continuous models useful in musical simulation.
Schroeder et al. [18] discussed a multimodal system
for interacting with physically based models. That system had gestural interaction similar to the Visible String,
but did not provide any way to work with the underlying
equations or to connect models visually.
Our system uses finite difference time domain models,
which are discussed at length in Bilbao's recent book [2].
Also of interest are his discussions of prepared piano synthesis [3] and a modular percussion environment [1] with
an implementation in Matlab.
3. STRING MODEL
We use a finite difference time domain (FDTD) representation for our string model. FDTD representations require significant computational resources, but they allow input and output at any spatial point, or at many points simultaneously. This makes them well suited to our needs: displaying animations of the strings' state and making connections between models.
The model used in this paper is of a simple string with basic damping. The string has the following equation of motion:

\ddot{y} = c^2 y'' - 2 b_1 \dot{y} + f, \qquad (1)

where 0 < x < L, with L the length of the string. The coefficient c represents the speed of sound on the string and b_1 controls damping across all frequencies. The function f represents contributions from external forces. In the notation used here, \ddot{y} denotes the second time derivative of y and y'' its second spatial derivative.
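To make the numerical scheme concrete, the following minimal sketch advances equation (1) with an explicit centred-difference update; the parameter values, variable names, and boundary handling are illustrative assumptions and are not taken from our implementation.

    import numpy as np

    # Minimal FDTD sketch for eq. (1): y_tt = c^2 y'' - 2*b1*y_t + f.
    # All names and parameter values here are illustrative only.
    SR = 44100.0          # audio sample rate
    k  = 1.0 / SR         # time step
    c  = 200.0            # wave speed on the string
    L  = 1.0              # string length
    b1 = 1.5              # frequency-independent damping
    h  = c * k            # grid spacing at the stability limit (lambda = c*k/h = 1)
    N  = int(L / h)       # number of grid intervals
    lam2 = (c * k / h) ** 2

    y      = np.zeros(N + 1)   # displacement at time step n
    y_prev = np.zeros(N + 1)   # displacement at time step n - 1

    def step(f_ext):
        """Advance the string one time step; f_ext holds the external force profile."""
        global y, y_prev
        y_next = np.zeros_like(y)
        # centred differences in time and space; endpoints stay fixed at zero
        y_next[1:-1] = (2.0 * y[1:-1]
                        - (1.0 - b1 * k) * y_prev[1:-1]
                        + lam2 * (y[2:] - 2.0 * y[1:-1] + y[:-2])
                        + k * k * f_ext[1:-1]) / (1.0 + b1 * k)
        y_prev, y = y, y_next
        return y

An output sample can then be read at any grid point (for example y[int(0.3 * N)]), which is the property that makes FDTD schemes convenient for animating the string's state and for making connections at arbitrary positions.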
A more complex model might involve frequency-dependent damping and stiffness terms:

\ddot{y} = c^2 y'' - 2 b_1 \dot{y} + b_3 \dot{y}'' - \kappa^2 y'''' + f. \qquad (2)
Such models could be used with the same techniques described here. Schroeder et al. [8] and Sosnick and Hu [10]
report the use of computationally intense models, including stiff strings and two-dimensional plates, in real-time
systems.
In the examples here, we use a simple raised-cosine
model for plucking. The plucking force term is given by
f(x_0, t) = -\frac{A}{2}\left(\cos\left(2\pi \frac{t - t_0}{d}\right) - 1\right), \qquad t_0 < t < t_0 + d, \qquad (3)

where A is the amplitude of the pluck and d its duration; the pluck is applied at the fraction x_0 along the string and begins at time t_0.
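A direct transcription of equation (3) is straightforward; the sketch below (the function name and signature are our own, hypothetical choices) returns the force contribution at time t:

    import math

    def pluck_force(A, d, t0, t):
        # Raised-cosine pluck of eq. (3): amplitude A, duration d, onset t0.
        if t0 < t < t0 + d:
            return -0.5 * A * (math.cos(2.0 * math.pi * (t - t0) / d) - 1.0)
        return 0.0

In a simulation like the FDTD sketch above, this value would be accumulated into f_ext at the grid point nearest x_0 for the duration of the pluck.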
As with the string model, more complex models could also be used. The use of models that include additional damping, such as the one given by Cuzzucoli and Lombardo [4], raises interesting questions about how best to implement modularity in the underlying simulation system. However, these questions do not affect the interaction issues we discuss here.
The string shown in Figure 1 has just been plucked. Dragging the mouse across a string plucks it at the corresponding location, with an amplitude proportional to the speed at which the mouse was moving. The string responds with both sound and animation. The string's length (and position on the screen) may be changed by dragging its endpoints. When the string's length changes, its pitch changes correspondingly (longer strings have lower pitches, all other things being equal).
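For the lightly damped string of equation (1), this behavior follows the familiar ideal-string relation

f_0 \approx \frac{c}{2L},

so, for example, doubling the length lowers the pitch by roughly an octave; the damping terms shift this only slightly.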
"Inspecting" the string shows its equation of motion
and some related quantities. Clicking on a variable (speed
of sound c, length L, fundamental frequency f_0, or damping constant b_1) in the equation shows its value; the value
may be changed by dragging on it or by clicking and typing in a new value. The string responds immediately to
any changes.
This extends to the visible length of the string; changing the length L changes the distance the string stretches
on the screen. The string and its equation are linked in
the other direction as well; changing the string's length
changes L and the related fundamental frequency f_0.
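As a rough illustration of how an inspector edit or endpoint drag might propagate to the running simulation (a hedged sketch building on the earlier FDTD fragment, not a description of our actual implementation), changing L re-derives the grid and the displayed fundamental:

    def set_length(new_L):
        # Hypothetical helper: re-derive grid-dependent quantities when the
        # user drags an endpoint or edits L in the inspector.
        global L, N, y, y_prev
        L = new_L
        N = int(L / h)            # longer string -> more grid points
        y = np.zeros(N + 1)       # a real implementation would resample the
        y_prev = np.zeros(N + 1)  # existing state rather than clearing it
        return c / (2.0 * L)      # updated (approximate) fundamental f_0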
The plucking model attached to the mouse may be inspected as well, as shown in Figure 2. The variables of
the model may be changed in the inspector; plucks may
be triggered from the inspector as well. Plucking a string
through the gestural interface changes the position and