next to the images. In the case of 16mm film, the small size
requires the use of single-sprocket film, with the optical
soundtrack taking the place of the second set of sprocket
holes. The projector sonifies this soundtrack by means of
an optical sound head, shown in Figure 1. An exciter lamp
shines through the film onto a photocell, with the light constrained to a thin band by narrow horizontal slits on either side of the film. As the film passes across this thin band of light, the photocell produces a fluctuating voltage, which is processed and output as the audio signal [3]. Due to the need for continuous film speed when
producing sound, as opposed to the stopping and starting
required when projecting images, the optical sound pickup
in a 16mm projector is placed 26 frames ahead of the lens.
Thus, assuming a playback rate of 24 frames per second,
the audio on any point of an optical soundtrack will be
heard a little over a second before its adjacent image is
seen.
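As a back-of-the-envelope check on the figures above (the 26-frame advance and 24 fps playback), the audio lead is simply the advance divided by the frame rate; the lines below are only an illustrative calculation, not part of any projection workflow.

SOUND_ADVANCE_FRAMES = 26   # pickup sits 26 frames ahead of the lens
FRAME_RATE_FPS = 24         # standard sound-speed playback

audio_lead_s = SOUND_ADVANCE_FRAMES / FRAME_RATE_FPS
print(f"Audio leads its adjacent image by {audio_lead_s:.3f} s")   # ~1.083 s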
The use of horizontal slits and a single photocell within the optical pickup means that a soundtrack can be represented on film in a variety of ways, provided that the average lightness along the horizontal axis is the same at any given point in time. This flexibility has given rise to a number
of optical soundtrack formats and applications, several of
which are shown in Figure 2. The conventional kinds are
variable area (2a) and variable density (2b) soundtracks,
with the more common variable area representing its information by varying the width of a white waveform on a black background, and variable density translating its values into differing shades of gray. The other two examples
show possible applications of the optical soundtrack for
image sonification: the first (2c) shows how whole frames
might be sonified, though they must be horizontally scaled
to fit into the smaller area and offset by 26 frames if they
are to be in sync with the original images. The second example (2d) makes use of a 16mm widescreen format known as Super16, which extends the image into the area occupied by the optical soundtrack. While this approach allows for the sonification of only a small part of the image and produces a 26-frame offset between each frame
and its sonified output, several experimental films, such as
Roger Beebe's TB TX DANCE, have exploited these idiosyncrasies to great effect.
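To make the sonification principle concrete, the sketch below models an optical sound head in the simplest possible terms: each frame is a grayscale array, the mean lightness of each horizontal line becomes one audio sample, and a second helper simulates the 26-frame print offset needed for sync. The function names, the assumption that scanlines run in the direction of film travel, and the 0.0-1.0 lightness scale are illustrative choices, not a reconstruction of any filmmaker's actual workflow.

import numpy as np

def sonify_frames(frames, frame_rate=24):
    """Approximate an optical sound head: the mean lightness of each
    horizontal line in the soundtrack area becomes one audio sample.
    frames: sequence of 2-D grayscale arrays with values in 0.0-1.0."""
    samples = [frame.mean(axis=1) for frame in frames]  # one value per scanline
    audio = np.concatenate(samples)
    audio -= audio.mean()              # remove the constant-lightness (DC) component
    peak = np.abs(audio).max()
    if peak > 0:
        audio /= peak                  # normalize, roughly as the projector's audio chain would
    sample_rate = frame_rate * frames[0].shape[0]  # scanlines read per second
    return audio, sample_rate

def delay_for_sync(audio, sample_rate, frame_rate=24, advance_frames=26):
    """Simulate printing the soundtrack 26 frames later than its source
    frames, so each frame's sound is heard as that frame is seen."""
    pad = int(round(sample_rate * advance_frames / frame_rate))
    return np.concatenate([np.zeros(pad), audio])

The resulting array could then be auditioned with any standard audio library (for example scipy.io.wavfile.write, after scaling to 16-bit integers), which makes it easy to hear how much of an image's detail actually survives the horizontal averaging.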
2.2 Historical experiments in optical sound
The visual depiction of sound on the physical medium of
film opened up a variety of new sound editing and synthesis possibilities. Many of the earliest experiments with optical sound revolved around the manipulation of recorded
sounds using editing techniques that had already been developed for the creation of motion pictures and that the medium now made applicable to sound. Sounds could now be easily studied and
modified in a number of ways such as cutting, splicing,
and overlaying, all of which would be used years later by
pioneering electronic musicians working with tape [1].
Animators quickly realized the potential of the optical
soundtrack as a means of applying their skills to the creation of novel sounds. Early animated sound experiments
in the 1930s included the research at Leningrad's Scientific
Experimental Film Institute, as well as Oskar Fischinger's
work documenting the links between the aural and visual aspects of optical sound [1].

Figure 2. Examples of optical soundtracks: (a) variable area, (b) variable density, (c) a soundtrack made from camera images, (d) Super16 images extending onto the soundtrack area.

Filmmakers found that
by varying the positioning, shape, and exposure of sequenced
abstract patterns, they could predictably control the pitches
and amplitudes produced as well as effect changes in the
resulting timbres [2]. By the 1970s, the Scottish-Canadian animator Norman McLaren had elevated the practice of animating sound to new technical and artistic heights, developing
a set of templates for rapid production of several waveforms at different pitches and a variety of masks which
functioned as envelopes [4]. McLaren's 1971 piece Synchromy highlights the cross-modal nature of the process,
juxtaposing the optical patterns with their sonic results to
form a psychedelic audiovisual spectacle.
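The predictable control over pitch that these animators exploited follows from the geometry of the medium: at 24 frames per second, 16mm film moves past the slit at roughly 183 mm per second (24 frames x 7.62 mm frame pitch), so a repeating pattern with spatial period p millimetres reads back at roughly 183/p Hz. The sketch below is an illustrative calculation only, not a reconstruction of McLaren's templates; the names and sampling density are assumptions.

import numpy as np

FILM_SPEED_MM_S = 24 * 7.62   # 24 fps x 7.62 mm 16mm frame pitch, ~183 mm/s

def stripe_period_mm(pitch_hz):
    """Spatial period along the film of a repeating pattern that would
    sound at roughly pitch_hz when run past the slit at 24 fps."""
    return FILM_SPEED_MM_S / pitch_hz

def drawn_tone(pitch_hz, duration_s, samples_per_mm=40):
    """A square-wave lightness profile sampled along the film's length,
    standing in for alternating dark and light bands drawn by hand."""
    length_mm = FILM_SPEED_MM_S * duration_s
    position_mm = np.arange(int(length_mm * samples_per_mm)) / samples_per_mm
    period = stripe_period_mm(pitch_hz)
    return ((position_mm % period) < period / 2).astype(float)

print(f"A 440 Hz tone needs a stripe roughly every {stripe_period_mm(440):.2f} mm")  # ~0.42 mm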
While Synchromy hints at the transmodal possibilities of
film and optical sound, filmmakers such as Guy Sherwin
and Lis Rhodes pushed the process to its limits by using the
same source material to create the image and sound. Their
works demonstrated and exploited the fact that anything
put on film could be sonified if placed on the optical soundtrack, from the gritty images in Sherwin's Musical Stairs to
the morphing abstract animations in Rhodes' Dresden Dynamo. Their work also reveals the limits of the process: all
images can be sonified, but not all information contained in
an image is communicated equally. Dresden Dynamo, from 1971, is a particularly powerful exposition of the possibilities and limits of this technique, its morphing abstract patterns allowing us to see gradual changes
in timbre, pitch, and amplitude. As the patterns evolve,
we also encounter the boundaries of the sonification process: the same pattern that produces a steady pitch at one
angle fades to nothingness as it rotates, only to gradually
reemerge as it comes back into alignment.
2.3 Relationship to Electronic Music
Many sound-on-film experiments paralleled and in some
cases predated similar technical developments in electronic
music. The early film sound montages naturally evoke
comparisons to the approaches later found in tape music