COMPUTER BASED EXPERIMENTAL RESEARCH IN MUSIC PERCEPTION AND COGNITION AT THE CENTER FOR MUSIC RESEARCH, FLORIDA STATE UNIVERSITY
Jack A. Taylor, 214 KMU, Center for Music Research, Florida State University, Tallahassee, FL 32306

The Center for Music Research (CMR) at Florida State University is committed to research in computer-based music instruction, sound synthesis/analysis, and music psychology research, with an emphasis on music perception/cognition. The latter has shown exceptional growth nationally in recent years, and in keeping with this trend, CMR has developed strategies that include the use of three research models (psychoacoustic, psychological, and "true music") in the experimental study of functional music (composition, music listening, music performance, and music as therapy). These studies are implemented with six computer-based research systems, some of which were developed by CMR staff. The six systems are described and illustrated with descriptions of four experiments currently underway at CMR.

Description of the Center for Music Research. CMR was founded in 1979 as an official Florida State University research unit. It employs six full-time and two half-time staff members, all of whom are skilled musicians and computer researchers. Equipment and facilities include a computer classroom containing a number of Atari 1040 ST computers interfaced to two Sun file servers (running UNIX 4.2 BSD), several laboratories (hardware, software, sound synthesis/analysis), a music perception/cognition laboratory suite, and two administrative offices. CMR also has several MS-DOS 386 machines, two Macintoshes, a NeXT computer, a Platypus computer, a variety of hard copy devices (plotter, laser printer), music input instruments (MIDI keyboards, wind controllers, frequency extractors, Yamaha Disklavier and MIE keyboard system), the usual audio and video equipment (including laserdisc systems), and a number of CMR-built devices such as a MIDI harp, several Hyasynth 18-voice FM synthesizers, and other custom interfaces.

CMR Missions. Although the missions of CMR are broadly based, an early decision was made to emphasize three areas in music for the ensuing several years: computer-based music instruction, sound synthesis, and computer-based music psychology research. These endeavors are important not only for their inherent value, but also because they provide technical support for the School of Music, which has a faculty of 75 and a music major enrollment of 850. The mating of CMR to the School of Music is a natural one, in that CMR staff offers computer services, instruction, and research technology to faculty and students, thus supporting and advancing the School of Music's role as a national leader in the music world. Some of the fruits of this union are (1) the Computers in Music Certificate Program, consisting of a series of six courses in computer music technology and programming; (2) the Postdoctoral Program in Music Research, which allows visiting researchers to pursue projects of their choice in CMR laboratories; (3) special workshops in computer music applications for faculty; and (4) the tooling of custom hardware interfaces for faculty and graduate student research projects.

The Rise of Music Psychology Research. The commitment to computer-based instruction, sound synthesis, and music psychology research at CMR has remained strong over the years, but recently the latter has commanded special attention.
This can be traced to the work of the seven very active music psychology faculty researchers in CMR and in the School of Music. In recent years, these researchers and their students have been especially prolific, and the volume and quality of their work have attracted both local and national attention. Of course, the rise of music psychology research is not unique to CMR. It is a young science--many researchers associate its birth with Carl Seashore's studies of musical aptitude and ability at the University of Iowa (Seashore, 1919)--and presently it is enjoying a period of substantial national growth, particularly in the cognitive areas. This phenomenon correlates with a national trend toward cognitive science in general, with its special emphases on neurology, perception, and cognition. Thus it is no accident that cognitive science has become an important research area for CMR.

Cognitive Research in Music. In order to provide a basis for discussing CMR's research, the three areas of cognitive science (sensation, perception, and cognition) are briefly described here. Music cognition in general involves a natural progression that begins with the simple reception of musical stimuli and ends with the higher-level processes of forming musical concepts and remembering them.

Sensation is the first level, since it involves both the neural coding of sound when it reaches the inner ear and the coding or recoding by nuclei as the messages travel along the auditory nerve. Perception is the next level and can be defined as the decoding of the neural signals when they reach the brain's cortex. Perception generally is considered a conscious act, whereas sensation consists mainly (but not totally) of autonomic responses to the musical stimuli. Cognition is the final destination of musical stimuli (although some researchers argue for an intermediate phenomenon called perceptual learning, where the new neural codes are matched with old ones, the result being confirmation or alteration of the old codes). It is here that incoming musical data are compared (discriminated) against what is already known. Concepts are formed or altered, and both short- and long-term memory play significant roles in developing concepts and learning strategies.

Music Perception and Cognition Research at CMR. In CMR's early work with music perception and cognition, we simply developed some ideas for experiments, determined their feasibility, then ran them, analyzed the results, and wrote a report on each study. But as the years went by--and as our research potential broadened in terms of both cognitive science and technology--it became clear that a more systematic approach to research was essential. Specifically, it was evident that we needed a rationale for the kinds of research that we would undertake--in other words, a research strategy that specified the dimensions of our research efforts. Given such a strategy, we would be able to use CMR's resources to create computer-based systems that best meet the needs of these (and, we hope, future) projects. Of course, we needed to face the problems of research administration (when and where to run the experiments, how to find and secure subjects of various descriptions and ages, CMR staff responsibilities for each experiment, etc.), but that is an issue that will not be discussed in this paper. Instead, it is more relevant to describe CMR's research strategy, its computer-based research systems, and then a few of the actual research projects that have stemmed from our particular approach to music perception and cognition research.

CMR's Research Strategy. Part of developing a strategy for any endeavor is "knowing one's field." Music has two broad dimensions: functional, which includes the activities of composing, performing, music listening, and using music as therapy; and theoretical, the analysis of musical structures and the examination of the music phenomenon in historical, sociological, and anthropological contexts. It is true that all these subdimensions have their psychological components, but the functional areas represent the essence of "music making" and in CMR's strategic framework have priority for research. Given the functional subdimensions, the next problem is deciding which research designs are appropriate. The answer is not straightforward, because research in music perception and cognition is a young science and real musical models have not yet evolved. Music researchers have been using the psychoacoustic and psychological models, but both can be criticized for their limited applicability to musical designs.
For example, the psychoacoustic model--where thresholds and just-noticeable differences in pitch and loudness are measured and compared to their physical correlates of frequency (Hz) and intensity (dB)--has been under fire for its focus on what many consider to be isolated sounds and not "real" music. Furthermore, the standard psychological perception (and memory) model--which alternates "trials" of paired musical events (such as two melodies) and then asks subjects to detect in which way(s) the second member of the pair differs from the first (or perhaps simply whether the second member is different from the first)--also has been criticized as being rather distant from real musical listening experiences. But there is another model--one that still is evolving and shows promise as a "true music" design. It has the advantage of allowing the researcher to observe subjects in actual musical settings (that is, listening to music, performing, or composing) without artificially interrupting their musical involvement for data collection. This model owes a great deal to computing technology, since its evolution would not be possible without using computers to control the experimental environment and to collect and analyze the data. These three models are very different in both design and data analysis, but at CMR we believe that all three are relevant to music perception and cognition research, and we have gone to some trouble to apply them appropriately in our work. CMR's general strategy, then, is to design and implement experiments in music perception and cognition, applying (at least) the three experimental models described above in researching the music/human interactions of composing music, performing it, listening to it, and using music as therapy. In order to accomplish these goals, we have acquired some useful hardware and also have developed special computer-based research systems, described below.
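To make concrete what the "true music" model asks of the technology, the following is a minimal Python sketch of continuous, timestamped data collection while music plays, so the subject's listening is never interrupted. It is illustrative only, not CMR's actual software; read_slider() is a hypothetical stand-in for whatever response device is being polled.

```python
# Minimal sketch of "true music" continuous data collection: while the music
# plays, a response device is polled at a fixed rate and every reading is
# timestamped against the playback clock.  read_slider() is a hypothetical
# stand-in for the real device driver.
import random
import statistics
import time

def read_slider() -> float:
    """Hypothetical device read; returns a position between 0.0 and 1.0."""
    return random.random()

def collect_responses(duration_s: float, rate_hz: float = 10.0):
    """Poll the response device for duration_s seconds at rate_hz."""
    samples = []                      # (elapsed seconds, slider value) pairs
    interval = 1.0 / rate_hz
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if elapsed >= duration_s:
            break
        samples.append((elapsed, read_slider()))
        time.sleep(interval)
    return samples

if __name__ == "__main__":
    data = collect_responses(duration_s=2.0)   # stand-in for a full piece
    values = [v for _, v in data]
    print(f"{len(data)} samples, mean = {statistics.mean(values):.3f}, "
          f"range = {max(values) - min(values):.3f}")
```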

Computer Based Music Perception and Cognition Research Systems at CMR. Currently six systems are available. Three of them are complete experimental research studios in the sense that they are used in creating stimuli, digitally storing them, presenting them to the subjects, and controlling the experimental environment in general: (1) a NeXT computer with a sampler interface (Digital Ears), which, because of its friendly interface and DSP hardware/software, is almost an "all-in-one" music perception/cognition laboratory instrument; (2) an Atari 1040ST system, which CMR staff member Anders Franzen programmed to be an "intelligent timer/controller," meaning that with its user-friendly interface the researcher can easily write simple or complex experimental sequences or timings (including control of MIDI devices and other machines, such as slide projectors); and (3) another Atari 1040ST-controlled MIDI system (developed by James Falzone) with sequencing hardware and software that allows real-time control of up to 24 music tracks and is interfaced with subject response stations (Atari 1040 STs) that allow keyset responses (or responses using the mouse) to be synchronized with the music tracks. The remaining three systems are music analysis devices: (1) a MIDI-based frequency extraction unit (Roland VP-70) that calculates the fundamental frequencies (at a rate of up to 200 samples per second) and relative intensities of single-line live or recorded music (applications software written by CMR staff member David Madole); (2) a "Continuous Response Digital Interface" (CRDI), which is a box (A/D converter) with a slider switch that allows the subject to input a continuous response to an MS-DOS computer (which has software that formats the data and performs simple statistical analyses on them) by moving the slider--in response to music, for example (created by CMR staff members Eitaro Kawaguchi and Anders Franzen); and (3) the Cro-Magnon system, a sophisticated sound analysis and synthesis package running on a Macintosh II computer interfaced to a Platypus (PPS) music synthesizer/computer, developed by Ralph David Hill and described elsewhere in these Proceedings (the PPS was created at the Computer-based Education Research Laboratory at the University of Illinois by Lippold Haken and Kurt Hebel). Space does not permit illustrating all six systems with actual experiments. However, four examples (currently underway at CMR) are given below. Three are studies in the area of music listening, and the fourth is a study of the musical performer. These experiments are identified according to the three research models described earlier: psychoacoustic, psychological, and "true music."

Examples of Computer Based Research in Music Perception and Cognition at CMR. One of our most recent psychoacoustic studies (a dissertation project by Kim Walls) examines accent, a musical phenomenon that has received scant attention from researchers. Kim is using a sampled snare drum sound and is asking subjects to detect just-noticeable differences in pairs of snare drum sounds. In addition, she is asking her subjects to judge the accent (intensity) levels of those same sounds in non-patterned rhythmic sequences. Little is known about the perception of accents in music; that is, the degrees of intensity difference available to the individual as he or she listens to music. Kim's research is the first in a series designed to answer this question.
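The trial logic of such a paired-comparison design can be sketched as follows. This is an illustration only, assuming pairs of snare hits that either match or differ by a small number of decibels; play_pair() and get_response() are hypothetical placeholders for the actual playback and response-collection routines, and the dB steps shown are not the values used in the study.

```python
# Sketch of scoring same/different intensity (accent) trials: on "different"
# trials the two snare hits differ by delta_db, and subjects' judgements are
# tallied per difference level.  play_pair() and get_response() stand in for
# the real playback and response-collection code.
import random
from collections import defaultdict

def play_pair(base_db: float, delta_db: float) -> None:
    """Hypothetical playback of two snare hits, the second delta_db louder."""
    pass

def get_response() -> str:
    """Hypothetical subject response: 'same' or 'different'."""
    return random.choice(["same", "different"])

def run_block(deltas_db, trials_per_delta=20, base_db=70.0):
    correct = defaultdict(int)
    for delta in deltas_db:
        for _ in range(trials_per_delta):
            is_different = random.random() < 0.5      # half the trials differ
            play_pair(base_db, delta if is_different else 0.0)
            answer = get_response()
            if (answer == "different") == is_different:
                correct[delta] += 1
    return {d: correct[d] / trials_per_delta for d in deltas_db}

if __name__ == "__main__":
    print(run_block(deltas_db=[0.5, 1.0, 2.0]))  # proportion correct per dB step
```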
She developed the musical stimuli on a NeXT machine, using a snare drum sound sampled through the Digital Ears interface; in fact, the entire experiment will be stored on hard disk and presented to the subjects in real time from the disk through a pair of calibrated headphones. Subjects will be tested for general auditory acuity in advance of the experiment itself (which will begin in early fall).

The psychological model has proved to be a valuable paradigm for seeking answers to some basic perceptual and cognitive questions in music listening. Our most recent research (Taylor, Walls, and Barry) deals with the perception of tonal motion; specifically, the well-known (but as yet untested) theory that each scale tone perceptually "leans" or "points" to another scale tone (e.g., ti moves up to do, sol moves up or down to do). Over 500 subjects were tested in six experiments that required them to compare 7-tone melody pairs--the second melody being identical to the first except for a slightly mistuned scale degree. Subjects had to identify the mistuned note, or scale degree, in the second melody, and the correctness of their responses was taken as a measure of the pointing characteristic of the scale degree (that is, to which scale degree the tone pointed, and also the "strength" of its pointing). The tone immediately following the mistuned scale degree (the "pointed to" tone) was varied according to certain strategies, and the remaining tones of the melodic context surrounding the mistuned tone were altered in pitch across each of the six experiments. Preliminary results show that re, mi, and to some extent ti "point" as predicted by the theoretical literature (that is, re and mi move down to do, whereas ti moves up). But perhaps more interestingly, the pointing strength of all scale degrees was influenced to a greater or lesser extent by their surrounding contexts--with the greatest influence coming from the tones immediately preceding the scale degree in question.
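For illustration, a comparison melody with one mistuned scale degree can be generated from equal-tempered frequencies and a cents offset along the lines of the sketch below. The scale, tonic, and 20-cent mistuning are assumptions chosen for the example, not the values used in the experiments, and the sketch is not a description of the actual stimulus-preparation software.

```python
# Sketch of generating a (standard, comparison) melody pair in which one scale
# degree of the comparison melody is mistuned by a given number of cents.
MAJOR_SCALE_SEMITONES = {"do": 0, "re": 2, "mi": 4, "fa": 5, "sol": 7, "la": 9, "ti": 11}

def degree_to_hz(degree: str, tonic_hz: float = 261.63) -> float:
    """Equal-tempered frequency of a scale degree above the tonic."""
    return tonic_hz * 2 ** (MAJOR_SCALE_SEMITONES[degree] / 12)

def mistune(freq_hz: float, cents: float) -> float:
    """Shift a frequency by the given number of cents (100 cents = 1 semitone)."""
    return freq_hz * 2 ** (cents / 1200)

def melody_pair(degrees, mistuned_index: int, cents: float):
    """Return (standard, comparison) frequency lists; they differ at one tone."""
    standard = [degree_to_hz(d) for d in degrees]
    comparison = list(standard)
    comparison[mistuned_index] = mistune(standard[mistuned_index], cents)
    return standard, comparison

if __name__ == "__main__":
    degrees = ["do", "re", "mi", "fa", "sol", "ti", "do"]   # a 7-tone melody
    std, comp = melody_pair(degrees, mistuned_index=5, cents=20.0)
    print([round(f, 2) for f in std])
    print([round(f, 2) for f in comp])
```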

Although this experiment was not controlled by a computer, the melodies were synthesized, and the data analysis was performed on an MS-DOS 386 machine using a powerful statistics and graphics package.

The "true music" model described earlier perhaps has the most interesting and far-reaching implications for music research. Using the Atari 1040ST MIDI system (with sequencing software) described earlier, we (Taylor, Walls, and Falzone) wanted to study the cultural phenomenon of subliminal perception. There is controversy regarding the influence of a subliminal message; that is, if a message is not heard consciously by the listener, can it still be "felt" by the listener at a deeper level--and somehow influence his/her overt and/or covert actions? We assigned 90 music majors to two experimental groups and one control group, then asked them to listen to a 4-minute, 16-track composition. They had to move a cursor (using a mouse) either left or right from a center position on a computer screen in order to indicate the tempo of the music they were hearing (mouse movement was synchronized with the music). If they believed the tempo was increasing or decreasing, they were to move the cursor right or left, respectively, in proportion to the tempo change. Otherwise, they were not to move the cursor. In both experimental groups, a cassette-taped voice was mixed with white noise, and this combination was embedded in the music at a "subliminal" level; that is, below the conscious auditory threshold of the subjects (while they were listening to music, of course). The music was played at a constant tempo, but in one group the voice repeatedly told its listeners, "The music is speeding up; it's going faster," whereas the voice in the other experimental group cautioned the listener that "The music is getting slower; it's slowing down." No subliminal message was given to the control group. This study is still underway, but preliminary data show that the subliminal messages have no effect on the perception of tempo changes. However, we intend to continue this line of research, since various subject populations and various ratios of music, voice, and white noise in the mix need to be tested. The MIDI system allows us to accurately control (and measure) these power ratios and to synchronize subject responses with the synthesized music.
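The level arithmetic behind such a ratio can be sketched as follows: given the RMS level of the music and a target number of dB below it, compute the gain to apply to the voice-plus-noise signal. This is a minimal sketch of the decibel-to-amplitude conversion only; the specific RMS values and the 30 dB figure are illustrative assumptions, not the levels used in the study.

```python
# Sketch of the decibel arithmetic for embedding a message below the music:
# convert a target dB difference to an amplitude ratio (20*log10 convention)
# and scale the message so its RMS sits that far under the music's RMS.
import math

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level difference in dB to an amplitude (not power) ratio."""
    return 10 ** (db / 20)

def embed_gain(music_rms: float, message_rms: float, db_below_music: float) -> float:
    """Gain for the message so its RMS is db_below_music dB under the music."""
    target_rms = music_rms * db_to_amplitude_ratio(-db_below_music)
    return target_rms / message_rms

if __name__ == "__main__":
    gain = embed_gain(music_rms=0.25, message_rms=0.10, db_below_music=30.0)
    achieved_db = 20 * math.log10(0.25 / (gain * 0.10))
    print(f"gain = {gain:.4f}; message sits {achieved_db:.1f} dB below the music")
```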
The final example is another "true music" design, because its environment is reasonably natural in terms of the musical setting, and also because data collection does not interfere with the subject's musical involvement. In this study, we (McArthur, Taylor) are attempting to understand both the perceptual and cognitive processing of music notation as keyboard musicians attempt to play (sightread) new music. It is known that musicians "buffer ahead" when playing music from notation; that is, they rapidly glance back and forth between the music being performed and the about-to-be-performed music, storing upcoming notes in a kind of buffer memory. But few details exist about how this music reading system works. In the experiment, subjects play unfamiliar piano compositions on a Yamaha KX-88 velocity-sensitive keyboard from scores projected on a back-lit screen located at normal reading level on a piano. They hear two measures of metronome clicks, then another measure of clicks at a different pitch level. The latter signals the subject to begin playing on the next measure. At a predetermined place in the performance, the projector turns off (the music disappears), and the subject must continue to play as long as possible. The music is timed to disappear at four strategic places: in the middle of a tonal phrase, in the middle of an ambiguous (atonal) phrase, at the end of a tonal phrase, and at the end of an ambiguous (atonal) phrase. We expect that the number and accuracy of the tones subjects play after the music disappears will be linked to these variables. The Atari 1040ST timing/control system runs the experiment from a program that controls and times the input and output of MIDI data. The slide projector, keyboard, and synthesizer (metronome) are interfaced to the Atari with a MIDI merge box. Perhaps the nicest feature of this system is the user interface: it is easy for the researcher to interact with the program in designing the timing sequences and control systems for almost any experiment in music perception and cognition.

The Future. It should be mentioned that at CMR we do not consider our computer-based research systems complete. They are evolving--and in many ways we think they are just primitive tools compared to what they will become. We also anticipate the creation of new tools as warranted by our research goals and experimental designs in music. The marriage of technology and music perception/cognition research is a happy one, and we expect great progress in these two young sciences during the next several years.

Seashore, C. E. (1919). The psychology of musical talent. New York: Silver-Burdett.