Who's Playing? The Computer's Role in Musical Performance
Alan Belkin, Faculté de Musique, Université de Montréal, C.P. 6128, succursale A, Montréal, Québec, H3C 3J7.

The increasing sophistication of computer intervention in live performance leads to questions about the nature of performance itself. Traditionally a performer simply transmitted a composer's work to an audience, acting as an intermediary. Computer intervention can both extend the role of the performer and distance him from the audience, since he can control an enormous variety of sound in ways that can become increasingly indirect. This paper is an attempt to classify the ways performers can use the computer, starting from the acoustic model and ranging to newer formulations which blur or even eradicate the lines between composer, performer, and audience; it focuses on the musical voice of each participant in this chain of communication.

The substantial use of computers in live performance has only been possible for a few years, owing to the development of inexpensive hardware fast enough to process sounds and gestures in real time. The musical possibilities - and the challenges - of the new medium are only beginning to be understood, and I would like to explore some of them today. In the traditional chain of musical communication typical of Western music, a composer wrote a work, which was recreated by a performer using an instrument the performer had spent years mastering. The object of this performance was a (relatively) passive listener. While a performer might occasionally compose or improvise, and while a listener might at times perform for himself, the stages between conception and reception remained quite distinct. Up to the listener, each link in the music-making chain of communication - composer, performer, instrument - had a characteristic "voice" that was preserved through the succeeding links.
The composer's voice, or style - an identifiable complex of musical preferences - remained clear even when colored by the performer's personal manner of playing. The performer's style was imposed on a familiar instrument. Each classical instrument itself has a characteristic, immediately recognizable voice, intimately tied to its construction.

The computer - or rather computer software - can drastically alter this chain of communication. Using varied methods of sound synthesis and algorithms which radically transform the performer's and/or composer's input, each musician can now have a completely different voice, and indeed can even change voices repeatedly within the same piece. This radically fluid intervention between composer and audience forces us to rethink the notion of performance. I will attempt a rough classification of various ways the computer can participate in live performance, from acting as an enhanced traditional instrument, to mediating a fluid restructuring of the performance environment,
creating exotic "virtual realities". I will also say a few words about the musical implications of these situations. I should state that I attach no value judgement to any of these modes. All depends on the richness, profundity, and artistic integrity of the result: fine music may result from either the simplest or the most complex use of the computer, since in a performance the latter is a tool for musical expression, and not an end in itself.

Probably the most naive way the computer can be used in performance is simply to follow the classical paradigm: a performer plays an existing piece, using a synthesizer to simulate an acoustic instrument as faithfully as possible. In this situation the differences from the traditional model are subtle: the way the instrument responds to the performer becomes moderately programmable, and the performer may profit from the fact that the computer can at least allow him to change voices between pieces, thereby including on the same program, say, a violin sonata and a cello suite. Within each piece, however, his voice is constant and familiar, and there is no blurring of roles for anyone in the chain. The only special demand on the performer may be a mild modification of his technique to adapt to a slightly different interface.

In a variation on this approach, the piece may be written for a sound not of acoustic origin, but one which preserves intact the notion of instrumental timbre as relatively constant, as well as the performer's classical technique. Here too, the composer's and the performer's roles remain identical to what they would be in a Beethoven sonata.

If we take a further step away from the traditional model, we come to a new possibility: instead of using one fairly constant timbre for a piece, the composer lets the performer control a large and diverse group of sounds.
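To make this "large and diverse group of sounds" concrete, here is a minimal sketch - every name, split point, and voice in it is invented purely for illustration - of a single keyboard split into zones, each routed to a different synthesis voice, with whole palettes swappable mid-piece:

```python
# Hypothetical sketch: one keyboard, several synthesis voices.
# Each palette splits the key range into zones; switching palettes
# puts a new set of voices under the same keys.

PALETTES = {
    "A": [(0, 47, "synthetic bass drone"),
          (48, 71, "bowed-string model"),
          (72, 127, "bell model")],
    "B": [(0, 59, "granular texture"),
          (60, 127, "vocal formant model")],
}

def route_note(palette, pitch):
    """Return the voice that should sound for this pitch (0-127)."""
    for low, high, voice in PALETTES[palette]:
        if low <= pitch <= high:
            return voice
    return None

# The same key yields a different voice after a palette change:
print(route_note("A", 60))  # bowed-string model
print(route_note("B", 60))  # vocal formant model
```

Even this trivial mapping already detaches the performer's voice from any single instrument: an identical gesture produces different timbres depending on a state the audience cannot see.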
As with the organ, the performer leads a sort of one-man orchestra, and the limits on what can emerge are set less by the instrument than by the performer's body. Of course, the range of sounds available is larger than with the organ, and there is also the possibility of using varied physical interfaces to control the sounds. Indeed, providing familiar sounds with new modes of production may ultimately modify the sounds themselves. A piano sound provided with vibrato through aftertouch creates a new instrument - and a new technique for the performer to master.

This issue of new approaches for the performer takes us to the center of the problem of using him at all in computer-mediated music. The performer's ability to control aspects of sound subtly with finely tuned physical gestures, and to hear delicate nuances in the resulting music, is cultivated through years of training. If this sensitivity and training are not to be wasted, the performer must be allowed to use his strengths in meaningful musical ways. The performer's gestures may be translated by the computer to affect novel aspects of the sound. If this is sensitively applied, the resulting performance can have a new vibrancy and liveliness. This probably requires a special training period for the performer, at least for his ear, if not for his technique.

In all the models examined so far, as long as the sounds are localized in space to correspond with the performer's physical position, and as long as there is visual correlation between his gestures and what is heard, his voice, while very rich, remains identifiable as coming from one human agent. Once this model is extended to a group of performers, things get more complex. Even with clear spatial and gestural definition of each participant, sounds can be shared. Performers can at times have the same voice, and at other
times contrast with each other. While this is familiar even from a string quartet, the range of sounds available, and the possibility of gradually evolving one sound into another, are new. Since the conversational aspect of chamber music is a large part of its attraction, what becomes of this sort of dialogue when the lines separating the participants begin to blur? To make this situation work, one would have to create a musical context where this blurring became a musical advantage rather than a source of confusion.

If we alter the previous situation by removing direct spatial correlation between performer and sound, things get more confused. Either the visual, gestural play becomes the focus of the listener's attention, making the event more spectacle than music, or the performer's perceived role begins to change, and with it the listener's relationship to him. This relationship, which is central to the concertgoer's expectations, and which indeed constitutes the main difference between live and recorded music, deserves serious consideration. When I go to a live concert I enter into communication with the performer, and through him, with the composer. If the performer becomes anonymous or seems disconnected from what I hear, I will find him irrelevant.

This sense of disconnection actually typifies the next mode of using the computer: the performer acts as a trigger, setting off some prerecorded sequence of events. In its simplest form, this common scenario seems to me artistically flawed: if the performer has nothing substantial to do, why put him in the visual spotlight? What really is his contribution? And if his role is so mechanical, and has so little real influence on the shape of the result that even a primitive machine could do the job, why elect a human for the task? Of course, this rather passive scenario can be dramatized by setting it off against the more traditional one, where the performer adds more meaningfully to the musical result.
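How mechanical this trigger role can be is easy to show. A minimal sketch (all names below are hypothetical) of a performer-as-trigger setup is little more than a cursor over a list of prerecorded sequences, the performer's entire influence reduced to choosing when each cue starts:

```python
class CueTrigger:
    """Prerecorded cues fired in fixed order; the trigger (a pedal, a
    key, a sensor) contributes nothing but the start time of each cue."""

    def __init__(self, sequences):
        self.sequences = sequences  # prerecorded event lists, played verbatim
        self.index = 0

    def trigger(self, time_now):
        """Fire the next cue: returns (start_time, events), or None when done."""
        if self.index >= len(self.sequences):
            return None
        cue = (time_now, self.sequences[self.index])
        self.index += 1
        return cue

player = CueTrigger([["C4", "E4", "G4"], ["D4", "F4", "A4"]])
print(player.trigger(0.0))   # (0.0, ['C4', 'E4', 'G4'])
print(player.trigger(12.5))  # (12.5, ['D4', 'F4', 'A4'])
print(player.trigger(20.0))  # None
```

Nothing in the output depends on who, or what, supplies the timestamps, which makes the objection above tangible: any primitive machine could stand in for the human here.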
This music-plus-one approach suggests a new twist to the old concerto idea: the performer opposes not a group, but a potent, invisible, mysterious sound source, which may at times overwhelm him, or at other times be reduced to nothing more than a faded mirror of himself.

None of the models discussed so far have challenged the composer's primary role in the chain of communication. Some current software development aims at doing just this. In both experimental and commercial software there is interest in widening the gap between input and output, by applying more imaginative processing to the data that comes in from the performer, or even from the composer and/or programmer. In the first such case, the performer plays previously composed music, but the computer transforms this data simultaneously, successively, or both. The performer's gestures are substantially magnified and/or distorted by the computer. The major problem here is formal: such substantial intervention must neither seem random nor become too predictable, or the overall interest of the work will suffer. The more the computer can be made into an intelligent improviser, the more interesting the results. If the algorithm used keeps the input recognizable, and if it allows the performer to predict and control the work's overall trajectory, this approach can lead to multiple realizations of the same piece, equally valid but varying each time in detail.

A performer of this type requires experience in leading and responding to improvisation, like a jazz musician. If the musical result is to seem well integrated, he may have to make major formal decisions on the fly, thereby going even beyond the jazz model. The training of such performers is a wholly
unexplored field; indeed, even the criteria for training them are largely undefined. We need to know more about what makes satisfactory auditory form, no small challenge in itself. As the computer does more of the composer's job, this problem becomes even more acute. There is a pressing need for rich, varied, supple, large-scale formal algorithms that can also integrate detail in meaningful ways. The daunting prospect of trying to rationalize and quantify criteria of musical quality on this scale is perhaps the greatest challenge of all those discussed so far, and is also probably the greatest reason for keeping humans directly involved in performance at all - other than the fact that making music is fun!

The remaining ways of using the computer in performance call into question not only the roles of performer and composer, but even that of the listener. Indeed, such terms begin to lose their meaning when, instead of attending a concert, the listener enters an environment filled with surprising potential - musical or otherwise - that is also responsive to the person entering. The listener here becomes a performer himself, influencing the outcome in an active way. The enormous difference is that he is untrained, and therefore unlikely to produce effects of great refinement on his own, unless the composer-programmer has allowed only for effects of great interest, beauty, and subtlety - easier said than done! And what of the "form" of the resulting "work"?

These sorts of interactive environments, affecting other senses as well as hearing and thus engendering entire virtual realities, are the most far-flung possibilities currently visible. Their full exploration involves areas outside music: psychology of perception, environmental design, simulation of all sensory inputs, and so on.
While it is exciting to consider these radical alternatives, which will doubtless become quite common in the future, it seems to me that significant artistic achievement is still far away, since too much groundwork remains to be done.

The rethinking of the traditional concert environment is not exclusive to computer music. Reevaluating the performance situation has been proposed in other contexts - John Cage comes to mind - and is usually tied to a philosophy of what music should or should not encompass. Perhaps this is the real challenge of such redefinition: moving away from the entrenched traditional model is only a first step; the real work is in finding ways to enhance, enrich, and connect with people through music, rather than to assault, diminish, or bore them. To the extent that music is life-enhancing, and to the extent that the musician respects its strength, its richness, and its complexity, the resulting works will move, speak, and transform our experience in exciting ways.