Composition of Data and Process Models: a Paralogical Approach to Human/Computer Interaction
Michael Hamman
University of Illinois at Urbana-Champaign
m-hamman@uiuc.edu

Abstract

In this paper, I propose to integrate a dialectical view of interaction into acoustic research and composition carried out with computers. In doing so, I first present some basic issues related to the notion of representation and to human/computer interaction. I then present aspects of an ongoing experiment in designing a software system for real-time sound and music composition, in which an attempt is made at transcending some of these dualistic notions of design and interaction.

1. Introduction

A representation constitutes a deliberate attempt to orient a human in the appropriation of a particular point of view. It possesses, as such, epistemological significance. An interface is an assembly of correlated representations which together manifest a "task environment." Composition theory views compositional procedure as a "life cycle," the crux of which is the epistemological stage during which materials are generated, assembled, and interpreted according to as-yet emergent compositional design criteria (Laske 1989). Within computer music systems, however, the epistemological stage is typically covered over by vague notions reflecting strong historical and cultural imperatives. As a result, the "interface" becomes a set of conventions and metaphors which one system exports for consumption by another. We name the consuming system "the User." This dualistic view of interaction masks the community of representations encapsulated in the objects and principles by which it is constituted, and the effects those representations manifest within a User's behavior. In the end, the User has little or no direct access to the epistemologies embodied in those representations: her/his interactions become a game of Blind Man's Bluff.

2. Representations

A representation is a performance which orients a cognitive system with respect to a signal. The performance of a representation requires the establishment of at least one analogism: the convergence of a desired aspect of an input signal upon the bearing of the mechanism by which an output signal is generated. The construction of representations constitutes an important activity in computer music: as an activity, it elicits the generation of linguistic structures that delineate cultural unities of observers through the appropriation of a methodology. The cultural unities so delineated orient a domain of interaction which a human might have with respect to an imagined (and eventually realized) object.

The extrapolation of observable features from a signal requires the presence of a particular ensemble of representations. For instance, we can say that a signal has this or that amplitude and frequency behavior only when it is observed within the context of a particular instrumentation. When a particular instrumentation becomes the prevailing means for observing a signal, cultural practice often conflates the distinguishing principle with the input signal from which distinguished features are extrapolated. By this means an "artifact" (a sound, a musical work, an analysis, etc.) becomes identified with the representations encapsulated within the tools used in its generation.

3. Human/Computer Interaction

With the computer, a composer can explicitly define the representations which constitute a relevant task environment and can, as such, participate in structuring the dynamics of a potential interaction. By such an
activity, a composer can begin to "engineer" the epistemological framework with respect to which her/his actions are circumscribed. The objective here is to constitute the potential for complexity required for the presentation of a particular thought.

[Figure 1: Task Environment]

According to more common notions of human/computer interaction, however, the goal of the human-computer interface is to hide the complexity of interaction through a dramatization of similarities. Through such a dramatization, the performances of a human are interpreted according to an expected performance. The expectation of a performance manifests a particular history which, while belonging to the culture at large, may not be a history which the composer wants. And yet, in the very automaticity of her/his performance as "User," the invisible appropriation of such history becomes unnoticeable. It is when such an appropriation is unnoticeable that its influence is most profoundly manifested.

Jean-François Lyotard (Lyotard 1984) uses the term paralogical to describe a discourse which brackets the epistemological framework with respect to which a particular language "game" operates. Such a language game is one which circumscribes human activity in the sciences, the arts, etc. As an epistemological framework, it emphasizes the aspect of hermeneutic play which manifests itself within a language. Such a discourse, when directed at the composition of human/computer interaction, engenders a "political disturbance of the Subject," orienting it toward "an engagement with a materially different Other." Composers have been among the leading advocates for such a "paralogical" approach to human/computer interaction.
From Hiller's MUSICOMP, Xenakis' ST program, Koenig's PR1/2 and SSP, Brün's SAWDUST, and Berg's PILE to, more recently, systems such as Modalys (Morrison and Adrien 1993), Manifold Controller (Choi, Bargar, and Goudeseune 1995), and resNET (Hamman 1994) - to name only a few - composers have sought ways in which the computer can be used to problematize the task environment and, as such, bring about an as-yet unexpected performance.

4. Modeling a Task Environment

Orpheus is a software environment, currently under development, for sound computation and music composition. As an environment, Orpheus supports the design of generative and interactive structures by which sound and music might be composed and modeled. Such structures enable both real-time and non-real-time modes of interaction. Real-time modes of interaction are manifested through direct manipulation (i.e., graphical representations through which objects can be changed in real time) of inside-time acoustic and musical models. Non-real-time modes of interaction are manifested either through direct manipulation of objects by which inside-time sound and music processes and data are defined, or through the specification of algorithms by means of which both inside-time and outside-time sound and musical processes and data are defined. This scenario is captured in Figure 1.

5. The Data Model

Orpheus defines a task environment which combines composition of sound with composition of musical form. As such, it becomes possible to take, simultaneously, a "top-down," "bottom-up," and "middle-out" approach to music composition (Figure 2). The data model specifies three levels of design: low level, median level, and high level. These are arbitrary divisions which are, to a great extent, neutralized within the implementation of the data model.

[Figure 2: Process Model]

At the lowest level--the level of sound composition--a composer defines SoundObjects. SoundObjects are structures which encapsulate parameter data with respect to a sound computation model. SoundObjects function as templates, or prototypes, which define generative characteristics on the basis of which actual sonic morphologies might be realized. Each actualized morphology represents a "variant" of the SoundObject. SoundObjects can be programmed to spawn other SoundObjects. When a SoundObject is spawned from another, a SpawnFilter is invoked. A SpawnFilter defines a set of constraints according to which a child SoundObject inherits properties from a parent. Such constraints are defined both with respect to the parent SoundObject and to higher-level constraints; they determine things like the relation in time to the parent (both onset time and duration) as well as timbral features.

[Figure: SoundObject -> SpawnFilter -> SoundObject]

A ProcessObject defines a set of interrelations among SoundObjects. It is to the ProcessObject that a SpawnFilter points and from which it obtains data related to global features to be propagated over some field of time. The interrelations defined by a ProcessObject relate to time, density, and general timbral properties to be reflected over the course of an unfolding of an aggregate of events. A ProcessObject is initialized with minimal data. Its characteristic structure becomes more and more particularized as the actual acoustic events which it generates are unfolded. So, just as a ProcessObject determines the unfolding of a particular SoundObject (and its various spawnings), so too do the acoustical events actualized under that SoundObject determine the gradual assumption of structure on the part of the ProcessObject. In the following example, two ProcessObjects are shown. One contains two SoundObjects, while the other contains only one. As shown in the example, overlapping aggregates of events (indicated by the dotted circles) are generated. The arrows connecting events indicate the spawning, from one event, of another.

At the highest level exist GlobalSyntax objects.
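The SoundObject/SpawnFilter relationship described above can be sketched as follows. This is a hypothetical illustration only: the class names follow the paper, but all attributes, parameter ranges, and the particular constraint scheme are assumptions, not details taken from Orpheus itself.

```python
import random

class SoundObject:
    """A prototype encapsulating parameter data for a sound-computation model."""
    def __init__(self, onset=0.0, duration=1.0, brightness=0.5):
        self.onset = onset            # seconds, relative to the enclosing process
        self.duration = duration      # seconds
        self.brightness = brightness  # stand-in for a timbral feature, in 0..1

class SpawnFilter:
    """Constrains how a child SoundObject inherits properties from a parent."""
    def __init__(self, max_onset_gap=0.5, duration_scale=(0.5, 1.5)):
        self.max_onset_gap = max_onset_gap    # constraint on temporal relation to parent
        self.duration_scale = duration_scale  # child duration as a fraction of parent's

    def spawn(self, parent):
        lo, hi = self.duration_scale
        return SoundObject(
            onset=parent.onset + random.uniform(0.0, self.max_onset_gap),
            duration=parent.duration * random.uniform(lo, hi),
            # the child's timbral feature stays near the parent's, clamped to 0..1
            brightness=min(1.0, max(0.0, parent.brightness + random.uniform(-0.1, 0.1))),
        )

parent = SoundObject(onset=0.0, duration=2.0, brightness=0.6)
child = SpawnFilter().spawn(parent)
assert parent.onset <= child.onset <= parent.onset + 0.5
```

Each call to `spawn` yields a constrained "variant," so repeated spawning from one prototype produces the family of related events the text describes.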
A GlobalSyntax object contains a kernel syntax according to which a larger-scale morphology evolves. As is the case with ProcessObjects, a GlobalSyntax is at first barely sketched out (based on initial data specified by the composer, or on data inherited from a parent GlobalSyntax object). As the larger morphology which the GlobalSyntax defines evolves, the GlobalSyntax is itself "filled in" according to the specificity of actuated occurrences within ProcessObjects and SoundObjects. The overall infrastructure can be depicted as follows:

[Figure: GlobalSyntax -> ProcessObject -> SpawnFilter -> SoundObject]

This infrastructure represents one large organization which evolves by virtue of initial data, a syntax structure (both defined by a composer), and data defined through real-time interaction, which I will now briefly discuss.
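The layered infrastructure just depicted might be outlined as below. This is a hypothetical reconstruction of the "filling in" behavior: the class names follow the paper, but the attributes and the notification mechanism by which actualized events feed back into higher levels are assumptions.

```python
class ProcessObject:
    """Interrelates SoundObjects; gains structure as events are actualized."""
    def __init__(self):
        self.events = []  # actualized acoustic events; minimal at first

    def actualize(self, event):
        # each generated event further particularizes the process's structure
        self.events.append(event)
        return event

class GlobalSyntax:
    """Kernel syntax; becomes more specific as lower levels unfold."""
    def __init__(self, kernel=None):
        self.kernel = kernel or {}  # initial data specified by the composer
        self.processes = []
        self.history = []           # gradually "filled in" from actualized events

    def add_process(self, proc):
        self.processes.append(proc)

    def notify(self, event):
        # higher-level structure assumes form from lower-level occurrences
        self.history.append(event)

syntax = GlobalSyntax(kernel={"density": "sparse"})
proc = ProcessObject()
syntax.add_process(proc)
syntax.notify(proc.actualize({"onset": 0.0, "duration": 1.0}))
assert len(syntax.history) == 1
```

The point of the sketch is the direction of flow: the kernel constrains generation downward, while each actualized event adds specificity upward.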
6. The Process Model

The process model circumscribes a set of interactive tools which a composer uses in the formulation and investigation of acoustical morphologies. There are two general approaches: (1) composition by instruction and (2) composition by "direct manipulation." Composition by instruction allows the specification of outside-time structures. By this means a composer formulates the initial structures by which GlobalSyntaxes, ProcessObjects, SpawnFilters, and SoundObjects are defined. Composition by direct manipulation allows the composer to construct and experiment with inside-time structures. These are particularly related to the unfolding of instantiated morphologies.

At the level of sound composition, for instance, a SoundObject is composed, first, through creation of a SndConfiguration. A SndConfiguration defines a control interface to a sound synthesis or other musical/acoustical rendering agent. Such a control interface is defined in terms of the parameters by which that agent is controlled in real time. The principal model of interaction is a type of high-dimensional controller. By placing the mouse pointer over one of the nodes and moving that node around, the nodes attached to the moving node are "dragged" in relation to it, each according to the weight with respect to which it is attached to the moving node. Each node, each arc, and the weight according to which an arc attaches two nodes can be defined by the composer. Moreover, particular movements can be "recorded" and saved for future reference. If the configuration under control references a sound synthesis rendering engine, then the movement recorded defines a SoundObject prototype, which forms the basis of a SoundObject. If, by contrast, the configuration under control references a pool of SndObjects, then the movement recorded will define a ProcessObject.

7. Conclusion

Orpheus joins other computer music and software synthesis systems in which the task environment itself constitutes a composable medium. Here the composer is enabled to compose data structures and interactive agents in which common dualistic notions of musical form and of human/computer interaction are, at least to some extent, transcended. Toward this end, it tries to be a question rather than an answer.

References

Choi, I., Bargar, R., and Goudeseune, C., 1995. "A manifold interface for a high dimensional control space." Proceedings of the 1995 International Computer Music Conference. San Francisco: Computer Music Association, pp. 385-392.

Docherty, T., 1993. "Postmodernism: Introduction." In Postmodernism: A Reader, ed. T. Docherty. New York: Columbia University Press.

Hamman, M., 1994. "Dynamically Configurable Feedback/Delay Networks: A Virtual Instrument Composition Model." Proceedings of the 1994 International Computer Music Conference. San Francisco: Computer Music Association, pp. 394-397.

Laske, O. E., 1989. "Composition Theory: An Enrichment of Music Theory." Interface 18, pp. 45-49.

Lyotard, J.-F., 1984. The Postmodern Condition: A Report on Knowledge, transl. G. Bennington and B. Massumi. Minneapolis: University of Minnesota Press.

Morrison, J. D., and Adrien, J.-M., 1993. "MOSAIC: A Framework for Modal Synthesis." Computer Music Journal 17(1), pp. 45-56.