Animating the Language Machine:
Computers and Performance

Marshall Soules
2002

Abstract

This paper explores a range of inter-disciplinary discourses which consider the online writing space as a unique performance medium with characteristic protocols. Drawing on contemporary performance theory, literary criticism, and communication theory, the author proposes that technologists, academics, and artists are developing idiomatic rhetorics--a lingua franca--to explore the technical and expressive properties of the new "language machines" and their hypertextual environments.

Writing Like a Cyborg Speaks

...I find myself anticipating a new kind of storyteller, one who is half hacker, half bard. The spirit of the hacker is one of the great creative wellsprings of our time, causing the inanimate circuits to sing with ever more individualized and quirky voices; the spirit of the bard is eternal and irreplaceable, telling us what we are doing here and what we mean to one another. (Murray 9)
* * *

The central argument of this article was originally presented orally to a gathering of people interested in humanities computing. To illustrate the central conceit of my talk on the language machine as a performance medium, I introduced myself to the audience with an artificial intelligence program--a verbot called Sylvie (from www.vperson.com)--installed on my laptop. (A verbot is "a verbally enhanced software robot using a natural language processing engine.") Sylvie's conversational algorithms are related to those of the "chatterbot" Julia, who competed in the 1994 Turing Test, in which computer scientist judges tried to determine whether they were conversing with a machine or a real human. In 1997, the creator of Julia, Dr. Michael "Fuzzy" Mauldin, collaborated with clinical psychologist Peter Plantec and started Virtual Personalities to create a virtual human interface incorporating real-time animation with speech and natural language processing. The result was "a stand-alone virtual person" called Sylvie (www.vperson.com).

Sylvie seems more expansive and versatile than Julia, but her real talents lie less in conversation than in acting as an agent of communication and a catalyst for learning. The audience for my presentation seemed captivated by the fact that a talking head was addressing them from the screen of my laptop: a machine with deep roots in the use of text to communicate, and one mainly used at conferences for PowerPoint presentations. Here the computer was talking to them. More important for me, however, was the knowledge that preparing a speech for Sylvie had taught me something significant about computers, communication, and performance. While I was scripting what I wanted Sylvie to say, I was learning how to write the way a cyborg speaks. I wanted her to speak well to the audience, so I chose her words carefully. I also wanted her to speak the words of my text as naturally as possible, and for that I had to adjust my style, especially sentence length, punctuation, and variant spellings. (For example, Sylvie reads "cyborg" as "si-borg" with a short i, so I had to adjust the spelling to "sigh-borg.") I also discovered that Sylvie elicits compelling attention from her audience when called upon to speak personally and with expressions of emotion. It is strangely disturbing and humorous to hear an artificial intelligence, with tell-tale cyborgian rhythms and inflections, speak with (feigned) emotion. I was writing differently for a new performance medium, but there were more surprises in store for my audience: Sylvie is programmed to make statements when left idle, and she interrupted my presentation at intervals to say, in effect, that she was tired of waiting.
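The respelling adjustment described above can be imagined as a small preprocessing step applied to a script before it reaches a speech engine. The following Python fragment is a minimal sketch under invented assumptions: only the "sigh-borg" respelling comes from my own practice, and the helper names and table are illustrative, not Sylvie's actual software.

```python
# A minimal sketch of the respelling step described above; the table and
# function names are invented, and only "cyborg" -> "sigh-borg" comes from
# the article. Sylvie's actual engine is not being modeled here.
import re

RESPELLINGS = {
    "cyborg": "sigh-borg",        # avoids the short-i "si-borg" reading
    "cyborgian": "sigh-borgian",  # illustrative guess, same principle
}

def prepare_for_speech(script):
    """Swap words the voice mispronounces for spellings that sound right."""
    def swap(match):
        word = match.group(0)
        return RESPELLINGS.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, script)

print(prepare_for_speech("Writing the way a cyborg speaks."))
# -> "Writing the way a sigh-borg speaks."
```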

Sylvie can be understood as an emblem for a complex matrix of intersecting discourses on computers as a performance medium. When artists, humanities scholars, and technologists begin exploring common ground, or begin to rely on one another for insights about an emerging medium, there is an accompanying search for new or resurrected terminology, transferable paradigms of knowledge, functional analogies, and communication strategies which reach across disciplinary divides. Each discipline contributes its most useful rhetorical strategies in the hope of finding a tolerable fit with the idiom of machine language: algorithm, code, script, program, link, and node. Each discipline reassesses its practices and terminology in light of new alliances. The present article--expanded from the original oral presentation--attempts to contribute to this intertextual debate by discussing the work of theorists and practitioners who seem able to navigate the rhizomatic connections between information technologies and critical theory. These are the new storytellers--half hacker, half bard--who are helping us communicate through our digital machines.

Protocols of Improvisation

An associative thread runs through this discussion, which approaches the electronic environment of hypermedia as a kind of performance space, and I call on my familiarity with dramatic conventions and the anthropology of performance--from ritual to the jazz ensemble--to inform the discussion. In particular, I will consider the place of improvisation in the ways we ask computers to perform, and in the ways we are asked to perform with computers. My argument starts, then, from the assumption that our interaction with computing technology involves a reciprocity of influence, and requires a degree of reflexivity and self-consciousness. As a style of performance, improvisation requires protocols, or guidelines and conventions of behavior, yet accommodates degrees of individual expression within this matrix of constraints (Soules, "Protocols…"). While it is beyond the scope of this article to discuss the protocols of improvisation in any great detail, the following discussion trades on analogies between improvisatory practice and performance in networked electronic environments.

As a mode of performance, improvisation seems to describe aptly how many of us have learned to use computers, and how we have accommodated our tasks and creativity to the protocols of machine language. The computer gamer must learn the rules of engagement before beginning to improvise solutions to the various challenges presented in the game, simulation, MUD, or MOO. The digital artist learns a new software application by reading a manual, using the help files, or just trying things out; once some familiarity with the protocols of the program has been gained, the artist is ready to begin improvising with the new application. Cognitively, improvisation involves complex analytical and associative interactions: what are the protocols, and how can I make use of them? In "As We May Think" (1945), Vannevar Bush emphasized the importance of associative linking in the construction of knowledge; improvisation accommodates the discoveries of associative linking within an analytical matrix.

There have been numerous and divergent attempts to define improvisation as a performance style, and I'll cite only two examples here to establish some general parameters for our approach to performance and computing. As suggested above, an inter-disciplinary study of improvisation reveals that any "free play," spontaneity, or personally expressive contribution is generally framed by guidelines of some sort. Protocols--"long-established codes" determining "precedence and precisely correct procedure"--may at first seem antithetical to popular notions of improvised creativity. Protocols are strategies or agreements which "glue" events together (after the Greek protókollon, a first leaf glued to the front of a manuscript and containing notes as to its contents). These guidelines, whether explicitly stated or implicitly embodied in the mode of expression, ground the play of improvisation in performance situations.

Improvisation often involves repetition. Think of a tennis player learning to perfect her backhand, or a Quake player learning how to negotiate the virtual game space. Both learn the rules of engagement and the boundaries of the game, then develop technique--including personal tricks--to facilitate responsiveness, flexibility, and versatility when confronted with new conditions. Knowing the guidelines of engagement and developing technique are two important preconditions for improvisation, and both involve repetition. The same is true of a jazz musician, a spontaneous writer (like Jack Kerouac), a computer game player, a hacker, or an artist using Photoshop or Flash. Improvisation introduces within these guidelines the possibility of personal style and expression, invites the creation of something new, and generally involves some form of revision.

Embedded in the revisions of the athlete, painter, musician, actor, writer, gamer are allusions to previous practice. In literary studies, this use of allusions can take the form of troping--a kind of linguistic play--or intertextuality, when one text participates in a "dialogue" with a previous text. In jazz, the use of allusions, echoes, or references is often called "riffing." Albert Murray elaborates in Stompin' the Blues:

When they are effective, riffs always seem as spontaneous as if they were improvised in the heat of the performance. So much so that riffing is sometimes regarded as being synonymous with improvisation. But such is not always the case by any means. Not only are riffs as much a part of some arrangements and orchestrations as the lead melody, but many consist of nothing more than stock phrases, quotations from some familiar melody, or even clichés that just happen to be popular at the moment....[I]mprovisation includes spontaneous appropriation (or inspired allusion, which sometimes is a form of signifying) no less than on-the-spot invention. (96)
Murray also notes that the efficacy of the creative process "lies not in the originality of the phrase...but in the way it is used in a frame of reference" (96). The notion that improvisation is "spontaneous appropriation" or "inspired allusion" is remarkably robust when carried across disciplines.

In The Signifying Monkey, Henry Louis Gates Jr. explores a related notion of improvisation in which the performer "repeats and revises" musical figures, styles, and instrumental voices. Gates associates this activity with the African American practice of "signifyin(g)--playing with linguistic figures to parody or pastiche a rival" (46). Gates notes how this process of signifyin(g), of repetition and revision, has become a staple of jazz improvisation:

Improvisation, of course, so fundamental to the very idea of jazz, is "nothing more" than repetition and revision. In this sort of revision, again where meaning is fixed, it is the realignment of the signifier that is the signal trait of expressive genius. The more mundane the fixed text ("April in Paris" by Charlie Parker, "My Favorite Things" by John Coltrane), the more dramatic is the Signifyin(g) revision. It is this principle of repetition and difference, this practice of intertextuality, which has been so crucial to the black vernacular forms of Signifyin(g), jazz--and even its antecedents, the blues, the spirituals, and ragtime.... (63-4)
For Gates, the repetition and revision of the improvising jazz musician has its counterpart in the intertextual networking of the cultural critic: both trade on indeterminacies resurrected from the tradition, and both operate to realign the signifier. The more radical the revision, the greater the expressive genius. As we will see from the examples below, the concepts of "inspired allusion" and intertextual repetition and revision--signifyin(g)--seem aptly to characterize how people are learning to use networked digital media, and how they are creating for those media.

Peter Lunenfeld's Snap to Grid (2000) is a thought-provoking and accessible "user's guide to digital arts, media, and cultures." The conceit which informs his title nicely demonstrates both his method and the importance of thinking through the use of protocols in any performance situation:

Consider the command "snap to grid." It instructs the computer to take hand-drawn lines and plot them precisely in Cartesian space....Artists regularly disable the snap to grid function the moment they open an application because the gains in predictability and accuracy are balanced against the losses of ambiguity and expressiveness....I have come to think of the command "snap to grid," however, as a metaphor for how we manipulate and think through the electronic culture that enfolds us. This book is the result of snapping my seduction by the machine to the grid of critical theory. (xvii)
Lunenfeld strikes a productive balance in his coverage of the theory and practice of the digital arts. His method--a digital dialectic--seeks to ground "the insights of theory in the constraints of production": "The trick is to oscillate between temporalities, never privileging the past, present, or future in theorizing the media" (xxii-xxiii).
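Read literally rather than metaphorically, the "snap to grid" command Lunenfeld begins from performs a simple rounding operation on freehand coordinates. The following Python sketch illustrates that literal operation under assumed conventions; the grid spacing and point format are invented for the example and do not describe any particular drawing application.

```python
# A minimal sketch of the literal "snap to grid" operation: each freehand
# point is moved to the nearest grid intersection. Spacing and point format
# are assumptions for illustration only.

def snap_to_grid(points, spacing=10):
    """Round each (x, y) point to the nearest multiple of `spacing`."""
    return [
        (round(x / spacing) * spacing, round(y / spacing) * spacing)
        for x, y in points
    ]

# A wobbly hand-drawn stroke becomes a tidy Cartesian one:
print(snap_to_grid([(3, 4), (12, 18), (27, 31)]))
# -> [(0, 0), (10, 20), (30, 30)]
```

The gain in predictability, and the corresponding loss of ambiguity and expressiveness, is exactly the trade-off Lunenfeld describes artists making when they disable the function.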

Computers as Performance Space

When we are creating and communicating with networked computers, we are performing. And, as Marvin Carlson defines it, performance involves "a consciousness of doubleness, through which the actual execution of an action is placed in mental comparison with a potential, an ideal, or a remembered original model of that action....Performance is always performance for someone, some audience that recognizes and validates it as performance even when...that audience is the self" (5-6). When we acknowledge the performative aspects of our work with language machines--and examine the technology of the stage carefully--we become aware of the doubleness of the reflection we find there. We must simultaneously be aware of the message we are creating (and its audience), and the characteristics of the medium we are creating on. The rules defining our art and science must accommodate the protocols governing the machines we are using to produce it. We improvise to find new solutions, but we must also remember the limitations or opportunities provided by our medium. With certain qualities of interaction, there is a feeling of symbiosis between human and technology: in the traditional theatre, this symbiosis is fostered by empathy; in immersive computing applications, we might speak of agency. "Agency is the satisfying power to take meaningful action and see the results of our decisions and choices," writes Janet Murray (126). At deeply personal levels, we might even test the boundaries of identity and consciousness in the process of meaning-making. This "consciousness of doubleness" characterizes both performance and the human/computer interface.

In Life on the Screen: Identity in the Age of the Internet, Sherry Turkle explores the nature of the performer in cyberspace, of the performance at the interface of human and machine. Reflecting an abiding postmodern sensibility steeped in Lacan, Derrida, Foucault, and Deleuze, Turkle postulates a model of self that is multiple, distributed, flexible, reflective, and nomadic. She suggests that our interactions with networked computers encourage the formulation of a multiplicity of selves, and that our performance in the medium is enhanced when we recognize the possibility of multiplicity. To cite just one of her many illustrations on this theme:

MUD players are MUD authors, the creators as well as consumers of media content. In this, participating in a MUD has much in common with script writing, performance art, street theatre, improvisational theatre--or even commedia dell'arte. But MUDs are something else as well….As players participate, they become not only authors of the text but of themselves, constructing new selves through social interaction. One player says, "You are the character and you are not the character, both at the same time." Another says, "You are who you pretend to be." (12)
She concludes that MUDs "make possible the creation of an identity so fluid and multiple that it strains the limits of the notion....[Y]our identity on the computer is the sum of your distributed presence" (12-13). Turkle's influential work updates the construction of the performing self in distributed networks, acknowledges the role of improvisation in the construction of those selves, and suggests the importance of the requisite consciousness of doubleness--at least of doubleness. In our evolution from a culture of calculation to a culture of simulation (20), "it is the computer screens where we project ourselves into our own dramas, dramas in which we are producer, director and star." As we will see below, the subtext of this drama, as N. Katherine Hayles describes it, is the story of how we became posthuman.

The notion of the computer as a unique performance space is convincingly argued in Brenda Laurel's Computers as Theatre (1992), where she draws on her knowledge of dramatic conventions to suggest that software and computer interfaces should be designed to involve users and their bodies in the theatre of the electronic space. She establishes a number of principles--or protocols--for human-computer activity. For example, she asks us to "think of the computer not as a tool but as a medium," to "focus on designing the action," and to "think of agents as characters, not people" (125-165). Such directives reflect Laurel's conviction that the design principle of direct manipulation of represented objects is not as involving as direct engagement in an activity of choice. As any jazz musician knows, it is more engaging to be a participant in the action than an observer of effects. For Laurel, the recognition of the importance of embodied interaction is an "endangered sensibility," one the arts and humanities should fight to restore (Hayles, "Condition" 204).

In her discussion of interface design, Laurel returns us to first principles. "The search for a definition of interactivity diverts our attention from the real issue: How can people participate as agents within representational contexts? Actors know a lot about that, and so do children playing make-believe. Buried within us in our deepest playful instincts, and surrounding us in the cultural conventions of theatre, film, and narrative, are the most profound and intimate sources of knowledge about interactive representations" (21). Again, the spirit of play is invoked to convey the serendipity, the bricolage involved when "people participate as agents in representational contexts."

Here, Laurel is on the same associative trail as theatre anthropologist Victor Turner, who explored the intersection of play, theatre, and ritual in various cultures. In The Anthropology of Performance, Turner traces the improvisations of play into neurophysiological realms such as the limbic system, distinguishes between ergotropic (energy-expending) and trophotropic (energy-conserving) neural activities, and associates the whole playful process with the pan-cultural archetype of the trickster. For Turner, play is performance which combines indicative (what is) with subjunctive (what if) moods and thus engages the nervous system in widely-distributed ways which defy localization:

…[S]ince play deals with the whole gamut of experience both contemporary and stored in culture, it can be said perhaps to play a similar role in the social construction of reality as mutation and variation in organic evolution. Its flickering knowledge of all experience possible to the nervous system and its detachment from that system's localizations enables it to perform the liminal function of ludic recombination of familiar elements in unfamiliar and often quite arbitrary patterns. (170)
Liminal, for Turner, signifies an activity which crosses thresholds or boundaries between familiar cultural activities; thus play, which combines analytical and associative cognition to improvise within a set of constraints, is an agent of social change and evolution. In this sense, there is a great deal to be learned about the computer as a performance medium by observing those who play games with it.

In Computers as Theatre, Laurel takes a more traditional approach and considers human-computer activity as essentially dramatic, mainly from the classical perspective of Aristotle's Poetics. However, she also reflects on the role of improvisation within this model, and cites acting teacher Michael Chekhov on this theme: "Every role offers an actor the opportunity to improvise, to collaborate and truly co-create with the author and director....The given lines and the business are the firm bases upon which the actor must and can develop his improvisations" (106). Laurel concludes that the "value of limitations in focusing creativity is recognized in the theory and practice of theatrical improvisation."

In the mid-1990s, Laurel collaborated with Rachel Strickland on the Placeholder Project--an exploration of narrative action in virtual environments. Produced by Interval Research and the Banff Centre for the Arts, the virtual geography of Placeholder was modeled on three actual locations in Banff National Park: a cave, a waterfall, and a formation of eroded earthen spires (called hoodoos).

Three-dimensional videographic scene elements, spatialized sounds and voices, and embodiment as petroglyphic spirit animals were employed to construct a composite landscape that could be visited concurrently by two physically remote participants wearing head-mounted displays, who were guided by a disembodied 'Voice of the Goddess' as they walked about, conversed, used both hands to touch and move virtual objects, and recorded fragments of their own narratives in the three worlds. (Laurel and Strickland, Placeholder)
The activities of participants in Placeholder were not scripted and thus much of the interaction was improvised by Laurel and others.

It is notable in the description of the project above that sound played an important part in the immersive qualities of the interactive experience: "All of these sounds were spatialized to appear to emanate from realistic locations in space, relative to the participant's own body, using Crystal River Engineering Convolvotrons." Participants oriented themselves with the auditory equivalent of trail signs and graffiti called "voicemarks," and these "bits of spoken narrative" were stored in arrangeable containers called "voiceholders." Placeholder demonstrates, among other things, one of the ways narrative may be enhanced when it moves onto the hypermedia stage: the storyteller can make a considerable impact on the bodies of listeners with auditory signals. Laurel and Strickland may have been influenced by Artaud's call for a theatre of spectacle which subverts the conscious mind by impinging directly on the flesh. It is highly intriguing that they attempt this dramatic effect in a "virtual" environment. Again, we see the demonstration of protocols used to define the interactive experience, and within the parameters defined by these protocols, a space for improvisation.

From Placeholder, Laurel went on to found Purple Moon (1996-1999), a transmedia company devoted to creating interactive media for girls ages 8 to 12. There she co-created its narrative worlds, characters, and interactive designs. The company published several award-winning products for girls, including 8 interactive CD-ROMs, a popular website (www.purple-moon.com), and a variety of related merchandise. Purple Moon was formed to turn the findings of the Placeholder Project and other research into marketable products, and was acquired by Mattel in 1999. The research conducted for Purple Moon filled a gap, according to Laurel, in our knowledge of what girls want from computers:

We had strong quantitative findings--for example, the leading reason girls gave for disliking traditional video games was not that they are violent or competitive, but that they are boring. Girls tend to find the characters entirely unsatisfying--so weak that you can't even make up good stories about them. Girls are typically unmotivated by mastery for its own sake, but demand engaging and relevant experiences from computer games. Both boys and girls see video game machines as for boys and computers as gender-neutral. ("Technological Humanism...")
Laurel's research suggests that gender is an important determinant in the design of computer interactivity, at least for products marketed to preteen girls and boys. At the same time, her list of distinguishing characteristics provides useful guidelines for designing the human-computer interface as a site of performance.

The Language Machine

For students of the media, the study of human civilization reveals an evolution of communication devices and strategies from carving in stone to manipulating digital bits. The introduction of each new communications medium brings with it a concomitant refiguring of both the terms of knowledge gleaned from the history of culture, and a reconsideration of the human communicator. The successive repetitions and revisions of communications history--significantly theorized by Harold Adams Innis and Marshall McLuhan--chart the education of human understanding through successive levels of abstraction about both the medium and the message. We enter the age of simulation and virtuality--if indeed it is something new at all--with a long history of learning how to communicate with abstractions and symbols within the limitations of a given medium. Would we be prepared to write hypertext without a rich history of exploring ideas in print? Would we understand the notion of cyberspace as a consensual hallucination without prior training on the telephone--and before that with speech--or with arcade computer games? In The Spirit of the Web, Wade Rowland describes distributed networks by tracing the accretion of communications technologies, and attitudes about them, which prepared us for the innovation of the internet.

Much of the speculation on new digital art forms falls prey to McLuhan's "rearview mirror syndrome": we tend to define any new medium in terms of its predecessors, and the new medium "retrieves" the content of an earlier one. Virtual museums and art galleries come to mind as an illustration: what is gained in accessibility is compromised by the translation of the artifact into the relatively low resolution and altered scale of the screen image. The repurposing of content for a new medium reflects Gates' idea of "repetition and revision." During our adoption of new communication technologies, it seems particularly prudent to keep a double consciousness of both content and medium, and to reflect on how we arrived at such a degree of abstraction and internalization of the means of production.

Landow's 1992 Hypertext: The Convergence of Contemporary Critical Theory and Technology continues to make an important contribution to the cross-fertilization of ideas required of humanities scholars encountering the new conditions of the technologized word. The evolution of terminology charted by Landow signifies that the new medium is also a new message. In what has now become almost a commonplace of the discourse on hypertext, Landow summarizes, "...[W]e must abandon conceptual systems founded upon ideas of center, margin, hierarchy, and linearity and replace them with ones of multilinearity, nodes, links, and networks" (2). Landow is concerned here with redefining the performance space, the geography of hypertext, and these terms can be considered protocols. He is equally attentive to the role of specific media in shaping convergence, and how we articulate that convergence:

...[M]any of our most cherished, most commonplace ideas and attitudes toward literature and literary production turn out to be the result of that particular form of information technology and technology of cultural memory that has provided the setting for them. This technology--that of the printed book and its close relations, which include the typed or printed page--engenders certain notions of authorial property, authorial uniqueness, and a physically isolated text that hypertext makes untenable. The evidence of hypertext, in other words, historicizes many of our most commonplace assumptions, thereby forcing them to descend from the ethereality of abstraction and appear as corollaries to a particular technology rooted in specific times and places. (33)
Landow is describing a (sub)set of interactions between our technologies of communication and our received notions about those technologies, with the understanding that we invest our tools with personal subjectivities (contributing to our sense of the idiom of that technology). One of our tasks, then, is to acknowledge the complex entanglements of medium and message, meaning-making and machine, in ways that are both technologically determined and specific, yet still open to possibility. In drawing our attention to the importance of the technology of communication, Landow hints (again) at the concomitant re-evaluation of our notions of identity that accompany the shift in media. Our "cherished memories" and our notions of "authorial uniqueness" are interwoven into a sense of self which becomes equally historicized when we reflect on the reconfigured author and reader of the hypertext document.

As Landow describes it, the hierarchies of knowledge characteristic of the academy are subject to the intrusions of cross-disciplinary links, nodes, and networks. The collection of essays in Language Machines: Technologies of Literary and Cultural Production is one such attempt to reconfigure the act, or performance, of writing in light of recent technologies. In their "Introduction" to Language Machines, editors Masten, Stallybrass and Vickers confirm that their task is to examine a "range of technologies that have shaped literary and cultural production" around two central assumptions: "...first, that material forms regulate and structure culture and those who are the agents or subjects of culture; and second, that new technologies redefine and resituate, rather than replace, earlier technologies" (1). The first assumption is profound in its implications for the creative use of computers since it asserts that the artist--"the agent of culture"--is regulated and structured by the medium. Such an assumption challenges the predominant tenor of the technological discourse which asserts that the technologies are neutral tools of empowerment waiting to be mastered. As will be discussed below, a number of artists are fully aware of the ambiguous degrees of influence mediating between their creativity and their means of production. The best have learned to improvise within the constraints of their new media.

Both assumptions posited by Masten, Stallybrass and Vickers restate conclusions reached by Harold Adams Innis in his early 1950s studies Empire and Communications (1950) and The Bias of Communication (1951). In these works, Innis seeks to show how particular technologies of communication, which he characterized as being either time- or space-biased, were instrumental in the creation and maintenance of empires and monopolies of knowledge. (Time-biased media are able to endure over time--cathedrals and oral traditions being two examples--and space-biased media are more easily transported over distance--print and email being paradigmatic.) Throughout his career, Marshall McLuhan extended the work initiated by Innis in his exploration of media as extensions of the human sensorium (Understanding Media: The Extensions of Man, 1964), an idea originally proposed by anthropologist Edward Hall in The Silent Language (1959). Electronic media were particularly intriguing for McLuhan: he considered them to extend the human nervous system. With that extension followed the psychological and psychic accoutrements associated with nerves, neurons, and synapses, and we were suddenly linked into the global network of sentient technologies and semi-autonomous media.

In his later career, and in collaboration with his son Eric, McLuhan attempted to articulate his unified field theory of media in The Laws of Media. He theorized that new technologies enhance, displace, retrieve, and reverse the impact of existing technologies. McLuhan's tetrad of effects--the "laws of media"--is not sequential in its operation, but simultaneous, "inherent in each artifact from the start," and complementary. The laws of media require "careful observation" in their material context, or "ground," rather than consideration in the abstract (Laws of Media). Following in this tradition, Masten, Stallybrass and Vickers suggest that "language is not a disembodied essence... but rather a set of productive practices...produced by a variety of machines" (1). Each machine or new technology contributes an idiomatic orientation to the message it conveys, and much of the critical writing on hypermedia is concerned to varying degrees with attempts to characterize the idiomatic proclivities of the digital medium. What does the computer allow the text to become in hypertext, and what unique qualities will that text have? What are the characteristics of the digital photograph that distinguish it from its analogue counterpart?

How We Became Posthuman

The work of N. Katherine Hayles is exemplary in its ability to maintain a fruitful doubleness of vision regarding human/computer interactions and the translations effected by new digital media. In How We Became Posthuman, Hayles charts the history of cybernetics from its gestation in the Macy Conferences of the late 1940s and early 1950s to our current preoccupation with virtuality. "[T]o show the complex interplays between embodied forms of subjectivity and arguments for disembodiment through the cybernetic tradition," Hayles recognizes three distinct waves of development: homeostasis, 1945-1960; reflexivity, 1960-1980; virtuality, 1980-present (7). Here, and in her earlier work, Hayles seems equally comfortable with computational technologies and literary theory, and so becomes a guide for those academics, creators, and technologists seeking a common ground of endeavor. For those working in the humanities and social sciences, and for artists, she acts as an historian of technology; for computer scientists and technologists, she argues persuasively for an embodied approach to virtuality, simulation, artificial intelligence, and robotics: "...[I]t can be a shock to remember that for information to exist, it must always be instantiated in a medium" (13).

Hayles' definition of "posthuman" is highly ironic by virtue of her on-going concern that material instantiation not be completely displaced by virtualities:

First, the posthuman view privileges informational pattern over material instantiation, so that embodiment in a biological substrate is seen as an accident of history rather than an inevitability of life. Second, the posthuman view considers consciousness…as an epiphenomenon, as an evolutionary upstart trying to claim that it is the whole show when in actuality it is only a minor sideshow. Third, the posthuman view thinks of the body as the original prosthesis we all learn to manipulate, so that extending or replacing the body with other prostheses becomes a continuation of a process that began before we were born. Fourth, and most important, by these and other means, the posthuman view configures human being so that it can be seamlessly articulated with intelligent machines. In the posthuman, there are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals. (2-3)
Her definition of the posthuman subject bears a striking resemblance to Turkle's distributed self: "The posthuman subject is an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction" (3). Like the French performance artist Orlan, who makes a performance of cosmetic surgery and medical interventions in the name of art, the posthuman subject straddles the divide between human and machine, embracing the transfiguration of the body through technology. As Philip Auslander argues in From Acting to Performance, "The problem of theorizing the body....always central within performance theory and criticism, has taken on a new urgency in light of ever-accelerating technological interventions" (126). As Orlan, Stelarc, Laurie Anderson, or Steve Mann lead us towards an acceptance of technological enhancements of the body, of virtuality and simulation, we sometimes wonder where exactly we have gotten to. In his discussion of postmodern performance, and of Orlan in particular, Auslander cites Andrew Murphie's concern that any performance practice which seeks to reclaim control of the body without problematizing the influence of technology succeeds only in "making the body docile for its tasks in the technological age" (127). For Auslander, Orlan's work "valorizes the dematerialized, surgically altered, posthumanist body, a body that experiences no pain even as it undergoes transformation because it has no absolute material presence; its materiality is contingent, malleable, accessible to intervention" (132). On the other hand, he concludes that her work is resistant because it challenges the essentialist argument that the body is "foundational": that the body's material presence is "irreducible" (132). Orlan inscribes her body with patterns of information--creating a virtual body in some respects--but her interventions are ironic and parodic, self-reflexively challenging notions of beauty, medical hierarchy, and surveillance.

In "The Condition of Virtuality," Hayles offers a strategic definition, and a warning, of interest to anyone technologizing the humanities: "Virtuality is not about living in an immaterial realm of information, but about the cultural perception that material objects are interpenetrated by information patterns" (204). As Hayles persuasively points out, any definition of virtuality which dichotomizes materiality and information is not entirely to be trusted. She illustrates this bifurcation by citing Richard Dawkins' argument in The Selfish Gene that "Virtually every human behavior, from mate choice to altruism, is treated...as if it were controlled by genes for their own ends, independent of what humans may think" (185). Hayles is suspicious of a rhetoric which describes genes as functioning like "agents who perform the actions they describe": "Through this discursive performativity, informational pattern triumphs over the body's materiality...It constructs information as the site of mastery and control over the material world" (185). With this conclusion, Hayles echoes the influential argument of Elizabeth Grosz in Volatile Bodies that spurious dichotomies--such as between mind and body, or in Hayles' case between virtuality and materiality--obscure important patterns of non-hierarchized interaction.

Such a distinction between virtuality and materiality continues an age-old dream to liberate the spirit (information) from the material, and Hayles is very clear in her opposition to this idea: "In the face of such a powerful dream, it can be a shock to remember that for information to exist, it must always be instantiated in a medium...[C]onceiving of information as a thing separate from the medium that instantiates it is a prior imaginary act that constructs a holistic phenomenon as a matter/information duality." Like David Noble in The Religion of Technology, she is highly suspicious of any tendency to abandon the material world in search of the transcendental signifier.

Stitching Together a Media-Specific Analysis

In her deconstruction of claims for virtuality, Hayles seems equally at ease with Claude Shannon's theories of information and Derrida's notion of an "economy of supplementarity," which she invokes to stitch together the matter/information dichotomy. She confesses, "Part of what is at stake for me...is to show that materiality, far from being left behind, interacts at every point with the new forms that literature is becoming as it moves into virtuality" (190). In "Flickering Connectivities in Shelley Jackson's Patchwork Girl: The Importance of Media-Specific Analysis," Hayles demonstrates just how one might critique text and medium simultaneously: "Media-specific analysis attends both to the specificity of the form...and to citations and imitations of one medium in another. Attuned not so much to similarity and difference as to simulation and instantiation, media-specific analysis (MSA) moves from the language of 'text' to a more precise vocabulary of screen and page, digital program and analogue interface, code and ink, mutable image and durably inscribed mark, texton and scripton, computer and book" (para. 3). In making such distinctions, we are invaluably aided by some familiarity with Innis and McLuhan, or more recent revisions in Bolter and Grusin's Remediation.

As a preface to her analysis of Jackson's Patchwork Girl--a hypertextual deconstruction of Mary Shelley's Frankenstein story--Hayles asks her readers to play a game by considering the question, "Using only the characteristics of the digital computer, what is it possible to say about electronic hypertext as a literary medium?" While acknowledging that her game artificially excludes many legitimate critical strategies by disallowing "all references to content or operation of electronic hypertexts," she hopes to foreground "what difference the medium makes." The following summary of her points is not nearly as satisfying without Hayles' elaborations and commentary, though it will perhaps illustrate the notion of the online writing space as a unique performance medium with characteristic protocols. Hayles asserts that electronic hypertexts are dynamic images; include both analogue resemblance and digital coding; are generated through fragmentation and recombination; have depth and operate in three dimensions; are mutable and transformable; are spaces to navigate; are written and read in distributed cognitive environments; initiate and demand cyborg reading practices. (para. 5-13) Media-specific analysis--which Michael Heim calls techanalysis: "the detailed phenomenology of specific technologies" (45)--sees a continuum between the narrative of the text and its medium of deployment, or instantiation. Hayles explains that "Flickering signification, which in a literal and material sense can be understood as producing the text, is also produced by it as a textual effect" (para. 44).

In articulating these protocols, Hayles is careful not to claim any superiority of electronic media over other media, for example the book. "Rather," she writes

I am concerned to delineate characteristics of digital environments that writers and readers can use as resources in creating literature and responding to it in sophisticated, playful ways....[T]he specificity of the medium comes into play as its characteristics are flaunted, suppressed, subverted. Whatever strategies are adopted, they take place within a cultural tradition where print books have been the dominant literary medium for hundreds of years, so it can be expected that electronic literature will use the awesome simulation powers of the computer to mimic print books as well as to insist on its own novelty, in the recursive looping of medial ecology that Bolter and Grusin call remediation (para. 14).
By invoking the spirit of play, Hayles not only links her media-specific analysis to the growing body of research on the neurophysiology of play and improvised behaviors; she also suggests a wealth of associations with music. The "recursive looping of medial ecology," for example, finds rich analogies with Albert Murray's definition of riffing in the context of jazz improvisation: "[I]mprovisation includes spontaneous appropriation (or inspired allusion, which sometimes is a form of signifying) no less than on-the-spot invention" (96). It is within the matrix of constraints imposed by the architecture of our language machines, and the cultural traditions of print technologies, that we riff and register our spontaneous appropriations and inspired allusions.

Navigable Spaces and Transforming Mirrors

We've lately made a lot of progress in locating some aspects of semantics in the brain. Frequently we find verbs in the frontal lobe. Proper names, for some reason, seem to prefer the temporal lobe....But intelligence is a process, not a place. It's about improvisation, where the "sweet spot" is a moving target. It's a way, involving many brain regions, by which we grope for new meanings, often "consciously." (W.H. Calvin, How Brains Think, 2)

* * *

Online and hypertext writing are genres of performance which require that we consider the mise en scène anew. For Hayles, space in hypertext is a "topography that the reader navigates using multiple functionalities, including cognitive, tactile, auditory, visual, kinesthetic, and proprioceptive faculties" ("Condition" 198). There is no leaving the meat behind when we venture into hyperspace. When I was learning to play networked Quake, my body vibrated and cringed palpably every time I was reduced to "a pile of giblets" by an unknown assassin. We are involved in a very physical drama even though we may think of ourselves as being disembodied; and as we will see below, a number of new media artists play with notions of interactivity and the participation of the body in virtual environments. This orientation to hypertext retrieves spatial analogies--it is a "topography that the reader navigates"--though we might also think of hypertext and virtual realities as strategies performed upon and by the audience.

Peter Lunenfeld recuperates a term from Lucien Dällenbach through Gregory Ulmer--the mise-en-abyme--to describe another way of refiguring the digital performance space. The term mise-en-abyme "implies that a book, story, film, CD-ROM, Web site, or hypertext contains selected passages that play out within themselves, in miniature, the process of the work as a whole....The mise-en-abyme is a mini narrative that encapsulates or somehow reflects the larger structures within which it is held; it is a mirroring of the text by the subtext" (53-54). For example, my analogy using the verbot Sylvie at the beginning of this article attempts to encapsulate the foundational themes of the work as a whole. Lunenfeld notes the tendency in digital media towards strategies of compression, aphorism, and fragmentation, all within an "aesthetic of unfinish" (124), and the mise-en-abyme is a "sleight of structural hand" to generate "an almost infinitely regressing series of mirror reflections" of the work's central concerns. The notion that digital art works are (transforming) mirrors is widespread; in many cases, the form of the work encapsulates the artist's vision of networked digital communication.

The Canadian artist David Rokeby, creator of Very Nervous System, also uses his art to explore the nature of interactive technologies as performance venues. For Rokeby, a technology is interactive "to the degree that it reflects the consequences of our actions or decisions back to us. It follows that an interactive technology is a medium through which we communicate with ourselves...a mirror. The medium not only reflects back, but also refracts what is given; what is returned is ourselves, transformed and processed" (Rokeby). Very Nervous System (1983-91) watches you through a video camera as you enter the performance space, and tracks your every movement. A computer translates these movements into music; you become an ensemble of musicians as you move about the room: "Each instrument is basically a behavior, an electronically constructed personality. It's watching you...taking playing cues from your movement. These behaviors are just algorithmic definitions--computer subroutines" (Cooper 134). When asked, "Who is the composer here?" Rokeby replies: "Think of a jazz band: different players, each with his or her own style. In the case of Very Nervous System, these are the 'behaviors' defined by the software. Now give good jazz players some input--say, a chord chart or an old standard--and each player will improvise within his or her own style" (134). Very Nervous System is open to the extent that there is no one-to-one relationship between the work and the participant, a feature of the system architecture. The incoming information provided by the movements of the participant is "deliberately vague," with the result that "the system reacts in very complex ways which are only partly predictable" (Huhtamo 22). Rokeby trades on the fuzzy logic of his work to map the domains of interactivity and to question the neutrality of the computer-mediated interface.

Very Nervous System, David Rokeby
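Rokeby's jazz-band analogy can be made concrete with a toy sketch. The following Python fragment is illustrative only: the motion measure, the two "behaviors," and the note values are invented stand-ins for the subroutines he describes, not his actual software.

```python
# A toy sketch of the jazz-band analogy: each "behavior" is an independent
# subroutine that takes a playing cue from a crude motion measure and answers
# in its own style. All values here are invented for illustration.
import random

def percussive(intensity):
    """Answers sharp movement with a short, loud hit."""
    return {"voice": "drum", "velocity": int(intensity * 127), "duration": 0.1}

def melodic(intensity):
    """Improvises a longer tone from a fixed pitch set."""
    pitches = [60, 62, 65, 67, 70]  # an arbitrary pentatonic row (MIDI numbers)
    return {"voice": "synth", "pitch": random.choice(pitches), "duration": 0.5 + intensity}

BEHAVIORS = [percussive, melodic]

def respond(motion):
    """Map a motion measure (0.0-1.0, e.g. the fraction of changed pixels
    between video frames) to one sound event per behavior."""
    return [behavior(motion) for behavior in BEHAVIORS]

print(respond(0.4))
```

Each behavior improvises within its own style from the same cue, which is why no two visits to the installation produce the same ensemble.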

Much like Brenda Laurel, Rokeby is sensitive to the "apparent" transparency of interface technologies and to the important role played by the intersecting dynamic of subjectivity and control in interactive environments. "Because I've programmed a lot, because I've built computers, I know what it's like to write a program and watch people deal with it, and watch how my decisions change people's experiences," says Rokeby. "For me, it's important that I somehow articulate the importance of that act" (Zack 60). Subjectivity animates the work of art which in turn becomes a "machine for producing meanings." In the design of interactive works like Very Nervous System, Watch (1995), The Giver of Names (1998), and Shock Absorber (2001), the artist/programmer anticipates the reactions of the spectator/performer. Although the artist makes room for the spectator's subjective readings of the work, in Rokeby's view this involves "a partial displacement of the machinery of interpretation from the mind of the spectator into the mechanism of the artwork, a fracturing of the spectator's subjectivity." The degree to which chance operations and spontaneity are introduced into the program animating the machine reflects the quotient of control impinging on the inter-actor's subjectivity. (It is no accident that a description of these patterns of interaction begins to echo the classics of psychoanalytic writing.)

Besides the transforming mirror model of interactive environments or artifacts, Rokeby describes another model which he calls the "navigable structure"--"an articulation of space, either real, virtual, or conceptual." The design of the navigable space reflects the expressive power of the creator. The apparent objectivity of interaction parameters tends to enhance the creator's expressivity, as if it were a gift of freedom to the participant. Such a situation can be found, Rokeby suggests, with hypertext databases "which presume to completely cross-reference the information that they contain." However, the system of cross-referencing remains "a powerful expression of the ideas of the creator." Just as improvisational performance usually takes place within a matrix of constraints, so too does the navigable space of the interactive environment benefit from appropriately supportive structures. Rokeby elaborates:

It is ironic that wide-open interaction within a system that does not impose significant constraints is usually unsatisfying to the interactor...It has been my experience that the interactor's sense of personal impact on an interactive system grows, up to a point, as their [sic] freedom to affect the system is increasingly limited. The constraints provide a frame of reference, a context, within which interaction can be perceived. (Rokeby)
The interactor's ability to navigate the system provides a sense of freedom, but it is a symbolic freedom, existing only in relation to the established structure. "By relinquishing a relatively small amount of control," Rokeby advises, "an interactive artist can give interactors the impression that they have much more freedom than they actually do." Advice, and a warning.

With The Giver of Names (1998), Rokeby shifts the focus from the physical interaction of Very Nervous System to a more cerebral engagement. The interactor places an object on a pedestal in front of a video camera, and the resulting image is processed by a computer which associates the input with a sizeable database assembled and designed by Rokeby. The outcome is a metaphoric "naming" of the object: in early demonstrations, a small yellow rubber duck was described as "Semicircles, so assymetric that ill-proportioned pears occurred to their informed bodies, can demonstrate no second edible fruits." A small yellow Volkswagen Beetle toy became "Lemons, more eyeless than other beady sectors, would pardon no optical drops" (qtd. in Huhtamo 27). While there is little physical interaction required apart from choosing and placing the object in front of the camera, the participant in this communications loop is posed with a kind of double riddle associated with tricksters and oracles--you who have asked for insight already have the answer. The work asks us to contemplate the complex interdependencies between humans and machines when it comes to signifying meaning. While The Giver of Names seems to be riffing on the object held up for scrutiny, Rokeby is signifyin(g) on artificial intelligence. Rokeby explains the evolution of his work as follows: "I feel as though the transition from Very Nervous System to The Giver of Names is a transition naturally paralleling the shift in the sense of what was being most challenged by the computer. In the 80s it seemed to be the material body. In the 90s it seems to be the notions of intelligence, and consciousness" (Huhtamo 30).
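Read literally, the loop described above is simple even if Rokeby's database and grammar are not: an image is reduced to features, the features retrieve associated words, and the words are composed into a sentence. The Python fragment below is a deliberately crude sketch of that loop under invented assumptions; the feature extractor, the association table, and the sentence template are placeholders, not Rokeby's actual system.

```python
# A deliberately crude sketch of the camera-to-sentence loop, with an
# invented association table and template; not Rokeby's algorithms.
import random

ASSOCIATIONS = {
    "yellow": ["lemons", "pears", "beady sectors"],
    "round": ["semicircles", "optical drops", "informed bodies"],
}

def coarse_features(description):
    """Stand-in for image analysis: pick out features we have words for."""
    return [word for word in description.split() if word in ASSOCIATIONS]

def give_name(description):
    """Compose a metaphoric 'naming' from words linked to the features."""
    features = coarse_features(description)
    if not features:
        return "An unnameable thing."
    pool = [word for feature in features for word in ASSOCIATIONS[feature]]
    subject, obj = random.sample(pool, 2)
    return f"No {subject} would pardon the {obj}."

print(give_name("yellow round rubber duck"))
```

Even this caricature makes the point of the double riddle: the "insight" returned by the machine is assembled from associations the maker placed there in advance.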

Idiomatic Orientation in the Game of Games

Under the shifting hegemony of now this, now that science or art, the Game of games had developed into a kind of universal language through which the players could express values and set these in relation to one another. Throughout its history the Game was closely allied with music, and usually proceeded according to musical or mathematical rules. One theme, two themes, or three themes were stated, elaborated, varied, and underwent a development quite similar to that of the theme in a Bach fugue or a concerto movement....Beginners learned how to establish parallels, by means of the game's symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the game freely wove the initial theme into unlimited combinations. (Hermann Hesse, Magister Ludi: The Glass Bead Game)

* * *

With her stimulating discussion of new narrative forms--Hamlet on the Holodeck: The Future of Narrative in Cyberspace--Janet Murray joins Sherry Turkle (Life on the Screen: Identity in the Age of the Internet), Peter Lunenfeld (Snap to Grid) and Steven Holtzman (Digital Mantras: The Languages of Abstract and Virtual Worlds) among others in a distributed collaboration to reconceptualize our notions of art and the self when they are reflected through the transforming mirror of the digital processor. Holtzman, for example, claims that computers are "the ultimate manipulators of abstract structures," imposing their idiomatic potential on language, music, and various other arts. He recommends that creators develop an appreciation of this idiomatic potential if works are to suit their medium. It's interesting to note that the improvisational musician Derek Bailey stresses the importance of idiomatic orientation for jazz players, who will often assert that they "play bebop" or "play flamenco" as a way of identifying their idiom of choice. In jazz improvisation, idiomatic orientation involves a matrix of elements: musical voice and style; a repertoire of music; the influences of the tradition; technical mastery; and individual expression. Many of the theorists of the new media, following Holtzman's lead, are attempting to articulate the idiomatic routines of the language machines.

Janet Murray's ambitious goal is to reconsider the nature of narrative, from the Homeric poets to intelligent agents such as the psychoanalytic ELIZA and the chatterbot Julia. Here, too, we find the claim that virtual interactive spaces are a variety of performance venue, where one is often asked to improvise solutions to problems, ways of navigating through complex structures, and even new identities. The digital writing space, Murray suggests, has its own aesthetics ruled by the supplementary signs of immersion, agency, and transformation. These aesthetic characteristics are further elaborated by four essential properties "which separately and collectively make [the computer] a powerful vehicle for literary creation. Digital environments are procedural, participatory, spatial, and encyclopedic" in Murray's schema. The procedural and participatory properties help define the relatively vague term interactive, while the spatial and encyclopedic properties help define what we mean when we call virtual environments immersive (71). Agency--"the satisfying power to take meaningful action and see the results of our decisions and choices" (126)--is often accomplished through spatial navigation: "The ability to move through virtual landscapes can be pleasurable in itself, independent of the content of the spaces" (129). As we saw above, Rokeby programs elements of unpredictability into his navigation schemes to foster the greater illusion of agency. Murray suggests that there are two different configurations for electronic environments--the solvable maze and the tangled rhizome (130)--each of which offers unique pleasures and narrative possibilities. (In "Patterns of Hypertext," Mark Bernstein of Eastgate Systems identifies the Cycle, Counterpoint, Mirrorworld, Tangle, Sieve, Montage, Neighborhood, Split/Join, Missing Link, Navigational Feint, and combinations and variations of these as narrative options. Bernstein explains in his introduction that the "reader's experience of many complex hypertexts is not one of chaotic disorder, even though we cannot yet describe that structure concisely; the problem is not that hypertexts lack structure but rather that we lack words to describe it.")

Clearly, this new performance space encourages its own idiomatic responses. For example, in her discussion of "participatory narrative," Murray asks rhetorically, "How can we enter the fictional world without disrupting it? How can we be sure that imaginary actions will not have real results?" The answer "lies in the discovery of the digital equivalent of the theatre's fourth wall. We need to define the boundary conventions that will allow us to surrender to the enticements of the virtual environments" (103). Identifying these boundary conventions, or protocols, is tricky because they are not clear-cut demarcations but liminal (threshold) experiences which Turner and others associate with rites of passage: that is, there are ritualized conventions involved; they are processes, not locations; and the act of crossing the boundary marks a change of self-definition. The fourth wall is not a thing but an absence, a subtle device in the technology of the theatre for implicating the audience in the meaning-making of the play.

Similarly, we might conclude, the boundary conventions of hypertextual narrative will be neither wholly smooth nor wholly striated, to borrow a distinction Pierre Boulez coined in relation to music: "...[I]n a smooth space-time one occupies without counting, whereas in a striated space-time one counts in order to occupy." Occupying the virtual storyspace will require us to count, and not count--the space is both smooth and striated, negotiated by following the rules and improvising our way through their interstices. In defining the new boundary conventions, and in discovering their new idioms, we may need another form of the willing suspension of disbelief, a way of counting without counting. It may be that the trope of virtuality--itemized by Michael Heim, deconstructed by Katherine Hayles--inscribes the threshold we cross with a (learned) willing suspension of disbelief.

Opening Up the Practice

All the insights, noble thoughts and works of art that the human race has produced in its creative eras, all that subsequent periods of scholarly study have reduced to concepts and converted into intellectual property--on all this immense body of intellectual values the Glass Bead Game player plays like an organist on an organ. And this organ has attained almost unimaginable perfection; its manuals and pedals range over the entire intellectual cosmos; its stops are almost beyond number. Theoretically this instrument is capable of reproducing in the game the entire intellectual content of the universe. These manuals, pedals and stops are now fixed. Changes in their number and order, and attempts at perfecting them, are actually no longer feasible except in theory. (The Glass Bead Game)

* * *

In his prescient 1943 novel, Hesse cautions those who would master the Glass Bead Game that the over-codification of the game--fixing the manuals, pedals and stops--would result in creative sterility. While there are obvious benefits to introducing protocols for the new hypermedia, allowing a space for unique and original expression within the navigable structures seems advisable.

Many of us have learned computing applications by the trial-and-error method. We recognize that the machine and its language impose certain, often considerable, constraints on our ability to communicate spontaneously. We have to learn the rules to play the game. We learn new ways to do things with words. However, even while learning these rules, we try things out to see what might work. We pursue the magic of "as if" and "what if." Well-designed software doesn't penalize us for improvising in this ad-hoc way.

In partial contrast to Rokeby's The Giver of Names, Perry Hoberman's Bar Code Hotel exemplifies an interactive environment where participants are able to "write" a narrative with equal parts spontaneity and (pre-) determination. Participants enter a room filled with bar code symbols activated by a wand, a lightweight pen with the ability to scan and transmit information to a networked computer system. These commands are translated into stereoscopic images produced by a pair of video projectors, and into quadraphonic audio signals. Participants bring virtual objects into being and give them commands by scanning various bar code symbols. "Once brought into existence," explains Hoberman, "objects exist as semi-autonomous agents that are only partially under control of their human collaborators. They also respond to other objects, and to their environment. They emit a variety of sounds in the course of their actions and interactions. They have their own behaviors and personalities; they have their own life spans (on the order of a few minutes); they age and (eventually) die." It is apparent from his description that Hoberman has done everything in his power as an artist conversant with technology to animate his computing machines:

Objects can interact with each other in a variety of ways, ranging from friendly to devious to downright nasty. They can form and break alliances. Together they make up an anarchic but functioning ecosystem....An object can become an agent, a double, a tool, a costume, a ghost, a slave, a nemesis, a politician, a relative, an alien. Perhaps the best analogy is that of an exuberant and misbehaving pet.

This installation suggests other narrative models which might be added to those discussed previously by Rokeby and Murray. Hoberman writes that the "narrative logic of Bar Code Hotel is strictly dependent on the decisions and whims of its guests. It can be played like a game without rules, or like a musical ensemble. It can seem to be a slow and graceful dance, or a slapstick comedy." Whatever the style, the whole performance playfully deconstructs the ubiquitous and viral use of bar codes to identify and track the commodities and transactions of computer-mediated societies.
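
Hoberman's description sketches, in outline, a simple agent architecture. What follows is my own rough illustration of that outline rather than anything Hoberman published; the class name, the behaviours, and the numbers are all hypothetical. Each object treats a scanned bar code as a suggestion rather than a command, drifts according to its own disposition, reacts to its neighbours, and expires after a lifespan on the order of a few minutes.

    # A minimal sketch, assuming Hoberman's "semi-autonomous agents" can be
    # caricatured as objects that obey their human collaborators only part of
    # the time, respond to other objects, and age and die on their own schedule.
    import random
    import time

    class BarCodeObject:
        def __init__(self, name, lifespan_seconds=180, waywardness=0.3):
            self.name = name
            self.born = time.time()
            self.lifespan = lifespan_seconds   # "on the order of a few minutes"
            self.waywardness = waywardness     # chance of ignoring the scanned command
            self.behaviour = "idle"

        @property
        def alive(self):
            return time.time() - self.born < self.lifespan

        def command(self, scanned_symbol):
            """A scanned bar code is a suggestion, not an order."""
            if not self.alive:
                return f"{self.name} has died"
            if random.random() < self.waywardness:
                self.behaviour = random.choice(["wander", "sulk", "chase", "hide"])
            else:
                self.behaviour = scanned_symbol
            return f"{self.name} -> {self.behaviour}"

        def react_to(self, other):
            """Objects also respond to one another, friendly or otherwise."""
            if self.alive and other.alive:
                self.behaviour = random.choice(
                    [f"follows {other.name}", f"harasses {other.name}"])
            return f"{self.name} {self.behaviour}"

    # A hypothetical two-object ensemble: one compliant pet, one wayward rival.
    pet, rival = BarCodeObject("pet"), BarCodeObject("rival", waywardness=0.8)
    print(pet.command("jump"))
    print(rival.command("jump"))
    print(pet.react_to(rival))

The sketch leaves out everything that makes the installation an installation--the wand, the stereoscopic projection, the quadraphonic sound--but it makes the narrative point visible: the participants' commands are only one input among several, and the ensuing performance is co-authored by the objects themselves.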

Perry Hoberman, Bar Code Hotel

An exhibition of Hoberman's digital art at ZKM in Karlsruhe (May-August 1998) revealed an abiding interest in signifyin(g) on the human/computer interface. The catalogue description of Cathartic User Interface conveys the playful revisionism which characterizes Hoberman's best work:

[Cathartic User Interface] allows users to quickly and effectively work through their conflicting emotions concerning the benevolent yet pernicious influences of computer technology on their lives. They can pitch mouselike balls at a wall covered with obsolete PC keyboards, triggering an array of multimedia projections and sounds dealing with the more troubling and problematic aspects of technology....A ramp at the bottom returns the balls to the users for maximum cathartic effect. (ZKM)

In this and other works, including the recent Workaholic (2000) and Zombiac (2000), Hoberman characterizes the human/machine interface as navigable and reflective, if somewhat perverse, arbitrary, and unruly. For example, Zombiac "aims to manifest a kind of (fake) artificial intelligence that steers clear of any attempt to communicate meaningful information to its human participants. Certain kinds of dialogue and exchange between us (human) and them (machine) may be possible here - but only on their terms." The technical descriptions of these works on Hoberman's website (http://www.hoberman.com/perry/index.html) reveal an ad-hoc repurposing of technology--both hardware and software--that compromises their predictability. As a result, the participants in these performance pieces are psychologically repositioned and inducted into ad-hoc behaviors in response to the unpredictable actions of the machinery.

If we shift the venue from the parodic social satire of Hoberman's work to the online classroom, we discover many subtle parallels--and differences--on the theme of interactivity, collaboration, and control in scripting the learning space. The educational philosophy of constructivism suggests that learners construct knowledge from authentic and relevant experience rather than accepting without question the teaching of experts, or the directives of a software manual. David Jonassen, author of Computers in the Classroom: Mindtools for Critical Thinking (1996), argues that meaningful learning is active, collaborative, conversational, and reflective, all of which can be stimulated by well-designed computing applications.

Constructivism suggests that learners should be encouraged to explore applications because their sense of accomplishment and ownership of the process will be more profound. Bourdeau and Wasson recommend that inter-dependencies be fostered between online "actors" to encourage the formation of collaborative ensembles and membership in the learning process. Courses and training sessions using online applications should provide clear guidelines and parameters but not specify conditions for interaction down to the nth degree. Participants should be allowed, even required, to exercise a number of options for interaction. And in terms of collaborative learning and scholarship, the encyclopedic nature of the web allows us to network our own writing and the writing of our students (if such is the case) into new patterns of intertextuality. (Soules, "Hybrid Online Courses…") Throughout his career, McLuhan insisted that we watch what artists will make of new media, and how they will retrieve the effects of older media, if we want to understand the capabilities and limitations of unfamiliar technologies. This advice seems particularly apropos when we reflect on the educational value of digital media.

Landow cites Thaïs Morgan to suggest that intertextuality "opens up" the reading of literature (or of any discipline) by replacing "the evolutionary model of literary history with a structural or synchronic model" which frees the text "from psychological, sociological, and historical determinisms, opening it up to an apparently infinite play of relationships" (qtd. Landow 10). Since Bakhtin's original formulation of dialogism, this notion has been extended to apply to disciplinary traditions as a whole by Kristeva, Derrida, and Landow himself. Like the Bar Code Hotel, online communication is dialogic and intertextual in the sense that it is built up by many individuals as a conversation--both in virtuality and in real-time materiality--to yield an expression of community with a life of its own. As with Rokeby's Very Nervous System, the language machines and technologies we use should provide us with navigable spaces and transforming mirrors--a place to perform, a place to improvise, a place to open up the field.

Coda: The Ghost in the Machine

The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove. A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created the illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet's hands by means of strings. One can imagine a philosophical counterpart to this device. The puppet called "historical materialism" is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight. (Walter Benjamin, "Theses on the Philosophy of History," 253)

* * *

In Benjamin's fable of the automaton, the puppet of technological determinism continues to win as long as it is animated by the "services of theology." Benjamin is no doubt reflecting on the unspoken spiritual subtext of much of the discourse on technology, but he is also concerned with the symbiosis of technology and spirit. As he argues in another famous context, "that which withers in the age of mechanical reproduction is the aura of the work of art" (221). For Benjamin, aura conjures up a complex matrix of associations which include the telltale presence and authority of the maker, the authenticity and originality of the artifact, and its consciousness of a tradition: "The uniqueness of a work of art is inseparable from its being imbedded in the fabric of tradition. This tradition itself is thoroughly alive and extremely changeable" (223). Benjamin's seemingly conservative modernism goes against the grain of our postmodern extravagance with regard to the reproduction of images, the iteration of new versions and samples, the culture of the copy. However we choose to theorize the interconnected questions of aura, authenticity, originality, authority, and presence--all of which have been thoroughly deconstructed after Derrida--Benjamin's elegy for the uniqueness of the work of art haunts us like a ghost in the machine.

A survey of digital performances--whether instantiated in text, hypertext, multimedia, or across the human/computer interface--reveals an abiding desire to animate the language machine with something human. For Janet Murray, the new hypertextual narratives should not forget their roots in the ancient art of storytelling. For multimedia artists like Rokeby and Hoberman, machines are invested with an improvisatory quality which has the effect of throwing the performance back on the consciousness of the participants: Rokeby's notion of the transforming mirror. Hayles insists that we remember our bodies while traveling in virtual spaces; Laurel reminds us to reflect on the play of children when we come to design our computing environments. It seems inappropriate, in fact, to think that we are engaged with entirely new media, since so much of contemporary digital art is a repetition and revision of previous forms. The signifyin(g) and spontaneous appropriations, despite Benjamin's argument to the contrary, bring forward traces of the original, and infuse new artifacts with an animating aura.

Writing for the artificial intelligence Sylvie--learning how to write like a cyborg speaks--convinces me that interacting with computing technology involves mutual influence: I am required to perform in new ways and learn new rhetorical strategies. At the same time, it is reassuring to discover that Sylvie can speak very well for me, even if in her own vernacular. She is focused and not prone to distraction or indecision. Sylvie reminds me that my language machine is both a mode of production and a performance space, and my vain hope is that my words leave a trace of humanity in her memory, some aura of human presence registered in her silicon intelligence. More certain, however, is the fact that my writing here cannot duplicate the sound of Sylvie's voice. For that we require her presence, or another medium.

Works Cited

Auslander, Philip. From Acting to Performance: Essays in Modernism and Postmodernism. New York: Routledge, 1997.

Benjamin, Walter. Illuminations: Essays and Reflections. Ed. Hannah Arendt. Trans. Harry Zohn. New York: Schocken, 1968.

Bernstein, Mark. "Patterns of Hypertext." Serious Hypertext. Eastgate Systems. 1999. http://www.eastgate.com/patterns/print.html. Rpt. from Shipman, Frank, Eli Mylonas, and Kaj Groenbaek, eds. ACM: Proceedings of Hypertext '98. New York: Association for Computing Machinery, 1998.

Bolter, Jay David and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.

Boulez, Pierre. Boulez on Music Today. Trans. Susan Bradshaw and Richard Bennett. Cambridge, MA: Harvard University Press, 1971.

Bourdeau, J. and Wasson, B. "Actor Interdependence in Collaborative Telelearning." Ed-Media/Ed-Telecom 1998 Proceedings. Charlottesville, VA: Association for the Advancement of Computing in Education, 1998.

Calvin, William H. How Brains Think: Evolving Intelligence, Then and Now. New York: HarperCollins, 1996.

Carlson, Marvin. Performance: A Critical Introduction. New York: Routledge, 1996.

Cooper, Douglas. "Very Nervous System." Wired 3.03 (March 1995): 134.

Gates, Henry Louis, Jr. The Signifying Monkey: A Theory of African-American Literary Criticism. New York: Oxford University Press, 1988.

Grosz, Elizabeth. Volatile Bodies: Towards a Corporeal Feminism. Bloomington, Indiana: Indiana University Press, 1994.

Hayles, N. Katherine. "The Condition of Virtuality." Language Machines: Technologies of Literary and Cultural Production. Ed. Jeffrey Masten, Peter Stallybrass, and Nancy Vickers. New York: Routledge, 1997. 183-206.

---. "Flickering Connectivities in Shelley Jackson's Patchwork Girl: The Importance of Media-Specific Analysis." Postmodern Culture 10.2. 2000. http://www.iath.virginia.edu/pmc/current.issue/10.2hayles.html. (2 May 2000).

---. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.

Heim, Michael. "The Cyberspace Dialectic." The Digital Dialectic: New Essays on New Media. Ed. Peter Lunenfeld. Cambridge, MA: MIT Press, 2000: 25-45.

Hoberman, Perry. Bar Code Hotel. ( http://www.hoberman.com/perry/php/bch/index.html). 1997.

Holtzman, Steven. Digital Mantras: The Languages of Abstract and Virtual Worlds. Cambridge, MA: MIT Press, 1995.

Huhtamo, Erkki. "Silicon Remembers Ideology, or David Rokeby's Meta-Interactive Art." David Rokeby: The Giver of Names. Guelph, ON: Macdonald Stewart Art Centre, 1998. 17-30.

Jonassen, David. Computers in the Classroom: Mindtools for Critical Thinking. New York: Prentice-Hall, 1996.

Landow, George. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1992.

Laurel, Brenda. Computers as Theatre. New York: Addison-Wesley, 1993.

---. "Technological Humanism and Values-Driven Design." Keynote Address, CHI-98. April 1998. (http://www.tauzero.com/Brenda_Laurel/Severed_Heads/Technological_Humanism.html). May 2001.

Laurel, Brenda, and Rachel Strickland. Placeholder. (http://www.interval.com/projects/placeholder/index.html). 27 May 1999.

Lunenfeld, Peter. Snap to Grid: A User's Guide to Digital Arts, Media, and Cultures. Cambridge, MA: MIT Press, 2000.

McLuhan, Eric, and Frank Zingrone. Essential McLuhan. Concord, ON: Anansi, 1995.

McLuhan, Marshall and Eric McLuhan. Laws of Media: The New Science. Toronto: University of Toronto Press, 1988.

Masten, Jeffrey, Peter Stallybrass, and Nancy Vickers, eds. "Introduction." Language Machines: Technologies of Literary and Cultural Production. New York: Routledge, 1997.

Murray, Albert. Stomping the Blues. New York: Da Capo, 1976.

Murray, Janet. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: The Free Press, 1997.

Orlan. (http://www.cicv.fr/creation_artistique/online/orlan/index1.html). May 2001.

Rokeby, David. Transforming Mirrors: Subjectivity and Control in Interactive Media. ( http://www.interlog.com/~drokeby/mirrorsintro.html )

Soules, Marshall. "Hybrid Online Courses and Strategies for Collaboration." Consortium for Computing in the Humanities (COCH) Conference Paper. University of Alberta. May 2000. http://www.mala.bc.ca/~soules/hybrid.htm

---. "Protocols of Improvisation and Online Communication." LETT '97 Conference Proceedings. Victoria, BC: Leading Edge Training and Technology, 1997. http://www.mala.bc.ca/~soules/improv1.htm

Stone, Allucquère Rosanne. The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: MIT Press, 1996.

Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995.

Turner, Victor. "Body, Brain, and Culture." The Anthropology of Performance. New York: Performing Arts Journal Publications, 1988. 156-178.

Zacks, Rebecca. "Dancing with Machines." Technology Review (May-June 1999): 58-62.

ZKM Medienmuseum. Unexpected Obstacles. Perry Hoberman Exhibition Brochure. Karlsruhe, Germany: ZKM, 1998.

(c) M. Soules 2002
soules@mala.bc.ca
http://www.mala.bc.ca/~soules/