Live Algorithms and The Future of Music
May 2007
In recent years, the computer has assumed a central role in artistic practice. Digital technology now serves as a critical site for interdisciplinary exploration, encouraging the blurring of boundaries between art forms. Increasingly, new imaginings of history, culture, and human practice are finding the computer at their center. In turn, these new imaginings are being driven by the advent of new and more powerful forms of computer interactivity that challenge traditional conceptions of human identity and physicality.
At the uninterrogated core of common notions of interactivity, as it is practiced in the digital domain, we find the primordial human practice of improvisation. Perhaps the most important unacknowledged lesson the interactive digital arts have taught us concerns the centrality of both improvisation and interactivity to the practice of everyday life. What is more difficult to divine for these arts, as well as the theorizations that attend them, is the nature of the relationship between the two concepts.
To begin, we must interrogate the theoretical and historical discourses that mediate our encounters with computers, including those that condition our cultural understanding of both improvisation and interactivity. Increasingly, theorizing the nature of these two practices is becoming an interdisciplinary affair, centering on how meaning is exchanged in real-time interaction. Such studies, combining the insights of artists, cultural theorists and technologists, will be crucial to the development of new conceptions of digitally driven interactivity.
Canonical new media histories tend to date the advent of interactivity in artmaking to the mid-1980s.1 However, anyone who remembers the period when “multimedia” did not refer to computers may find ironic the historical lacuna separating the notion of interactivity now on offer from the practices that arose in the computer music communities beginning in the early 1970s. This early period produced a number of “interactive” or “computer-driven” works, representing a great diversity of approaches to the question of what interaction was and how it affected viewers, listeners, and audiences.
By the early 1960s, magnetic tape-based music composition was known to offer possibilities for precise control of time and sound, but was also criticized as insensitive to the real-time nuances of human expressivity. To many, making electronic music live, in real time and in front of audiences, would revitalize the paradigm of the composer-performer, long abandoned in the West. However, improvisation, a primary practice of the European composer-performer since antiquity, had been unceremoniously dumped from Western music’s arsenal of practice by the late 19th century.
The recrudescence of real-time music making in the American classical music of the 1950s not only explored chance as a component of composition, but also rekindled aesthetic contention around the nature, purpose, structure, and moral propriety of improvised forms and practices. According to cultural historian Daniel Belgrad, these debates were part of an emerging “culture of spontaneity” that crucially informed the most radical American artistic experimentation in the mid-20th century, from the Beats and Abstract Expressionism to the transgressive new music of Charlie Parker, Thelonious Monk, and the musical New York School of John Cage, David Tudor, Morton Feldman, Earle Brown, and Christian Wolff.2
By May of 1968, “freedom” was on both the political and the musical agenda in Europe and the United States. Improvisation was widely viewed as symbolic of a dynamic new approach to social order that would employ spontaneity to unlock the potential of individuals, and to combat oppression by hegemonic political and cultural systems. The rise of “free jazz” in the United States was widely connected, both in Europe and the United States, with challenges to racism and the social and economic order, generally.
As one group of young American expatriates living in Rome, including electronicists Richard Teitelbaum and Alvin Curran, founded the important free improvisation group Musica Elettronica Viva (literally, “Live Electronic Music”),3 two of their stateside colleagues, David Behrman and Gordon Mumma, implicitly advanced the radical idea of a musical composition that could exist purely and entirely in hardware. In this period, scores by the two composers, where they existed at all, often consisted only of a circuit diagram, accompanied by a set of sketchy instructions. The late-1960s live electronic music of both composers also drew explicitly on the practice of improvisation. In Behrman’s Runthrough, performers wielding flashlights interacted in real time with a matrix of photocells connected to a Behrman-built synthesizer.4 In Mumma’s Hornpipe, a “cybersonic” console consisting of an analog computer transformed the sounds of Mumma’s extended horn improvisations in real time.5
In this kind of live electronic work, the “structure” of the piece encompassed both the performance and an interactive environment facilitated by the devices themselves. Unlike conventional scores, the electronics were explicitly conceived as one element in an overall environment that was only partially specified in advance. The composition as a whole articulated a kind of dialogue with the outside world, where real-time music making was central to the realization of the work.
Through the 1970s, computer music practice at US academic and corporate institutions such as Bell Labs, Stanford University, the University of Illinois, and the University of California, San Diego, as well as at the Pierre Boulez-founded Institut de Recherche et Coordination Acoustique/Musique in Paris, continued to support the magnetic tape model. Even so, the advent of new, relatively portable mini- and microcomputers signaled a cultural shift in 1970s contemporary music in which improvisative musical practices were being reasserted, if not uncontroversially embraced. These forces led to a new medium that composer Joel Chadabe, one of the earliest pioneers, later called “interactive composition.”
The early “interactive composing” instruments, constructed by Chadabe, University of Illinois professor Salvatore Martirano and others, “made musical decisions as they responded to a performer, introducing the concept of shared symbiotic control of a musical process.”6 These features of the new software-driven landscape blurred the boundaries between human and machine music-making and called conventional notions of human identity into question, while establishing a critical space to explore communication not only, or even primarily, between people and machines, but between people and other people.
Salvatore Martirano’s massive “Sal-Mar Construction” was played by the composer in a live performance, using over 300 touch switches to direct the flow of sound-producing signals. As Martirano later told Chadabe, he was not so much in control of the device as a partner with it. “Control was an illusion,” he remembered. “But I was in the loop. I enabled paths. Or better, I steered. It was like driving a bus.”7 David Behrman, who had worked with composer John Cage and choreographer Merce Cunningham, began to create elegiac pieces for improvising instrumentalists and “melody-driven electronics,”8 while his younger associates Rich Gold, John Bischoff, and Jim Horton, working in and around Oakland’s Mills College, fashioned early networks of microcomputer music machines that interacted with each other to create music collectively.9 The practice of improvisation was crucial to the nature and practice of this work. Chadabe’s later observation that Behrman’s work “was electronic, but it had the feeling of improvised music” stood in sharp contrast and direct challenge to pan-European contemporary music’s widespread disavowal of improvisation.
In consonance with the perceived need for interactive computer music to combine sonorous and sensuous experiences with critical spaces for considering the nature of human interaction, an important marker of the growth of these technological practices in the last few years has been the Live Algorithms for Music (LAM) research network,10 an initiative created in 2004 by computer scientist Tim Blackwell and composer Michael Young of Goldsmiths College in London. According to Young and Blackwell, LAM is conceived as “an inter-disciplinary community of musicians, software engineers and cognitive scientists,” sharing and furthering the goal of investigating “autonomous computers in music.” A series of Live Algorithms conferences has included research papers and performance contributions from musicians (electronic and instrumental), composers, artists, software engineers, and researchers in computer science, cognitive science, robotics, and mathematics.
According to Young and Blackwell, LAM’s vision foregrounds “the development of an artificial music collaborator. This machine partner would take part in musical performance just as a human might; adapting sensitively to change, making creative contributions, and developing musical ideas suggested by others. Such a system would be running what we call a ‘live algorithm’.”
Blackwell and Young define a “live algorithm” by its features:
To be sure, the musical implications of “machine intelligence” animated many early forays into interactive music making. As discourses surrounding AI began to diffuse in the early 1990s, however, the emphasis shifted toward a complex set of aesthetic, philosophical, social, historical, and culturally oriented questions, situated at the crossroads of computer science, the arts, and the humanities. Thus, work on live algorithms for music has implications for evolutionary computation and artificial life, swarm intelligence, chaotic dynamics, cellular automata, neural networks, and the area of machine consciousness more generally.
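To make one of these technical connections concrete, consider how even the simplest of the mechanisms named above, an elementary cellular automaton, might be pressed into service as a generative music engine. The sketch below is purely illustrative and corresponds to no particular LAM system; the rule number, the scale, and the mapping from cells to pitches are all assumptions made for the example.

```python
# Illustrative sketch: an elementary cellular automaton (rule 110) used as a
# toy generative engine, mapping each evolving row of cells to MIDI pitches.
# Every choice here (rule, scale, mapping) is an assumption for illustration.

RULE = 110
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one octave, as MIDI numbers

def step(cells, rule=RULE):
    """Apply one update of an elementary cellular automaton (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # 3-cell neighbourhood -> 0..7
        out.append((rule >> index) & 1)               # look up the rule bit
    return out

def row_to_pitches(cells):
    """Read the live cells of a row as degrees of the scale."""
    return [SCALE[i % len(SCALE)] for i, c in enumerate(cells) if c]

# Seed with a single live cell and generate a few "bars" of pitch material.
cells = [0] * 8
cells[4] = 1
for _ in range(4):
    print(row_to_pitches(cells))
    cells = step(cells)
```

Each successive row yields a different pitch collection, so even this trivially simple system produces evolving, partly unpredictable material; the open question LAM researchers pose is how such engines can be made to listen and respond rather than merely run.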
Accompanying this interdisciplinary orientation has been a renewed theorization of the practice of improvisation. But why study improvisation? Among the many findings of the residency on improvisation I co-led at the University of California’s Humanities Research Institute in 2002 were these:
In a globalized environment, improvisation functions as a key element in emerging postcolonial forms of aesthetics and cultural production. In addition, improvisation mediates cross-cultural, transnational and cyberspatial (inter)artistic exchanges that produce new conceptions of identity, history and the body, as well as fostering socialization, enculturation, cultural formation and community development. Finally, the improvisative production of meaning and knowledge provides models for new forms of social mobilization that foreground agency, personality and difference, and that engage history, memory, agency, and self-determination.
Any practice for which such expansive claims could be seriously entertained would seem to be one that should be studied widely, in depth and with great alacrity, with the vision that the study of improvisation could present a new animating paradigm for scholarly inquiry in many fields in the humanities, arts, and social sciences. In fact, significant work on improvisation is already taking place in anthropology, sociology, architecture, cognitive science, music cognition and psychology, cultural studies, dance, gender studies, linguistics, literary criticism, music education and music therapy, musicology and ethnomusicology, organizational studies, philosophy, aesthetics, political science, theatre and performance studies, and many other fields. Most recently (2007), an interdisciplinary team of researchers led by literary scholar Ajay Heble and philosopher Eric Lewis has begun pursuing a major research initiative in “Improvisation, Community, and Social Practice,” with the support of a multi-year grant from Canada’s Social Sciences and Humanities Research Council (SSHRC). This research team, of which I am part, will certainly pursue new ways of theorizing, informed by contemporary practices of improvisation in the arts that include technology as a central component.12
For LAM networkers, improvisation becomes a central component in a conception of “strong” interactivity, as distinct from “weakly interactive” or “reflex” systems in which, for instance, “incoming sound or data is analysed by software and a resultant reaction (e.g., a new sound event) is determined by pre-arranged processes” that “might also utilise stochasticity to effect surprise.” In contrast to systems that manifest “an illusion of integrated performer-machine interaction, feigned by the designer,” the strong interactivity of a live algorithm, as described by Blackwell and Young, is characterized by properties analogous to those found in human performance, e.g., “autonomy, innovation, idiosyncrasy and comprehensibility.”
Strong interactivity depends on instigation and surprise as well as response. Individual decision-making is immediate, necessary and basic; when to play or not, when to modify activity in any number of parameters (loudness, pitch, tone quality), when to imitate or ignore another participant, when to ‘agree’ the performance is concluding. When to make a decision. And why. Without the capacity to innovate, listeners would lose the belief that the LA was truly engaged with the performance instead of merely accompanying it. The iterative, generative, idiosyncratic world of algorithmic organisation must be accessed, but the mechanical and the predictable must be avoided. It is the ability to innovate that distinguishes automation from autonomy.13
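The weak/strong distinction sketched above can be caricatured in code. The fragment below is a deliberately crude illustration of the contrast, not an implementation of any system Blackwell and Young describe; the class names, the fixed motif, and the "engagement" parameter are all inventions for the example. The reflex function maps input to output by a pre-arranged rule with stochasticity bolted on, while the second system keeps internal state, may ignore its partner, and can initiate material unprompted.

```python
import random

# Toy contrast between "weak" (reflex) and "strong" interactivity, in the
# spirit of Blackwell and Young's distinction. All names are illustrative.

def reflex_response(incoming_pitch):
    """Weakly interactive: a pre-arranged input-to-output mapping, with
    stochasticity added to feign surprise."""
    return incoming_pitch + random.choice([-12, 0, 12])  # octave displacement

class StrongImprovisor:
    """A caricature of strong interactivity: the system has its own agenda,
    may ignore input, and can play on its own initiative."""
    def __init__(self):
        self.motif = [60, 63, 65]   # internal material, independent of any input
        self.engagement = 0.5       # how strongly it attends to the partner

    def respond(self, incoming_pitch=None):
        # Decide whether to engage with the partner or pursue its own line.
        if incoming_pitch is not None and random.random() < self.engagement:
            # Engage: transpose the motif toward the partner's register.
            shift = incoming_pitch - self.motif[0]
            self.motif = [p + shift for p in self.motif]
            self.engagement = min(1.0, self.engagement + 0.1)  # drawn in further
        else:
            random.shuffle(self.motif)  # own initiative: rework the material
        return list(self.motif)
```

The point of the caricature is the asymmetry: `reflex_response` is a pure function of its input, whereas the stateful improvisor's output depends on its own history, which is the minimal precondition for anything resembling instigation.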
Young and Blackwell feel that strong interactivity “is exemplified in the human-only practice of ‘free’ improvisation.” In this regard, LAM research consists of “a marrying of algorithmic music, live electronics and free improvisation,” and my own activity as composer since the late 1970s exemplifies this approach. In my most widely performed piece, Voyager, originally programmed by me in 1987 and extensively updated since that time, improvisors are engaged in dialogue with a computer-driven, interactive improvisor. A set of algorithms analyzes aspects of a human improvisor’s performance in real time, using that analysis to guide another set of algorithms that blend complex responses to the musician’s playing with independent musical behavior.
In Voyager, the improvised musical encounter is modeled as a negotiation between improvising musicians, some of whom are people, others not; the program does not need real-time human input to generate music. In this kind of live-algorithmic model of music-making, decisions taken by the computer have consequences for the music that must be taken into account by the human improvisors. This aesthetic of variation and difference is clearly at variance with the information retrieval and control paradigm that late capitalism has found useful in the encounter with interactive multimedia and hypertext discourses.14
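The general shape of such a pipeline, analysis of the human performance feeding generators that mix response with independent behavior, can be sketched very schematically. What follows is a minimal sketch of that architecture only; the actual Voyager software is vastly more elaborate, and every function name, feature, and parameter below is a simplifying assumption made for illustration.

```python
import random

# Schematic sketch of an analyse-then-generate pipeline: features extracted
# from a human improvisor's playing steer a generator that blends response
# with independent behaviour. All names and parameters are assumptions;
# this is not the Voyager software.

def analyse(recent_pitches):
    """Reduce a window of incoming MIDI pitches to a few coarse features."""
    if not recent_pitches:
        return None  # no human input available
    return {
        "centre": sum(recent_pitches) / len(recent_pitches),  # register
        "spread": max(recent_pitches) - min(recent_pitches),  # range of activity
    }

def generate(features, length=8):
    """Blend response to the analysis with independent behaviour.
    With no input at all, the system still produces music on its own."""
    phrase = []
    for _ in range(length):
        if features and random.random() < 0.6:
            # Respond: stay near the human's register and range.
            centre, spread = features["centre"], features["spread"]
            phrase.append(int(centre + random.uniform(-spread / 2, spread / 2)))
        else:
            # Independent behaviour: ignore the partner entirely.
            phrase.append(random.randrange(36, 96))
    return phrase

print(generate(analyse([60, 64, 67, 72])))  # in dialogue with a human phrase
print(generate(analyse([])))                # autonomous: no human input needed
```

The second call makes the architectural point in the paragraph above: because the generator does not require features to run, the system can produce music with no human input at all, while human playing, when present, tilts but never dictates the outcome.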
Crucially informing this work, as well as the LAM orientation more generally, is the important British strain in post-1965 improvised music. One of the most influential exponents of this experimental musical practice, the late British guitarist Derek Bailey, produced the most frequently cited book on improvisation (regardless of field), Improvisation: Its Nature and Practice in Music. First published in 1978, Bailey’s book presented a forceful, historically and ethnographically supported case for the centrality of improvisation to musical practice. For Bailey, and for many others, improvisation persists as a primary means for the articulation of artmaking, and in this light, the study of improvisation becomes crucial to the understanding of the expressive culture of our time.15
As with Bailey’s music and that of his fellow first-generation free improvisors, this book directly challenges Western art music’s anti-improvisation orthodoxies. In fact, such challenges have proved to be particularly necessary to clear away some of the cultural presuppositions that informed much prior research into the practice. Sociologist Alfred Schutz, in his 1964 meditation on “Making Music Together,” already saw that “the system of musical notation is…accidental to the social relationship prevailing among the performers. This social relationship is founded upon the partaking in common of different dimensions of time simultaneously lived through by the participants.”16
Here, Schutz performs a critical shift in disconnecting improvisation from a mystificatory, Romantic connection with artmaking. Rather, the clear implication is that improvisation engages agency, history, memory, identity, and embodiment. In this way, we can recognize that these purely musical questions have their analogues in similar issues surrounding the practice of everyday life itself. When Blackwell and Young insist that free improvisation “rejects top-down organisation (a priori agreements, explicit or tacit) in favour of open, developing patterns of behaviour,” we are in the presence of a musical and interactional aesthetic that has become enlisted as a metaphor for larger social and political questions of identity and social organization.
In this respect, LAM participants might make common cause with Schutz’s observation that “a study of the social relationships connected with the musical process may lead to some insights valid for many other forms of social intercourse.” Here, the combination of technology with the arts and humanities may become a trenchant site for the exploration of these critically important issues. At the same time, technologically imbued music making itself becomes a critical tool with which to analyze contemporary critical, cultural, historical, and social issues whose importance cuts across fields. Thus, the centrality of music study to contemporary public intellectual discourse is powerfully reasserted.
Unfortunately, there are still no easy or stable definitions for the dynamics of human interaction that LAM researchers are exploring. Thus, it would be difficult to negotiate a single, overarching meaning for improvisation from the vast array of possible definitions. Rather, the totality of the compendium of knowledge developed by placing scholars and researchers in virtual dialogue would eventually inscribe the outlines of an articulated, emergent definition of improvisation, drawn from multiple fields and thereby moving beyond the preoccupations of any one. What we can say for now is that improvisation must be open–that is, open to inputs, open to contingency; a real-time, real-world mode of production. Thus, in the 21st Century, scholarly work on improvisation, like improvisation itself, is international and multicultural, proceeding from a panoply of theoretical positions that revise existing histories and construct new historiographies.
My conclusion here is that the direct study of improvisation will be vital to the production of new ways of using information technology, not only in the arts but across the board. Human identity, particularly in negotiation with new technologies, is continually reinscribed through processes of interactivity and improvisation, demonstrating the centrality of improvisation to our birthright as human beings. Thus, the interdisciplinary study of how meaning is exchanged in real-time interaction will be crucial to the development of new user interfaces, new forms of art, more sophisticated interactive computer applications, and much more, simply because improvisation is not only what people do when they play jazz or bluegrass, but also what they are doing when they play video games, surf the Net, or decide how to cross Main Street.